I've been trying to run a consolidation query on AWS Redshift (node type: ra3.4xlarge) where the query text is around 6k characters long (I know, pretty huge!).

The query fails with the error below:
psycopg2.errors.InternalError_: Value too long for character type
DETAIL:
-----------------------------------------------
error:  Value too long for character type
code:      8001
context:   Value too long for type character varying(1)
query:     388111
location:  string.cpp:175
process:   query0_251_388111 [pid=13360]
-----------------------------------------------

On further digging, I found that the stl_query table logs every query run on the cluster, and its querytxt column has a 4,000-character limit. My suspicion is that this limit is what causes the entire query to fail.
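For context, here is a small client-side guard I've been experimenting with to at least detect the situation up front. The `MAX_QUERYTXT` constant and `check_query_length` helper are my own invention, not part of psycopg2 or Redshift:

```python
# Hypothetical helper (my own naming): flag queries longer than the
# 4,000-character stl_query.querytxt column before submitting them,
# so we at least know the logged text will be truncated.
import warnings

MAX_QUERYTXT = 4000  # char(4000) limit on stl_query.querytxt

def check_query_length(sql: str) -> bool:
    """Return True if the query fits in querytxt; warn otherwise."""
    if len(sql) > MAX_QUERYTXT:
        warnings.warn(
            f"Query is {len(sql)} chars; stl_query.querytxt keeps only "
            f"the first {MAX_QUERYTXT}, so the logged text is truncated."
        )
        return False
    return True
```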
View "pg_catalog.stl_query"

           Column           |            Type             | Modifiers
----------------------------+-----------------------------+-----------
 userid                     | integer                     |
 query                      | integer                     |
 label                      | character(320)              |
 xid                        | bigint                      |
 pid                        | integer                     |
 database                   | character(32)               |
 querytxt                   | character(4000)             |
 starttime                  | timestamp without time zone |
 endtime                    | timestamp without time zone |
 aborted                    | integer                     |
 insert_pristine            | integer                     |
 concurrency_scaling_status | integer                     |

So the question is (apart from reducing the query length): is there any workaround for this situation? Or am I deducing this whole thing wrong?
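While digging, I also noticed that Redshift logs the full SQL of long queries in the STL_QUERYTEXT system table, split into 200-character chunks ordered by a sequence column. A rough sketch of stitching the chunks back together; `fetch_full_query_text` and `stitch_chunks` are my own names, and the SQL assumes the documented query/sequence/text columns:

```python
# Sketch: reassemble a query's full SQL from stl_querytext, which stores
# the text in 200-character chunks (columns: query, sequence, text).
# The helper names below are my own, not an official API.

STITCH_SQL = """
    select text
    from stl_querytext
    where query = %s
    order by sequence
"""

def stitch_chunks(chunks):
    """Join the ordered chunks; char(200) values are blank-padded on the
    last chunk, so strip trailing spaces from the joined result."""
    return "".join(chunks).rstrip()

def fetch_full_query_text(cur, query_id):
    """cur is an open psycopg2 cursor on the cluster (sketch only)."""
    cur.execute(STITCH_SQL, (query_id,))
    return stitch_chunks(row[0] for row in cur.fetchall())
```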