My Postgres database has a large table, execution_transcript, with several columns. Two of those columns, task and result, are JSONB columns holding values that are often in the kilobytes range and sometimes in the megabytes range.
The table has grown too large, and we are working on a strategy to reduce its size.
Specifically, SELECT pg_total_relation_size('treeline_schema.execution_transcript') returns 22528457179136, which is roughly 20.5 TiB.
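In case the breakdown is relevant, I believe a query along these lines (using the standard pg_table_size and pg_indexes_size functions) would split that total between the heap plus its TOAST table, where the large JSONB values live, and the indexes:

```sql
-- pg_table_size covers the main table plus its TOAST table and maps;
-- pg_indexes_size covers all indexes on the table.
SELECT
    pg_size_pretty(pg_table_size('treeline_schema.execution_transcript'))          AS table_and_toast,
    pg_size_pretty(pg_indexes_size('treeline_schema.execution_transcript'))        AS indexes,
    pg_size_pretty(pg_total_relation_size('treeline_schema.execution_transcript')) AS total;
```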
As an experiment, on a copy of our production database, I ran the following query to null out those two columns in the 2000 rows with the lowest ids:
UPDATE execution_transcript SET result = NULL, task = NULL WHERE id > 0 AND id <= 2000; Having done this, I see that the total relation size has increased by 319,488 bytes (about 312 KiB), to 22528457498624.
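If it helps with diagnosis, I assume whatever the UPDATE left behind should show up in the statistics views (assuming the stats collector has caught up), via something like:

```sql
-- Live vs. dead tuple counts and the last vacuum times for the table.
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'treeline_schema'
  AND relname = 'execution_transcript';
```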
Is this because all I've done is append new values to a log, and some subsequent compaction process is going to shrink the relation size in the future? Do I need to take additional steps if I want to see the table's footprint shrink?