
My database has an index on a timestamp column, and many rows share similar timestamps: about 50,000 rows per minute.

Postgres timestamps store microsecond precision, but I only need coarser precision, say, at the granularity of minutes. Rounding a timestamp down to the nearest minute will still use 8 bytes per timestamp, since the microseconds are still stored; they are just zeros.
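The rounding described above can be done with `date_trunc`; a minimal sketch, assuming a hypothetical table `events` with a `timestamp` column `created_at`:

```sql
-- date_trunc('minute', ...) zeroes the seconds and microseconds;
-- the result is still a full 8-byte timestamp, just with trailing zeros.
SELECT created_at,
       date_trunc('minute', created_at) AS created_minute
FROM events;

-- Rewriting the stored values in place:
UPDATE events
SET created_at = date_trunc('minute', created_at);
```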

I’m wondering whether Postgres 13’s B-tree deduplication makes it worthwhile to round the timestamps. Rounding the timestamps of each minute’s roughly 50,000 rows down to the same value would mean every group of 50,000 index entries shares a single key and could potentially be deduplicated.


1 Answer


Yes, that would let index deduplication kick in. Since Postgres 13, a B-tree index stores duplicate key values only once, together with a posting list of the matching row TIDs, so 50,000 index entries per minute collapsing to a single key can shrink the index considerably. Whether the space saving is worth giving up microsecond precision depends on your workload, so measure it on your data. - user176905
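One way to check the benefit empirically is to build both variants and compare their on-disk sizes; a sketch with hypothetical names, assuming `created_at` is `timestamp` without time zone (with `timestamptz`, `date_trunc('minute', ...)` is not immutable and cannot be indexed directly):

```sql
-- Index over the raw microsecond-precision timestamps.
CREATE INDEX events_created_raw_idx ON events (created_at);

-- Expression index over the minute-rounded values: each minute's
-- ~50,000 entries share one key, so deduplication can collapse
-- them into a single key plus a posting list of TIDs.
CREATE INDEX events_created_minute_idx
    ON events (date_trunc('minute', created_at));

-- Compare on-disk sizes of the two indexes.
SELECT pg_size_pretty(pg_relation_size('events_created_raw_idx'))    AS raw_size,
       pg_size_pretty(pg_relation_size('events_created_minute_idx')) AS minute_size;
```

Note that the planner only uses the expression index for queries that filter on the same `date_trunc('minute', created_at)` expression.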
