note on xid wraparound
Source Link
Erwin Brandstetter

This problem could be easily solved by assigning, in step 2, the timestamp when the update becomes visible to other transactions (the transaction commit timestamp, in other words) instead of CURRENT_TIMESTAMP or clock_timestamp().

This is logically impossible. Postgres writes new row versions before it finally commits to make them visible. It would require prophetic capabilities to write a future timestamp yet unknown at the time of writing.

However, you can get commit timestamps from a different source: since Postgres 9.5, there is a GUC setting track_commit_timestamp to start logging commit timestamps globally.

Then you can get commit timestamps with the utility function pg_xact_commit_timestamp(xid). Your query could look like:

SELECT * FROM my_table t WHERE pg_xact_commit_timestamp(t.xmin) > _some_persisted_timestamp; 
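Note that track_commit_timestamp cannot be changed per session; a minimal setup sketch (requires superuser privileges and a server restart):

```sql
-- Enable commit timestamp tracking cluster-wide.
ALTER SYSTEM SET track_commit_timestamp = on;

-- ... restart the server, then verify the setting took effect:
SHOW track_commit_timestamp;
```

Transactions committed before the setting was enabled have no recorded timestamp; pg_xact_commit_timestamp() returns NULL for those.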

Be aware that commit timestamps are not kept around forever. After two billion transactions (2^31), transaction IDs are "frozen". Freezing does not delete the commit timestamp right away, but after four billion transactions the information is certainly gone. That's a big number of transactions, and only very busy databases burn that much over a lifetime. But there can be programming errors burning through transaction numbers more quickly than expected ...
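To see how far along the transaction ID space each database is, you can check the age of its frozen xid horizon (autovacuum freezes rows well before the 2^31 limit):

```sql
-- age() counts transactions consumed since each database's
-- datfrozenxid; large values mean wraparound / freezing is nearer.
SELECT datname, age(datfrozenxid) AS xid_age
FROM   pg_database
ORDER  BY xid_age DESC;
```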

Your steps 2 and 3 trade positions: you record the commit timestamp instead of CURRENT_TIMESTAMP - or you record xmin from any freshly updated row and derive the commit timestamp with pg_xact_commit_timestamp() once more.
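A rough sketch of the xmin variant (table and column names are made up for illustration):

```sql
-- 1. Update rows and capture the xmin of the new row versions.
--    All rows updated in the same transaction share this xmin.
UPDATE my_table
SET    payload = 'new value'
WHERE  id = 1
RETURNING xmin;

-- 2. After COMMIT, derive the commit timestamp from that xid
--    and persist it as the new watermark for the next poll.
--    (Replace '12345' with the xid returned above.)
SELECT pg_xact_commit_timestamp('12345'::xid);
```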

But I am not completely sure I understand your task. Maybe you need a queuing tool, or to process rows one by one as discussed in related answers.
