I'm performing schema changes on a large database, correcting ancient design mistakes (expanding primary keys and their corresponding foreign keys from INTEGER to BIGINT). The basic process is:
- Shut down our application.
- Drop DB triggers and constraints.
- Perform the changes (`ALTER TABLE foo ALTER COLUMN bar TYPE BIGINT` for each table and primary/foreign key).
- Recreate the triggers and constraints (`NOT VALID`).
- Restart the application.
- Validate the constraints (`ALTER TABLE foo VALIDATE CONSTRAINT bar` for each constraint).
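Concretely, for a single foreign key the sequence looks roughly like this (a sketch only; `parent`, `id`, and the constraint name are placeholders, and the trigger handling is omitted):

```sql
-- Drop the existing foreign key so the column types can be changed freely.
ALTER TABLE foo DROP CONSTRAINT foo_bar_fkey;

-- Widen the referenced and referencing columns (each rewrites its table).
ALTER TABLE parent ALTER COLUMN id  TYPE BIGINT;
ALTER TABLE foo    ALTER COLUMN bar TYPE BIGINT;

-- Recreate the foreign key as NOT VALID: existing rows are not checked,
-- only rows inserted or updated from now on.
ALTER TABLE foo
    ADD CONSTRAINT foo_bar_fkey
    FOREIGN KEY (bar) REFERENCES parent (id) NOT VALID;

-- Later, once the application is back up, validate the existing rows.
-- This takes only a SHARE UPDATE EXCLUSIVE lock but scans the whole table.
ALTER TABLE foo VALIDATE CONSTRAINT foo_bar_fkey;
```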
Note:
- Our Postgres DB (version 11.7) and our application are hosted on Heroku.
- Some of our tables are quite large (millions of rows, the largest being ~1.2B rows).
The problem is in the final validation step. When conditions are just "right", a single `ALTER TABLE foo VALIDATE CONSTRAINT bar` can create database writes at a pace that exceeds the WAL's write capacity. This leads to varying degrees of unhappiness, up to and including crashing the DB server. (My understanding is that Heroku uses a bespoke WAL plug-in to implement their "continuous backups" and "DB follower" features. I've contacted Heroku support about this; their response was less than helpful, even though we're on an enterprise-level support contract.)
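For reference, the constraints still pending validation can be listed straight from the system catalog (a plain `pg_constraint` query, nothing Heroku-specific):

```sql
-- Constraints created or recreated as NOT VALID and not yet validated.
SELECT conrelid::regclass AS table_name,
       conname            AS constraint_name,
       contype            AS constraint_type  -- 'f' = foreign key, 'c' = check
FROM   pg_constraint
WHERE  NOT convalidated;
```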
My question: Is there any downside to leaving these constraints in the `NOT VALID` state?
Related: Does anyone know why validating a constraint generates so much write activity?