I'm working on a migration from a very old server (Ubuntu 10.04/Postgres 8.x, etc) to a new instance (18.04) with up-to-date versions of everything in our web stack.
Our PostgreSQL database uses a few extensions—notably PostGIS and fuzzystrmatch—as well as several custom functions. Probably pretty typical.
I expect a pg_dump like this..

pg_dump -U username -h 127.0.0.1 dbname > dbname.backup.sql

..will include everything: extensions, custom functions, and data. But I don't want the old extensions.
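One way to see exactly what that dump would recreate, before restoring anything, is to grep it for top-level CREATE statements. A minimal sketch follows; the heredoc fabricates a tiny two-line stand-in for dbname.backup.sql purely so the commands run as-is (the function names are made up for illustration), and on the real dump you would run only the grep:

```shell
#!/bin/sh
# Sketch: inspect a plain-format dump before restoring it.
# The heredoc below is a fabricated stand-in for dbname.backup.sql,
# for illustration only; with a real dump, run just the grep.
cat > dbname.backup.sql <<'EOF'
CREATE FUNCTION st_distance(geometry, geometry) RETURNS double precision
CREATE FUNCTION myorg_cleanup() RETURNS void
EOF

# List every object-creating statement the restore would execute.
grep -E '^CREATE (EXTENSION|FUNCTION|OR REPLACE FUNCTION)' dbname.backup.sql
```

On a dump taken from a pre-9.1 server this listing is where you would expect to see extension machinery appearing as plain CREATE FUNCTION statements rather than CREATE EXTENSION.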
If I initially create a same-named but empty db, like this..

createdb -U postgres -T template0 --owner=dbowner dbname

..then manually add the extensions like this..
psql -p 5432 dbname
CREATE EXTENSION postgis;
CREATE EXTENSION postgis_topology;
CREATE EXTENSION ogr_fdw;
CREATE EXTENSION pgrouting;
CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION postgis_tiger_geocoder;
\q

..so that everything is recent and up-to-date..
Will a db restore from the .sql file created by a vanilla pg_dump, like the one above, actually overwrite (à la CREATE OR REPLACE) any same-named functions in the new extensions?
I'm starting to assume so, and I'm re-thinking this.
tl;dr
Is the best way to do this to run pg_dump with the --data-only option for our backup, build our new, same-named schema manually, add our custom functions individually, and then import the data-only .sql file?
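If it helps clarify the question, here is the split-dump variant I have in mind: a schema-only dump to review (and prune) by hand, plus a data-only dump to load once the new schema and extensions exist. The DRY_RUN guard is my own addition (not a pg_dump feature) so the sketch can be exercised without a live server:

```shell
#!/bin/sh
# Sketch of the split-dump plan: schema to review by hand, data to reload.
# DRY_RUN defaults to 1 (just print the commands); set DRY_RUN=0 against a
# real server to actually execute them.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run pg_dump -U username -h 127.0.0.1 --schema-only -f schema.sql dbname
run pg_dump -U username -h 127.0.0.1 --data-only   -f data.sql   dbname
```

The data-only file would be imported last, after the new schema and the CREATE EXTENSION steps above.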
FWIW, I'm hoping to avoid that ^ approach, because I don't want to overlook any of our custom functions by moving them manually; while many of them have a prefix signalling our organization, unfortunately some don't. Furthermore, I'm concerned that if I do a whole-db restore and then try to upgrade the extensions, the upgrade will be painful because the source db/extension versions are so out-of-date to begin with.
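To lower the risk of overlooking unprefixed custom functions, one low-tech check is to list every CREATE FUNCTION in a schema-only dump and filter out names that look like extension members. A minimal sketch, with some strong assumptions: the heredoc fabricates a stand-in schema.sql (the function names are invented for illustration; on a real migration you'd point the greps at your actual schema-only dump), and the exclusion list of PostGIS/fuzzystrmatch-looking prefixes is a guess you would need to extend:

```shell
#!/bin/sh
# Sketch: surface candidate custom functions in an old schema-only dump,
# where pre-extension "extensions" appear as plain CREATE FUNCTION lines.
# The heredoc is a fabricated stand-in for a real dump (illustration only).
cat > schema.sql <<'EOF'
CREATE FUNCTION st_distance(geom1 geometry, geom2 geometry) RETURNS float8
CREATE FUNCTION postgis_full_version() RETURNS text
CREATE FUNCTION levenshtein(text, text) RETURNS integer
CREATE FUNCTION calc_invoice_total(integer) RETURNS numeric
EOF

# Keep CREATE FUNCTION lines, then drop names that look like extension
# members (prefix list is a starting point; extend it for your extensions).
grep '^CREATE FUNCTION' schema.sql \
  | grep -Ev 'CREATE FUNCTION (st_|postgis_|geometry|levenshtein|soundex|metaphone|difference)'
```

Whatever survives the filter is a shortlist to review by eye, which feels safer than trusting our inconsistent naming prefix alone.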