After our provider upgraded our PostgreSQL 14 database from 14.17 to 14.19 (with the TimescaleDB extension 2.5.2), our app completely broke because of terrible database performance.
Our analysis showed that the issue came from queries using the `timestamp` type. The following query takes ages to execute (5-10 minutes).
We generate them with a psycopg cursor like this:

```python
QUERY = """SELECT m.time, m.value FROM measurements m WHERE m.series_id = %(series_id)s AND m.time >= %(start)s AND m.time <= %(end)s ORDER BY m.time"""
cur.execute(QUERY, {"series_id": SERIES_ID, "start": START, "end": END})
```

This generates the following query:
```sql
SELECT m.time, m.value FROM measurements m WHERE m.time >= '2020-01-02T00:00:00+00:00'::timestamp AND m.time <= '2023-01-01T00:00:00+00:00'::timestamp AND m.series_id = %s ORDER BY m.time
```

When we analyze it, we get:

> Query read 1.7 GB from disk (or system disk cache)
Full analyze: https://explain.depesz.com/s/TGIg#stats
I have read that best practice is to use `timestamptz` (source), but I cannot find anything that could have caused this change in the release notes for 14.18 or 14.19.
If this type is not recommended, why does the cursor generate queries like that?
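For context on the cast: we pass `START` and `END` as ISO strings. As far as I understand psycopg 3's adaptation rules (this is my assumption, not something I have verified in its source), a timezone-aware Python `datetime` would be bound as `timestamptz`, while a naive one is bound as `timestamp`. A minimal sketch of the difference, assuming UTC bounds:

```python
from datetime import datetime, timezone

# Naive datetime: no tzinfo; psycopg would adapt this as `timestamp`
# (assumption based on my reading of its datetime adaptation).
naive_start = datetime(2020, 1, 2)

# Timezone-aware datetime: would be adapted as `timestamptz`,
# matching the declared type of the m.time column.
aware_start = datetime(2020, 1, 2, tzinfo=timezone.utc)

assert naive_start.tzinfo is None
assert aware_start.tzinfo is timezone.utc
```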
To fix this, we executed a query built with a Python f-string instead of passing arguments to the cursor. The following query executes instantaneously.
```python
QUERY = f"""SELECT m.time, m.value FROM measurements m WHERE m.series_id = '{SERIES_ID}' AND m.time >= '{START}' AND m.time <= '{END}' ORDER BY m.time"""
cur.execute(QUERY)
```

This generates the following query:
```sql
SELECT m.time, m.value FROM measurements m WHERE m.time >= '2020-01-02T00:00:00+00:00' AND m.time <= '2023-01-01T00:00:00+00:00' AND m.series_id = %s ORDER BY m.time
```

This query works just fine. Do you know what caused this huge performance change between the PostgreSQL minor versions?
When we analyze it, we get:

> Query read 4.6 MB from disk (or system disk cache)
Full analyze: https://explain.depesz.com/s/tVBM#stats
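A middle ground we have considered (untested in production, and prompted by a suggestion in the comments below) is to keep server-side parameters, which avoid the SQL-injection risk of the f-string version, but add explicit `::timestamptz` casts so the server does not have to infer the parameter type:

```python
# Hedged sketch: same parameterized query as before, but with explicit casts
# to timestamptz so the bounds match the declared type of m.time.
QUERY_CAST = """SELECT m.time, m.value FROM measurements m WHERE m.series_id = %(series_id)s AND m.time >= %(start)s::timestamptz AND m.time <= %(end)s::timestamptz ORDER BY m.time"""

# Usage would be identical to the original:
# cur.execute(QUERY_CAST, {"series_id": SERIES_ID, "start": START, "end": END})
```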
Some information about the table
Column types
| column | data type |
|-----------|-------------|
| time | timestamptz |
| value | float8 |
| series_id | text |

Indexes
All three indexes are on the `measurements` table:

- `measurements_series_id_time`: `CREATE INDEX measurements_series_id_time ON public.measurements USING btree (series_id, "time" DESC)`
- `measurements_pk`: `CREATE UNIQUE INDEX measurements_pk ON public.measurements USING btree ("time", series_id)`
- `measurements_time_idx`: `CREATE INDEX measurements_time_idx ON public.measurements USING btree ("time" DESC)`

Some information about the execution environment:
- python : ^3.10
- psycopg: ^3.1.4

Comments:

- Please post `explain(analyze,buffers,settings,verbose)` for each variant of the query. As already pointed out, make sure the `timezone` and `datestyle` settings are the same when comparing these: the performance could be different, but more importantly, these could actually yield different results, invalidating such a test.
- … `m.time`?
- `explain(analyze,buffers,settings,verbose)` for each case, not pictures of summaries. Also, which db vendor is this, and do you know their update procedure? Was that a simple in-place `pg_upgrade`? Did you try running `vacuum`, `analyze`, `reindex` on the tables in question?
- 1) Is the `measurements` table a TimescaleDB hypertable? 2) If yes to 1): timescaledb 2.5.2 was released 2022-02-09 and the current version is 2.22.1. You might think about upgrading. 3) Where are you getting the "read from disk" numbers? 4) What happens if in the first query you do `AND m.time >= %(start)s::timestamptz AND m.time <= %(end)s::timestamptz`? 5) Add to the question text the Python driver you are using to do the queries and its version.
- We did not run `vacuum`, but we ran `analyze`. We did not run `reindex`, but we tried deleting and recreating the index.
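To gather the plans the commenters asked for, a small helper can wrap any query in the requested `EXPLAIN` form; `explain_wrap` is a hypothetical name I am introducing here, not part of psycopg:

```python
def explain_wrap(query: str) -> str:
    """Prefix a query with EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)."""
    return "EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)\n" + query

# The rows returned by cur.execute(explain_wrap(QUERY), params) can then be
# pasted as text into the question instead of screenshots of summaries.
```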