I have a table like this:

```sql
CREATE TABLE trades (
    instrument varchar(20)      NOT NULL,
    ts         timestamp        NOT NULL,
    price      double precision NOT NULL,
    quantity   double precision NOT NULL,
    direction  integer          NOT NULL,
    id         serial           CONSTRAINT trades_pkey PRIMARY KEY
);
```

I wanted to change `id` to `bigint`, so I ran:
```sql
ALTER TABLE trades ALTER COLUMN id TYPE bigint;
```

followed by:

```sql
ALTER SEQUENCE trades_id_seq AS bigint;
```
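In case it's relevant: a way to confirm that both the column and the backing sequence actually changed type (a sketch using the standard catalogs, PostgreSQL 10+) is:

```sql
-- Check the column's type after the ALTER
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'trades' AND column_name = 'id';

-- Check the sequence's data type (pg_sequence exists since PostgreSQL 10)
SELECT seqtypid::regtype
FROM pg_sequence
WHERE seqrelid = 'trades_id_seq'::regclass;
```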
Now almost any large query that uses `id` in the WHERE clause is so slow it times out.

The database is PostgreSQL on AWS RDS.
Could it be a problem with the index itself?
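One way to check whether the primary-key index survived the type change and is still usable (a sketch against the system catalogs, assuming the table in the query below, `binance_trades`) is:

```sql
-- List every index on the table and whether it is valid/ready for use;
-- indisvalid = false would mean the planner ignores that index
SELECT i.relname AS index_name,
       ix.indisvalid,
       ix.indisready
FROM pg_index ix
JOIN pg_class i ON i.oid = ix.indexrelid
WHERE ix.indrelid = 'binance_trades'::regclass;
```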
Here is the query:
```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, instrument, ts, price, quantity, direction
FROM binance_trades
WHERE id >= 119655532
ORDER BY ts
LIMIT 50;
```

and output:
50 rows retrieved starting from 1 in 1 m 4 s 605 ms (execution: 1 m 4 s 353 ms, fetching: 252 ms)
```
Limit  (cost=0.57..9.86 rows=50 width=44) (actual time=86743.860..86743.878 rows=50 loops=1)
  Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974
  I/O Timings: read=40693.524 write=335.051
  ->  Index Scan using idx_extrades_ts on binance_trades  (cost=0.57..8015921.79 rows=43144801 width=44) (actual time=86743.858..86743.871 rows=50 loops=1)
        Filter: (id >= 119655532)
        Rows Removed by Filter: 119654350
        Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974
        I/O Timings: read=40693.524 write=335.051
Planning Time: 0.088 ms
Execution Time: 86743.902 ms
```

The activity on AWS (screenshot not reproduced here): it's a 2-core, 8 GB ARM instance. Before the ALTER, the same query ran in under a second; now small queries are slow and long ones time out.
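The plan shows the planner walking `idx_extrades_ts` (the `ts` index, to satisfy `ORDER BY ts LIMIT 50`) and discarding ~119M rows with the `id` filter instead of using the primary key. Since `ALTER TABLE ... TYPE` rewrites the whole table, one guess (not a confirmed fix) is that planner statistics are stale after the rewrite and could be refreshed:

```sql
-- Refresh planner statistics (and reclaim dead space) after the table rewrite
VACUUM (ANALYZE) binance_trades;
```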
