
I have a table like this:

create table trades (
    instrument varchar(20)      not null,
    ts         timestamp        not null,
    price      double precision not null,
    quantity   double precision not null,
    direction  integer          not null,
    id         serial constraint trades_pkey primary key
);

I wanted to change the id column to bigint, so I ran:

ALTER TABLE trades ALTER id TYPE BIGSERIAL;

Then, after that, I ran:

ALTER SEQUENCE trades_id_seq AS BIGINT;
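
(Strictly speaking, bigserial is not a real data type; serial and bigserial are only CREATE TABLE shorthand for an integer column backed by a sequence. A conversion like this is usually written against plain bigint, roughly as follows:)

-- serial/bigserial are pseudo-types that only exist at CREATE TABLE time,
-- so the column itself is converted to plain bigint:
ALTER TABLE trades ALTER COLUMN id TYPE bigint;

-- and the backing sequence is widened to match (PostgreSQL 10+):
ALTER SEQUENCE trades_id_seq AS bigint;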

Now, pretty much any large query that uses id in the WHERE clause is so slow it times out.

The database is AWS RDS Postgres.

Could it be a problem with the index itself?


Here is the query:

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, instrument, ts, price, quantity, direction
FROM binance_trades
WHERE id >= 119655532
ORDER BY ts
LIMIT 50;

and the output:

50 rows retrieved starting from 1 in 1 m 4 s 605 ms (execution: 1 m 4 s 353 ms, fetching: 252 ms)

Limit  (cost=0.57..9.86 rows=50 width=44) (actual time=86743.860..86743.878 rows=50 loops=1)
  Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974
  I/O Timings: read=40693.524 write=335.051
  ->  Index Scan using idx_extrades_ts on binance_trades  (cost=0.57..8015921.79 rows=43144801 width=44) (actual time=86743.858..86743.871 rows=50 loops=1)
        Filter: (id >= 119655532)
        Rows Removed by Filter: 119654350
        Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974
        I/O Timings: read=40693.524 write=335.051
Planning Time: 0.088 ms
Execution Time: 86743.902 ms

The activity on AWS:

[screenshot of the AWS RDS activity/monitoring graph]

It's a 2-core, 8 GB ARM server. Before I ran the ALTER, the same request took < 1 sec; now small requests are slow and long ones time out.

  • I've added details. I don't have the fast case anymore, since I've already altered the DB I'm using (I can afford to lose the data in it since it's for debugging, but I'd prefer not to, as it's a week's worth of collected data). Commented Aug 25, 2021 at 16:04
  • Ok, but you didn't add the output from EXPLAIN (ANALYZE, BUFFERS). Commented Aug 25, 2021 at 16:11
  • @LaurenzAlbe fixed Commented Aug 25, 2021 at 16:18

1 Answer


You should try to ANALYZE the table and see if that works.
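
(A minimal sketch, assuming the binance_trades table name from the EXPLAIN output; VACUUM ANALYZE is an option too, since an int to bigint conversion rewrites the whole table:)

-- refresh the planner statistics that the ALTER may have invalidated
ANALYZE binance_trades;

-- or, since ALTER TABLE ... TYPE rewrote the entire table,
-- VACUUM ANALYZE also rebuilds the visibility map in the same pass:
VACUUM ANALYZE binance_trades;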

The problem is that PostgreSQL uses the index on ts to support the ORDER BY, but that index does nothing for the WHERE condition, so the scan has to read and discard rows until 50 of them pass the filter (note Rows Removed by Filter: 119654350 in your plan).

To keep PostgreSQL from using that index, use

ORDER BY ts + INTERVAL '0 seconds'; 
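
(Applied to the query from the question, the rewritten statement would look roughly like this; the added interval turns the ORDER BY into an expression that a plain index on ts cannot satisfy:)

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, instrument, ts, price, quantity, direction
FROM binance_trades
WHERE id >= 119655532
ORDER BY ts + INTERVAL '0 seconds'  -- expression defeats idx_extrades_ts
LIMIT 50;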
  • This still gives me an extremely slow result (execution: 49 s 732 ms, fetching: 55 ms). What I don't understand is that the problem appeared when I moved the id to bigint. I will try right now to drop the id and re-create it as an int, just to see if it goes back to how it was before. Commented Aug 25, 2021 at 18:10
  • I just moved the id back to int, and the same query now takes 22 secs, which is still not right but much faster. What is odd is that the other tables in the same DB are performing very well. Commented Aug 25, 2021 at 18:25
  • @Thomas Did you try doing an ANALYZE on the table (or VACUUM ANALYZE)? The ALTER will clear the stats for that column, and missing stats lead to bad plans (a quick way to check is sketched after these comments). Commented Aug 25, 2021 at 18:52
  • Let me do that right now. Commented Aug 25, 2021 at 19:00
  • ANALYZE didn't help; however, the universal solution worked: a reboot! That's very odd since it's managed by AWS. No matter what I did, this table was insanely slow; I wish I had been able to understand what happened. At the same time, I saw that AWS had an issue with disk space when I grew the index to bigint; it was supposed to autoscale but somehow didn't, so I assume that was related. Commented Aug 25, 2021 at 19:13
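
(As mentioned in the comments above, the ALTER clears the column statistics. A minimal way to check whether statistics exist for the column, again assuming the binance_trades table name from the EXPLAIN output:)

-- pg_stats exposes the planner statistics gathered by ANALYZE;
-- no row for the column means its stats are currently missing
SELECT tablename, attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'binance_trades'
  AND attname = 'id';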
