### Your clever solution

You already found a clever solution for your particular case: a partial unique index that only covers rare values, so Postgres won't (can't) use the index for the common NULL value.
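For reference, a definition along these lines (the index name is just an example; only the table and column names are taken from your case):

```sql
-- Partial unique index covering only the rare non-null tokens
CREATE UNIQUE INDEX booking_substitute_confirmation_uni
ON booking (substitute_confirmation_token)
WHERE substitute_confirmation_token IS NOT NULL;
```

Note that the WHERE clause is not needed for uniqueness itself: a plain unique index would also allow any number of NULL rows, because NULL values are not considered equal. Its job here is to keep the index tiny and to keep it out of query plans for the common NULL value.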
It's a textbook use-case for a partial index. Literally! The manual has a similar example and this perfectly matching advice to go with it:
> Finally, a partial index can also be used to override the system's query plan choices. Also, data sets with peculiar distributions might cause the system to use an index when it really should not. In that case the index can be set up so that it is not available for the offending query. Normally, PostgreSQL makes reasonable choices about index usage (e.g., it avoids them when retrieving common values, so the earlier example really only saves index size, it is not required to avoid index usage), and grossly incorrect plan choices are cause for a bug report.
>
> Keep in mind that setting up a partial index indicates that you know at least as much as the query planner knows, in particular you know when an index might be profitable. Forming this knowledge requires experience and understanding of how indexes in PostgreSQL work. In most cases, the advantage of a partial index over a regular index will be minimal. There are cases where they are quite counterproductive [...]
You commented:

> The table has a few million rows and just a few thousand rows with not null values.
So it's a perfect use-case. It will even speed up queries on non-null values for substitute_confirmation_token because the index is much smaller now.
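If you want to double-check, you can compare plans with EXPLAIN; the token literal below is just a placeholder:

```sql
-- A lookup for a rare, non-null token can use the small partial index,
-- while the common "IS NULL" case is planned without it.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   booking
WHERE  substitute_confirmation_token = 'some-token';
```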
## Answer to question

To answer your original question: it's not possible to "disable" an existing index for a particular query. You would have to drop it, but that's way too expensive.
### Fake drop index

You could drop the index inside a transaction, run your SELECT and then, instead of committing, use ROLLBACK. That's fast, but be aware that DROP INDEX takes an exclusive lock on the table, blocking all other access until the end of the transaction (see the manual). So this is no good for regular use in multi-user environments.
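For completeness, a minimal sketch of the pattern, assuming the example index name from above and an arbitrary test query:

```sql
BEGIN;

DROP INDEX booking_substitute_confirmation_uni;  -- placeholder index name

-- Run the query you want to test without the index:
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM   booking
WHERE  substitute_confirmation_token IS NULL;

ROLLBACK;  -- undoes the DROP INDEX, nothing is committed
```

The exclusive lock taken by DROP INDEX is held until the ROLLBACK, so keep such a transaction short.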
### More detailed statistics
Normally, though, it should be enough to raise the STATISTICS target for the column, so Postgres can more reliably identify common values and avoid the index for those. Try:
    ALTER TABLE booking ALTER COLUMN substitute_confirmation_token SET STATISTICS 1000;

Then:

    ANALYZE booking;

before you try your query again. 1000 is an example value.
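If you want to verify what the planner now knows about the column, you can inspect the standard pg_stats view after ANALYZE; just a diagnostic sketch:

```sql
-- null_frac close to 1 tells the planner that NULL is extremely common
SELECT null_frac, n_distinct, most_common_vals
FROM   pg_stats
WHERE  schemaname = 'public'   -- adjust if the table lives in another schema
AND    tablename  = 'booking'
AND    attname    = 'substitute_confirmation_token';
```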