
Your clever solution

You already found a clever solution for your particular case: a partial unique index that only covers rare values, so Postgres won't (can't) use the index for the common NULL value:
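CREATE UNIQUE INDEX booking_substitute_confirmation_uni
ON booking (substitute_confirmation_token)
WHERE substitute_confirmation_token IS NOT NULL;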

It's a textbook use-case for a partial index. Literally! The manual has a similar example and this perfectly matching advice to go with it:

Finally, a partial index can also be used to override the system's query plan choices. Also, data sets with peculiar distributions might cause the system to use an index when it really should not. In that case the index can be set up so that it is not available for the offending query. Normally, PostgreSQL makes reasonable choices about index usage (e.g., it avoids them when retrieving common values, so the earlier example really only saves index size, it is not required to avoid index usage), and grossly incorrect plan choices are cause for a bug report.

Keep in mind that setting up a partial index indicates that you know at least as much as the query planner knows, in particular you know when an index might be profitable. Forming this knowledge requires experience and understanding of how indexes in PostgreSQL work. In most cases, the advantage of a partial index over a regular index will be minimal. There are cases where they are quite counterproductive [...]

You commented:

The table has few millions of rows and just few thousands of rows with not null values.

So it's a perfect use-case. It will even speed up queries on non-null values for substitute_confirmation_token because the index is much smaller now.
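To verify that the planner behaves as intended, you can compare plans for both access patterns with EXPLAIN. A minimal sketch, assuming the table and column from your question (the token value is just a placeholder):

-- should use the small partial index: equality implies NOT NULL
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   booking
WHERE  substitute_confirmation_token = 'some-token';

-- cannot use the partial index: the predicate excludes NULL; expect a sequential scan (or another plan)
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   booking
WHERE  substitute_confirmation_token IS NULL;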

Answer to question

To answer your original question: it's not possible to "disable" an existing index for a particular query. You would have to drop it, but that's way too expensive.

Fake drop index

You could drop an index inside a transaction, run your SELECT and then, instead of committing, use ROLLBACK. That's fast, but be aware that (quoting the manual):
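A normal DROP INDEX acquires exclusive lock on the table, blocking other accesses until the index drop can be completed.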

So this is no good for regular use in multi-user environments.
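BEGIN;
DROP INDEX big_user_id_created_at_idx;
SELECT ...;
ROLLBACK;  -- so the index is preserved after all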


More detailed statistics

Normally, though, it should be enough to raise the STATISTICS target for the column, so Postgres can more reliably identify common values and avoid the index for those. Try:

ALTER TABLE booking ALTER COLUMN substitute_confirmation_token SET STATISTICS 1000; 

Then run ANALYZE booking; before you try your query again. 1000 is just an example value.
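If you want to see whether the raised target actually captures the skew (mostly NULL, few distinct token values), a quick sanity check against the pg_stats view might look like this; the public schema is an assumption, adjust as needed:

SELECT null_frac, n_distinct, most_common_vals
FROM   pg_stats
WHERE  schemaname = 'public'  -- assumption: adjust if booking lives in another schema
AND    tablename  = 'booking'
AND    attname    = 'substitute_confirmation_token';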
