I have a Postgres table that has numerous columns which frequently show up in the where clause of select queries. The table has been indexed accordingly, with indexes on all of these columns (mostly single-column indexes, but with some composite indexes thrown in). However, there is one new kind of query that this indexing isn’t fully supporting: queries with a deleted_at is null
condition in the where clause (we soft-delete records using this column). Some queries with this are running very slowly despite all of the other columns they use being indexed. Naturally, I want to find a way to improve these queries with a change to our indexing.
An example would be:
select count(distinct user_id)
from my_table
where group_id = '123' and deleted_at is null
In this example, both user_id and group_id are indexed. Without the and deleted_at is null condition, the query runs quickly. With it, slowly.
I have four competing solutions. I plan on testing them, but I really want to know if any old hands are able to look at this situation and have a simple explanation for why one should be expected to perform better than the others. I'm just getting the hang of thinking about indexing after being spoiled by Snowflake for years, so I'm really looking for how one would reason about this.
My solutions:
1. An index on the expression deleted_at is null (docs). Basically: CREATE INDEX deleted_at_is_null ON my_table ((deleted_at is null)); This is the simplest solution. It's just one more index, and one with a clear purpose. I'm not sure, though, if it should actually be expected to help in queries where we have other indexed columns in the where clause! Can Postgres use them separately or do they need to be composite?
2. Replace each of the current indexes (like the ones on user_id and group_id above) with composite indexes on that column plus deleted_at is null.
3. Same as 2, but instead of replacing the indexes, add the composite indexes in addition to the currently-existing indexes. This feels wrong and redundant, but I am not sure.
4. Add a new partial index for each of the currently-existing indexes with a where deleted_at is not null condition. Like number 3, this feels like too many indexes.
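For concreteness, options 2–4 would look roughly like this for group_id (index names made up, column order just one possibility):

-- Options 2/3: composite index on the column plus the deleted_at is null expression
CREATE INDEX group_id_deleted_at_is_null ON my_table (group_id, (deleted_at is null));
-- Option 4: partial index with a where deleted_at is not null predicate
CREATE INDEX group_id_not_deleted ON my_table (group_id) WHERE deleted_at is not null;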
I'm assuming that an index on deleted_at itself is overkill since I never need to query for specific ranges/values of it – only for whether it is null. Please correct me if I am wrong, though!

One other thing to note is that the vast majority of the records have null deleted_at.
Any help would be much appreciated! Just looking for some intuition and best practices around this problem.
2 Answers
PostgreSQL will generally only use one index per table. If you have single-column indexes, it must choose only one. In your example, the query planner has to choose whether using the user_id, group_id or deleted_at index will be most performant. Which it chooses depends on the shape of your data, and whether your table statistics are up to date (run analyze my_table to make sure).

For example, if half the rows are deleted, using an index on deleted_at would only reduce the number of rows to search by half. But if only a small fraction are in group 123, it will choose to use the index on group_id and then scan those for deleted_at is null and distinct user_id.

You can be more efficient about creating indexes for multiple columns by taking advantage of how composite indexes work. An index on (a, b, c) can cover queries which include a, queries which include a and b, and queries which include a, b, and c. It cannot cover queries which include only b or c. For example, to cover every combination of deleted_at, group_id, and user_id you'd need three indexes.
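To make that concrete, one possible set of three (column order is just one choice, index names are made up) would be:

-- Every one-, two-, and three-column combination of these columns
-- matches the leading column(s) of at least one of these indexes:
CREATE INDEX my_table_group_user_deleted ON my_table (group_id, user_id, deleted_at);
CREATE INDEX my_table_user_deleted ON my_table (user_id, deleted_at);
CREATE INDEX my_table_deleted_group ON my_table (deleted_at, group_id);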
But you said you had a lot of columns, so the number of indexes can expand rapidly. And since where deleted_at is null is likely to be used in most queries, it makes more sense to partition your table by deleted_at is null. This will create two tables which appear to be one: one table has deleted rows, the other has active rows. If you include where deleted_at is null in a query, PostgreSQL will simply query the appropriate partition, leaving it to choose indexes for other columns. It also makes it more efficient to remove "deleted" rows without blocking other queries.

You can't partition an existing table, so you have to make a new table, partition it, and copy your data over.

The downside is that if you have a primary key, the partition key has to be part of it. And nulls aren't allowed in primary keys. So you'd have to change your strategy to use a special date like 9999-01-01. For convenience and safety, create a view which only selects non-deleted rows.
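A minimal sketch of that partitioning setup, assuming for simplicity that there is no primary key to worry about (table name, partition names, and column types are guesses):

-- New table partitioned on whether the row is soft-deleted.
CREATE TABLE my_table_new (
    user_id    bigint,
    group_id   text,
    deleted_at timestamptz
    -- ... remaining columns ...
) PARTITION BY LIST ((deleted_at IS NULL));

CREATE TABLE my_table_active  PARTITION OF my_table_new FOR VALUES IN (true);
CREATE TABLE my_table_deleted PARTITION OF my_table_new FOR VALUES IN (false);

-- Indexes created on the parent are created on each partition automatically.
CREATE INDEX ON my_table_new (group_id);

After copying the data over, a query with where deleted_at is null only ever touches my_table_active.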
Demonstration
Not really an answer, just an addition to your list:

5. Partial indexes, but with the matching predicate (deleted_at is null). Otherwise, mismatching the predicate disqualifies the partial index entirely.
6. Don't use (deleted_at is null) as either a key column or a partial index predicate, but rather strap deleted_at on as payload using include.
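For the example query, 5. could be as simple as (index name made up):

-- Same key column as before, but only indexing the rows the query actually wants:
CREATE INDEX group_id_active ON my_table (group_id) WHERE deleted_at IS NULL;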
The former is a missing combination of those you already established; the latter could work sort of against what the documentation says about non-key column inclusion: the payload column is not used in the qualification, but it is used in the scan, speeding things up by saving a whole subsequent heap scan.

If you just add deleted_at as payload, Postgres still prefers a plain index on group_id, then a re-check on the heap, because it needs to consult both deleted_at as well as the user_id it's looking for. If you add both as payload:
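Presumably something along these lines (the exact definition is in the linked demo; the index name here is made up):

-- group_id is the key column; deleted_at and user_id ride along as non-key payload.
CREATE INDEX group_id_covering ON my_table (group_id) INCLUDE (deleted_at, user_id);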
Everything is in the index. Now Postgres sees deleted_at is in the index it's already using, so both the output and the filter can re-use that: demo at db<>fiddle
That's on 100k random group_id's and user_id's spread over 300k rows with 20% deleted_at IS NULL.

include doesn't support expressions, so it might actually get larger than a version with an expression on second position. For not-null deleted_at, the whole timestamp gets pulled in there.
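For comparison, the expression-on-second-position variant referred to there might look like this (name made up); it stores a single boolean per row instead of the full timestamp:

CREATE INDEX group_id_deleted_flag ON my_table (group_id, (deleted_at IS NULL)) INCLUDE (user_id);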