
Let’s say I have the following PostgreSQL database schema:

Group:
  id: int

Task:
  id: int
  created_at: datetime
  group: FK Group

I have the following Materialized View to calculate the number of tasks and the most recent Task.created_at value per group:

CREATE MATERIALIZED VIEW group_statistics AS (
SELECT
    group.id as group_id,
    MAX(task.created_at) AS latest_task_created_at,
    COUNT(task.id) AS task_count
FROM group
    LEFT OUTER JOIN task ON (group.id = task.group_id)
GROUP BY group.id
);

The Task table currently has 20 million records, so refreshing this materialized view takes a long time (20-30 seconds). We’ve also been experiencing some short but major DB performance issues ever since we started refreshing the materialized view every 10 minutes, even with CONCURRENTLY:

REFRESH MATERIALIZED VIEW CONCURRENTLY group_statistics;

Is there a more performant way to calculate these values? Note, they do NOT need to be exact. Approximate values are totally fine, e.g. latest_task_created_at can be 10-20 min delayed.

I’m thinking of caching these values on every write to the Task table. Either in Redis or in PostgreSQL itself.

Update

People are requesting the execution plan. EXPLAIN doesn’t work on REFRESH, but I ran EXPLAIN on the underlying query. Note that it’s different from my theoretical data model above: in this case, Database is Group and Record is Task. Also note, I’m on PostgreSQL 12.10.

EXPLAIN (analyze, buffers, verbose)
SELECT
    store_database.id as database_id,
    MAX(store_record.updated_at) AS latest_record_updated_at,
    COUNT(store_record.id) AS record_count
FROM store_database
    LEFT JOIN store_record ON (store_database.id = store_record.database_id)
GROUP BY store_database.id;

Output:

HashAggregate  (cost=1903868.71..1903869.22 rows=169 width=32) (actual time=18227.016..18227.042 rows=169 loops=1)
"  Output: store_database.id, max(store_record.updated_at), count(store_record.id)"
  Group Key: store_database.id
  Buffers: shared hit=609211 read=1190704
  I/O Timings: read=3385.027
  ->  Hash Right Join  (cost=41.28..1872948.10 rows=20613744 width=40) (actual time=169.766..14572.558 rows=20928339 loops=1)
"        Output: store_database.id, store_record.updated_at, store_record.id"
        Inner Unique: true
        Hash Cond: (store_record.database_id = store_database.id)
        Buffers: shared hit=609211 read=1190704
        I/O Timings: read=3385.027
        ->  Seq Scan on public.store_record  (cost=0.00..1861691.23 rows=20613744 width=40) (actual time=0.007..8607.425 rows=20928316 loops=1)
"              Output: store_record.id, store_record.key, store_record.data, store_record.created_at, store_record.updated_at, store_record.database_id, store_record.organization_id, store_record.user_id"
              Buffers: shared hit=609146 read=1190704
              I/O Timings: read=3385.027
        ->  Hash  (cost=40.69..40.69 rows=169 width=16) (actual time=169.748..169.748 rows=169 loops=1)
              Output: store_database.id
              Buckets: 1024  Batches: 1  Memory Usage: 16kB
              Buffers: shared hit=65
              ->  Index Only Scan using store_database_pkey on public.store_database  (cost=0.05..40.69 rows=169 width=16) (actual time=0.012..0.124 rows=169 loops=1)
                    Output: store_database.id
                    Heap Fetches: 78
                    Buffers: shared hit=65
Planning Time: 0.418 ms
JIT:
  Functions: 14
"  Options: Inlining true, Optimization true, Expressions true, Deforming true"
"  Timing: Generation 2.465 ms, Inlining 15.728 ms, Optimization 92.852 ms, Emission 60.694 ms, Total 171.738 ms"
Execution Time: 18229.600 ms

Note the large execution time. It sometimes takes 5-10 minutes to run. I would love to bring this down to a consistent few seconds at most.

Update #2

People are requesting the execution plan when the query takes minutes. Here it is:

HashAggregate  (cost=1905790.10..1905790.61 rows=169 width=32) (actual time=128442.799..128442.825 rows=169 loops=1)
"  Output: store_database.id, max(store_record.updated_at), count(store_record.id)"
  Group Key: store_database.id
  Buffers: shared hit=114011 read=1685876 dirtied=367
  I/O Timings: read=112953.619
  ->  Hash Right Join  (cost=15.32..1874290.39 rows=20999810 width=40) (actual time=323.497..124809.521 rows=21448762 loops=1)
"        Output: store_database.id, store_record.updated_at, store_record.id"
        Inner Unique: true
        Hash Cond: (store_record.database_id = store_database.id)
        Buffers: shared hit=114011 read=1685876 dirtied=367
        I/O Timings: read=112953.619
        ->  Seq Scan on public.store_record  (cost=0.00..1862849.43 rows=20999810 width=40) (actual time=0.649..119522.406 rows=21448739 loops=1)
"              Output: store_record.id, store_record.key, store_record.data, store_record.created_at, store_record.updated_at, store_record.database_id, store_record.organization_id, store_record.user_id"
              Buffers: shared hit=113974 read=1685876 dirtied=367
              I/O Timings: read=112953.619
        ->  Hash  (cost=14.73..14.73 rows=169 width=16) (actual time=322.823..322.824 rows=169 loops=1)
              Output: store_database.id
              Buckets: 1024  Batches: 1  Memory Usage: 16kB
              Buffers: shared hit=37
              ->  Index Only Scan using store_database_pkey on public.store_database  (cost=0.05..14.73 rows=169 width=16) (actual time=0.032..0.220 rows=169 loops=1)
                    Output: store_database.id
                    Heap Fetches: 41
                    Buffers: shared hit=37
Planning Time: 5.390 ms
JIT:
  Functions: 14
"  Options: Inlining true, Optimization true, Expressions true, Deforming true"
"  Timing: Generation 1.306 ms, Inlining 82.966 ms, Optimization 176.787 ms, Emission 62.561 ms, Total 323.620 ms"
Execution Time: 128474.490 ms

5 Answers


  1. Your MV currently has 169 rows, so write costs are negligible (unless you have locking issues). It’s all about the expensive sequential scan over the big table.

    Full counts are slow

    Getting exact counts per group ("database") is expensive. There is no magic bullet for that in Postgres. Postgres has to count all rows. If the table is all-visible (visibility map is up to date), Postgres can shorten the procedure somewhat by only traversing a covering index. (You did not provide indexes …)

    There are possible shortcuts with an estimate for the total row count in the whole table, but the same is not easily available per group.
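
    For reference, the classic whole-table shortcut simply reads the planner’s row estimate from pg_class. A minimal sketch (it only yields a total for the table, not a per-group count):

    SELECT reltuples::bigint AS estimate
    FROM   pg_class
    WHERE  oid = 'public.store_record'::regclass;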

    But not that slow

    That said, your query can still be substantially faster. Aggregate before the join:

    SELECT id AS database_id
         , r.latest_record_updated_at
         , COALESCE(r.record_count, 0) AS record_count
    FROM   store_database d
    LEFT   JOIN (
       SELECT r.database_id AS id
            , max(r.updated_at) AS latest_record_updated_at
            , count(*) AS record_count
       FROM   store_record r
       GROUP  BY 1
       ) r USING (id);
    


    And use the slightly faster (and equivalent in this case) count(*).

    Also, provided the visibility map is up to date, count(*) can use any non-partial index, preferably the smallest one, while count(store_record.id) is limited to an index on that column (and has to inspect the values, too).

    I/O is your bottleneck

    You added the EXPLAIN plan for an expensive execution, and the skyrocketing I/O cost stands out. It dominates the cost of your query.
    Fast plan:

    Buffers: shared hit=609146 read=1190704
    I/O Timings: read=3385.027
    

    Slow plan:

    Buffers: shared hit=113974 read=1685876 dirtied=367
    I/O Timings: read=112953.619
    

    Your Seq Scan on public.store_record spent 112953.619 ms reading data file blocks. The 367 dirtied buffers amount to under 3 MB and are only a tiny fraction of the total I/O. Either way, I/O dominates the cost.
    Either your storage is abysmally slow or, more likely (since the fast run spent roughly 30x less time on reads), there is too much contention for I/O from the concurrent workload on an inappropriately configured system. One or more of these can help:

    • faster storage
    • better (more appropriate) server configuration (see the sketch after this list)
    • more RAM (and server config that allows more cache memory)
    • less concurrent workload
    • more efficient table design with smaller disk footprint
    • smarter query that needs to read fewer data blocks
    • upgrade to a current version of Postgres
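
    For the configuration-related points, here is a hedged sketch of the kind of settings involved. The values are illustrative assumptions only, not recommendations; tune them to your hardware and workload:

    -- Illustrative values only; adjust to available RAM and storage type.
    ALTER SYSTEM SET shared_buffers = '8GB';          -- takes effect after a restart
    ALTER SYSTEM SET effective_cache_size = '24GB';   -- planner hint: expected OS cache size
    ALTER SYSTEM SET random_page_cost = 1.1;          -- typical for SSD storage
    ALTER SYSTEM SET effective_io_concurrency = 200;  -- typical for SSD storage
    SELECT pg_reload_conf();                          -- reloads settings that don't need a restart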

    Hugely faster without count

    If there were no count and you only needed latest_record_updated_at, this query would deliver it in close to no time:

    SELECT d.id
        , (SELECT r.updated_at
           FROM   store_record r
           WHERE  r.database_id = d.id
           ORDER  BY r.updated_at DESC NULLS LAST
           LIMIT  1) AS latest_record_updated_at
    FROM   store_database d;
    

    In combination with a matching index! Ideally:

    CREATE INDEX store_record_database_id_idx ON store_record (database_id, updated_at DESC NULLS LAST);
    


    The same index can also help the complete query above, even if not as dramatically. If the table is vacuumed enough (visibility map up to date), Postgres can scan the much smaller index without touching the big table (an index-only scan). This obviously matters more for wide table rows, and it especially eases your I/O problem.
    (Of course, index maintenance adds costs, too …)

    Upgrade to use parallelism

    Upgrade to the latest version of Postgres if at all possible. Postgres 14 and 15 have received various performance improvements over Postgres 12. Most importantly, quoting the release notes for Postgres 14:

    Allow REFRESH MATERIALIZED VIEW to use parallelism (Bharath Rupireddy)

    This could be massive for your use case.
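
    A sketch of what that could look like after an upgrade (Postgres 14+ only; the worker setting is an illustrative assumption):

    -- The view's underlying query may then run with a parallel plan during the refresh.
    SET max_parallel_workers_per_gather = 4;  -- session-level; capped by max_parallel_workers
    REFRESH MATERIALIZED VIEW CONCURRENTLY group_statistics;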

    Estimates?

    Warning: experimental stuff.

    You stated:

    Approximate values are totally fine

    I see only 169 groups ("databases") in the query plan. Postgres maintains column statistics. As long as the number of distinct groups stays below the "statistics target" for the column store_record.database_id (which you have to make sure of!), we can work with this.

    To get better estimates, run ANALYZE on database_id to update the column statistics before running the query below (unless your autovacuum settings are already very aggressive). This also updates reltuples and relpages in pg_class:

    ANALYZE public.store_record(database_id);
    

    Or even (to also update the visibility map for above query):

    VACUUM ANALYZE public.store_record(database_id);
    

    That (the ANALYZE / VACUUM) is the most expensive part of the approach, with collateral benefits, and it’s optional. The estimation query itself:

    WITH ct(total_est) AS ( 
       SELECT reltuples / relpages * (pg_relation_size(oid) / 8192)
       FROM   pg_class
       WHERE  oid = 'public.store_record'::regclass  -- your table here
       )
    SELECT v.database_id, (ct.total_est * v.freq)::bigint AS estimate
    FROM   pg_stats s
        ,  ct
        ,  unnest(most_common_vals::text::int[], most_common_freqs) v(database_id, freq)
    WHERE  s.schemaname = 'public'
    AND    s.tablename = 'store_record'
    AND    s.attname = 'database_id';
    

    The query relies on various Postgres internals and may break in future major versions (though that’s unlikely). It was tested with Postgres 14, but works with Postgres 12, too. It’s basically black magic: you need to know what you are doing. You have been warned.
    But the query costs close to nothing.

    Take exact values for latest_record_updated_at from the fast query above, and join to these estimates for the count.
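
    Putting the two pieces together, a sketch of one way to combine them (assuming database_id is an integer type, as in the estimation query above): exact latest_record_updated_at from the fast correlated subquery, estimated counts from the column statistics.

    WITH ct(total_est) AS (
       SELECT reltuples / relpages * (pg_relation_size(oid) / 8192)
       FROM   pg_class
       WHERE  oid = 'public.store_record'::regclass
       )
    , est AS (
       SELECT v.database_id, (ct.total_est * v.freq)::bigint AS estimate
       FROM   pg_stats s, ct
            , unnest(most_common_vals::text::int[], most_common_freqs) v(database_id, freq)
       WHERE  s.schemaname = 'public'
       AND    s.tablename  = 'store_record'
       AND    s.attname    = 'database_id'
       )
    SELECT d.id AS database_id
         , (SELECT r.updated_at
            FROM   store_record r
            WHERE  r.database_id = d.id
            ORDER  BY r.updated_at DESC NULLS LAST
            LIMIT  1)               AS latest_record_updated_at
         , COALESCE(e.estimate, 0)  AS record_count_estimate
    FROM   store_database d
    LEFT   JOIN est e ON e.database_id = d.id;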

    Basic explanation: Postgres maintains column statistics in the system catalog pg_statistic; pg_stats is a view on it that is easier to access. Among other things, the "most common values" and their relative frequencies are gathered, represented in most_common_vals and most_common_freqs. Multiplied by the current (estimated) total row count, we get estimates per group. You could do all of this manually, but Postgres is probably much faster and better at it.

    The computation of the total estimate ct.total_est uses the "safe and explicit" form of the usual row-count estimate: reltuples / relpages gives the average rows per block, which is then multiplied by the table’s current number of 8 kB blocks (pg_relation_size(oid) / 8192), so it stays reasonable even when reltuples is somewhat outdated.

  2. Given the explain plan, the sequential scan seems to be causing the slowness. An index can definitely help there.

    You can also utilize index-only scans, as there are few columns in the query. So you can use something like this for the store_record table:

    CREATE INDEX idx_store_record_db_id ON store_record USING btree (database_id) INCLUDE (id, updated_at);
    

    An index on the id column of the store_database table is also needed:

    CREATE INDEX idx_db_id ON store_database USING btree (id);
    
  3. Parallel plan

    If you first collect the store_record statistics and then join that with the store_database, you’ll get a better, parallelisable plan.

    EXPLAIN (analyze, buffers, verbose)
    SELECT
        store_database.id          as database_id,
        s.latest_record_updated_at as latest_record_updated_at,
        coalesce(s.record_count,0) as record_count
    FROM store_database
        LEFT JOIN 
        (   SELECT
                store_record.database_id     as database_id,
                MAX(store_record.updated_at) as latest_record_updated_at,
                COUNT(store_record.id)       as record_count
            FROM store_record
            GROUP BY store_record.database_id)
        AS s ON (store_database.id = s.database_id);
    

    Here’s a demo; at the end you can see that both queries return exactly the same results, but the one I suggest runs faster and has a more flexible plan. The number of workers dispatched depends on your max_worker_processes, max_parallel_workers and max_parallel_workers_per_gather settings, as well as some additional logic inside the planner.

    With more rows in store_record the difference will be more pronounced. On my system with 40 million test rows it went down from 14 seconds to 3 seconds with one worker, and to 1.4 seconds when it capped out at six workers out of the 16 available.
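
    A quick way to inspect and, for testing, raise those caps in the current session (the value shown is an illustrative assumption):

    SHOW max_worker_processes;
    SHOW max_parallel_workers;
    SHOW max_parallel_workers_per_gather;
    SET max_parallel_workers_per_gather = 6;  -- per session; still bounded by the settings above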


    Caching

    I’m thinking of caching these values on every write to the Task table. Either in Redis or in PostgreSQL itself.

    If it’s an option, it’s worth a try: you can maintain full accuracy and instantly available statistics at the cost of some (deferrable) write overhead on the table. You can replace your materialized view with a regular table, or add the statistics columns to store_database:

    create table store_record_statistics(
      database_id smallint unique references store_database(id) 
        on update cascade,
      latest_record_updated_at timestamptz,
      record_count integer default 0);
    
    insert into store_record_statistics --initializes table with view definition
    SELECT g.id, MAX(s.updated_at), COUNT(*)
    FROM store_database g LEFT JOIN store_record s ON g.id = s.database_id
    GROUP BY g.id;
    
    create index store_record_statistics_idx 
      on store_record_statistics (database_id) 
      include (latest_record_updated_at,record_count);
    
    cluster verbose store_record_statistics using store_record_statistics_idx;
    

    Then leave keeping the table up to date to a trigger that fires each time store_record changes:

    CREATE FUNCTION maintain_store_record_statistics_trigger() 
      RETURNS TRIGGER LANGUAGE plpgsql AS
    $$ BEGIN
      IF TG_OP IN ('UPDATE', 'DELETE') THEN --decrement and find second most recent updated_at
        UPDATE store_record_statistics srs
        SET (record_count,
             latest_record_updated_at) 
        =   (record_count - 1,
             (SELECT s.updated_at
              FROM   store_record s
              WHERE  s.database_id = srs.database_id
              ORDER  BY s.updated_at DESC NULLS LAST
              LIMIT  1))
        WHERE database_id = old.database_id;
      END IF;
      IF TG_OP in ('INSERT','UPDATE') THEN --increment and pick most recent updated_at
        UPDATE store_record_statistics 
        SET (record_count,
             latest_record_updated_at) 
        =   (record_count + 1,
             greatest(
                latest_record_updated_at,
                new.updated_at))
        WHERE database_id=new.database_id;
      END IF;
      RETURN NULL;
    END $$;
    

    Making the trigger deferrable decouples its execution from the main operation, but it will still incur its cost at the end of the transaction.

    CREATE CONSTRAINT TRIGGER maintain_store_record_statistics
    AFTER INSERT OR UPDATE OF database_id OR DELETE ON store_record 
    INITIALLY DEFERRED FOR EACH ROW
    EXECUTE PROCEDURE maintain_store_record_statistics_trigger();
    

    A TRUNCATE trigger cannot be declared FOR EACH ROW along with the other events, so it has to be defined separately:

    CREATE FUNCTION maintain_store_record_statistics_truncate_trigger() 
    RETURNS TRIGGER LANGUAGE plpgsql AS
    $$ BEGIN
      update store_record_statistics 
      set (record_count, latest_record_updated_at)
        = (0           , null);--wipes/resets all stats
      RETURN NULL;
    END $$;
    
    CREATE TRIGGER maintain_store_record_statistics_truncate
    AFTER TRUNCATE ON store_record
    EXECUTE PROCEDURE maintain_store_record_statistics_truncate_trigger();
    

    In my test, an update or delete of 10000 random rows in a 100-million-row table ran in seconds. A single insert of 1000 new, randomly generated rows took 25 ms without the trigger and 200 ms with it. A million rows took 30 s and 3 minutes, respectively.

    A demo.


    Partitioning-backed parallel plan

    store_record might be a good fit for partitioning:

    create table store_record(
      id serial not null,
      updated_at timestamptz default now(),
      database_id smallint references store_database(id)
      ) partition by range (database_id);
    
    DO $$
    declare 
      vi_database_max_id           smallint:=0;
      vi_database_id               smallint:=0;
      vi_database_id_per_partition smallint:=40;--tweak for lower/higher granularity
    begin
      select max(id) from store_database into vi_database_max_id;
      for vi_database_id in 1 .. vi_database_max_id by vi_database_id_per_partition loop
        execute format ('
          drop table if exists store_record_p_%1$s;
          create table store_record_p_%1$s
          partition of store_record
          for VALUES from (%1$s) to (%1$s + %2$s)
          with (parallel_workers=16);
          ', vi_database_id, vi_database_id_per_partition);
      end loop;
    end $$ ;
    

    Splitting objects in this manner lets the planner split their scans accordingly, which works best with parallel workers, but doesn’t require them. Even your initial, unaltered query behind the view is able to take advantage of this structure:

    HashAggregate  (cost=60014.27..60041.47 rows=2720 width=18) (actual time=910.657..910.698 rows=169 loops=1)
      Output: store_database.id, max(store_record_p_1.updated_at), count(store_record_p_1.id)
      Group Key: store_database.id
      Buffers: shared hit=827 read=9367 dirtied=5099 written=4145
      ->  Hash Right Join  (cost=71.20..45168.91 rows=1979382 width=14) (actual time=0.064..663.603 rows=1600020 loops=1)
            Output: store_database.id, store_record_p_1.updated_at, store_record_p_1.id
            Inner Unique: true
            Hash Cond: (store_record_p_1.database_id = store_database.id)
            Buffers: shared hit=827 read=9367 dirtied=5099 written=4145
            ->  Append  (cost=0.00..39893.73 rows=1979382 width=14) (actual time=0.014..390.152 rows=1600000 loops=1)
                  Buffers: shared hit=826 read=9367 dirtied=5099 written=4145
                  ->  Seq Scan on public.store_record_p_1  (cost=0.00..8035.02 rows=530202 width=14) (actual time=0.014..77.130 rows=429068 loops=1)
                        Output: store_record_p_1.updated_at, store_record_p_1.id, store_record_p_1.database_id
                        Buffers: shared read=2733 dirtied=1367 written=1335
                  ->  Seq Scan on public.store_record_p_41  (cost=0.00..8067.36 rows=532336 width=14) (actual time=0.017..75.193 rows=430684 loops=1)
                        Output: store_record_p_41.updated_at, store_record_p_41.id, store_record_p_41.database_id
                        Buffers: shared read=2744 dirtied=1373 written=1341
                  ->  Seq Scan on public.store_record_p_81  (cost=0.00..8029.14 rows=529814 width=14) (actual time=0.017..74.583 rows=428682 loops=1)
                        Output: store_record_p_81.updated_at, store_record_p_81.id, store_record_p_81.database_id
                        Buffers: shared read=2731 dirtied=1366 written=1334
                  ->  Seq Scan on public.store_record_p_121  (cost=0.00..5835.90 rows=385090 width=14) (actual time=0.016..45.407 rows=311566 loops=1)
                        Output: store_record_p_121.updated_at, store_record_p_121.id, store_record_p_121.database_id
                        Buffers: shared hit=826 read=1159 dirtied=993 written=135
                  ->  Seq Scan on public.store_record_p_161  (cost=0.00..29.40 rows=1940 width=14) (actual time=0.008..0.008 rows=0 loops=1)
                        Output: store_record_p_161.updated_at, store_record_p_161.id, store_record_p_161.database_id
            ->  Hash  (cost=37.20..37.20 rows=2720 width=2) (actual time=0.041..0.042 rows=169 loops=1)
                  Output: store_database.id
                  Buckets: 4096  Batches: 1  Memory Usage: 38kB
                  Buffers: shared hit=1
                  ->  Seq Scan on public.store_database  (cost=0.00..37.20 rows=2720 width=2) (actual time=0.012..0.021 rows=169 loops=1)
                        Output: store_database.id
                        Buffers: shared hit=1
    Planning Time: 0.292 ms
    Execution Time: 910.811 ms
    

    Demo. It’s best to test which granularity gives the best performance on your setup. You can even try sub-partitioning, giving each store_record.database_id its own partition, which is then sub-partitioned into date ranges, simplifying access to the most recent entries; a sketch follows below.
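
    A minimal sketch of that sub-partitioning idea, as an alternative layout to the range partitioning above (table names and date ranges are illustrative assumptions):

    -- one list partition per database_id, each range-partitioned by updated_at
    create table store_record(
      id          serial not null,
      updated_at  timestamptz default now(),
      database_id smallint references store_database(id)
    ) partition by list (database_id);
    
    create table store_record_db_1
      partition of store_record for values in (1)
      partition by range (updated_at);
    
    create table store_record_db_1_2022
      partition of store_record_db_1
      for values from ('2022-01-01') to ('2023-01-01');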

  4. Sometimes cases like this call for a completely different solution at the business-logic level.
    For example, counting is a very slow operation and cannot be accelerated much by any means inside the database. What can be done in such cases? Since I do not know your business logic in full detail, I will outline several options, though each has its disadvantages. For example, suppose we have this data:

    group_id    id
    ---------------
    1           12
    1           145
    1           100
    3           652
    3           102
    

    We group it once and insert the counts into a separate table:

    group_id    count_id
    --------------------
    1           3
    3           2
    

    Then, whenever a record is inserted into the main table, we update the group table from a trigger, like this:

    update group_table set count_id = count_id + 1 where group_id = new.group_id
    

    Or like this:

    update group_table set count_id = (select count(id) from main_table where group_id = new.group_id)
    

    I am not going into the small details here. To update the row safely, we can use a FOR UPDATE clause, which locks the row against other transactions.

    So the main idea is this: functions like count should be maintained separately on pre-grouped data, not executed over the entire table at once. Similar solutions can be applied; I explained it only for general understanding. A minimal trigger along these lines is sketched below.
    The disadvantage of this solution is that, if the main table receives many inserts, insert performance will decrease.
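
    A minimal sketch of the insert side as an actual trigger, using the hypothetical main_table / group_table names from the example above:

    CREATE OR REPLACE FUNCTION bump_group_count() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      -- increment the pre-aggregated count for the affected group
      UPDATE group_table
      SET    count_id = count_id + 1
      WHERE  group_id = NEW.group_id;
      RETURN NEW;
    END $$;
    
    CREATE TRIGGER main_table_bump_group_count
    AFTER INSERT ON main_table
    FOR EACH ROW EXECUTE FUNCTION bump_group_count();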

  5. MATERIALIZED VIEW is not a good idea for that …

    If you just want to "calculate the number of tasks and the most recent Task.created_at value per group", then I suggest you simply:

    Add two columns to the group table:

    ALTER TABLE IF EXISTS "group" ADD COLUMN IF NOT EXISTS task_count integer DEFAULT 0 ;
    ALTER TABLE IF EXISTS "group" ADD COLUMN IF NOT EXISTS last_created_date timestamp ; -- instead of datetime, which does not exist in postgres ...
    

    Update these 2 columns from trigger functions defined on the task table:

    CREATE OR REPLACE FUNCTION task_insert() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      UPDATE "group" AS g
         SET task_count = task_count + 1
           , last_created_date = NEW.created_at -- assuming the last task inserted has the latest created_at of the group; if not, reuse the solution proposed in task_delete()
       WHERE g.id = NEW."group" ;
      RETURN NEW ;
    END ; $$ ;
    
    CREATE OR REPLACE TRIGGER task_insert AFTER INSERT ON task
    FOR EACH ROW EXECUTE FUNCTION task_insert () ;
    
    CREATE OR REPLACE FUNCTION task_delete () RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      UPDATE "group" AS g
         SET task_count = task_count - 1
           , last_created_date = u.last_created_at
        FROM 
           ( SELECT max(created_at) AS last_created_at
               FROM task t
              WHERE t."group" = OLD."group"
           ) AS u
       WHERE g.id = OLD."group" ;
       RETURN OLD ;
    END ; $$ ;
    
    CREATE OR REPLACE TRIGGER task_delete AFTER DELETE ON task
    FOR EACH ROW EXECUTE FUNCTION task_delete () ;
    

    You will need to perform a setup action at the beginning …

    UPDATE "group" AS g
       SET task_count = ref.count
         , last_created_date = ref.last_created_at
      FROM
         ( SELECT group
                , max(created_at) AS last_created_at
                , count(*) AS count
             FROM task
            GROUP BY group
         ) AS ref
     WHERE g.id= ref.group ;
    

    … but then you will have no more performance issues with the query:

    SELECT * FROM "group"
    

    and you will optimize the size of your database …
