There is a table with 10 million records, and it has a column whose type is an array. It looks like this:

id |  content |  contained_special_ids 
----------------------------------------
1  |  abc     |  { 1, 2 }
2  |  abd     |  { 1, 3 }
3  |  abe     |  { 1, 4 }
4  |  abf     |  { 3 }
5  |  abg     |  { 2 }
6  |  abh     |  { 3 }

and I want to know how many records there are whose contained_special_ids includes 3, so my SQL is:

select count(*) from my_table where contained_special_ids @> array[3]

It works fine when the data set is small, but it takes a long time (30+ seconds) when the table has 10 million records.

I have added an index to this column:

"index_my_table_on_contained_special_ids" gin (contained_special_ids)

So, how to optimize this select count query?

Thanks a lot!

UPDATE

Below is the EXPLAIN output:


Finalize Aggregate  (cost=1049019.17..1049019.18 rows=1 width=8) (actual time=44343.230..44362.224 rows=1 loops=1)
  Output: count(*)
  ->  Gather  (cost=1049018.95..1049019.16 rows=2 width=8) (actual time=44340.332..44362.217 rows=3 loops=1)
        Output: (PARTIAL count(*))
        Workers Planned: 2
        Workers Launched: 2
        ->  Partial Aggregate  (cost=1048018.95..1048018.96 rows=1 width=8) (actual time=44337.615..44337.615 rows=1 loops=3)
              Output: PARTIAL count(*)
              Worker 0:  actual time=44336.442..44336.442 rows=1 loops=1
              Worker 1:  actual time=44336.564..44336.564 rows=1 loops=1
              ->  Parallel Bitmap Heap Scan on public.my_table  (cost=9116.31..1046912.22 rows=442694 width=0) (actual time=330.602..44304.221 rows=391431 loops=3)
                    Recheck Cond: (my_table.contained_special_ids @> '{12511}'::bigint[])
                    Rows Removed by Index Recheck: 501077
                    Heap Blocks: exact=67496 lossy=109789
                    Worker 0:  actual time=329.547..44301.513 rows=409272 loops=1
                    Worker 1:  actual time=329.794..44304.582 rows=378538 loops=1
                    ->  Bitmap Index Scan on index_my_table_on_contained_special_ids  (cost=0.00..8850.69 rows=1062465 width=0) (actual time=278.413..278.414 rows=1176563 loops=1)
                          Index Cond: (my_table.contained_special_ids @> '{12511}'::bigint[])
Planning Time: 1.041 ms
Execution Time: 44362.262 ms

2 Answers


  1. There is no way to optimize such a query, due to two factors:

    1. The use of a non-atomic value, which violates FIRST NORMAL FORM.
    2. The fact that PostgreSQL cannot perform aggregate computations quickly.

    On the first problem: first normal form requires every value in a table's columns to be atomic, and an array containing multiple values is of course not atomic.

    No index can be truly efficient on such a column, because its type violates 1NF.

    This can be mitigated by using a separate table instead of an array (see the first sketch after this answer).

    On the poor performance of PostgreSQL's aggregates:

    PostgreSQL uses an MVCC model that mixes, within the same data pages, phantom (dead) row versions and valid rows. To count the valid rows, it must read all the records one by one to distinguish the ones that should be counted from the ones that should not.

    Most other DBMSs do not work the way PostgreSQL does: Oracle and SQL Server, for example, do not keep phantom records inside the data pages, and some others store the exact count of valid rows in the page header.

    As an example, read the tests I have done comparing COUNT and other aggregate functions between PostgreSQL and SQL Server; some queries run 1500 times faster on SQL Server.
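
    A minimal sketch of that normalization, assuming my_table.id is the primary key; the child table and column names here are illustrative, not from the original schema:

    -- One row per (record, special id) pair instead of an array column.
    CREATE TABLE my_table_special_ids (
        my_table_id bigint NOT NULL REFERENCES my_table (id),
        special_id  bigint NOT NULL,
        PRIMARY KEY (my_table_id, special_id)
    );

    -- Backfill from the existing array column; DISTINCT guards against
    -- duplicate ids within one array, which would violate the primary key.
    INSERT INTO my_table_special_ids (my_table_id, special_id)
    SELECT DISTINCT id, unnest(contained_special_ids)
    FROM my_table;

    -- A plain btree index on special_id then serves the count directly.
    CREATE INDEX ON my_table_special_ids (special_id);

    SELECT count(*) FROM my_table_special_ids WHERE special_id = 3;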

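    An aside not made in the answer above: when an exact figure is not required, the planner's row estimate for the same predicate comes from column statistics rather than a scan, so it sidesteps the per-row MVCC visibility checks described here.

    -- Hedged workaround: returns instantly but is only an approximation.
    EXPLAIN (FORMAT JSON)
    SELECT * FROM my_table WHERE contained_special_ids @> ARRAY[3];
    -- Read the "Plan Rows" field of the JSON output for the estimate.
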
  2. Increase work_mem until the lossy blocks go away. Also, make sure the table is well vacuumed to support index-only bitmap scans, and that you are using a new enough version (which you should tell us) to support those. Finally, you can try increasing effective_io_concurrency. A sketch of these settings follows this answer.

    Also, post plans as text, not images, and turn on track_io_timing.
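
    A sketch of those suggestions in SQL; the values are placeholders to adapt to your hardware, not recommendations:

    SET work_mem = '256MB';               -- larger bitmaps, fewer or no lossy heap blocks
    SET effective_io_concurrency = 200;   -- more prefetching on fast storage
    SET track_io_timing = on;             -- I/O timings in EXPLAIN (superuser-only at session level)

    VACUUM (ANALYZE) my_table;            -- keeps the visibility map and statistics fresh

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM my_table WHERE contained_special_ids @> ARRAY[3];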
