
A parallel sequential scan is chosen, although an index exists on the column cpv.value. The value column has data type `text`. This is the most costly step of the query plan. How do I tune this to reduce the cost/time taken?

->  Parallel Seq Scan on c_parameter_values cpv  (cost=0.00..143904.60 rows=692147 width=49) (actual time=0.007..988.860 rows=1187002 loops=3)
      Output: cpv.value, cpv.c_instance_id, cpv.parameter_id
      Filter: ((cpv.value IS NOT NULL) AND (length(cpv.value) > 0))
      Rows Removed by Filter: 470863
      Buffers: shared hit=112758
      Worker 0:  actual time=0.006..1002.245 rows=920578 loops=1
        Buffers: shared hit=29448
      Worker 1:  actual time=0.005..962.916 rows=922578 loops=1
        Buffers: shared hit=29286

2 Answers


  1. Since almost all rows satisfy the condition, a sequential scan is faster than using the index. You don’t show the rest of the query, so nothing more can be said.
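
    The plan itself bears this out: far more rows pass the filter than are removed by it, so most of the table matches the condition. One way to confirm that selectivity directly is a count over both sides of the filter (a sketch, using the table and column names from the plan above):

        -- How many rows satisfy the filter, compared to the table as a whole?
        SELECT count(*) FILTER (WHERE value IS NOT NULL AND length(value) > 0) AS matching,
               count(*) AS total
        FROM c_parameter_values;

    When the vast majority of rows match, an index scan would still have to visit most of the table on top of reading the index, so the sequential scan is the cheaper plan.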

  2. Even if the condition removed most of the rows (it doesn’t, as Laurenz said), the index still couldn’t be used, because an index on "value" can’t be used efficiently to satisfy a condition on length(value). So the index could only be used for the IS NOT NULL part.

    When length(value) is tested specifically against zero, the planner could conceivably special-case it and treat it the same as "value" > '', but that has not been done, and I don’t see that kind of trick ever being implemented.
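
    If the condition really were selective, that special case could also be applied by hand rather than waiting for the planner: for non-NULL values under a typical collation, length(value) > 0 holds exactly when the value compares greater than the empty string, and that form is something a b-tree index on value could in principle support. A sketch (the rest of the query isn’t shown, so only the predicate rewrite is illustrated):

        -- Hand-rewritten predicate that an index on (value) could use in principle;
        -- NULLs are excluded automatically because NULL > '' does not evaluate to true.
        SELECT value, c_instance_id, parameter_id
        FROM c_parameter_values
        WHERE value > '';

    Given how unselective the condition is here, though, the planner would most likely still choose the sequential scan, so this rewrite only pays off if the data distribution changes.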
