
I've had this question for a while and am wondering if there is a faster query.

I have a table with multiple rows per ID, and I would like to list every column that has differing values within the same ID.

ID  Brand  Type
1   Honda  Coupe
1   Jeep   SUV
2   Ford   Sedan
2   Ford   Crossover

Example for the table above:
Rows with ID = 1 have different Brand and Type values, so I want one result row for each of those columns.
For ID = 2 there is only one Brand but multiple Types, so there is only one result row, for Type.

The desired result would look like this:

ID  Difference
1   Brand
1   Type
2   Type

I solved it with the query below, checking each column in its own SELECT statement and then combining the results with UNION:

SELECT ID, 'Brand' AS Difference
FROM tbl
GROUP BY ID
HAVING COUNT(DISTINCT Brand) > 1

UNION

SELECT ID, 'Type' AS Difference
FROM tbl
GROUP BY ID
HAVING COUNT(DISTINCT Type) > 1;

Is there any faster query or optimization?

2 Answers


  1. Your query is good for a small number of rows per ID (except that UNION should be UNION ALL: the two branches can never produce duplicate rows, so UNION's deduplication step is wasted work).
    This one is better (improved with a hint from Charlieface):

    SELECT t.id, c.difference
    FROM  (
       SELECT id
            , min(brand) <> max(brand) AS b_diff  -- true if brand varies within the ID
            , min(type)  <> max(type)  AS t_diff  -- true if type varies within the ID
       FROM   tbl
       GROUP  BY id
       ) t
    JOIN   LATERAL (
       VALUES                                     -- unpivot the two flags into rows
         ('Brand', t.b_diff)
       , ('Type' , t.t_diff)
       ) c(difference, diff) ON c.diff            -- keep only columns that actually differ
    ORDER  BY 1, 2;  -- optional
    

    fiddle
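
    Against the sample data, this returns (1, Brand), (1, Type), (2, Type), matching the desired result above.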

    A single sequential scan should bring the cost down by almost half.
    Plus, avoiding the expensive count(DISTINCT ...) should help some more.
    Test with EXPLAIN ANALYZE.
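
    For example, to time the UNION ALL variant (a minimal sketch; wrap the lateral query the same way and compare actual times and buffer counts):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id, 'Brand' AS difference FROM tbl GROUP BY id HAVING count(DISTINCT brand) > 1
    UNION ALL
    SELECT id, 'Type'  FROM tbl GROUP BY id HAVING count(DISTINCT type)  > 1;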

    Note that NULL values are ignored by both queries: min(), max(), and count(DISTINCT ...) all skip NULLs.
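
    If NULLs should count as a difference, one possible adaptation (my sketch, not part of the answer) is to also flag groups that mix NULL and non-NULL values:

    SELECT id
         , min(brand) <> max(brand)                           -- differing non-NULL values ...
           OR (count(brand) > 0 AND count(brand) < count(*))  -- ... or a NULL / non-NULL mix
           AS b_diff
    FROM   tbl
    GROUP  BY id;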

    If there are many rows per ID (and an index on each of the tested columns), there are (much) faster options; a rough sketch of the idea follows below.
    If so, and it matters, start a new question providing the information requested in the tag description: Postgres version, exact table definition, a test case and, most importantly, rough stats about the data distribution. Then post a comment here linking to the follow-up.
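
    As an illustration of the direction such options can take (my sketch, assuming indexes on (id, brand) and (id, type); not a drop-in from this answer): per-ID min/max can then come from cheap index(-only) lookups instead of aggregating every row:

    SELECT i.id, c.difference
    FROM  (SELECT DISTINCT id FROM tbl) i  -- better: a lookup table of unique IDs, if one exists
    CROSS  JOIN LATERAL (
       VALUES
         ('Brand', (SELECT min(brand) FROM tbl WHERE id = i.id)
                <> (SELECT max(brand) FROM tbl WHERE id = i.id))
       , ('Type' , (SELECT min(type)  FROM tbl WHERE id = i.id)
                <> (SELECT max(type)  FROM tbl WHERE id = i.id))
       ) c(difference, diff)
    WHERE  c.diff;  -- each scalar subquery can be resolved with a single index probe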

  2. You can use to_json() to avoid hardcoding column names: it uses your column names as the keys of the resulting JSON. Then you can unpivot that with json_each_text(), spitting the columns back out as key/value pairs, to be checked for more than one distinct value per ID. Demo:

    select id, column_name as "Difference"
    from test, json_each_text(to_json(test)) as a(column_name, v)  -- unpivot: one row per column per source row
    group by id, column_name
    having min(v) <> max(v);  -- keep columns whose (text) values vary within the ID
    

    It also does the job in a single pass, and you can keep @Charlieface's min() <> max() trick as an alternative to counting. Keep in mind that regardless of the number of columns, it only wins the "brevity" and "it's dynamic" participation awards; it is easily outperformed by the variant that doesn't multiply the row set by unpivoting.

    The choice boils down to speed at the cost of maintenance effort (column alterations need to be cascaded wherever names are hardcoded), versus set-it-and-forget-it convenience if you can afford the price in performance. A small jsonb-based variation is sketched below.
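
    As a hedged variation (my addition, assuming jsonb is acceptable): switching to to_jsonb() lets you remove keys that should never be compared, such as the grouping column itself, with the '-' key-deletion operator before unpivoting:

    select id, column_name as "Difference"
    from test, jsonb_each_text(to_jsonb(test) - 'id') as a(column_name, v)  -- '-' drops the id key before unpivoting
    group by id, column_name
    having min(v) <> max(v);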
