I've had this question for a while and wonder whether there is a faster query.
I have a table with multiple rows per ID and would like to list every column that has differing values for the same ID.
ID | Brand | Type |
---|---|---|
1 | Honda | Coupe |
1 | Jeep | SUV |
2 | Ford | Sedan |
2 | Ford | Crossover |
Example for the table above: rows with ID = 1 have different `Brand` and `Type` values, so I want one result row for each column. For ID = 2 there is only one brand but multiple types, so only one result row, for `Type`.
The desired result would be like this.
ID | Difference |
---|---|
1 | Brand |
1 | Type |
2 | Type |
I solved it with the query below, checking each column in its own SELECT statement and combining the results with UNION:
```sql
SELECT ID, 'Brand' AS Discrepancy
FROM table
GROUP BY ID
HAVING COUNT(DISTINCT Brand) > 1
UNION
SELECT ID, 'Type' AS Discrepancy
FROM table
GROUP BY ID
HAVING COUNT(DISTINCT Type) > 1;
```
Is there any faster query or optimization?
2 Answers
Your query is good for few rows per ID (except for `UNION`, where it should be `UNION ALL`). This one is better (improved with a hint from Charlieface):

fiddle
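The actual query is behind the fiddle link; here is a minimal sketch of the single-pass idea, assuming the table is named `tbl` (the question's `table` is a reserved word). The trick: within a group, a column has more than one value exactly when its `min` and `max` differ.

```sql
-- Sketch only, not the answer's exact query.
-- One GROUP BY pass collects min/max per column, then a LATERAL
-- VALUES list unpivots the comparisons into one row per column.
SELECT id, d.difference
FROM  (
   SELECT id
        , min(brand) AS b_min, max(brand) AS b_max
        , min(type)  AS t_min, max(type)  AS t_max
   FROM   tbl
   GROUP  BY id
   ) sub
CROSS  JOIN LATERAL (
   VALUES ('Brand', b_min <> b_max)   -- min/max ignore nulls,
        , ('Type' , t_min <> t_max)   -- matching the original query
   ) d(difference, differs)
WHERE  d.differs
ORDER  BY id, d.difference;
```

Each base row is read once, instead of once per tested column.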
A single sequential scan should bring the cost down by almost half. Plus, avoiding the expensive `count(DISTINCT ...)` should help some more. Test with `EXPLAIN ANALYZE`.

Note that null values are ignored by either query.
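For example, to compare actual execution times, prefix each candidate with `EXPLAIN ANALYZE` (again assuming the table is named `tbl`):

```sql
-- Reports the plan plus measured run time for this branch
-- of the original UNION query.
EXPLAIN ANALYZE
SELECT id, 'Brand' AS discrepancy
FROM   tbl
GROUP  BY id
HAVING count(DISTINCT brand) > 1;
```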
If there are many rows per ID (and an index on each of the tested columns), there are (much) faster options.
If so, and it matters, start a new question providing the info requested for postgresql-performance questions, as instructed in the tag description: Postgres version, exact table definition, a test case and, most importantly, rough stats about the data distribution. And post a comment here linking to the follow-up.
You can use `to_json()` to avoid hardcoding field names: it fetches your column names to use them as keys in JSON. Then you can unpivot that with `json_each_text()`, spitting the keys back out with their corresponding values, to be checked with `count(DISTINCT v)`.

It also does the job in a single pass, and you can keep @Charlieface's `min <> max` alternative to counting. Keep in mind that regardless of the number of columns, it only wins the "brevity" and "it's dynamic" participation awards, being easily outperformed by the variant that doesn't multiply the set by unpivoting.

The choice boils down to speed at the cost of maintenance effort (column alterations need to be cascaded wherever the columns are hardcoded), or set-it-and-forget-it convenience, if you can afford the price in performance.
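A sketch of that dynamic variant, assuming the table is `tbl(id, brand, type)` with unquoted (hence lowercased) column names:

```sql
-- to_json(t) turns each row into a JSON object keyed by column name,
-- json_each_text() unpivots it into (key, value) pairs, and the
-- min <> max check flags columns with differing values per id.
SELECT t.id, initcap(j.k) AS difference   -- initcap is cosmetic only
FROM   tbl t
CROSS  JOIN LATERAL json_each_text(to_json(t)) AS j(k, v)
WHERE  j.k <> 'id'                        -- don't test the key itself
GROUP  BY t.id, j.k
HAVING min(j.v) <> max(j.v)
ORDER  BY t.id, j.k;
```

Adding or dropping a column changes the result automatically, with no edit to the query.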