Is it advisable, or even possible, to select from multiple tables and assign the results from some of the tables as new columns in the query result?
Let's say I have 3 tables: cars, records, maintenance.
If given a car's id such as '1234', I want to select * from cars with that id and create two new columns (only in the result) called race_records and maintenance_history, which will be arrays of all matching rows from records and maintenance.
Or do you typically make multiple separate queries in this case?
2 Answers
You can use string_agg() to aggregate row values into a single column. Maybe the example below can help you.
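A minimal sketch, assuming records and maintenance each carry a car_id foreign key plus a text column (result and description, respectively; all column names here are assumptions, since the question only names the tables):

```sql
-- Collapse all matching rows into one comma-separated string per column.
SELECT c.*,
       (SELECT string_agg(r.result, ', ')
          FROM records r
         WHERE r.car_id = c.id) AS race_records,
       (SELECT string_agg(m.description, ', ')
          FROM maintenance m
         WHERE m.car_id = c.id) AS maintenance_history
FROM cars c
WHERE c.id = 1234;
```

Output (illustrative):

```
  id  | ... | race_records         | maintenance_history
------+-----+----------------------+-----------------------
 1234 | ... | 1st place, 3rd place | oil change, new tires
```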
If you attempt a triple join, you'll multiply your records rows by your maintenance rows (a car with 2 records and 2 maintenances spawns 4 rows, each with a combination of the two). You can use a scalar subquery to aggregate them in – technically, these aren't separate queries.
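A minimal sketch of the scalar-subquery approach, under the same assumed schema as the first answer (car_id foreign keys, id as the cars primary key):

```sql
-- Each scalar subquery runs per cars row and folds all matching rows
-- into a single typed array (records[] and maintenance[]), so nothing
-- multiplies the result set the way a triple join would.
SELECT c.*,
       (SELECT array_agg(r) FROM records r     WHERE r.car_id = c.id) AS race_records,
       (SELECT array_agg(m) FROM maintenance m WHERE m.car_id = c.id) AS maintenance_history
FROM cars c
WHERE c.id = 1234;
```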
array_agg() results in adequately typed arrays of all matching rows that let you subscript by index and address by field (column) name (see the sketches below). Depending on what you do with them later, you might consider jsonb_object_agg() instead. Creating a new column based on some sort of transformation of input values and giving it a new alias is a pretty common thing to do.
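For example, a sketch of subscripting and field access on the aggregated array (result being an assumed column of records):

```sql
-- Subscript picks the first aggregated row; the parentheses around the
-- subscripted value are required before addressing one of its fields.
SELECT (race_records[1]).result
FROM (SELECT (SELECT array_agg(r)
                FROM records r
               WHERE r.car_id = c.id) AS race_records
        FROM cars c
       WHERE c.id = 1234) sub;
```

And a jsonb_object_agg() variant, assuming records has an id primary key to serve as the object key:

```sql
-- Keys each row's jsonb representation by its primary key.
SELECT (SELECT jsonb_object_agg(r.id, to_jsonb(r))
          FROM records r
         WHERE r.car_id = c.id) AS race_records
FROM cars c
WHERE c.id = 1234;
```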
SQL is declarative. The main idea is that you tell the system what exactly it is that you want, and it’s the system that decides how exactly to deliver it – an attempt to split the operation into "steps" goes against that idea. Ideally, if you can state your business with a single statement, you do it: the more context you give to the planner about your goal, the better it can select methods to achieve it, and optimise the operation.
You could do it step-by-step, imperatively, but at best you'd end up manually laying out the same logic the planner can come up with on its own – you can play around with explain analyze and see the underlying mechanics of how your data gets processed and delivered. You can of course try and come up with faster/cheaper methods than that, in which case you're encouraged to share them and see if they can get built into PostgreSQL.
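For instance, prefixing the scalar-subquery sketch from above (the EXPLAIN options shown are standard PostgreSQL; the schema remains the assumed one):

```sql
-- Shows the chosen plan with actual row counts, timings and buffer usage.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.*,
       (SELECT array_agg(r) FROM records r WHERE r.car_id = c.id) AS race_records
FROM cars c
WHERE c.id = 1234;
```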