I have a script.sh that executes a script.sql.
The script reads its data from a data table.
Inside a main loop I have nested loops that look up information in the data table and insert it into the correct target tables.
At the beginning of my project my script ran in 45 minutes.
I made several modifications and the run time is now around 20 hours. I have a problem somewhere without really understanding where (I created the indexes, unless I forgot some).
How can I, in Postgres, time each 'sub-loop' so I can see where the execution time goes and understand why it has become so slow?
example:
begin
    for ... in query loop
        -- loop 2
        begin
            for ... in query loop
                [...]
            end loop;
            raise notice 'duration or explanation';  -- timing wanted here
        end;
        -- loop 3
        begin
            for ... in query loop
                [...]
            end loop;
            raise notice 'duration or explanation';  -- timing wanted here
        end;
    end loop;
end;
2 Answers
You can see the run time of the whole query in pgAdmin, or by putting EXPLAIN ANALYZE in front of the query. That alone is probably not enough to know how to improve the query, though.
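For a single statement that looks like this (the table and filter are hypothetical, just to show the shape):

    EXPLAIN ANALYZE
    SELECT *
    FROM   data
    WHERE  kind = 'something';   -- prints the plan with actual row counts and run times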
To measure runtime inside a PL/pgSQL function, you can use clock_timestamp().
Sometimes this can be useful but I don’t think this is the way to improve runtime.
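A minimal sketch of that approach, assuming a hypothetical data table and two hypothetical target tables standing in for the real sub-loop work. Note that clock_timestamp() advances during a transaction (unlike now(), which is frozen at transaction start), so the difference gives the elapsed wall-clock time of each sub-loop:

    DO $$
    DECLARE
        rec     record;
        t_start timestamptz;
    BEGIN
        FOR rec IN SELECT id FROM data LOOP
            -- loop 2
            t_start := clock_timestamp();
            INSERT INTO target_a (data_id) VALUES (rec.id);   -- stand-in for the real sub-loop work
            RAISE NOTICE 'loop 2, id %: %', rec.id, clock_timestamp() - t_start;

            -- loop 3
            t_start := clock_timestamp();
            INSERT INTO target_b (data_id) VALUES (rec.id);
            RAISE NOTICE 'loop 3, id %: %', rec.id, clock_timestamp() - t_start;
        END LOOP;
    END
    $$;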
You should be aware of the factors that have the main impact on the runtime of a query: reducing any of them will improve the runtime.
Usually, one query is much faster than many queries in a loop. If you cannot avoid separate statements, you can at least send them in chunks by concatenating the individual queries into one string, separated by ';'. That is faster because everything runs in a single round trip and a single transaction, and opening and closing a transaction takes time.
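As an illustration of the set-based idea (the table and column names are hypothetical): instead of inserting one row per loop iteration, a single statement does the same work in one pass.

    -- Loop version: one INSERT per iteration, each sent as its own statement
    -- INSERT INTO target (data_id, val) VALUES (<id>, <val>);   -- repeated once per row

    -- Set-based version: a single statement handles all rows at once
    INSERT INTO target (data_id, val)
    SELECT d.id, d.val
    FROM   data AS d
    WHERE  d.kind = 'something';   -- hypothetical filter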
PL/pgSQL functions are black boxes to the query planner. Nested statements are not covered separately in EXPLAIN output. The additional module auto_explain lets you log execution plans including nested statements. You must be superuser.
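A minimal session-level setup (the function name at the end is a placeholder); the plans of the nested statements end up in the server log:

    LOAD 'auto_explain';                             -- requires superuser

    SET auto_explain.log_min_duration = 0;           -- 0 = log every statement; e.g. '500ms' to log only slow ones
    SET auto_explain.log_analyze = on;               -- include actual run times, like EXPLAIN ANALYZE
    SET auto_explain.log_nested_statements = on;     -- also log statements executed inside functions

    -- Now run the function / DO block; the plans appear in the server log
    -- SELECT my_loop_function();                    -- placeholder name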