We have a function that rebuilds a summary table; depending on the data it takes about 50-130 seconds on average. Roughly:
Truncate table summaryTable ;
-- multiple insert statements and couple of loops...
insert into summaryTable .....
The process fails at times too, so we are changing it to:
Truncate table tempsummaryTable ;
-- multiple insert statements and couple of loops...
insert into tempsummaryTable .....
-- if all goes well
Truncate table summaryTable ;
insert into summaryTable SELECT * FROM tempsummaryTable ;
The docs here state that there is no BEGIN TRANSACTION inside a function (the function body already runs within a single transaction), so would this work as-is, or is there another way to ensure the integrity of the data in summarytable?
To reiterate: the goal is to have a working copy of the data in summarytable at all times, rebuild the contents as needed in a temp table, and replace the contents in one quick operation with minimal downtime. If the process fails for any reason, the summarytable data remains intact.
Again, the goal is not to drop / alter / rename the table but to switch the dataset with minimal issues.
Currently using 16.2; we will move to 17 when it becomes available on AWS.
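The intended behavior can be sketched as a single PL/pgSQL function. This is only a sketch assuming the table names from the question; the point is that a function body runs inside one transaction, so if any statement fails, the TRUNCATE of summaryTable is rolled back along with everything else:

```sql
CREATE OR REPLACE FUNCTION rebuild_summary() RETURNS void AS $$
BEGIN
    TRUNCATE TABLE tempsummaryTable;
    -- multiple insert statements and couple of loops ...
    -- INSERT INTO tempsummaryTable ...

    -- Only reached if everything above succeeded; TRUNCATE is
    -- transactional in PostgreSQL, so a later error undoes it.
    TRUNCATE TABLE summaryTable;
    INSERT INTO summaryTable SELECT * FROM tempsummaryTable;
END;
$$ LANGUAGE plpgsql;
```

One caveat: TRUNCATE takes an ACCESS EXCLUSIVE lock, so readers of summaryTable will block from the final TRUNCATE until the transaction commits; only the final copy step, not the whole rebuild, should sit inside that window.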
2 Answers
Thanks to @atmo for the idea about using views:
all queries go against a view on summarytable, and meanwhile the function is changed accordingly,
so the view will always point to valid data.
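A sketch of that view approach, assuming two identical base tables with hypothetical names summarytable_a and summarytable_b that the rebuild alternates between:

```sql
-- One-time setup: the view the application queries
-- CREATE VIEW summarytable_v AS SELECT * FROM summarytable_a;

-- Rebuild into the currently idle table
TRUNCATE TABLE summarytable_b;
-- INSERT INTO summarytable_b ... (the rebuild statements)

-- Repoint the view. CREATE OR REPLACE VIEW is transactional, so
-- concurrent readers see either the old or the new data, never a mix.
CREATE OR REPLACE VIEW summarytable_v AS SELECT * FROM summarytable_b;
```

CREATE OR REPLACE VIEW requires the new query to produce the same columns as the old one, which is satisfied here because both tables have identical definitions.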
I would keep two tables with identical definition around: summarytable and summarytable_x. The latter is always empty. Then the operation would work like this: rebuild the data in summarytable_x, then swap the two tables by renaming them inside a single transaction, and finally truncate the table that now holds the stale data. When you rename summarytable, make sure that there are no long-running statements or transactions using it, because they would block the renaming, which in turn would block all other concurrent access to the table.
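The rename-based swap might look like this (a sketch; the lock_timeout value is an assumption, added so the swap fails fast instead of queueing behind a long-running transaction):

```sql
-- Rebuild into the empty table first, outside the critical section
-- INSERT INTO summarytable_x ... ;

BEGIN;
SET LOCAL lock_timeout = '2s';  -- give up quickly if the table is busy
ALTER TABLE summarytable RENAME TO summarytable_old;
ALTER TABLE summarytable_x RENAME TO summarytable;
ALTER TABLE summarytable_old RENAME TO summarytable_x;
COMMIT;

-- Empty the now-idle table so it is ready for the next rebuild
TRUNCATE TABLE summarytable_x;
```

If the lock_timeout fires, the transaction aborts, the renames are rolled back, and summarytable still holds the previous valid data, so the swap can simply be retried.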