I have a very large table (over 1 TB) in a Postgres database, and it has a large text column that is being TOASTed. When I try to drop the column, it times out after 20 minutes. The column doesn't have an index.
I am trying to drop it using the code below:
ALTER TABLE a DROP COLUMN b;
And I am getting an error stating the statement was canceled due to a timeout:
SQL Error [57014]: ERROR: canceling statement due to statement timeout
How can I speed up dropping the column? I tried increasing statement_timeout, but it still times out even after 30 minutes.
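For reference, raising the timeout only for the session running the DDL would look something like this (a sketch using standard settings; the `lock_timeout` line is optional):

```sql
-- Run in the same session as the DDL:
SET statement_timeout = 0;    -- no statement timeout for this session
SET lock_timeout = '10s';     -- but give up quickly if the lock can't be acquired
ALTER TABLE a DROP COLUMN b;
```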
2 Answers
That query should work nearly instantly, because it only hides the column without trying to do any actual work dropping it. From the `ALTER TABLE` doc:

> The `DROP COLUMN` form does not physically remove the column, but simply makes it invisible to SQL operations.

If there's partitioning and/or inheritance set up on it, then unless you run `ALTER TABLE ONLY a DROP COLUMN b;`, it'll try to cascade through the whole structure. In that case, drop it on just the parent, then process all descendants. Similar story with foreign key references to this column, if there are any: you might want to drop those FK columns one by one, then drop this one without having to `CASCADE`.

You didn't explicitly exclude these, so I'm just making sure – if you had partitioning, I guess you'd mention it, and an FK is unlikely on a long text column. It'd also require a `unique` constraint, which would automatically create an index to enforce it – and you did say there are none on that column.

Other than that, check `pg_locks` and/or `pg_stat_activity` to see what's blocking you. The first thing `ALTER TABLE` does is request and wait for a lock.

So it's probably sitting there staring at a crowd of sessions doing something on that table, waiting for them to finish, while new queries begin to queue up behind it with any other lock request. This also means you might want to take a look at `auto_explain` and see who's patient enough to wait you out, and who's probably wasting your resources on queries that routinely fail on timeout. If you waited 30 minutes for the lock, it means someone has been holding it for 30 minutes.

It's more likely you're dealing with long transactions holding on to locks longer than they need to, while none of the individual statements inside them takes that long. There are `transaction_timeout`, `lock_timeout` and `idle_in_transaction_session_timeout` that you can tweak to clear those off automatically, but ideally it's the app that should be designed to keep client transactions as short as possible. If it works with `autocommit` off, someone, somewhere probably forgot to `commit` explicitly. Some pools even go beyond `discard all` and routinely close and re-establish connections upon release – if yours doesn't, it might be a good idea to enable that.

To hunt down the blocking sessions, you can use the example from the PostgreSQL Wiki.
If you're impatient, you can resort to `pg_cancel_backend()` or even `pg_terminate_backend()`.

---

A few other options:

- Update the column to NULL to reduce its size, then vacuum the table to reclaim space.
- Copy the data to a new table without the column, then replace the old table.
- If you're using version 13 or later, you can try `ALTER TABLE a DROP COLUMN b;` again.
- Try `pg_repack` (optional).
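The copy-and-replace option could be sketched as follows. Everything here is hypothetical: the surviving columns (`id`, `payload`) stand in for the real schema, and indexes, constraints, triggers, and grants would all need to be recreated by hand afterwards:

```sql
BEGIN;
LOCK TABLE a IN ACCESS EXCLUSIVE MODE;  -- block writers during the swap
CREATE TABLE a_new AS
    SELECT id, payload FROM a;          -- copy everything except column b
DROP TABLE a;
ALTER TABLE a_new RENAME TO a;
COMMIT;
```

Note that the `DROP TABLE` also discards anything that depends on the old table (views, foreign keys pointing at it), so this is only practical when those are few and easy to rebuild.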