This is the table schema:
CREATE TABLE public.page_by_category
(
    id integer NOT NULL DEFAULT nextval('page_by_category_id_seq'::regclass),
    page_id bigint NOT NULL,
    category_id bigint NOT NULL,
    weight integer NOT NULL,
    CONSTRAINT id_pk PRIMARY KEY (id),
    CONSTRAINT category_id_fkey FOREIGN KEY (category_id)
        REFERENCES public.category (id),  -- referenced table assumed; the REFERENCES clause was cut off
    CONSTRAINT page_id_fkey FOREIGN KEY (page_id)
        REFERENCES public.page (id)       -- referenced table assumed; the REFERENCES clause was cut off
)
This is the query that takes so long (it updates about 3 million rows):

UPDATE page_by_category SET weight=0;
Looking at pg_stat_activity, this is the relevant row (pid, elapsed time, user, query):

30366  "2 days 18:32:12.141453"  "user"  "UPDATE page_by_category SET weight=0"
How can I check whether the query is stuck? There is no heavy I/O or CPU usage: on my CentOS machine I am watching top and iotop to see whether the CPU or the disk is busy, but nothing goes above 5% usage.
PostgreSQL version: "PostgreSQL 10.7 on x86_64-pc-linux-gnu"
2 Answers
This should not take much time. Kill the process, execute the query again, and check whether it gets stuck again.
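For example, with the pid from your pg_stat_activity output:

SELECT pg_cancel_backend(30366);  -- politely asks the backend to cancel its current query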
If you don’t know the pid, you can try a query like the one below:
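SELECT pid, usename, state, query
FROM pg_stat_activity
WHERE state <> 'idle';  -- lists every session that is currently doing something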
Then find the process you want to kill.
If the process is still there, then try:
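SELECT pg_terminate_backend(30366);  -- terminates the whole backend; its open transaction is rolled back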
(If possible, restart the DB after killing the query and then try it again.)
Since the process ID of your hanging update is 30366, you should look for open transactions that hold a lock that blocks the statement:
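-- pg_blocking_pids() is available since PostgreSQL 9.6, so it works on 10.7
SELECT pid, usename, state, query
FROM pg_stat_activity
WHERE pid = ANY (pg_blocking_pids(30366));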
Figure out what is wrong with these connections and why they hold locks for such a long time.
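To dig deeper, pg_locks shows which locks are held on the table; a query along these lines works:

SELECT pid, locktype, mode, granted
FROM pg_locks
WHERE relation = 'public.page_by_category'::regclass;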
To kill them, run
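SELECT pg_terminate_backend(?????);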
where ????? is one of the blocking process IDs found with the above query.

If it is not a lock that blocks your query, the following possibilities remain:
- you have terribly slow storage
- you have expensive row-level triggers (see the check below)
- you have quite a lot of indexes, all of which may have to be updated (see the check below)
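The last two points can be checked directly in the system catalogs, for example:

-- user-defined triggers on the table
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'public.page_by_category'::regclass
  AND NOT tgisinternal;

-- indexes on the table
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
  AND tablename = 'page_by_category';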