
From inside a procedure I need to carve off batches of 4k records from one big table, so that each batch can be processed by its own cursor.

UPDATE queue
SET `status` = 'enqueued', 
    `hash` = new_hash  -- previously generated hash
WHERE 
    `status` = 'new'
ORDER BY `column` LIMIT 4000;

But sometimes I get a deadlock and lose the "enqueued" records, because this runs from multiple parallel connections.

How can I make sure each thread updates its own unique batch of 4k records?

2 Answers


  1. Deadlocks are hard to prevent if you are executing locking statements on overlapping sets of rows in concurrent sessions.

    To make sure you can do updates in multiple threads without deadlocks, you would have to force all the threads to go one at a time, for example by using a global write lock. You could use a table lock with LOCK TABLES, or a user lock with GET_LOCK() in MySQL; a sketch of the GET_LOCK() approach follows at the end of this answer.

    Another solution is to do the work in a single thread, so there is no contention. This ends up taking longer, probably as long as if you had to serialize access for multiple threads.

    In theory, you can avoid deadlocks by making sure concurrent sessions work in the same direction. That is, all of them should be working on batches of primary keys in ascending order. But this is usually more difficult than it sounds.

    The final strategy is not to avoid the deadlocks at all. Instead, trap the exception when a session has its transaction rolled back, and retry as needed.
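
    A minimal sketch of the GET_LOCK() approach, assuming it runs inside the questioner's stored procedure; the lock name 'queue_enqueue' and the 10-second timeout are illustrative choices, not part of the original question:

    IF GET_LOCK('queue_enqueue', 10) = 1 THEN   -- wait up to 10 s for the app-level lock
        UPDATE queue
        SET `status` = 'enqueued',
            `hash` = new_hash
        WHERE `status` = 'new'
        ORDER BY `column`
        LIMIT 4000;

        DO RELEASE_LOCK('queue_enqueue');       -- let the next session claim its batch
    END IF;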

  2. If you have to fight with large tables, only practice will teach you the best way to deal with deadlocks, especially in a dynamic environment.
    A good approach for me would be to wrap the query in a retry loop that checks the outcome of the query on each attempt, as in the sketch below.
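
    A minimal sketch of such a retry loop in MySQL stored-procedure syntax; the three-attempt limit is illustrative, and 1213 is MySQL's ER_LOCK_DEADLOCK error code:

    -- declarations go at the top of the procedure's BEGIN ... END block
    DECLARE attempts INT DEFAULT 0;
    DECLARE deadlocked BOOL DEFAULT FALSE;
    DECLARE CONTINUE HANDLER FOR 1213 SET deadlocked = TRUE;  -- trap ER_LOCK_DEADLOCK

    REPEAT
        SET deadlocked = FALSE;
        SET attempts = attempts + 1;

        UPDATE queue
        SET `status` = 'enqueued',
            `hash` = new_hash
        WHERE `status` = 'new'
        ORDER BY `column`
        LIMIT 4000;

    UNTIL NOT deadlocked OR attempts >= 3
    END REPEAT;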
