
There is a small table with 10 rows, and sometimes a simple query takes more than 1 second.

Query_time: 1.874935 Lock_time: 1.370841

UPDATE table SET 
      value = value + 1 
   WHERE 
      key = operation_key 
   LIMIT 1;

The query is wrapped in a transaction, and the update is done by primary key.
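For context, a minimal sketch of what such a transaction presumably looks like (the table name `counters` and columns `op_key`, `cnt` are placeholders, not the real schema):

```sql
START TRANSACTION;
-- ... other page-creation work inside the same transaction ...
UPDATE counters
   SET cnt = cnt + 1
 WHERE op_key = 42   -- lookup by primary key
 LIMIT 1;
COMMIT;
```

Note that the row lock taken by the UPDATE is held until COMMIT, so any other work done inside the same transaction extends how long other sessions must wait.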

Please tell me what could cause the query to run for so long. We usually notice this under increased testing load: execution is delayed for a fraction of a second, after which it returns to normal.

This causes the error "Lock wait timeout exceeded; try restarting transaction".

I noticed that this behavior occurs when load testing begins. When pages are created, this field is incremented inside the transaction; most of the time it is fast.


Answers


  1. A simple query on a small table in MySQL can take more than 1 second for various reasons: a slow network connection, insufficient resources, an inefficient query, locking and blocking, or software bugs. Identify the root cause first, then take the appropriate measure — optimizing resources, query structure, indexes, database design, or the isolation level, or upgrading to a newer MySQL version.

  2. The root cause seems to be in your question itself…

    Query_time: 1.874935 Lock_time: 1.370841

    That means most of the time is spent waiting to acquire the lock needed to update the record, because the lock is already held by another session.

    this causes an error "Lock wait timeout exceeded; try restarting transaction"

    This points to the same thing: your UPDATE query cannot acquire the required lock within the configured timeout (`innodb_lock_wait_timeout`). Many sessions appear to be competing for the same lock.

    I noticed that this behavior occurs when load testing begins

    When load testing starts, the operations performed by the load-testing scripts hit the same table heavily: many parallel user sessions all try to lock the same table and update the same records simultaneously. Effectively, most of the query time is spent waiting to acquire those locks.
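    You can confirm this while the stall is happening. A sketch, assuming MySQL 8.0 with the `sys` schema available (on 5.7 the equivalent information is in `information_schema.innodb_lock_waits`):

    ```sql
    -- Who is waiting on whom, and which statements are involved:
    SELECT waiting_pid, waiting_query, blocking_pid, blocking_query
      FROM sys.innodb_lock_waits;

    -- The timeout that produces the "Lock wait timeout exceeded" error
    -- (default is 50 seconds):
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
    ```

    If `blocking_query` repeatedly shows the same counter UPDATE (or a long transaction holding that row), the hot-row contention theory is confirmed.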

    There is no direct fix for the contention itself, so reconsider the design: why does every operation need to update the same row?
    Instead of incrementing a counter in an existing record per operation key, you could have a separate table into which a record is always inserted for each operation key.
    Later, a COUNT of the records for a specific operation key gives you the total.

    This takes more disk space, but inserts do not contend for a shared row, so it should improve performance under load.
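    A minimal sketch of the insert-then-count approach; the table `operation_events` and its columns are illustrative names, not part of the original schema:

    ```sql
    -- Append-only events table: each increment becomes an INSERT,
    -- so concurrent sessions never fight over one hot row.
    CREATE TABLE operation_events (
      id         BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      op_key     INT NOT NULL,
      created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
      KEY idx_op_key (op_key)   -- makes the COUNT below an index scan
    );

    -- In place of the UPDATE ... SET value = value + 1:
    INSERT INTO operation_events (op_key) VALUES (42);

    -- When the total is actually needed:
    SELECT COUNT(*) FROM operation_events WHERE op_key = 42;
    ```

    If the count is read far more often than it is written, you can periodically roll these rows up into a summary table, but under write-heavy load testing the append-only form is what relieves the lock contention.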
