
I recently got a requirement, which is as follows.

There is a table with at most 50,000 rows. Each row has to be read and a process run on it depending on its data, which takes approximately 30-40 seconds per row.

Since this is a time-consuming task that also cannot run in a single thread or process (the process ends abruptly after an hour or so, with no apparent reason), I have employed multiple processes.

It works as follows. There is a process-counter variable set to 30. The system calls the process 30 times, with a gap of 2 seconds between calls. Each process reads 30 rows from the table (LIMIT 30) and updates a flag so that no other process reads the same rows. It then waits 10 minutes until the next batch call is made.
I used the C# lock() statement to isolate each call, and it works fairly well.

But the other day, the network team added a load balancer to the hosting system. Now multiple server instances are created when load is high, and all of these servers take part in the task described above. There seems to be no option but to lock the table at each process call.
I want to lock the table, read 30 rows, update a flag, and then unlock the table.
I tried IsolationLevel.RepeatableRead and it seems to work. Is there a better way to lock the table, read, update, and unlock?
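
To illustrate, the claim step is roughly shaped like the sketch below (simplified, assuming the MySql.Data connector; the jobs table, claimed flag, and connection handling are placeholders, not the actual code):

    // Simplified sketch of the current claim step. Placeholder schema: a `jobs`
    // table with an `id` column and a `claimed` flag. Each worker claims 30
    // unclaimed rows inside a RepeatableRead transaction and flags them.
    // Note: nothing here stops two servers from selecting the same 30 rows
    // before either commits, which is exactly the problem once the load
    // balancer spins up multiple instances.
    using System.Collections.Generic;
    using System.Data;
    using MySql.Data.MySqlClient;

    public static class RowClaimer
    {
        public static List<long> ClaimRows(string connectionString)
        {
            var ids = new List<long>();
            using (var conn = new MySqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction(IsolationLevel.RepeatableRead))
                {
                    using (var select = new MySqlCommand(
                        "SELECT id FROM jobs WHERE claimed = 0 LIMIT 30", conn, tx))
                    using (var reader = select.ExecuteReader())
                    {
                        while (reader.Read())
                            ids.Add(reader.GetInt64(0));
                    }

                    if (ids.Count > 0)
                    {
                        // The ids come straight from the table, so interpolating
                        // them into the IN (...) list is safe here.
                        using (var update = new MySqlCommand(
                            $"UPDATE jobs SET claimed = 1 WHERE id IN ({string.Join(",", ids)})",
                            conn, tx))
                        {
                            update.ExecuteNonQuery();
                        }
                    }

                    tx.Commit();
                }
            }
            return ids;
        }
    }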

Any help would be greatly appreciated! Thanks.

2 Answers


  1. How to explicitly lock a MySQL table

    Is there a better way to lock the table, read, update, and unlock?

    You can do it using the LOCK TABLES statement.
    This statement can be used to acquire different types of locks on one or more tables: read locks, write locks, or a combination of the two.

    LOCK TABLES explicitly acquires table locks for the current client session. Table locks can be acquired for base tables or views.
    You must have the LOCK TABLES privilege and the SELECT privilege for each object to be locked.

    The syntax goes like this:

    LOCK TABLES tablename WRITE;
    
    # perform other queries / tasks
    
    UNLOCK TABLES;
    

    As per the MySQL documentation:

    — If you lock a table explicitly with LOCK TABLES, any tables used in triggers are also locked implicitly.

    — If you lock a table explicitly with LOCK TABLES, any tables related by a foreign key constraint are opened and locked implicitly.

    — UNLOCK TABLES explicitly releases any table locks held by the current session. LOCK TABLES implicitly releases any table locks held by the current session before acquiring new locks.

    — Another use for UNLOCK TABLES is to release the global read lock acquired with the FLUSH TABLES WITH READ LOCK statement, which enables you to lock all tables in all databases.
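
    A rough C# sketch of that sequence, assuming MySql.Data and a placeholder jobs table with an id column and a claimed flag (names are illustrative). The key point is that the lock belongs to the client session, so LOCK TABLES, the queries, and UNLOCK TABLES must all run on the same open connection:

    // Sketch only: lock the table, claim 30 rows, flag them, then unlock.
    // LOCK TABLES is per-session, so every command below must reuse the
    // same MySqlConnection; a second connection would not hold the lock.
    using System.Collections.Generic;
    using MySql.Data.MySqlClient;

    public static class TableLockClaimer
    {
        public static List<long> ClaimRowsWithTableLock(string connectionString)
        {
            var ids = new List<long>();
            using (var conn = new MySqlConnection(connectionString))
            {
                conn.Open();
                try
                {
                    using (var lockCmd = new MySqlCommand("LOCK TABLES jobs WRITE", conn))
                        lockCmd.ExecuteNonQuery();

                    using (var select = new MySqlCommand(
                        "SELECT id FROM jobs WHERE claimed = 0 LIMIT 30", conn))
                    using (var reader = select.ExecuteReader())
                    {
                        while (reader.Read())
                            ids.Add(reader.GetInt64(0));
                    }

                    if (ids.Count > 0)
                    {
                        using (var update = new MySqlCommand(
                            $"UPDATE jobs SET claimed = 1 WHERE id IN ({string.Join(",", ids)})", conn))
                            update.ExecuteNonQuery();
                    }
                }
                finally
                {
                    // Release the lock even if the SELECT or UPDATE throws.
                    using (var unlock = new MySqlCommand("UNLOCK TABLES", conn))
                        unlock.ExecuteNonQuery();
                }
            }
            return ids;
        }
    }

    Keep the locked section as short as possible; while jobs is WRITE-locked, every other session (on every server) blocks on that table.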

  2. You shouldn't lock the table, especially for this long-running process. It's going to be prone to errors.

    You need

    1. A queue holding an identifier for each row; run a job for each item in the queue. If a job fails, you can re-run it. Go through the queue one item at a time so you don't have one enormous multi-hour process; instead you have 50,000 independent 40-second jobs, each of which can be retried if it fails.

    2. A replica database. Your replica should be read-only on the tables you want protected. That way nothing else can interfere with the table you're writing to, and you get high availability.

    Here's a summary of what the process would look like (a rough sketch of the worker loop follows the list).

    1. Create the queue with IDs for each row.
    2. Fire off a job to read the first entry from the queue.
    3. Run the 40-second job on that one row and save the results.
    4. If there was an error, retry that row, or push it to the back of the queue and retry later.
    5. Your replication DB should pick up the changes so you don’t have to worry about the other table being messed with.
    6. The next job fires and works through the queue; rinse and repeat.
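
    A rough sketch of the worker loop for steps 2 to 4 (IJobQueue and the processRow callback are hypothetical placeholders; in practice the queue would live in a shared service that every server instance can pull from, not in process memory):

    // Sketch of the per-item worker loop: take one row ID, run the ~40-second
    // job on it, and on failure push the ID to the back of the queue so it can
    // be retried later. IJobQueue and processRow are placeholders.
    using System;

    public interface IJobQueue
    {
        bool TryDequeue(out long rowId);   // next row ID, if any remain
        void Enqueue(long rowId);          // push a failed ID to the back
    }

    public static class Worker
    {
        public static void Run(IJobQueue queue, Action<long> processRow)
        {
            while (queue.TryDequeue(out var rowId))
            {
                try
                {
                    // The ~40-second job for one row; it saves its own results.
                    processRow(rowId);
                }
                catch (Exception)
                {
                    // Step 4: put the failed row at the back of the queue.
                    // A real implementation would also cap the retry count.
                    queue.Enqueue(rowId);
                }
            }
        }
    }

    With this shape, a crash in one 40-second job only costs that one row, and new server instances simply become extra consumers of the same queue.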