
We’re running a custom ad server and, in some cases, we store and increment/decrement counts and aggregated values in MySQL as an optimization.
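
To give a concrete picture, each ad event ends up as a handful of single-row increments, roughly like this (table and column names here are illustrative, not our actual schema):

    UPDATE ad_stats       SET impressions = impressions + 1 WHERE ad_id = 42;
    UPDATE campaign_stats SET impressions = impressions + 1 WHERE campaign_id = 7;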

I understand that keeping counts in MySQL is not ideal, although I’m not 100% sure why. My guess is that writes to disk are slower than writes to memory, so keeping counts in Redis would work better than keeping them in MySQL. But that would add a few extra layers of complexity to our system and would require serious changes to multiple components.

In the past, we saw the connection count piling up and CPU utilization climbing, but after caching most of the requests and switching to faster disks we managed to push that limit back. We’re currently handling 10-20 million requests per day with a master-slave MySQL configuration, where writes go to the master and most reads come from the slave. Adding another slave server would not be a problem at all.

My concern is the writes on the master, although each is just a tiny operation: incrementing a counter in a row in each of 5 tables.

So my question is: how many such small writes per second can I expect to safely perform in MySQL before hitting a limit, based on your experience? I hear of people handling 100k simultaneous connections, but what happens if all of those connections try to increment a row in the same second? And what would be a good strategy for scaling this up when needed?

Thanks!

2 Answers


  1. If you insist on using MySQL, then I suggest you use in-memory tables (the MEMORY storage engine):

    CREATE TABLE TableName (Column1 INT) ENGINE = MEMORY;
    

    Reference: the MySQL documentation on the MEMORY storage engine.
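
    A minimal sketch of how a counter could live in a MEMORY table (table and column names are made up for illustration):

        CREATE TABLE ad_counts (
            ad_id  INT NOT NULL PRIMARY KEY,
            clicks INT NOT NULL DEFAULT 0
        ) ENGINE = MEMORY;

        -- Create the row on first sight, increment it on every later hit
        INSERT INTO ad_counts (ad_id, clicks) VALUES (42, 1)
            ON DUPLICATE KEY UPDATE clicks = clicks + 1;

    Keep in mind that MEMORY tables are not persistent: their contents are lost on a server restart, so you would need to flush the counts to a disk-backed table periodically.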

    BTW

    So my question is: how many such small writes per second can I expect
    to safely perform in MySQL before hitting a limit, based on your
    experience?

    I don’t think anyone here can give you a clear answer to that.

  2. For InnoDB, you will probably be OK with SSDs.

    20M UPDATEs per day ≈ 230/second on average (20,000,000 / 86,400 seconds); more during spikes.

    Spinning disk (HDD): 100 writes per second.

    SSD: maybe 1000 writes per second.

    Batched INSERTs run faster.
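
    For example, many single-row increments can be folded into one statement, assuming a hypothetical counter table clicks with a unique key on ad_id:

        -- Insert new counters and bump existing ones in a single statement
        INSERT INTO clicks (ad_id, cnt)
        VALUES (1, 3), (2, 5), (7, 1)
        ON DUPLICATE KEY UPDATE cnt = cnt + VALUES(cnt);

    One statement means one round of commit overhead instead of three.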

    Some variants of RAID (those that stripe writes across multiple drives) run faster.

    A RAID controller with a battery-backed write cache runs really fast for writes (until the cache is saturated).

    For keeping "likes" or "views" or "clicks", it is probably better to keep the counters in a separate table from the rest of the columns about the page/product/whatever. That way, updating the counter and working with the other data interfere with each other less.
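
    A minimal sketch of that split (names invented for illustration):

        -- Wide, rarely-changing descriptive data
        CREATE TABLE products (
            product_id  INT NOT NULL PRIMARY KEY,
            name        VARCHAR(255),
            description TEXT
        ) ENGINE = InnoDB;

        -- Hot, frequently-updated counters in their own narrow table
        CREATE TABLE product_counters (
            product_id INT NOT NULL PRIMARY KEY,
            views      INT NOT NULL DEFAULT 0,
            clicks     INT NOT NULL DEFAULT 0
        ) ENGINE = InnoDB;

        UPDATE product_counters SET views = views + 1 WHERE product_id = 42;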

    There are several settings to consider tuning:

    innodb_flush_log_at_trx_commit = 2 (trades durability for speed: the redo log is flushed to disk about once per second instead of at every commit)

    sync_binlog = 0 (don’t force the binary log to disk at every commit)
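
    Both variables are dynamic, so you could try them at runtime (put them in my.cnf to survive a restart):

        SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- redo log flushed ~once/second instead of per commit
        SET GLOBAL sync_binlog = 0;                     -- let the OS decide when to sync the binary log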

    Combine multiple actions into a single BEGIN..COMMIT transaction. But be cautious about deadlocks, etc.
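
    For example, the five increments from the question could share one transaction (table names invented):

        BEGIN;
        UPDATE ad_stats       SET impressions = impressions + 1 WHERE ad_id = 42;
        UPDATE campaign_stats SET impressions = impressions + 1 WHERE campaign_id = 7;
        -- ... the other three counter tables ...
        COMMIT;  -- one log flush covers all five increments

    Keep such transactions short, and always touch the tables in the same order to reduce the chance of deadlocks.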

    Perhaps you did not mean 100K connections? That is a huge number. More than a few dozen running threads (irrespective of the number of sleeping threads) is quite a lot.

    Another approach to counting clicks is not to record them in real time, but to scrape the web server log every, say, hour. Have a separate process count all the clicks for the hour (or every 5 minutes or whatever) into a single number to feed to an UPDATE. That is more work for a separate server, but very little effort for the database.
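
    As a sketch, assuming the scraping process loads its tallies into a staging table (hourly_counts is hypothetical):

        -- Fold an hour's worth of clicks into the live counters in one statement
        UPDATE ad_stats s
        JOIN hourly_counts h ON h.ad_id = s.ad_id
        SET s.clicks = s.clicks + h.clicks;

        TRUNCATE TABLE hourly_counts;  -- ready for the next interval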
