
I have 1,000 components, each with its own table row that it updates every 10 seconds.
I currently use Redis with HSET and HGET.
Is it reasonable to use YSQL for this?
Will the frequent updates create bloat?
Is automatic full compaction enabled in the stable release, and will it take care of that?
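
For context, a minimal sketch of the pattern I have in mind in YSQL (table, row, and value names are just placeholders):

    create table component_state(name text primary key, value bigint default 0);

    -- each component's periodic Redis HSET would become an upsert like this:
    insert into component_state(name, value) values ('component-42', 123)
    on conflict (name) do update set value = excluded.value;

    -- and HGET becomes a single-row read:
    select value from component_state where name = 'component-42';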

2 Answers


  1. Chosen as BEST ANSWER

    I did a quick test by running:

    create table counter(name text primary key, value bigint default 0);
    insert into counter(name) select generate_series(1,100000);
    select pg_size_pretty(pg_table_size('counter')) ;
    explain (analyze, costs off, dist) update counter set value=value+1 ;
    \watch 0.001
    

    This continuously updates the 100,000 counters every 5 seconds (the rate you asked about). The time to update did not increase. The time to query one row stays the same (10 milliseconds). The table stays at 5 SST files (the rocksdb_level0_file_num_compaction_trigger threshold), about 150MB in total, which shows that there is no need for a full compaction. The WAL stays at around 600MB because it keeps 15 minutes of changes (timestamp_history_retention_interval_sec) in case a replica is temporarily down and has to re-synchronize.
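
    You can check the single-row read the same way with a point query; a minimal sketch (the key value is just an example, since the names were generated from integers):

    -- 'dist' shows the distributed storage read timing for this single-row lookup
    explain (analyze, costs off, dist) select value from counter where name = '42';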



  2. I did a similar test some time ago:
    https://dev.to/yugabyte/in-memory-counters-with-yugabytedb-2p54
    It shows the same thing: universal compaction works well with this pattern.
