
In this configuration:
64 GB RAM, 16 cores, Linux CentOS with Cassandra 3.1

row_cache_size_in_mb is currently set to zero (cassandra.yaml).
It seems to be working well, since the OS page cache is used for caching reads.

So, are there any benefits or risks (to the JVM heap) in increasing this value
versus relying on Linux page caching?

2 Answers


  1. The row cache is used only for tables that explicitly enable caching of row data; it is not enabled by default. It is usually reserved for the most frequently read data that does not change very often, because changing the data adds performance overhead from invalidating cache entries and re-populating them from disk. You can read more in the "best practices" series of documents published by DataStax.

    Regarding the relation between the row cache and Linux's buffer cache: the main distinction is that the row cache keeps full rows, which may have been assembled from multiple SSTables, while the buffer cache keeps chunks of the SSTables, which are often compressed, so Cassandra needs to decompress them again and again. Also, if a partition is scattered over multiple SSTables, Cassandra will need to check all of them when reading the row.
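
    As a minimal sketch of enabling it per table (the keyspace/table name ks.users and the rows_per_partition value are placeholders, not taken from the question), the statement looks roughly like this:

    -- ks.users and '100' are placeholder values
    ALTER TABLE ks.users
    WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};

    Note that this only takes effect if row_cache_size_in_mb is also set to a non-zero value in cassandra.yaml; with it at zero, as in your configuration, the row cache stays disabled node-wide.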

  2. It's all about the workload and the application's query pattern.

    If your application frequently reads a small subset of rows (hot data), and reads each row in its entirety, enabling the row cache can bring a significant performance benefit by avoiding disk reads. There are row cache hit rate metrics exposed over JMX that can show how performance varies between different row and key cache sizes for your application's load.

    If you haven't manually configured the row cache, the caching settings in a table description should show the default:

    caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
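
    To check what a given table currently uses, the same caching map can also be read back from the schema tables in Cassandra 3.x (keyspace/table names below are placeholders):

    -- placeholder keyspace/table; returns the caching map for that table
    SELECT caching
    FROM system_schema.tables
    WHERE keyspace_name = 'ks' AND table_name = 'users';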
    

    If enabled, the cache size should be proportional to the in-memory size of the row data and its column values over the hot subset. For a rough estimate, use nodetool cfstats: for each table, multiply the row cache size (the number of rows in the cache) by the compacted row mean size, then sum the results. For example, 200,000 cached rows with a mean row size of 1 KB works out to roughly 200 MB.

    As with any memory allocation, it has an impact on garbage collection, though there are partially and fully off-heap implementation classes available. From the DataStax docs:

    row_cache_class_name
    Default: disabled. The classname of the row cache provider to use. Valid values: OHCProvider (fully off-heap) or SerializingCacheProvider (partially off-heap).
    

    As the entire row is cached, it can be expensive. One thing to note: if rows are frequently evicted from the row cache (because the size is set too low or the row data changes frequently), the garbage collector will definitely have more to do.

    Bottom line: for the row cache to pay off, a small set of rows must be hot, and the row cache provides the most benefit when the entire row is accessed at once. If an off-heap implementation is used, it poses little risk to the heap. In the end, do some load testing and capture latency metrics to determine the cache size that best fits your needs.
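
    One way to sanity-check this during load testing, assuming cqlsh and a placeholder query against a table that has row caching enabled: turn tracing on for a few representative reads and inspect the trace activities, which should indicate whether the read was served from the row cache or had to go to the SSTables.

    TRACING ON
    -- placeholder keyspace/table/predicate
    SELECT * FROM ks.users WHERE user_id = 42;
    TRACING OFF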
