
I am using Redis as a datastore rather than a cache, but there is a maxmemory limit set. My understanding is that maxmemory specifies the RAM Redis can use; shouldn't it swap the data back to disk once the memory limit is reached?
I have a mixture of keys: some have an expiry set and others don't.
I have tried both volatile-lru and allkeys-lru; as described in the documentation, each removes old keys according to its policy.
What configuration should I use to avoid data loss? Should I set an expiry on all keys and use volatile-lru? What am I missing?

3 Answers


  1. In general as a rule of thumb:

    1. Use the allkeys-lru policy when you expect a power-law distribution in
      the popularity of your requests, that is, you expect that a subset of
      elements will be accessed far more often than the rest. This is a good
      pick if you are unsure.
    2. Use allkeys-random if you have cyclic access where all the keys are
      scanned continuously, or when you expect the distribution to be
      uniform (all elements equally likely to be accessed).
    3. Use volatile-ttl if you want to be able to provide hints to Redis
      about what are good candidates for expiration by using different TTL
      values when you create your cache objects.
    4. The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance for both caching and to have a set of persistent keys. However it is usually a better idea to run two Redis instances to solve such a problem.

    As given in the documentation: Using Redis as an LRU cache
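
    For reference, selecting one of these policies is done in redis.conf. A
    minimal fragment (the memory value here is an illustrative assumption,
    not a recommendation) might look like:

    ```conf
    # Cap Redis memory usage at 2 GB (illustrative value)
    maxmemory 2gb

    # Evict the least recently used key across the whole keyspace when the
    # limit is reached (the "good pick if you are unsure" from the list above)
    maxmemory-policy allkeys-lru
    ```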

  2. Swapping out memory to disk (virtual memory) was deprecated in Redis 2.4 and removed in 2.6. Most likely, you are not using such an old version.

    You control what Redis does when memory is exhausted with the maxmemory and maxmemory-policy settings in redis.conf. Swapping memory out to disk is not an option in recent Redis versions.

    If Redis can't remove keys according to the policy, or if the policy is
    set to 'noeviction', Redis will start replying with errors to commands
    that would use more memory, such as SET and LPUSH, while continuing to
    reply to read-only commands such as GET.

    If maxmemory is reached, you lose data only if the eviction policy set in maxmemory-policy tells Redis to evict keys and how to select them (volatile or allkeys; lfu/lru/ttl/random). Otherwise, Redis starts rejecting write commands to preserve the data already in memory; read commands continue to be served.
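
    The contrast above can be sketched as a toy model in plain Python (this
    is not Redis code, just an illustration of the two behaviours: eviction
    under allkeys-lru versus write rejection under noeviction):

    ```python
    from collections import OrderedDict

    class TinyStore:
        """Toy bounded key store: 'allkeys-lru' evicts the least recently
        used key when full; 'noeviction' rejects new writes instead."""

        def __init__(self, max_keys, policy="noeviction"):
            self.max_keys = max_keys
            self.policy = policy
            self.data = OrderedDict()  # ordered oldest-access first

        def set(self, key, value):
            if key not in self.data and len(self.data) >= self.max_keys:
                if self.policy == "allkeys-lru":
                    # Drop the least recently used key to make room
                    self.data.popitem(last=False)
                else:
                    # noeviction: refuse the write, keep existing data
                    raise MemoryError(
                        "OOM command not allowed when used memory > 'maxmemory'"
                    )
            self.data[key] = value
            self.data.move_to_end(key)  # writes refresh recency

        def get(self, key):
            if key in self.data:
                self.data.move_to_end(key)  # reads refresh recency
                return self.data[key]
            return None
    ```

    With allkeys-lru, filling the store past its limit silently drops the
    coldest key; with noeviction, the write fails loudly and existing data
    survives, which is why noeviction is the safer choice for a datastore.
    
    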

    You can run Redis without a maxmemory setting (the default), in which case it will keep using memory until the OS's memory is exhausted.

    If your operating system has virtual memory enabled, and the maxmemory setting allows Redis to grow beyond the physical memory available, then your OS (not Redis) will start swapping memory out to disk, and you can expect a severe performance drop.
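
    Putting this answer together, if the goal is a datastore where no key
    may be silently dropped, a conservative redis.conf (the memory value is
    an illustrative assumption for an 8 GB host) would be:

    ```conf
    # Size maxmemory safely below the host's physical RAM (example value)
    # so the OS never needs to swap Redis pages to disk
    maxmemory 6gb

    # Datastore use: reject writes loudly instead of evicting data
    maxmemory-policy noeviction
    ```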

  3. Do not set that parameter if you are using Redis as a datastore; maxmemory is intended for caching scenarios.
