
MemoryCache has a Set method that lets me specify, via the CacheItemPolicy parameter, a delegate that is called before a cache entry is removed from the cache.

This can be used to auto-refresh the cache at regular intervals without employing Hangfire or some other task runner.
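For context, this is the pattern I mean, as a rough sketch assuming System.Runtime.Caching's MemoryCache; SelfRefreshingCache, MakePolicy and LoadData are illustrative names, and the UpdateCallback re-loads the value just before the entry would expire:

    using System;
    using System.Runtime.Caching;

    static class SelfRefreshingCache
    {
        public static void Set(string key) =>
            MemoryCache.Default.Set(key, LoadData(key), MakePolicy());

        private static CacheItemPolicy MakePolicy() => new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5),
            // Called just before the entry would be removed; supplying an
            // UpdatedCacheItem replaces the expiring entry with fresh data.
            UpdateCallback = args =>
            {
                args.UpdatedCacheItem = new CacheItem(args.Key, LoadData(args.Key));
                args.UpdatedCacheItemPolicy = MakePolicy(); // keep refreshing on every expiry
            }
        };

        private static object LoadData(string key) => DateTime.Now; // placeholder loader
    }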

How can I implement this in .NET using StackExchange.Redis?

I have not been able to find any methods in the Redis command reference that would suit my purpose, and all the implementations of ObjectCache that I have found online throw a NotSupportedException here:

https://github.com/justinfinch/Redis-Object-Cache/blob/master/src/RedisObjectCache/RedisCache.cs
https://www.leadtools.com/help/sdk/v20/dh/to/azure-redis-cache-example.html
https://github.com/Azure/aspnet-redis-providers/pull/72/commits/2930ede272fe09abf930208dfe935c602c1bb510

2 Answers


  1. There is no such thing built into Redis.

    But Redis does support keyspace notifications, which provide a way to subscribe to the expired event.
    You can register a client that reacts to that event and refreshes the cache (see the sketch at the end of this answer).

    Another option is to use RedisGears and register on the expired event (register(eventTypes=['expired'])), so that each time an expire event is triggered, your function running embedded in Redis refreshes the data.
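
    A minimal sketch of the keyspace-notification approach with StackExchange.Redis (the channel name assumes database 0, allowAdmin is only needed if you set the config from the client, and LoadFromSource stands in for whatever actually reloads your data; note that the value is already gone by the time the event fires, so you must reload from the source of truth):

    using System;
    using StackExchange.Redis;

    class ExpiredKeyRefresher
    {
        static void Main()
        {
            // allowAdmin=true lets us call CONFIG SET from the client
            var muxer = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");

            // Enable expired-event notifications ("Ex"); can also be set once in redis.conf
            muxer.GetServer("localhost", 6379).ConfigSet("notify-keyspace-events", "Ex");

            var db = muxer.GetDatabase();

            // __keyevent@0__:expired is published with the expired key as the message (db 0)
            muxer.GetSubscriber().Subscribe("__keyevent@0__:expired", (channel, key) =>
            {
                // Reload the data and put it back with a fresh TTL
                string freshValue = LoadFromSource((string)key);
                db.StringSet((string)key, freshValue, TimeSpan.FromMinutes(5));
            });

            Console.ReadLine(); // keep the subscription alive
        }

        static string LoadFromSource(string key) => $"fresh value for {key}"; // placeholder
    }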

  2. The typical caching pattern is to try the cache first and, when the data is not found, load it from the source and store it:

    private static readonly object _loadLock = new object();

    public T Get<T>(string key) where T : class
    {
        // Try the cache first (I am not going to go into the implementation
        // of GetFromRedis for the purpose of this question)
        T ret = GetFromRedis<T>(key);
        if (ret == null)
        {
            // in the multi-server situation, this lock can be
            // implemented as an UPDATE record lock on a DB transaction ***
            lock (_loadLock)
            {
                // re-check after acquiring the lock: another thread may have loaded it already
                ret = GetFromRedis<T>(key);
                if (ret == null)
                {
                    ret = DataProvider.Load<T>();
                    SetInRedis(key, ret);
                }
            }
        }
        return ret;
    }
    

    *** I wouldn't put too much emphasis on that lock for multiple servers. You might end up loading the data twice or more on the first load, but your Redis SET can use When.NotExists, so the cached value will not be replaced when several threads (or servers) try to write it at the same time. This is on-demand caching as opposed to constant caching: in large applications there are parts that are not being used, so why populate the cache for them? Then users start hitting that part, and voila!
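
    For completeness, here is one possible shape of the GetFromRedis / SetInRedis helpers with StackExchange.Redis, assuming values are serialized as JSON (the class and helper names just mirror the placeholders above):

    using System;
    using System.Text.Json;
    using StackExchange.Redis;

    public class RedisCacheHelper
    {
        private readonly IDatabase _db;

        public RedisCacheHelper(IConnectionMultiplexer muxer) => _db = muxer.GetDatabase();

        public T GetFromRedis<T>(string key) where T : class
        {
            RedisValue value = _db.StringGet(key);
            return value.IsNullOrEmpty ? null : JsonSerializer.Deserialize<T>((string)value);
        }

        public void SetInRedis<T>(string key, T value)
        {
            // When.NotExists: only the first writer populates the key, so
            // concurrent loaders do not overwrite an entry that is already there
            _db.StringSet(key, JsonSerializer.Serialize(value),
                expiry: TimeSpan.FromMinutes(10), when: When.NotExists);
        }
    }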
