
The function below sends a request to a downstream service to retrieve some data, and the main service relies heavily on Redis. I want to implement a strategy that reduces the impact on the main service if both the downstream service and Redis encounter failures. Currently, my idea is to use a fallback Redis cache instance with a longer TTL, so that stale data can still be served if both the downstream service and the primary Redis cache fail:

var SendDownstreamServiceTypeRequest = func(ctx context.Context, param1 int64) ([]ds.ServiceType, error) {
    var rsp *Response
    cachedResult, err := getCachedInfo(ctx, param1)
    if err == nil {
        logger.SetLoggerContext(ctx, "UsingCache", true)
        rsp = &Response{Data: cachedResult}
    } else {
        fallbackCachedResult, fallbackErr := getFallbackCachedInfo(ctx, param1)
        if fallbackErr == nil {
            logger.SetLoggerContext(ctx, "UsingFallbackCache", true)
            rsp = &Response{Data: fallbackCachedResult}
        } else {
            // ... request from downstream service
            // ... 
            // add response data from downstream service to primary cache and fallback cache
            errPrimaryCache := setCachedInfo(ctx, param1, rsp.Data)
            if errPrimaryCache != nil {
                logger.SetLoggerContext(ctx, "PrimaryCacheSetError", errPrimaryCache.Error())
            }
            errFallbackCache := setFallbackCachedInfo(ctx, param1, rsp.Data)
            if errFallbackCache != nil {
                logger.SetLoggerContext(ctx, "FallbackCacheSetError", errFallbackCache.Error())
            }
        }
    }
    var result ds.Response
    err = json.Unmarshal(rsp.Data, &result.Data)
    if err != nil {
        return nil, err
    }
    return result.Data, nil
}
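
For reference, the primary and fallback cache helpers are thin wrappers around two Redis clients that differ mainly in TTL. A simplified sketch of roughly what they look like (go-redis shown here; the key format and TTL values are illustrative, and the real code differs):

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

var (
    primaryRedis  = redis.NewClient(&redis.Options{Addr: "primary-redis:6379"})
    fallbackRedis = redis.NewClient(&redis.Options{Addr: "fallback-redis:6379"})
)

const (
    primaryTTL  = 5 * time.Minute
    fallbackTTL = 24 * time.Hour // longer TTL so stale data can survive an outage
)

func cacheKey(param1 int64) string { return fmt.Sprintf("service-type:%d", param1) }

func getCachedInfo(ctx context.Context, param1 int64) ([]byte, error) {
    return primaryRedis.Get(ctx, cacheKey(param1)).Bytes()
}

func getFallbackCachedInfo(ctx context.Context, param1 int64) ([]byte, error) {
    return fallbackRedis.Get(ctx, cacheKey(param1)).Bytes()
}

func setCachedInfo(ctx context.Context, param1 int64, data []byte) error {
    return primaryRedis.Set(ctx, cacheKey(param1), data, primaryTTL).Err()
}

func setFallbackCachedInfo(ctx context.Context, param1 int64, data []byte) error {
    return fallbackRedis.Set(ctx, cacheKey(param1), data, fallbackTTL).Err()
}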

Is there any better fallback caching strategy that can be implemented in the main service to add more redundancy? Thanks

2 Answers


  1. A fallback Redis instance would work, but I think that’d be more costly as you’d have to maintain another Redis instance as well as pay the costs for hosting it.

    One alternative to that is to implement an in-memory cache in the service itself, storing the data in the memory of the main service. Some Go libraries that make this easy to implement are https://github.com/patrickmn/go-cache and https://github.com/allegro/bigcache. The data flow would then be: instance memory first, then the Redis cache, then the database/downstream service.
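
    For example, a minimal local-first lookup with patrickmn/go-cache could look like this (a sketch only; key names and TTLs are made up, getCachedInfo is the Redis helper from your question, and fetchFromDownstream stands in for your downstream call):

    import (
        "context"
        "fmt"
        "time"

        cache "github.com/patrickmn/go-cache"
    )

    // 1 minute default TTL, purge expired entries every 5 minutes.
    var localCache = cache.New(1*time.Minute, 5*time.Minute)

    func getServiceTypeData(ctx context.Context, param1 int64) ([]byte, error) {
        key := fmt.Sprintf("service-type:%d", param1)

        // 1. Instance memory.
        if v, found := localCache.Get(key); found {
            return v.([]byte), nil
        }

        // 2. Redis (your existing helper).
        if data, err := getCachedInfo(ctx, param1); err == nil {
            localCache.Set(key, data, cache.DefaultExpiration)
            return data, nil
        }

        // 3. Downstream service / database.
        data, err := fetchFromDownstream(ctx, param1) // hypothetical downstream call
        if err != nil {
            return nil, err
        }
        localCache.Set(key, data, cache.DefaultExpiration)
        return data, nil
    }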

    A downside of using an in-memory cache on the service is that if you have multiple instances of the same service, you need to keep the cached data in sync between them. So you might need a job that clears the in-memory cache every X interval (a sketch of that is below), or one that clears the stored data whenever an invalidation event happens; the latter can be done through asynchronous messaging (e.g. NSQ, RabbitMQ, Kafka, etc.).
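
    A sketch of that periodic cleanup, reusing the localCache from the sketch above (the interval is arbitrary):

    func startLocalCacheJanitor(done <-chan struct{}) {
        // Flush the whole in-memory cache every 10 minutes so stale entries never
        // outlive the interval, even without cross-instance invalidation messages.
        ticker := time.NewTicker(10 * time.Minute)
        go func() {
            defer ticker.Stop()
            for {
                select {
                case <-ticker.C:
                    localCache.Flush()
                case <-done:
                    return
                }
            }
        }()
    }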

    Hope this helps :)!

  2. If you use a low TTL duration, please analyze it first to find its optimum value (monitoring is essential).

    Other than that, you can take a local-cache-first approach. Instead of storing all the data in a local cache (if your data is too big), you can store only data-availability information using https://redis.io/docs/manual/keyspace-notifications/. The x flag will notify you about key expirations on Redis, and the $ flag covers string commands such as SET. You can listen to those events in a separate goroutine to track whether a key has expired in Redis (and when it has, atomically delete it from the local map). That way you avoid unnecessary I/O against the fallback Redis cache for keys that have already expired. So you can do atomic lookups like isExist("param1") locally (store struct{} instead of bool as the value, for memory optimization), and only if the key does not exist do you set param1 in the cache. A sketch of such a listener follows.
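
    A rough sketch with go-redis (this assumes notify-keyspace-events includes expired events, e.g. CONFIG SET notify-keyspace-events "Ex", and database 0; all names here are illustrative):

    import (
        "context"
        "sync"

        "github.com/redis/go-redis/v9"
    )

    // key -> struct{}{} meaning "this key is present in Redis"; struct{} instead
    // of bool so the value takes no extra memory.
    var keyAvailability sync.Map

    func watchExpirations(ctx context.Context, rdb *redis.Client) {
        // Subscribe to expiration events emitted by the Redis server.
        pubsub := rdb.PSubscribe(ctx, "__keyevent@0__:expired")
        go func() {
            defer pubsub.Close()
            for msg := range pubsub.Channel() {
                // msg.Payload is the name of the key that just expired; drop it
                // from the local availability map so we skip the Redis round trip.
                keyAvailability.Delete(msg.Payload)
            }
        }()
    }

    func isExist(key string) bool {
        _, ok := keyAvailability.Load(key)
        return ok
    }

    func markAvailable(key string) {
        keyAvailability.Store(key, struct{}{})
    }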

    By the way, if your stored data is not huge, you can consider storing the data itself locally as well, but again, please monitor it first to avoid an increase in memory costs.
