What are the advantages and disadvantages of using Redis as a sidecar in Kubernetes? Is it possible to have a persistent cache when a Redis container is added to each app pod? Will that affect the availability and scalability of the cache?
2 Answers
I’m hard-pressed to think of any advantages to running Redis as a sidecar. I would always run it as a separate Deployment (or StatefulSet, if persistence is enabled) with a separate Service.
If Redis is in its own pod, then:

- the cache is shared by every replica of your application, so an entry written by one pod can be reused by the others;
- the cached data outlives any individual application pod that gets deleted or restarted;
- Redis can be scaled, restarted, and given resource limits independently of the application.
Given Redis’s overall capabilities (principally in-memory storage, limited data-type support), simply storing this cache data in singleton objects in your application would be more or less equivalent to running Redis as a sidecar (one copy of the cache data per pod, data is lost when the pod is deleted).
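For reference, the separate-deployment layout might look roughly like the sketch below (the `redis` name and `redis:7` image tag are illustrative, not from the answer; with persistence enabled you would use a StatefulSet with a volume instead of a Deployment):

```yaml
# Redis running in its own Deployment, reachable by every app pod
# through a Service -- a sketch, not a production configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
---
# The Service gives the cache a stable DNS name (redis:6379) that all
# application pods share.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
```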
I agree with David Maze's answer. In the OP's scenario, Redis is a long-term cache, which is the usual case: all pods go to the same cache, so they reuse each other's entries and get consistent output.
On the other hand, I am also evaluating a sidecar model for a Redis cache, and in a nutshell, it all depends on your consistency requirements.
With a sidecar Redis, each microservice pod has its own Redis container. When the microservice reads an object from the database, it also stores it in its local Redis; whenever the same object is read again, the microservice hits Redis instead of the database. This saves a lot of database reads (and therefore money on cloud database pricing), but it sacrifices consistency, because each pod's cache holds its own copy of the data.
In my case, the cached entries expire after about a minute, so the lack of consistency within that window is acceptable.
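A minimal sketch of that sidecar layout could look like the following (the `my-service` name and image are hypothetical; the app talks to Redis on `localhost:6379` because both containers share the pod's network namespace, and the app sets a short TTL, e.g. one minute, on the keys it writes):

```yaml
# Each replica of the Deployment gets its own private Redis cache.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: my-service:latest   # hypothetical microservice image; caches DB reads in the local Redis
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
```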
As for scalability and availability, I would even say this model can improve them, since you can easily run many pods, each with its own cache. And by putting a memory limit on the Redis container, you can easily have it restarted when it reaches that limit (150 MB?), as sketched below.
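One way to express that idea is to give the Redis sidecar container from the previous sketch both a Redis maxmemory setting and a Kubernetes memory limit (the 150mb/192Mi values are illustrative), so it either evicts old keys or is OOM-killed and restarted by the kubelet when it grows too large:

```yaml
        # Drop-in replacement for the redis container in the sidecar sketch above.
        - name: redis
          image: redis:7
          args: ["redis-server", "--maxmemory", "150mb", "--maxmemory-policy", "allkeys-lru"]
          resources:
            limits:
              memory: 192Mi   # exceeding this gets the container OOM-killed and restarted
          ports:
            - containerPort: 6379
```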