We have decided to introduce Redis into our microservice system as a distributed cache and in-memory database. The initial plan is to create a service that wraps Redis, along with a client library that other services use to consume the models.
As a result, many models will be moved from the other microservices into this "caching service". The service exposes a web API for populating and updating the data in Redis, and the client class library will export methods such as
ModelA GetModelA(...)
ModelB GetModelB(...)
Other services then call these methods to get the data.
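Roughly, a method in the client library would look something like the sketch below (simplified and shown as async here; the endpoint, base address, and ModelA fields are just illustrative):

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class ModelA
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class CachingServiceClient
{
    private readonly HttpClient _http =
        new HttpClient { BaseAddress = new Uri("http://caching-service") };

    // Reads ModelA from the caching service, which in turn reads it from Redis
    public async Task<ModelA> GetModelAAsync(string id)
    {
        string json = await _http.GetStringAsync($"/api/modela/{id}");
        return JsonSerializer.Deserialize<ModelA>(json);
    }
}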
There seem to be two issues with this design:
- The "caching service" will become a single point of failure.
- The other microservices all share the client library of the "caching service" project.
2 Answers
If you have just one Redis server, then yes, it is a single point of failure. However, you can run multiple Redis servers (for example with replication, Redis Sentinel, or Redis Cluster). In addition, you need a backup strategy.
This is a link to read about clustering Redis.
This is a link to read about backups.
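For example, with the StackExchange.Redis client you can point the connection at several nodes of a replicated or clustered setup instead of a single server (the host names below are placeholders):

using System;
using StackExchange.Redis;

var options = new ConfigurationOptions
{
    // Placeholder host names: list the nodes of your replica set or cluster
    EndPoints = { "redis-node-1:6379", "redis-node-2:6379", "redis-node-3:6379" },
    AbortOnConnectFail = false
};

// The multiplexer discovers the topology and routes commands to the appropriate node
IConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);
IDatabase db = connection.GetDatabase();

db.StringSet("modelA:42", "{\"id\":\"42\"}");
string cached = db.StringGet("modelA:42");
Console.WriteLine(cached);

On the backup side, that usually means enabling RDB snapshots and/or AOF persistence and copying those files somewhere safe on a schedule.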
In my view, it is really not a good idea to share a library between microservices.
There is a very good resource that describes microservices and how they can communicate with each other.
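As a sketch of the alternative, a consuming service can define its own local contract and call the owning service over HTTP instead of referencing a shared model library (the URL, type, and fields here are hypothetical):

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Local contract owned by the consuming service, not a type from a shared library
public record OrderSummary(string Id, decimal Total);

public class OrderLookup
{
    private readonly HttpClient _http;

    public OrderLookup(HttpClient http) => _http = http;

    // Talks to the owning service's public API; only the wire format is shared
    public Task<OrderSummary?> GetAsync(string id) =>
        _http.GetFromJsonAsync<OrderSummary>($"/api/orders/{id}");
}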
The big problem is that the caching service is itself effectively a shared database masquerading as a microservice, especially if there’s no way to be reasonably sure that only the service which previously "owned" model A can interact with this service for model A.
Additionally, if virtually any change a service wants to make to its model is going to require a change to the caching service, you’re giving up a lot of the benefits of microservices in terms of org structure and evolvability.
Cache coherency is also a major problem: if this caching service is addressing that, just be aware that you’re effectively building the "universal data platform" of which so many enterprises speak (but few in my experience achieve). In that case, you’re almost surely going to want the services to be able to define their models and dynamically update them.
As an aside: it may be worth looking at Adya et al. 2019 ("Fast key-value stores: an idea whose time has come and gone") and considering a LInK-style (linked in-memory key-value) architecture versus a RInK-style (remote in-memory key-value) one. In the former, the cache is integrated into each respective service and effectively becomes authoritative; the backing durable datastore is just there to allow for resilience.
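A very rough sketch of that LInK-style shape, under these assumptions: each service keeps its own model in an in-process store that is treated as authoritative, and a durable repository (hypothetical interface below) exists only so the state can be rebuilt after a restart.

using System.Collections.Concurrent;

public record ModelA(string Id, string Name);

// Hypothetical durable backing store (SQL, document DB, etc.) used only for recovery
public interface IModelARepository
{
    ModelA? Load(string id);
    void Save(ModelA model);
}

// LInK-style: the in-process map inside the owning service is the source of truth
public class ModelAStore
{
    private readonly ConcurrentDictionary<string, ModelA> _cache = new();
    private readonly IModelARepository _repository;

    public ModelAStore(IModelARepository repository) => _repository = repository;

    public ModelA? Get(string id)
    {
        if (_cache.TryGetValue(id, out var model))
            return model;

        // Cache miss (e.g. after a restart): rehydrate from the durable store
        var loaded = _repository.Load(id);
        if (loaded != null)
            _cache.TryAdd(id, loaded);
        return loaded;
    }

    public void Put(ModelA model)
    {
        _cache[model.Id] = model;   // reads are served from memory immediately
        _repository.Save(model);    // write through so state survives restarts
    }
}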