I have an interesting conundrum: I use the Redis Pub/Sub pattern to emit events within my network of NodeJS microservices. I moved away from RabbitMQ because I needed messages to be multicast, so that multiple microservices could receive and handle the same event. So if I emit a LOGOUT event, both the UserService and the WebSocketService should hear it.
This worked fine until my client deployed to an environment that runs multiple instances of each microservice. Now, given 3 instances of UserService, all of them connect to Redis and all of them receive and handle the LOGOUT event, which is bad.
So on one hand I need the distinct microservices to hear the event, but on the other I need to prevent duplicate instances of the same microservice from all handling it. Rock, meet hard place.
My best thinking so far is a bit hacky, but something like:
- Instead of raising events, write the events to a list in the Redis cache
- UserService and WebSocketService would poll that list every 3 seconds and check for new events that need handling
- When a relevant event is found, UserService would add its "name" to the event's list of handlers
- When WebSocketService sees the event, it can still handle it and adds its "name" to the handlers list
- When a duplicate instance of UserService sees the event, it finds its "name" already in the handlers list and ignores the event
I don’t love this solution because the list keeps growing in memory rather than being a transient message. I’d also have to add bookkeeping for which events have already been checked; otherwise, every cycle, every instance of every service would have to re-parse the whole list.
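For concreteness, a minimal in-memory sketch of this polling scheme (a real version would keep the events in Redis; the names here are illustrative):

```javascript
// In-memory sketch of the polling scheme above. A real version would keep
// "events" in Redis (e.g. a list or hash); names are illustrative.
const events = [{ id: 1, type: "LOGOUT", handledBy: [] }];

// Each instance polls and claims events by *service* name, so a duplicate
// instance of the same service sees the name already present and skips.
function poll(serviceName, handle) {
  for (const evt of events) {
    if (!evt.handledBy.includes(serviceName)) {
      evt.handledBy.push(serviceName); // check-then-push: NOT atomic across real instances
      handle(evt);
    }
  }
}

const handled = [];
poll("UserService", (e) => handled.push(`user:${e.type}`));
poll("WebSocketService", (e) => handled.push(`ws:${e.type}`));
poll("UserService", (e) => handled.push(`dup:${e.type}`)); // duplicate instance: no-op
```

Note that in a real distributed setting the check-then-push is not atomic, so two instances of the same service could still race and both claim the event.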
Ideas welcome.
4 Answers
Upon being notified (keep the Pub/Sub channel for that), consume the list of events on Redis using LPOP and only act if the pop actually returned something.
This both cleans up the event list and guarantees that only one consumer executes the action, since LPOP is atomic.
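A sketch of why the atomic pop dedupes, using an in-memory stand-in for the Redis list (with a real client such as ioredis, `lpush`/`lpop` below would be LPUSH/LPOP calls against Redis):

```javascript
// In-memory stand-in for a Redis list: each element can be popped by
// exactly one consumer, which is what dedupes the handling.
const queue = [];
const lpush = (v) => queue.unshift(v);
const lpop = () => (queue.length ? queue.shift() : null);

// Publisher: enqueue the event, then PUBLISH a "wake up" on the channel.
lpush(JSON.stringify({ type: "LOGOUT", userId: 42 }));

// Every subscribed instance wakes on the notification and tries to pop;
// only the instance that actually gets a value acts on it.
const handledBy = [];
for (const instance of ["UserService-1", "UserService-2", "UserService-3"]) {
  const raw = lpop(); // atomic in real Redis
  if (raw !== null) handledBy.push(`${instance}:${JSON.parse(raw).type}`);
}
```

One caveat: with a single shared list, only one service overall would get the event; to keep the multicast behaviour you would need one list per service type, so each service still hears it.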
For the scenario you described I would combine both patterns: an event queue (or, more specifically, one event queue per cluster of consumers) implemented with Redlock, plus the Pub/Sub pattern to alert all the consumers that a new event is ready to be consumed. Compared to your proposed solution this should be more reactive, because there is no polling; access to the queue(s) is event-driven.
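A toy sketch of the "one queue per cluster of consumers" idea (in-memory stand-in; queue names are illustrative, and in the real thing the pop would happen under Redlock or an atomic Redis command):

```javascript
// The publisher fans the event out to every cluster's queue, so each
// *service* still hears it, while instances within a cluster compete
// for the single copy in their queue.
const queues = { UserService: [], WebSocketService: [] };

function publish(evt) {
  for (const cluster of Object.keys(queues)) queues[cluster].push(evt);
  // ...then PUBLISH a notification so consumers react instead of polling.
}

const take = (cluster) => queues[cluster].shift() ?? null;

publish({ type: "LOGOUT" });

const handledBy = [];
for (const instance of ["UserService-1", "UserService-2", "WebSocketService-1"]) {
  const cluster = instance.split("-")[0];
  if (take(cluster)) handledBy.push(instance); // only one winner per cluster
}
```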
Redis Streams seem helpful for your use case. Did you check them out?
You can have multiple consumer groups. As with pub/sub, each consumer group receives the messages from the stream (here the stream is like a pub/sub channel/topic). However, when there are multiple nodes within a consumer group, only one node processes each event, not all of them. The node can then acknowledge the message it has processed.
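A toy model of the Streams semantics described above (real code would use XADD, XGROUP CREATE, XREADGROUP and XACK through a Redis client; this is just the delivery logic in memory):

```javascript
// Every consumer *group* sees every entry, but within a group each
// entry is delivered to only one consumer.
const stream = [];
const lastDelivered = { UserService: 0, WebSocketService: 0 }; // per-group cursor

const xadd = (entry) => stream.push(entry);

function xreadgroup(group, consumer) {
  const i = lastDelivered[group];
  if (i >= stream.length) return null; // nothing new for this group
  lastDelivered[group] = i + 1;        // delivered once per group
  return { consumer, entry: stream[i] };
}

xadd({ type: "LOGOUT", userId: 42 });

const a = xreadgroup("UserService", "user-1");    // this group's one delivery
const b = xreadgroup("UserService", "user-2");    // null: group already got it
const c = xreadgroup("WebSocketService", "ws-1"); // other group still gets it
```

In real Redis the consumer would follow up with XACK, and unacknowledged entries stay in the group's pending list so another consumer can claim them after a crash.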
Hey, I faced the same issue after running servers on multiple instances.
Using Redis keyspace notifications, I created a write-lock in my DB for that particular subscription.
You can do the same by creating your own write-lock.
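A sketch of such a write-lock using SET NX semantics: only the first instance to set the key wins and handles the event. This is an in-memory stand-in; against real Redis it would be `SET lockKey instanceId NX PX <ttl>`, and the key name here is illustrative.

```javascript
// First-writer-wins lock: has()/set() stand in for Redis SET ... NX.
const locks = new Map();

function setNX(key, value) {
  if (locks.has(key)) return false; // someone else already holds the lock
  locks.set(key, value);
  return true;
}

const winners = [];
for (const instance of ["UserService-1", "UserService-2", "UserService-3"]) {
  if (setNX("lock:logout:42", instance)) winners.push(instance);
}
```

In production you would also put a TTL on the lock key so a crashed winner does not block the event forever.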
Hail Coding