
I have an interesting conundrum. I use the Redis Pub/Sub pattern to emit events within my network of NodeJS microservices. I moved away from RabbitMQ because I needed messages to be multicast, so that multiple microservices could receive and handle the same event: if I emit a LOGOUT event, both the UserService and the WebSocketService should hear it.

This worked fine until my client deployed in an environment in which multiple instances of each microservice run. Now, given 3 instances of a UserService, all instances connect to Redis and they all receive and handle the LOGOUT event, which is bad.

So on one hand I need the distinct microservices to hear the event, but on the other I need to prevent the duplicate copies of the same microservice from all handling the event. Rock, meet hard place.

My best thinking so far is a bit hacky, but something like:

  1. Instead of raising events, write the events to a list in the Redis cache
  2. The UserService and WebSocketService could read that list once every 3 seconds and check for new events that need handling
  3. When a relevant event is found, UserService would add its "name" to the list of services that are handling the event
  4. When WebSocketService sees the event, it will still be able to handle it and add its "name" to the handlers list
  5. When a duplicate instance of UserService sees the event, it will see its "name" already in the list of handlers and ignore that event

I don’t love this solution: the event list grows forever in Redis instead of living in a transient message. I’d also have to add code to track which events have already been checked; otherwise, every cycle, every instance of every service would have to re-parse the whole list.
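For concreteness, a minimal sketch of this polling scheme, assuming ioredis; the events key, the 3-second interval, and the event shape are all illustrative:

```typescript
import Redis from "ioredis";

const redis = new Redis();
const SERVICE_NAME = "UserService"; // shared by every instance of this service

interface StoredEvent {
  type: string;
  handledBy: string[]; // "names" of services that already claimed it
}

// Step 2: poll the shared list every 3 seconds.
setInterval(async () => {
  const entries = await redis.lrange("events", 0, -1);
  for (let i = 0; i < entries.length; i++) {
    const event: StoredEvent = JSON.parse(entries[i]);
    // Steps 3-5: skip if a sibling instance already claimed it for this service.
    if (event.handledBy.includes(SERVICE_NAME)) continue;
    event.handledBy.push(SERVICE_NAME);
    await redis.lset("events", i, JSON.stringify(event));
    console.log(`${SERVICE_NAME} handling ${event.type}`);
  }
}, 3000);
```

Note that the LRANGE-then-LSET step is not atomic, so two sibling instances polling at the same moment could still both claim the same event, which is one more weakness on top of the unbounded growth.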

Ideas welcome.

4 Answers


  1. Upon being notified, consume the list of events on Redis using LPOP and only act if there’s actually something on that list.

    This both cleans up the event list and guarantees that only one consumer actually executes the action.
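    A minimal sketch of that approach, assuming ioredis and illustrative key/channel names. Since LPOP is atomic, when several instances wake up on the same notification, each list entry is handed to exactly one of them:

```typescript
import Redis from "ioredis";

const redis = new Redis();
const sub = new Redis(); // a subscribed connection can't issue other commands

// Publisher side: push the event, then notify every listener.
async function emit(event: object): Promise<void> {
  await redis.rpush("events", JSON.stringify(event));
  await redis.publish("events:notify", "1");
}

// Consumer side: on each notification, pop until the list is empty.
// LPOP is atomic, so each entry is delivered to exactly one instance.
sub.subscribe("events:notify");
sub.on("message", async () => {
  let entry: string | null;
  while ((entry = await redis.lpop("events"))) {
    console.log("handling", JSON.parse(entry));
  }
});
```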

  2. For the scenario you described I would combine both patterns: an event queue (or, more specifically, one event queue per cluster of consumers) implemented with Redlock, plus the Pub/Sub pattern to alert all the consumers that a new event is ready to be consumed. Compared to your proposed solution this should be a bit more reactive, because there is no polling; access to the queue(s) is event-driven.
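    A sketch of the two patterns combined, assuming ioredis; for brevity it leans on LPOP's atomicity for per-queue mutual exclusion rather than a full Redlock, and every key and channel name is illustrative. The publisher fans each event out to one queue per consumer group, so every service type still receives it, but only one instance per group dequeues it:

```typescript
import Redis from "ioredis";

const redis = new Redis();
const sub = new Redis();

const GROUPS = ["UserService", "WebSocketService"]; // one queue per cluster of consumers

// Publisher: copy the event into every group's queue, then wake all consumers.
async function emit(event: object): Promise<void> {
  const payload = JSON.stringify(event);
  for (const group of GROUPS) {
    await redis.rpush(`queue:${group}`, payload);
  }
  await redis.publish("events:ready", "1");
}

// Consumer (e.g. one UserService instance): drain only its own group's queue.
const MY_GROUP = "UserService";
sub.subscribe("events:ready");
sub.on("message", async () => {
  let entry: string | null;
  // LPOP is atomic, so sibling instances never pop the same entry.
  while ((entry = await redis.lpop(`queue:${MY_GROUP}`))) {
    console.log(`${MY_GROUP} handling`, JSON.parse(entry));
  }
});
```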

  3. Redis Streams seems like a good fit for your use case. Have you checked it out?

    We can have multiple consumer groups. As with pub/sub, each consumer group receives the messages from the stream (here the stream plays the role of a pub/sub channel/topic). However, when there are multiple nodes within a consumer group, only one node will process a given event, not all of them. A node can then acknowledge each message it has processed; see the sketch after the list below.

    • Upon restart, nodes will not re-receive already-delivered messages
    • The stream will keep growing, but we can trim it based on size and keep only the latest N elements.
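    A sketch with ioredis; the stream, group, and consumer names are illustrative, and the strict TypeScript typings of xreadgroup may need adjusting:

```typescript
import Redis from "ioredis";

const STREAM = "events";                // illustrative stream key
const GROUP = "UserService";            // one group per service type
const CONSUMER = `user-${process.pid}`; // unique per instance

// Producer side: append an event and keep roughly the latest 10k entries.
async function emit(redis: Redis, type: string): Promise<void> {
  await redis.xadd(STREAM, "MAXLEN", "~", 10000, "*", "type", type);
}

async function consume(): Promise<void> {
  const redis = new Redis(); // dedicated connection: XREADGROUP blocks it
  // Create the group once; ignore the error if it already exists.
  await redis.xgroup("CREATE", STREAM, GROUP, "$", "MKSTREAM").catch(() => {});

  for (;;) {
    // ">" = entries never delivered to this group before; block up to 5s.
    const res = await redis.xreadgroup(
      "GROUP", GROUP, CONSUMER,
      "COUNT", 10, "BLOCK", 5000,
      "STREAMS", STREAM, ">"
    );
    if (!res) continue; // timed out, loop again
    for (const [, entries] of res as [string, [string, string[]][]][]) {
      for (const [id, fields] of entries) {
        console.log(`${GROUP}/${CONSUMER} handling`, fields);
        await redis.xack(STREAM, GROUP, id); // mark as processed for this group
      }
    }
  }
}

consume();
```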


  4. Hey, I faced the same issue after running my servers on multiple instances.

    • In my case I used MongoDB as my database, so whenever I received a
      keySpace notification I took a write-lock in my DB for that
      particular subscription.

    In other words, I created my own write-lock. Whenever a notification
    arrives for a key I have subscribed to, I run a conditional update that
    increments a counter on the document for that subscription, with the
    condition written so that only one update can succeed. The instance
    whose update comes back with nModified: 1 knows it won the lock, enters
    the condition, and performs whatever task it wants; the other instances
    see the condition fail and do nothing.

    You can do the same by creating your own write-lock.
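    A sketch of that conditional-update idea, assuming the official mongodb Node driver (database, collection, and field names are illustrative). modifiedCount is the modern driver's equivalent of nModified, and exactly one instance per event gets a successful claim:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const locks = client
  .db("app")
  .collection<{ _id: string; claimed?: boolean }>("eventLocks");

// Returns true for exactly one instance per eventId.
async function claimEvent(eventId: string): Promise<boolean> {
  try {
    const res = await locks.updateOne(
      { _id: eventId, claimed: { $ne: true } }, // matches only if unclaimed
      { $set: { claimed: true } },
      { upsert: true }
    );
    // A modified document or a fresh upsert means this instance won.
    return res.modifiedCount === 1 || res.upsertedCount === 1;
  } catch {
    return false; // concurrent upsert lost the race (duplicate _id)
  }
}

// On a keyspace notification, only the winner performs the task.
async function onNotification(eventId: string): Promise<void> {
  if (await claimEvent(eventId)) {
    console.log("won the write-lock, handling", eventId);
  }
}
```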

    Hail Coding
