
I am working on a microservice architecture. One of my services is exposed to a source system, which posts data to it. This microservice publishes the data to Redis using Redis pub/sub, and a couple of other microservices consume it.

Now, if a consuming microservice is down and cannot process the data from Redis pub/sub, I have to retry with the published data once that microservice comes back up. The source cannot push the data again, and manual intervention is not possible, so I thought of three approaches.

  1. Additionally use Redis itself for storing and retrieving the data.
  2. Store the data in a database before publishing. I have many source and target microservices that use Redis pub/sub. With this approach, every time I would have to insert the request into the DB first and then update its response status. I would also have to use a shared database; this approach adds several more exception-handling cases and does not look very efficient to me.
  3. Use Kafka in place of Redis pub/sub. Since traffic is low I chose Redis pub/sub, and it is not feasible to change.

In the first two cases I have to use a scheduler, and there is a time window within which I have to retry, otherwise subsequent requests will fail.
Is there any other way to handle these cases?
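To make approach 1 concrete, here is a rough sketch of what "additionally using Redis for storing" could look like: each payload is pushed onto a retry list alongside the PUBLISH, and only removed once a consumer acknowledges it; the scheduler republishes whatever is still on the list. This is a minimal illustration only, using an in-memory dict standing in for Redis, and the key/channel names are invented for the example; with the real redis-py client the marked calls would be LPUSH, PUBLISH, and LREM against a server.

```python
import json

# In-memory stand-in for Redis; with redis-py these would be
# r.lpush / r.publish / r.lrem against a real Redis server.
store = {"pending:orders": []}   # "pending:orders" is a made-up key name
channel_log = []                 # captures what would go out over pub/sub

def publish_with_backup(payload: dict) -> None:
    """Approach 1: keep a retry copy in Redis before publishing."""
    msg = json.dumps(payload)
    store["pending:orders"].append(msg)   # LPUSH pending:orders msg
    channel_log.append(msg)               # PUBLISH orders-channel msg

def acknowledge(payload: dict) -> None:
    """Called by the consumer after it has processed the message."""
    msg = json.dumps(payload)
    store["pending:orders"].remove(msg)   # LREM pending:orders 1 msg

def retry_pending() -> int:
    """Scheduler hook: republish anything still unacknowledged."""
    pending = list(store["pending:orders"])
    for msg in pending:
        channel_log.append(msg)           # PUBLISH again
    return len(pending)

publish_with_backup({"id": 1, "amount": 100})
# Consumer was down: nothing acknowledged, so the retry finds one message.
assert retry_pending() == 1
acknowledge({"id": 1, "amount": 100})
assert retry_pending() == 0
```

Note the trade-off this sketch inherits from the question: the acknowledgement has to flow back from the consumers somehow, and the scheduler still needs a retry window.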

2 Answers


  1. For point 2:
     - Store the data in the DB.
     - Create a daemon process that processes the data from the table.
     - This daemon process can be configured as needed.
     - The daemon process polls the DB and publishes any data it finds, then deletes the data once it has been published.
    
    I have not used this in a microservice architecture, but I have seen this approach work efficiently when communicating with 3rd-party services.
    
  2. At the very outset, as you mentioned, we do indeed seem to have only three possibilities.

    This is one of those situations where you want a handshake from the service after pushing and after processing. To accomplish that, a middleware queuing system would be the right choice.

    Although a bit more complex to set up, you can use Kafka for streaming this. Configuring the producer and consumer groups properly will help you do the job smoothly.

    Using a DB for storage would be overkill, considering that this data is to be processed rather than persisted.

    Alternatively, storing the data in Redis and reading it from a cron job/scheduled job would make your job much simpler. Once the job has run successfully, you can remove the data from the cache and thus save Redis memory.

    If you can comment further on the architecture and the implementation, I can update my answer accordingly. 🙂

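The store-poll-publish-delete cycle described in the first answer is essentially the transactional-outbox pattern. A minimal sketch, assuming a single `outbox` table and a stubbed-out publish step (the table name, column names, and function names are invented for illustration; in production the publish would be a Redis PUBLISH and `poll_and_publish` would run from the daemon/scheduler loop):

```python
import sqlite3

# In-memory DB for the sketch; in production this is the shared database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT)")

published = []  # stand-in for the Redis PUBLISH side effect

def enqueue(payload: str) -> None:
    """The service writes to the outbox table instead of publishing directly."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))
    db.commit()

def poll_and_publish() -> int:
    """Daemon body: publish whatever is in the table, then delete it."""
    rows = db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
    for row_id, payload in rows:
        published.append(payload)  # real code: r.publish(channel, payload)
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
    db.commit()
    return len(rows)

enqueue("order-1")
enqueue("order-2")
assert poll_and_publish() == 2
assert published == ["order-1", "order-2"]
assert poll_and_publish() == 0   # table drained, nothing left to retry
```

Because rows are only deleted after the publish call, a crashed or down consumer side simply leaves the rows in place for the next poll, which is exactly the retry behaviour the question asks for (at the cost of possible duplicate publishes, so consumers should be idempotent).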