
I am trying to implement serialization in the consumers of an SQS queue, which run in multiple Node.js instances, i.e. what I am trying to do is ensure that things are done in the same order they came in, one thing at a time.

My initial thought was to put in some kind of semaphore mechanism, potentially using a lock document in DocumentDB (Mongo), which is the service's main repository, by waiting on changes to that document using the change stream mechanism. Each instance would have its own identifier and the lock document would contain the id of whoever currently owns the lock. I'm not sure this is the best way to go about it, so I'm asking: is there a better way to achieve this, or would this work, perhaps with slight modifications?

My main concern is instances potentially getting starved, which would defeat the purpose of all of this, since one instance might get the lock twice in a row if it is lucky. A sequence counter could also be put in place, with each instance keeping track of how far it is in the sequence; when it attempts to grab the lock it would compare its counter with the one on the lock and keep waiting if it is behind. That seems pretty fragile, though, and could stall everything if an instance died without completing its sequence step. Any ideas?

The code looks something like this:

const mongoose = require("mongoose");
const { Schema } = mongoose;

const LockSchema = new Schema({
  id: String,
  owner: String,
  createdAt: { type: Date, required: true, default: Date.now },
});

// TTL index: a lock left behind by a dead instance is removed after 10 minutes.
LockSchema.index({ createdAt: 1 }, { expireAfterSeconds: 600 });

const Lock = mongoose.model("queue-lock", LockSchema, "locks");

// Only watch deletions, i.e. the moment the current owner releases the lock
// (or the TTL index expires it).
const filter = [{ $match: { operationType: "delete" } }];

const lockStream = Lock.watch(filter);

lockStream.on("change", async () => {
  // The lock was released; race to re-acquire it. The upsert is atomic, so only
  // the first instance to get here inserts the document with itself as owner;
  // everyone else just reads back the current owner.
  const result = await Lock.findOneAndUpdate(
    { id: "queue-lock" },
    { $setOnInsert: { owner: whoAmI } }, // whoAmI: this instance's identifier, defined elsewhere
    { upsert: true, new: true }
  ).lean();

  if (result.owner === whoAmI) {
    // Proud owner of the lock: do stuff, then release it when done.
    await Lock.deleteOne({ id: "queue-lock" });
  } else {
    // Someone else won; keep waiting for the next delete event.
  }
});

2 Answers


  1. Chosen as BEST ANSWER

    For anybody confronted with the same issue, I'll post this in the answer section (since it works for me). After some more research, I decided to leverage an existing npm package called node-raft-redis. It lets your instances hold a vote among themselves and elect a "leader", which in my case is then the only one to consume messages. These instances do much more work than just this, so it skews things very little if one instance is doing a little more than the rest.
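
    For illustration, here is a minimal sketch of the "only the elected leader consumes" pattern this answer describes. It uses a plain Redis lease (SET ... NX PX via ioredis) instead of node-raft-redis's Raft election, so it is not that package's API; LEADER_KEY, the lease timings and pollQueue() are made-up names, and the renewal logic is deliberately simplified.

    const Redis = require("ioredis");

    const redis = new Redis(); // defaults to localhost:6379
    const whoAmI = process.env.INSTANCE_ID || String(process.pid); // this instance's identity
    const LEADER_KEY = "sqs-consumer-leader";
    const LEASE_MS = 15000;

    async function tick() {
      // NX means only one instance can create the key, i.e. win the election.
      const acquired = await redis.set(LEADER_KEY, whoAmI, "PX", LEASE_MS, "NX");
      const current = await redis.get(LEADER_KEY);

      if (acquired === "OK" || current === whoAmI) {
        // We are the leader: renew the lease and consume from the queue.
        await redis.pexpire(LEADER_KEY, LEASE_MS);
        await pollQueue(); // hypothetical: receive, process and delete SQS messages
      }
      // Followers do nothing. If the leader dies, its lease expires and some
      // other instance wins the SET ... NX on a later tick.
    }

    setInterval(() => tick().catch(console.error), 5000);

    A real Raft election, as provided by the accepted package, handles failover and split-brain more robustly than this simple lease.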


  2. What I am trying to do is ensure that things are done in the same order they came in, one thing at a time

    You can use FIFO queues.

    Messages posted to the same message group in a FIFO queue are delivered in the same order they were sent. No further messages from that group are handed out until the ones already received have been deleted from the queue (or their visibility timeout expires); see the sketch at the end of this answer.

    My main concern is instances potentially getting starved, which would defeat the purpose of all of this, since one instance might get the lock twice in a row if it is lucky.

    Do you care about the order of the messages or about which instance processes what?

    Putting messages into the same group in the same queue assumes that it doesn't matter which instance processes them.

    If you have instances which are intrinsically different (have different compute power, different specs etc.), then you should route your messages into different queues.

    If the order of the processing matters, you should introduce a causal dependency between your messages (only post the follow-up processing messages after their originators have been processed).
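
    For illustration, a minimal sketch of both points using the AWS SDK v3 SQS client; the queue URL, payload.id, job.followUp and doTheWork() are placeholders rather than anything prescribed by SQS.

    const { SQSClient, SendMessageCommand, DeleteMessageCommand } = require("@aws-sdk/client-sqs");

    const sqs = new SQSClient({ region: "us-east-1" });
    const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work.fifo"; // placeholder

    // Producer: messages sharing a MessageGroupId are delivered in send order,
    // and SQS holds back the next one while an earlier message from the same
    // group is still in flight (received but not yet deleted).
    async function enqueue(payload, orderingKey) {
      await sqs.send(new SendMessageCommand({
        QueueUrl: QUEUE_URL,
        MessageBody: JSON.stringify(payload),
        MessageGroupId: orderingKey,
        // Required unless content-based deduplication is enabled on the queue.
        MessageDeduplicationId: payload.id,
      }));
    }

    // Consumer: a causal dependency means the follow-up message is only posted
    // once its originator has been processed and deleted.
    async function handle(message, orderingKey) {
      const job = JSON.parse(message.Body);
      await doTheWork(job); // hypothetical business logic
      await sqs.send(new DeleteMessageCommand({
        QueueUrl: QUEUE_URL,
        ReceiptHandle: message.ReceiptHandle,
      }));
      if (job.followUp) {
        await enqueue(job.followUp, orderingKey);
      }
    }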
