How do I prevent this scenario?
Request 1: I fetch document A and change something.
Request 2: I fetch document A and change something else.
Request 1: I save document A.
Request 2: I save document A.
Now the changes from Request 1 are overwritten. How do I prevent this scenario?
If this request is called often and concurrently, a change wouldn't be saved. This is just an example; I have more complex code in the application where the chance is higher.
// ObjectId comes from the driver: const { ObjectId } = require("mongodb");
exports.readInfo = async (req, res, n) => {
const user = req.user;
const data = req.data;
const doc = await Doc.findOne({ _id: new ObjectId(data._id) });
  // another request's changes might be saved between this findOne() and the save() below
doc.infos[data.key][data.skey][data.i].read.push(user._id.toString());
doc.markModified("infos." + data.key + "." + data.skey + "." + data.i);
await doc.save();
return res.end("success");
};
2 Answers
This is more of a systems design issue than anything else, as JavaScript doesn’t really have anything built in (that I’m aware of anyway) to deal with it.
Your issue is basically a race condition, as described here: https://stackoverflow.com/a/34550/8346513
In short, two processes are trying to modify the same piece of data at the same time, and which one wins is a little arbitrary and hard to pin down.
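The lost update can be reproduced with plain async functions and a fake in-memory store standing in for the database (all names here are illustrative):

```javascript
// Demonstrates the lost update: both tasks read the same snapshot of the
// document, then each writes back its own copy; one change is silently lost.
const store = { doc: { reads: [] } };                     // fake database
const fetchDoc = async () => ({ reads: [...store.doc.reads] });
const saveDoc = async (doc) => { store.doc = doc; };

async function addReader(name) {
  const doc = await fetchDoc(); // both requests fetch the same version
  doc.reads.push(name);
  await saveDoc(doc);           // last save wins
}

async function demo() {
  await Promise.all([addReader("request1"), addReader("request2")]);
  return store.doc.reads;       // only one of the two names survives
}
```

Both calls fetch before either saves, so the second save overwrites the first, exactly as in the question.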
There are a few ways to handle this problem.
Use a mutex, or lock, on the resource.
This is a simple way of preventing this sort of issue from happening. It would look something like this:
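A minimal sketch, using a hand-rolled promise-based mutex (libraries such as async-mutex package the same idea); `Doc` and `ObjectId` are assumed to be the mongoose model and driver type from the question:

```javascript
// Minimal promise-based mutex: each lock() resolves to an unlock function,
// and resolves only after the previous holder has called unlock().
class Mutex {
  constructor() {
    this.last = Promise.resolve();
  }
  lock() {
    let unlock;
    const next = new Promise((resolve) => { unlock = resolve; });
    const acquired = this.last.then(() => unlock);
    this.last = next;
    return acquired; // resolves to this caller's unlock()
  }
}

const docLock = new Mutex();

// The handler from the question, wrapped in the lock; Doc and ObjectId
// are assumed to come from mongoose / the mongodb driver as in the question.
async function readInfo(req, res) {
  const unlock = await docLock.lock();
  try {
    const doc = await Doc.findOne({ _id: new ObjectId(req.data._id) });
    doc.infos[req.data.key][req.data.skey][req.data.i].read.push(
      req.user._id.toString()
    );
    doc.markModified(`infos.${req.data.key}.${req.data.skey}.${req.data.i}`);
    await doc.save();
    return res.end("success");
  } finally {
    unlock(); // release even if findOne() or save() throws
  }
}
```

The `try`/`finally` matters: without it, a thrown error would leave the lock held forever.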
This is relatively cheap and, because JavaScript is single-threaded, works fairly well for this case. However, it has some downsides: you have to take the same lock in every place this resource might be used (which can mean a lot of boilerplate), and if your code fails in the middle of the work and never releases the lock, your application can deadlock.
Use a history queue to process changes
This is similar to the mutex, in that only one thing can make changes at a time. But instead of locking inside each API handler, you enqueue every change and have a single worker apply them one at a time, in order.
This ensures that only one place ever writes your data, and requires much less boilerplate. However, it also has downsides: it is more complex to set up, and your operations run against data that may be slightly out of date.
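The queue approach above can be sketched like this, assuming a single in-process worker (`ChangeQueue` and `apply` are hypothetical names; a production system would likely use a durable queue instead):

```javascript
// Single-writer change queue: handlers enqueue a change description,
// and one worker loop applies them sequentially, in arrival order.
class ChangeQueue {
  constructor(apply) {
    this.apply = apply;   // async function that performs one change
    this.items = [];
    this.running = false;
  }
  push(change) {
    this.items.push(change);
    if (!this.running) this.drain(); // start the worker if it is idle
  }
  async drain() {
    this.running = true;
    while (this.items.length) {
      await this.apply(this.items.shift()); // one change at a time
    }
    this.running = false;
  }
}
```

An API handler would then `push` a description of its change (e.g. document id, path, value) instead of saving the document itself.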
You could also add an in-memory caching layer that keeps a live copy of your data. Requests modify the cached copy synchronously (so they no longer need to block on the database), and the changes are periodically flushed to the database. This is also complex, and while it fixes some issues of the above, it runs the risk of losing data that has not been flushed yet if the program crashes.
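That caching layer can be sketched with a hypothetical `WriteBehindCache` that mutates documents synchronously and flushes dirty ones on a timer (`saveToDb` stands in for the real database write):

```javascript
// Write-behind cache: reads and writes hit the in-memory copy, and a timer
// periodically flushes documents modified since the last flush.
class WriteBehindCache {
  constructor(saveToDb, flushMs = 1000) {
    this.saveToDb = saveToDb; // async (id, doc) => persist one document
    this.docs = new Map();    // id -> live document object
    this.dirty = new Set();   // ids modified since the last flush
    this.timer = setInterval(() => this.flush(), flushMs);
  }
  update(id, mutate) {
    const doc = this.docs.get(id) || {};
    mutate(doc); // synchronous mutation: no await, so no interleaving
    this.docs.set(id, doc);
    this.dirty.add(id);
  }
  async flush() {
    const ids = [...this.dirty];
    this.dirty.clear();
    for (const id of ids) await this.saveToDb(id, this.docs.get(id));
  }
  stop() {
    clearInterval(this.timer);
  }
}
```

Because `update` never awaits, two requests can never interleave inside it; the race is gone at the cost of delayed persistence.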
Conclusion
So as you can see, it's not a simple thing to fix. I would recommend the simple mutex, as it's probably the easiest thing to get up and running for an application that doesn't need to be bulletproof. But if you're building an enterprise-grade application, it would be worth looking into one of the other methods, or an off-the-shelf solution, to ensure that as many bases are covered as possible.
You can use a lock flag on the document itself: atomically set a locked status when a request starts working on the document, and have the query return nothing while the status is locked.
For a better solution you would need lock functionality in the database layer itself (I don't think MongoDB implements this, but Postgres does with `SELECT ... FOR UPDATE`).
Simple example
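A sketch of the lock-flag approach, assuming the schema has a boolean `locked` field and `Doc` is the mongoose model from the question; `findOneAndUpdate` is atomic in MongoDB, so only one request can flip the flag at a time:

```javascript
// Acquire the document with an atomic lock flip, do the work, then unlock.
// Returns null when the document is already locked by another request.
async function withDocLock(id, work) {
  const doc = await Doc.findOneAndUpdate(
    { _id: id, locked: { $ne: true } }, // match only if not locked
    { $set: { locked: true } },
    { new: true }
  );
  if (!doc) return null; // returns nothing when the status is locked
  try {
    return await work(doc);
  } finally {
    await Doc.updateOne({ _id: id }, { $set: { locked: false } });
  }
}
```

If `withDocLock` returns `null`, the caller knows the document was locked and can retry or report a conflict. Note that a crash between lock and unlock leaves the flag set, so real deployments usually add a lock timestamp and expire stale locks.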