I am running a test case that posts two schedule items concurrently, like below:
var itemToAdd = new ScheduleInputItemDto
{
    Start = DateTime.UtcNow,
    End = DateTime.UtcNow.AddHours(1),
    ProdType = "Prod1"
};
var response = await Task.WhenAll(addItemRequest.Post(itemToAdd, false), addItemRequest.Post(itemToAdd, false));
This posts two items with the same start and end times, which causes a race condition.
AddItem calls the DAO layer as shown below.
public Schedule()
{
    ScheduleItems = new List<ScheduleItem>();
}

public ICollection<ScheduleItem> ScheduleItems { get; set; }

public void AddItem(DateTime start, DateTime end, string cementType, DateTime now)
{
    var item = new ScheduleItem(start, end, cementType, now);
    ScheduleItems.Add(item);
    ConcurrentBag<ScheduleItem> concurrentBag = new ConcurrentBag<ScheduleItem>(ScheduleItems.ToList());
    item.ValidateDoesNotOverlapWithItems(concurrentBag);
}
But in my case, both items get inserted despite the check I added. The validation code is below:
public static void ValidateDoesNotOverlapWithItems(this ScheduleItem currentItem, ConcurrentBag<ScheduleItem> scheduleItems)
{
    if (scheduleItems.Any(scheduleItem => currentItem.Start < scheduleItem.End && scheduleItem.Start < currentItem.End))
    {
        throw new ValidationException("A conflict happened.");
    }
}
The ScheduleItem model has a property named UpdatedOn which can be used as a concurrency token.
After debugging the test case, I saw that both items posted from inside .WhenAll() have exactly the same DateTime values. How can I prevent the later item from being inserted? Should optimistic or pessimistic concurrency control be used in this case?
2 Answers
List<ScheduleItem> scheduleItems is not thread-safe. Try using a ConcurrentBag<ScheduleItem> instead.
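For illustration, a minimal sketch of that suggestion applied to the Schedule class from the question; the get-only property and running the validation before the Add are illustrative choices, and ScheduleItem and the validation extension are assumed from the question's code.

using System;
using System.Collections.Concurrent;

public class Schedule
{
    // Suggested change: a thread-safe collection instead of List<ScheduleItem>.
    public ConcurrentBag<ScheduleItem> ScheduleItems { get; } = new ConcurrentBag<ScheduleItem>();

    public void AddItem(DateTime start, DateTime end, string cementType, DateTime now)
    {
        var item = new ScheduleItem(start, end, cementType, now);

        // ConcurrentBag.Add is safe to call from multiple threads, but note that the
        // validate-then-add sequence as a whole is still not atomic on its own.
        item.ValidateDoesNotOverlapWithItems(ScheduleItems);
        ScheduleItems.Add(item);
    }
}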
In summary, when using a database with multiple actors (e.g. threads, requests), it is hard to ensure that the data doesn’t change between a read (required for validation) and a write (insert/update).
In my opinion, the only sure way to handle race conditions is to try the insert/update and act on failures.
There is a path of using consistency levels and transactions, where the row (or the whole table) is locked for the whole unit of work (read, do things, write), but for this to work, strong change control is required so that the system is not inadvertently broken by changing one piece without knowing that another part depends on it.
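A rough sketch of that path, assuming SQL Server via ADO.NET (Microsoft.Data.SqlClient); the ScheduleItems table, its columns, and the serializable isolation level are illustrative assumptions:

using System;
using System.Data;
using Microsoft.Data.SqlClient;

public static class LockedInsert
{
    public static void InsertIfNoOverlap(string connectionString, DateTime start, DateTime end)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        // Serializable isolation keeps range locks on everything the SELECT read,
        // so a competing insert has to wait (or deadlock and retry) until we commit.
        using var transaction = connection.BeginTransaction(IsolationLevel.Serializable);

        using var check = new SqlCommand(
            "SELECT COUNT(*) FROM ScheduleItems WHERE [Start] < @end AND @start < [End];",
            connection, transaction);
        check.Parameters.AddWithValue("@start", start);
        check.Parameters.AddWithValue("@end", end);

        if ((int)check.ExecuteScalar() > 0)
        {
            transaction.Rollback();
            throw new InvalidOperationException("A conflict happened.");
        }

        using var insert = new SqlCommand(
            "INSERT INTO ScheduleItems ([Start], [End], UpdatedOn) VALUES (@start, @end, SYSUTCDATETIME());",
            connection, transaction);
        insert.Parameters.AddWithValue("@start", start);
        insert.Parameters.AddWithValue("@end", end);
        insert.ExecuteNonQuery();

        transaction.Commit();
    }
}

Competing transactions can block or deadlock on the range they both read, so callers still need a retry strategy on top of this.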
Optimistic concurrency model
One simple way to deal with concurrency for database updates is to use a token to help detect conflicts. For example:
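A minimal sketch of such a token-guarded update, assuming SQL Server via ADO.NET (Microsoft.Data.SqlClient); the ScheduleItems table, the Id column, and the use of UpdatedOn as the token are illustrative assumptions:

using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class OptimisticUpdate
{
    public static async Task<bool> TryUpdateAsync(
        string connectionString, int id, DateTime newStart, DateTime newEnd, DateTime originalUpdatedOn)
    {
        // The WHERE clause compares the token against the value read earlier;
        // if someone changed the row since that read, no rows match and nothing is written.
        const string sql = @"
            UPDATE ScheduleItems
            SET [Start] = @start, [End] = @end, UpdatedOn = SYSUTCDATETIME()
            WHERE Id = @id AND UpdatedOn = @originalUpdatedOn;";

        using var connection = new SqlConnection(connectionString);
        await connection.OpenAsync();

        using var command = new SqlCommand(sql, connection);
        command.Parameters.AddWithValue("@start", newStart);
        command.Parameters.AddWithValue("@end", newEnd);
        command.Parameters.AddWithValue("@id", id);
        command.Parameters.AddWithValue("@originalUpdatedOn", originalUpdatedOn);

        // True: the update won. False: the token no longer matched.
        return await command.ExecuteNonQueryAsync() > 0;
    }
}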
If the number of updated rows is 0, then you know someone else updated the row, and you need to retry, report an error, or do whatever else is required. You don't need to use a timestamp; the token can be anything, even all the values. The key element is the where something = something_value_when_I_last_read_this_row condition. This method is called optimistic because it assumes things will be OK and you react to failures, rather than assuming from the start that things will go wrong.
Some ORMs, including Entity Framework, natively support this kind of concurrency handling. Please see EF Core’s Handling Concurrency Conflicts.
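A minimal sketch of that in EF Core, assuming the ScheduleItem entity from the question with its UpdatedOn property; the context name and the rest of the mapping are illustrative:

using Microsoft.EntityFrameworkCore;

public class ScheduleContext : DbContext
{
    public DbSet<ScheduleItem> ScheduleItems => Set<ScheduleItem>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Mark UpdatedOn as a concurrency token: EF Core adds it to the WHERE clause of
        // UPDATE/DELETE statements and throws DbUpdateConcurrencyException when no row matches.
        modelBuilder.Entity<ScheduleItem>()
            .Property(item => item.UpdatedOn)
            .IsConcurrencyToken();
    }
}

With that in place, SaveChanges / SaveChangesAsync throws DbUpdateConcurrencyException when the token check fails, and the caller can reload the entity, retry, or surface the conflict.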
Data store in memory of a single process
If your data store is in the memory of a single process, you can protect against race conditions with locking.
Let’s consider this method:
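A sketch of what such a method can look like, assuming an in-memory List<ScheduleItem> guarded by a collectionLock object; the names Insert, GetCollection, and collectionLock come from the description below, while the class and the overlap check are illustrative, based on the question:

using System;
using System.Collections.Generic;
using System.Linq;

public class InMemoryScheduleStore
{
    private readonly object collectionLock = new object();
    private readonly List<ScheduleItem> scheduleItems = new List<ScheduleItem>();

    public void Insert(ScheduleItem item)
    {
        lock (collectionLock)
        {
            // The read (validation) and the write (insert) happen under the same lock,
            // so no other thread can add an overlapping item in between.
            var collection = GetCollection();
            if (collection.Any(other => item.Start < other.End && other.Start < item.End))
            {
                throw new InvalidOperationException("A conflict happened.");
            }
            collection.Add(item);
        }
    }

    private List<ScheduleItem> GetCollection()
    {
        // Callers must hold collectionLock.
        return scheduleItems;
    }
}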
As long as the collection is only modified through the Insert method, and that method locks collectionLock when interacting with the collection, it is guaranteed that the collection is not modified between GetCollection and .Insert.