
I am trying to write some unit tests to make sure my app behaves correctly when running multi-threaded.

The main reason is that it will be a web server, and many requests could be processed at the same time.

But my tests fail when I introduce Thread.Sleep, which I'm using to simulate database lookups or other background work.

I have re-created the essentials below.

Inside the PerformTask method there is a call to Thread.Sleep.
If I comment it out, the tests pass; if I put it back, the tests fail.

Am I doing something wrong?
Also, with the sleep in place, the tests fail without showing why. Is there a way to catch any exceptions?

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using NUnit.Framework;

namespace MultiThreadedUnitTest
{
    [TestFixture]
    [Parallelizable(ParallelScope.None)]
    public class Tests
    {
        private readonly int NumberOfForLoops = 100;
        private int Counter { get; set; } = 0;
        private object CounterLock = new object();

        private void addToCounter(int value)
        {
            lock (CounterLock)
            {
                Counter += value;
            }
        }

        [Test, Timeout(5000)] // Must complete within 5 seconds
        [Parallelizable(ParallelScope.None)]
        public void RunTest()
        {
            var threads = new Thread[NumberOfForLoops];
            for (int i = 0; i < threads.Length; i++)
            {
                threads[i] = new Thread(() => PerformTask());
            }

            foreach (var thread in threads)
            {
                thread.Start();
            }

            foreach (var thread in threads)
            {
                thread.Join();
            }

            Assert.AreEqual(NumberOfForLoops, Counter);

            var task = AsyncTest();
            task.Wait();

            Assert.AreEqual(NumberOfForLoops, task.Result);

            Assert.AreEqual(NumberOfForLoops * 3, Counter);

            // The "performTask" method was called 30 times but this test completed in under 5 seconds.
            // That means some of the Thread.Sleep calls had to have been made at the same time.
        }

        private async Task<int> AsyncTest()
        {
            var tasks = new List<Task<int>>();
            for (var i = 0; i < NumberOfForLoops; i++)
            {
                tasks.Add(Task.Run(() => Test()));
                /*
                 * tasks.Add(Test());  --> This makes it run in sequence, one after the other
                 */
            }
            await Task.WhenAll(tasks);

            await Task.Run(() => Parallel.For(0, NumberOfForLoops, s =>
            {
                var res = Test();
            }));

            return tasks.Sum(x => x.Result);
        }

        private async Task<int> Test()
        {
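            // Note: there is no await in this method, so it runs entirely synchronously on the
            // calling thread; that is why adding Test() directly (without Task.Run) runs in sequence.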
            PerformTask();
            return 1;
        }

        private object Lock = new object();
        private void PerformTask()
        {
            //Thread.Sleep(1000); // Problem Number 2
            try
            {
                lock (Lock)
                {
                    //TODO: Simulate DB lookup
                    var rnd = new Random();
                    var number = rnd.Next(1, 50) * 100;
                    
                    //Task.Delay(number).Wait();
                    var x = "Thread id = " + Thread.CurrentThread.ManagedThreadId;
                    Console.WriteLine(x);

                    Console.WriteLine("Sleeping for : " + number);

                    /* THE FOLLOWING LINE BREAKS MY UNIT TESTS */
                    //Thread.Sleep(number);

                    addToCounter(1);
                    
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }
    }
}

Another weird thing: when the number of for loops was low, i.e. 10, and I had the other sleep in as well (marked as problem number 2), the unit tests all passed, but only on Windows.

If I ran those unit tests on Ubuntu, they always failed.
That took me ages to figure out, as the unit tests would fail without showing any line numbers or reasons, and I had to add a bunch of Console.WriteLine calls.

If anyone knows why there is this difference between running on Windows and running on Ubuntu, that would be useful to know too.

2 Answers


  1. Chosen as BEST ANSWER

    As KlausGütter commented, removing the Timeout(5000) made the test succeed. It's taking around 30 seconds and the largest contributor to that is the Parallel.For part.

    Probably because, as someone else pointed out, my PC may not be able to run 100 threads at once.

    My build pipeline was able to complete in about 1 minute running a similar test to this, as well as about 20 other tests, so for now I'm pretty relieved.
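
    For reference, the change amounted to dropping the 5-second timeout (or giving it far more headroom). Roughly like this; the 60-second figure is just an illustration, not a tuned value:

    // Before: [Test, Timeout(5000)]  -- fails, since the test actually needs around 30 seconds
    // After: remove the Timeout attribute entirely, or give it generous headroom:
    [Test, Timeout(60000)]
    [Parallelizable(ParallelScope.None)]
    public void RunTest()
    {
        // ... same body as in the question ...
    }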


  2. First of all, fix your test so that it only tests a single thing. I count no fewer than three different parallel invocations of PerformTask, and because the Thread.Sleep sits inside the lock, those waits cannot overlap. So if each PerformTask takes up to 500ms, you have a runtime of up to 3 * 100 * 500ms = 150s, way longer than the timeout for your test.

    So start by separating the three different ways of doing things in parallel into their own tests. Then decrease the maximum wait time, the number of threads, or both, possibly in combination with increasing the timeout; a rough sketch of such a split is at the end of this answer.

    To actually wait I would suggest just using Thread.Sleep(). Task.Delay(number).Wait() should also work, but is more complicated for no real benefit.

    Then you should ask yourself what the test is for. Is it to reproduce a bug? To gain confidence during development? Either is fine, but I would argue that tests like this are not very suitable as unit tests, since they are not very representative of real-world scenarios and will be slow to execute.

    I would recommend making sure that your developers are trained in thread-safe development and that you have a code review policy to catch mistakes early. To ensure correctness, I would suggest having automated tests that use an actual database instead; this will catch more kinds of errors than using "sleep" to simulate delays. Those tests should probably go into a separate test category so they do not slow down the rest of your unit tests.

    To ensure proper scaling and response times, use some form of automated load testing that makes large numbers of requests against a complete test system. This will hopefully catch things like deadlocks or queries interfering with each other in the database. And as a last line of defense, you should probably also have some manual testing.
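
    A rough sketch of what that split might look like is below. The class name, worker count, sleep duration and category label are placeholders to replace with your own numbers, and I have deliberately put the sleep outside the lock so the simulated waits can overlap:

    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    using NUnit.Framework;

    // Sketch only: one test per parallelization mechanism, a short fixed sleep,
    // and a category so these slower tests can be filtered out of a fast unit-test run.
    [TestFixture]
    [Category("Concurrency")]
    public class SplitConcurrencyTests
    {
        private const int WorkerCount = 20; // placeholder, smaller than 100
        private const int SleepMs = 50;     // placeholder, short fixed wait

        private int _counter;
        private readonly object _lock = new object();

        [SetUp]
        public void ResetCounter() => _counter = 0;

        private void PerformTask()
        {
            Thread.Sleep(SleepMs);          // simulated I/O, outside the lock so waits can overlap
            lock (_lock) { _counter++; }    // only the shared-state update is serialized
        }

        [Test, Timeout(10000)]
        public void RawThreads_AllIncrementsArrive()
        {
            var threads = Enumerable.Range(0, WorkerCount)
                                    .Select(_ => new Thread(PerformTask))
                                    .ToArray();
            foreach (var t in threads) t.Start();
            foreach (var t in threads) t.Join();
            Assert.AreEqual(WorkerCount, _counter);
        }

        [Test, Timeout(10000)]
        public async Task TaskRun_AllIncrementsArrive()
        {
            var tasks = Enumerable.Range(0, WorkerCount)
                                  .Select(_ => Task.Run(PerformTask))
                                  .ToList();
            await Task.WhenAll(tasks);
            Assert.AreEqual(WorkerCount, _counter);
        }

        [Test, Timeout(10000)]
        public void ParallelFor_AllIncrementsArrive()
        {
            Parallel.For(0, WorkerCount, _ => PerformTask());
            Assert.AreEqual(WorkerCount, _counter);
        }
    }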
