
I’m using the following logic, which seems to run in parallel when running locally, but when deployed on Azure Functions it runs sequentially:

var allRecordTasks = new List<Task<DataRecord>>();
for (int i = 0; i < 100; i++)
{
    allRecordTasks.Add(Task.Run(() => MyTask(cancellationToken)));
}

await Task.WhenAll(allRecordTasks);

I’m running under an S1 plan and I was under the assumption that a single core could run multiple threads.

Is there some setting to make this work? Is it possible when running on a plan with multiple cores, or is it simply not possible without using Durable Functions?

private async Task<DataRecord> MyTask(CancellationToken cancellationToken)
{
    var timeSeriesType = await GetTimeSeriesTypeAsync();
    var dataRecord = new DataRecord(timeSeriesType);
    return dataRecord;
}

Update: Simply using

allRecordTasks.Add(MyTask(cancellationToken));

ran in parallel. Other issues in my code were keeping the CPU core busy, which didn’t cost much locally (quad-core) but killed performance on a single core. Thanks to Peter Bons and Stephen Cleary for clearing things up and pointing me in the right direction.
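
For reference, a minimal sketch of the corrected loop (it reuses MyTask and DataRecord from the question; nothing else is assumed):

var allRecordTasks = new List<Task<DataRecord>>();
for (int i = 0; i < 100; i++)
{
    // Calling the async method directly returns a Task immediately,
    // so all 100 I/O-bound operations can be in flight at once even on one core.
    allRecordTasks.Add(MyTask(cancellationToken));
}

DataRecord[] records = await Task.WhenAll(allRecordTasks);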

2 Answers


  1. async/await is not the same thing as multithreading. They are somewhat related, but wrapping work in tasks does not by itself constitute multithreading.

    Use parallel tasks instead; see, for example, the second answer to this question, which is similar to yours.
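
    For CPU-bound work, a minimal sketch of what that could look like (the input range and ComputeRecord are illustrative placeholders, not from the question):

        using System.Collections.Concurrent;
        using System.Linq;
        using System.Threading.Tasks;

        var inputs = Enumerable.Range(0, 100);
        var results = new ConcurrentBag<DataRecord>();

        // Parallel.ForEach partitions CPU-bound work across the available cores;
        // on a single-core plan it still degrades to roughly sequential execution.
        Parallel.ForEach(inputs, i =>
        {
            results.Add(ComputeRecord(i)); // ComputeRecord: hypothetical CPU-bound method
        });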

    Specifically for Azure Functions I would not implement parallelism like this, since the ecosystem offers a proper way of doing that: queues.
    So you have one function with your current trigger that retrieves the list of DataRecords and puts each one on a queue, and a second function with a QueueTrigger that handles the items on the queue (see the sketch at the end of this answer). The second function will then execute several times in parallel, as far as the App Service plan allows. Note that I’m not talking about two separate function apps, but two methods with different triggers.

    Using custom multithreading or parallelism can also cause issues on dynamic hosting plans (Y1), where function execution can be halted as soon as the function returns a result.
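
    A minimal sketch of that fan-out pattern, assuming the in-process model with the Storage queue bindings (the queue name, connection setting, trigger, and message type are illustrative, not from the question):

        using Microsoft.Azure.WebJobs;
        using Microsoft.Extensions.Logging;
        using System.Threading.Tasks;

        public static class FanOutFunctions
        {
            // First function: enumerate the work and drop one message per item on a queue.
            // The TimerTrigger is just a placeholder for whatever trigger you use today.
            [FunctionName("EnqueueRecords")]
            public static async Task EnqueueRecords(
                [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
                [Queue("record-items", Connection = "AzureWebJobsStorage")] IAsyncCollector<string> queue)
            {
                for (int i = 0; i < 100; i++)
                {
                    await queue.AddAsync(i.ToString());
                }
            }

            // Second function: runs once per queue message; the host scales these
            // invocations out in parallel as far as the plan allows.
            [FunctionName("ProcessRecord")]
            public static async Task ProcessRecord(
                [QueueTrigger("record-items", Connection = "AzureWebJobsStorage")] string item,
                ILogger log)
            {
                // Do the per-record work here (e.g. the body of MyTask from the question).
                log.LogInformation("Processing item {Item}", item);
                await Task.CompletedTask;
            }
        }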

  2. I’m running under an S1 plan and I was under the assumption that a single core could run multiple threads.

    Well, kinda. Any core can "run" any number of threads. But of course each core is only one core and only executes one CPU instruction at a time. So if you’re talking about threads doing CPU work, then it would only be one at a time.

    (Most likely, the CPU is actually switching between the tasks periodically, but the overall time will be essentially the same as if it just ran them sequentially).

    Is there some setting to make this work? Is it possible when running on a plan with multiple cores, or is it simply not possible without using Durable Functions?

    It should parallelize nicely with multiple cores.

    Pro tip: you can use Process Explorer to set the processor affinity of your locally running instance to simulate one (or two, or …) cores.
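
    If you prefer to do that from code instead of Process Explorer, a minimal sketch (ProcessorAffinity is a bitmask of allowed cores and is only supported on Windows and Linux):

        using System;
        using System.Diagnostics;

        // Restrict the current process to core 0 to simulate a single-core host.
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0b0001;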
