
I have a project with one API (httpTrigger) function and one queueTrigger function.

When jobs are being processed by the queueTrigger, the API becomes slow or unavailable. It seems my function app only accepts one invocation at a time.

I'm not sure why; it must be a setting somewhere.

host.json:

{
  "version": "2.0",
  "extensions": {
    "queues": {
        "batchSize": 1,
        "maxDequeueCount": 2,
        "newBatchThreshold": 0,
        "visibilityTimeout" : "00:01:00"
    }
  },
  "logging": ...,
  "extensionBundle": ...,
  "functionTimeout": "00:10:00"
}

batchSize is set to 1 because I only want one job to process at a time. But this shouldn't affect my API, should it? Doesn't that setting apply only to the queue trigger?
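For context, with the Storage queue extension the per-instance ceiling on concurrently processed messages is batchSize + newBatchThreshold. A trivial sketch of that rule (the helper function is hypothetical, not part of any SDK):

```javascript
// Per-instance concurrency for Storage-queue triggers: the host grabs
// batches of `batchSize` messages and fetches the next batch once the
// in-flight count drops to `newBatchThreshold`, so at most
// batchSize + newBatchThreshold messages are processed at once.
function maxConcurrentMessages(batchSize, newBatchThreshold) {
  return batchSize + newBatchThreshold;
}

console.log(maxConcurrentMessages(1, 0));  // the host.json above: one at a time
console.log(maxConcurrentMessages(16, 8)); // the extension's defaults
```

So the settings above do give one queue message at a time per instance, and on their own they say nothing about HTTP invocations.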

function.json for the API:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "route": "trpc/{*segments}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ],
  "scriptFile": "../dist/api/index.js"
}

function.json for the queueTrigger:

{
  "bindings": [
    {
      "name": "import",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "process-job",
      "connection": "AZURE_STORAGE_CONNECTION_STRING"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "$return",
      "queueName": "process-job",
      "connection": "AZURE_STORAGE_CONNECTION_STRING"
    }
  ],
  "scriptFile": "../dist/process-job.js"
}

Other settings in Azure that may be relevant:

FUNCTIONS_WORKER_PROCESS_COUNT = 4

Scale-out options in Azure (the screenshot was in Swedish).

Update
I tried updating the maximum burst to 8.
I also tried switching to dynamicConcurrency.

No success.

My feeling is that the jobs occupy 100% of the CPU, and the API then becomes slow or times out, regardless of the concurrency settings.
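If the job handler does synchronous, CPU-bound work, that alone would explain this: each Node.js worker process runs a single event loop, so a busy queue handler blocks any HTTP invocation dispatched to the same worker, no matter what the concurrency settings say. A minimal simulation of the effect (plain Node with hypothetical function names, not Functions code):

```javascript
// Simulates a CPU-bound "queue job" starving an "HTTP request" that
// shares its event loop.
function busyWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // synchronous, CPU-bound: nothing else can run
}

const start = Date.now();
setTimeout(() => {
  // This callback (our stand-in for an HTTP request) was due after 10 ms,
  // but it can only fire once the event loop is free again.
  console.log(`handled after ${Date.now() - start} ms`);
}, 10);

busyWork(200); // the "queue job" holds the loop for ~200 ms
```

If that matches the symptom, offloading the heavy work (worker threads, child processes, or a separate function app) helps more than tuning batchSize.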

2 Answers


  1. If your queue trigger runs multiple invocations, your function may be reaching its maximum concurrency limit. You are running multiple functions in a single function app, and you have limited scaling to one instance, so by default both functions run on that same instance. With the queue trigger's batchSize set to 1, queue processing takes time and drags down the performance of everything else running on that instance. You can enable dynamic concurrency in your function app so that invocations scale dynamically to meet your triggers' demand.

    Add this setting in your host.json:

    {
      "version": "2.0",
      "concurrency": {
        "dynamicConcurrencyEnabled": true,
        "snapshotPersistenceEnabled": true
      }
    }
    
    

    With dynamic concurrency enabled, I do not need to set batchSize or the other static settings for my queue trigger, as they are ignored.

    Because you have set batchSize to 1, your queue trigger uses static concurrency, so you need to configure the concurrency settings yourself (for example, maxConcurrentSessions for Service Bus triggers) for your function app to scale according to the triggers.

    You can also increase the number of worker processes with the setting below:

    FUNCTIONS_WORKER_PROCESS_COUNT 
    
    

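    For example, with the Azure CLI (the app and resource-group names are placeholders):

        az functionapp config appsettings set \
          --name my-func-app \
          --resource-group my-rg \
          --settings FUNCTIONS_WORKER_PROCESS_COUNT=4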

    Also try increasing maxDequeueCount in your host.json; it determines the number of times a message can be dequeued before it is moved to the poison queue. Setting this value too low can hamper your function's throughput, since failing messages get fewer retries.

    Also, try scaling your function app out to more than one instance and run the functions again.

    Additionally, you can open Diagnose and solve problems for your function app and select Availability and performance to get insights into your function app's performance.

    Refer to this MS document: Concurrency in Azure Functions | Microsoft Learn

  2. Firstly, I would suggest you put each of your functions in its own function app so that they are isolated and one cannot impact the other. Then you don't have to mess with settings at all.

    If you're a glutton for punishment and are resolute in keeping these in the same function app, then I have just one comment:

    FUNCTIONS_WORKER_PROCESS_COUNT should be set to a lower value, not a higher one, if you think your processes are exhausting the underlying VMs' resources. It is the limit reached before the host starts a new instance: with a low number you get more instances, rather than your existing instances becoming oversaturated with work.
