
The scenario is this: when my process starts, I send around 3k messages to an SQS queue. From there, I have a Lambda that should pick up 500 of those messages and process them, then take the next 500, and so on until the queue is empty. Since the Lambda that processes the messages has to talk to a very slow API, I can’t use concurrent Lambda executions; otherwise I would hit the rate limits of that API.
The problem I’m seeing is that sometimes the Lambda starts by picking up 500 messages, but the next execution picks up fewer than 200, the next one even fewer, and so on, instead of picking up 500 every time.
Maybe I’m misunderstanding something in the settings. How can I achieve this behaviour?
These are my settings:

Lambda:
  ProcessMessages:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 600
      CodeUri: process_messages/
      Role: !Sub ${ConsumerLambdaRole.Arn}
      Handler: app.lambda_handler
      Runtime: python3.8
      ReservedConcurrentExecutions: 1
      Architectures:
        - x86_64
Queue:
  ProcessMessagesQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: 'processMessages'
      DelaySeconds: 0
      VisibilityTimeout: 900
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt DLmessages.Arn
        maxReceiveCount: 10

EventSource:
  SendMessageToLambda:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      BatchSize: 500
      Enabled: true
      EventSourceArn: !GetAtt ProcessMessagesQueue.Arn
      FunctionName: !GetAtt ProcessMessages.Arn
      FunctionResponseTypes: 
        - "ReportBatchItemFailures"
      MaximumBatchingWindowInSeconds: 20
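For reference, because FunctionResponseTypes enables ReportBatchItemFailures, the handler has to return the IDs of the records that failed; every other record in the batch is then deleted from the queue. A minimal sketch of that contract (process_message is a hypothetical stand-in for the call to the slow API):

```python
import json

def process_message(body):
    # Hypothetical stand-in for the real work against the slow API.
    json.loads(body)

def lambda_handler(event, context):
    # Partial-batch response shape required by ReportBatchItemFailures:
    # report only the messages that failed; the rest are deleted.
    failures = []
    for record in event["Records"]:
        try:
            process_message(record["body"])
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Failed messages become visible again after the visibility timeout and, after maxReceiveCount receives, land in the DLQ configured above.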

2 Answers

  1. Yes, that is the behaviour of AWS Lambda with Amazon SQS: you are not guaranteed that Lambda will receive the maximum number of messages from the SQS queue. Lambda invokes your function as soon as one of three conditions is met, whichever comes first: the batch size is reached, the batching window (MaximumBatchingWindowInSeconds) expires, or the 6 MB invocation payload limit is hit.

    It is not possible to change this behaviour.
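    The shrinking batch sizes can be reproduced with a toy model (purely illustrative, not the actual AWS polling implementation, and all parameter values below are made up): a batch is flushed whenever the target size is reached or the batching window expires, so once messages arrive slower than the window fills, batches come up short of BatchSize.

```python
def simulate_batches(total_messages, batch_size, window_polls, arrival_per_poll):
    """Toy model of SQS -> Lambda batching: messages trickle in
    `arrival_per_poll` at a time; a batch is flushed when it reaches
    `batch_size`, when `window_polls` polls have elapsed, or when the
    queue is drained."""
    batches = []
    buffered = 0
    polls_waited = 0
    remaining = total_messages
    while remaining > 0 or buffered > 0:
        # Messages trickle in from the queue.
        take = min(arrival_per_poll, remaining)
        remaining -= take
        buffered += take
        polls_waited += 1
        # Flush on size, on window expiry, or when the queue is drained.
        if buffered >= batch_size or polls_waited >= window_polls or remaining == 0:
            flush = min(buffered, batch_size)
            batches.append(flush)
            buffered -= flush
            polls_waited = 0
    return batches
```

    With simulate_batches(3000, 500, 4, 100), for example, every flush is triggered by the window rather than by the size, so every batch falls short of 500, matching the pattern described in the question.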

  2. If your goal is to limit the number of requests to an AWS Lambda function, you can adjust its concurrency settings.
    Each instance of your execution environment handles one request at a time, so the total number of simultaneous invocations is bounded only by the concurrency available to your function.
    Reserved concurrency caps the maximum number of concurrent invocations for that function. Synchronous requests arriving in excess of the reserved concurrency limit fail with a throttling error; for an SQS event source, throttled batches return to the queue and are retried after the visibility timeout, which also increments their receive count.

    Configuring reserved concurrency for a function
