I receive multiple JSON item updates at once from provider APIs and have no control over how many arrive: sometimes there is 1 item, sometimes 100k items.

These items are put in an SQS queue. Then I have some code that processes them and writes them to DynamoDB (mostly updates, since the items already exist but have changed); the general shape is sketched below.
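
For illustration, a minimal sketch of that kind of consumer (assuming a Lambda function triggered by the SQS queue and writing with boto3; the table and handler names are placeholders, not my actual code):

```python
import json
import boto3

# Placeholder table name.
table = boto3.resource("dynamodb").Table("items-table")

def handler(event, context):
    """Triggered by the SQS queue; upserts each item into DynamoDB."""
    with table.batch_writer() as batch:
        for record in event["Records"]:
            item = json.loads(record["body"])
            # put_item overwrites any existing item, which serves as the update here.
            batch.put_item(Item=item)
```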

You can imagine that when 100k items arrive at once, the provisioned write capacity is not enough and the table has to autoscale, which costs me both time and money.

Is there any kind of architectural mechanism I could put in place to smooth out the writes so that I don't have to handle 100k items at once, which would save me from paying for autoscaling and extra provisioned capacity?

2 Answers


  1. You could use on-demand mode so you don't need to autoscale up (or scale back down after the burst). If the items are below 1 KB, then writing 100,000 of them would cost about $0.12; see the sketch below.

    On-demand was designed for bursty workloads like you’re describing.

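    A minimal sketch of that switch with boto3 (the table name is a placeholder), assuming the table currently uses provisioned capacity:

    ```python
    import boto3

    dynamodb = boto3.client("dynamodb")

    # Switch the table from provisioned capacity to on-demand billing.
    # "items-table" is a placeholder name.
    dynamodb.update_table(
        TableName="items-table",
        BillingMode="PAY_PER_REQUEST",
    )
    ```

    (The $0.12 figure follows from on-demand write pricing of roughly $1.25 per million write request units: 100,000 writes of items under 1 KB is about 100,000 × $1.25 / 1,000,000 ≈ $0.12.)
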
  2. Use your SQS queue as a buffer; don't try to write all the items at once.

    Set the batch size and maximum concurrency on your event source mapping so that you don't overload your DynamoDB table, smoothing the traffic over a longer duration. You can play around with the other event source mapping settings until you get the results you want; a sketch is below.

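    A sketch of tuning the SQS event source mapping with boto3; the UUID and the batch size, batching window, and concurrency values are placeholders to tune for your workload:

    ```python
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.update_event_source_mapping(
        # Placeholder UUID of the SQS event source mapping on the consumer function.
        UUID="00000000-0000-0000-0000-000000000000",
        # How many messages each invocation receives.
        BatchSize=100,
        # Wait up to 30 seconds to fill a batch before invoking.
        MaximumBatchingWindowInSeconds=30,
        # Cap concurrent invocations so writes trickle into DynamoDB (minimum is 2).
        ScalingConfig={"MaximumConcurrency": 2},
    )
    ```

    With concurrency capped, unprocessed messages simply wait in the queue, so make sure the queue's visibility timeout and message retention are generous enough for the slower drain.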