
If we insert many records into DynamoDB using BatchWriteItem, can DynamoDB optimize the batch operation so that the total WCU consumed is less than the sum of the WCU each item would consume on its own?

For example:
Assume I'm trying to insert 1,000 records, each 1 KB in size (1 WCU = 1 KB). If I insert each item individually using PutItem, it would cost 1,000 WCU. However, if I use BatchWriteItem, DynamoDB might optimize the batch operation so that the total cost is less than 1,000 WCU.
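To make it concrete, here is roughly what I mean (a minimal boto3 sketch; the table name "Items" and the key/attribute names are placeholders for my actual schema):

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("Items")

    # 1,000 records of roughly 1 KB each
    records = [{"pk": str(i), "payload": "x" * 900} for i in range(1000)]

    # Option A: one PutItem request per record
    for record in records:
        table.put_item(Item=record)

    # Option B: BatchWriteItem via the batch_writer helper, which groups
    # the records into batches of up to 25 items per request
    with table.batch_writer() as batch:
        for record in records:
            batch.put_item(Item=record)

My question is whether Option B can end up consuming fewer WCUs than Option A.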

However, I haven't found any details on how DynamoDB optimizes WCU during a batch write operation.

2 Answers


  1. You haven’t found any details because the pricing for batch writes is the same as if you did individual writes. The batch feature is a speed optimization.

  2. The premise is flawed. That's not what BatchWriteItem does. It's intended to speed up processing by allowing you to parallelize write operations; it is not a way to save on WCUs.

    From the docs:

    If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don’t support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.

    but:

    Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
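
    You can check this yourself by asking DynamoDB to report the capacity a batch consumes (a rough boto3 sketch; the table name "Items" and attribute names are placeholders):

        import boto3

        client = boto3.client("dynamodb")

        # BatchWriteItem accepts at most 25 items per request
        requests = [
            {"PutRequest": {"Item": {"pk": {"S": str(i)}, "payload": {"S": "x" * 900}}}}
            for i in range(25)
        ]

        response = client.batch_write_item(
            RequestItems={"Items": requests},
            ReturnConsumedCapacity="TOTAL",
        )

        # Each ~1 KB put still costs 1 WCU, so this reports roughly 25 units,
        # the same as 25 individual PutItem calls would consume.
        for consumed in response["ConsumedCapacity"]:
            print(consumed["TableName"], consumed["CapacityUnits"])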
