
I’m encountering an issue with CosmosDB while performing a batch upsert operation. The error code 16500 indicates a "TooManyRequests" (429) status, which suggests that the request rate is too high for the provisioned throughput. The full error message is as follows:

```
{'writeErrors': [{'index': 0, 'code': 16500, 'errmsg': 'Error=16500, RetryAfterMs=82, Details='Response status code does not indicate success: TooManyRequests (429); Substatus: 3200; ActivityId: 2575d098-ea8f-4bf2-a9d3-0da63bdd4d76; Reason: (\r\nErrors : [\r\n  "Request rate is large. More Request Units may be needed, so no changes were made. Please retry this request later. Learn more: http://aka.ms/cosmosdb-error-429"\r\n]\r\n);', 'op': {'q': {'reviewId': '684f0cdb-8823-4e29-a2f5-58980d275ebe'}, 'u': {'$setOnInsert': {'reviewId': '684f0cdb-8823-4e29-a2f5-58980d275ebe', 'org_id': '', 'content': 'Good', 'comment': '', 'usefulness': '', 'date_time': '2024-08-14 12:51:09', 'user_name': 'Asmini Kumar Sahu', 'rating': 5, 'source': 'play_store', 'type': 'review', 'ai_analysis': {}, 'rca': {}, 'emotional_analysis': '', 'sentimental_analysis': '', 'theme': '', 'analyzed': False, 'appVersion': '476.0.0.49.74', 'thumbsUpCount': 0, 'batch_id': 22, 'toxicity_level': '', 'dashboard_id': '66be2bd29587dbbd8762dc13'}}, 'multi': False, 'upsert': True}}], 'writeConcernErrors': [], 'nInserted': 0, 'nUpserted': 0, 'nMatched': 0, 'nModified': 0, 'nRemoved': 0, 'upserted': []}
```

This error is occurring while trying to save 100 items in a single batch operation. My CosmosDB account is configured with 1000 Request Units (RUs) per second.

I am attempting to upsert 100 items at a time into a CosmosDB container, expecting the operation to complete successfully without any errors. However, I’m receiving the "TooManyRequests" (429) error, which implies that the request exceeds the available RUs.

I expected CosmosDB to handle the batch operation within the provisioned throughput, but it seems that the operation requires more RUs than available. I haven’t yet tried to reduce the batch size or increase the RUs.
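For reference, the failing batch is roughly equivalent to building 100 upsert entries in the raw MongoDB `update` command shape visible in the error's `op` field. A minimal sketch (the field names mirror the error payload; the documents themselves are placeholders, and `build_upsert_ops` is a hypothetical helper):

```python
import uuid

def build_upsert_ops(reviews):
    """Build raw MongoDB `update` command entries, one upsert per review.

    Matches the `op` shape in the 429 error payload: each entry matches
    on reviewId and uses $setOnInsert so existing documents are untouched.
    """
    return [
        {
            "q": {"reviewId": r["reviewId"]},  # filter on the review id
            "u": {"$setOnInsert": r},          # write only if no match exists
            "multi": False,
            "upsert": True,
        }
        for r in reviews
    ]

# 100 placeholder reviews, mirroring the batch size that triggered the 429
reviews = [{"reviewId": str(uuid.uuid4()), "rating": 5} for _ in range(100)]
ops = build_upsert_ops(reviews)
print(len(ops))  # 100
```

With pymongo these would normally be expressed as `UpdateOne(filter, update, upsert=True)` objects passed to `collection.bulk_write(...)`; sent as one batch, the whole write is charged against the provisioned RU/s at once.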

2 Answers


  1. It’s a batch transaction, so it either succeeds or fails as a whole and can’t be split into separate transactions by the SDK. So yes: you’ll need to use smaller batches or increase the provisioned RU/s.
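Splitting into smaller batches can be as simple as chunking the operations client-side before sending them. A sketch (the batch size of 25 is an assumption to tune against your RU budget, and the `collection.bulk_write` usage in the comment assumes pymongo):

```python
from itertools import islice

def chunked(items, size):
    """Yield successive fixed-size chunks from an iterable (last may be short)."""
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk

ops = list(range(100))  # stand-in for 100 UpdateOne upsert operations

batches = list(chunked(ops, 25))
print(len(batches))     # 4
print(len(batches[0]))  # 25

# with pymongo, each batch would then be sent separately, e.g.:
# for batch in chunked(ops, 25):
#     collection.bulk_write(batch, ordered=False)
```

Smaller batches keep each request's RU cost below the per-second budget, at the price of more round trips.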

    You could also go to Features on your Cosmos DB resource and turn on burst capacity. It lets you accumulate unused RUs for up to 5 minutes and then burst at up to 3,000 RU/s. That gives you a bit more wiggle room, but you’ll have to test whether it’s enough for your use case.

  2. 1000 RU/sec isn’t enough for your use case. If you have multiple physical partitions, that throughput is split evenly across them. At roughly 15 RU per write, a 100-document batch costs about 1,500 RU, which would readily cause the throttling you’re seeing.

    When you exceed your RU threshold, Cosmos DB will typically complete the current operation and then put you "in debt" where you have to "pay back" that RU debt during the next time period (which could be several milliseconds, or even several seconds, depending on the RU cost of the operation that put you in debt).

    Note how the error result contains RetryAfterMs=82 – this is the period during which you are "paying off" your Request Unit debt. For the next 82 ms, every operation will be rejected, so it’s a strong hint not to send any requests until that window has passed.
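You can pull that value out of the error message and back off accordingly. A sketch (the regex targets the `RetryAfterMs=NN` token seen in the error above; the 1-second default is an assumption for messages that lack the token):

```python
import re

RETRY_AFTER_RE = re.compile(r"RetryAfterMs=(\d+)")

def retry_after_ms(errmsg, default_ms=1000):
    """Extract the server-suggested backoff (in ms) from a Cosmos DB 429 errmsg."""
    m = RETRY_AFTER_RE.search(errmsg)
    return int(m.group(1)) if m else default_ms

errmsg = ("Error=16500, RetryAfterMs=82, Details='Response status code "
          "does not indicate success: TooManyRequests (429);'")
delay = retry_after_ms(errmsg)
print(delay)  # 82

# in a write loop you would then wait before retrying the failed batch, e.g.:
# time.sleep(delay / 1000)
```

Honoring the server's hint is usually better than a fixed client-side backoff, since the debt window varies with the RU cost of the operation that caused it.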

    There’s no single "right answer" to avoid throttling, but you’ll need to choose one (or more) of the following:

    • increase RU/sec
    • shift to autoscale, setting maximum RU/sec (and minimum = maximum / 10)
    • enable burst capacity
    • enable server-side retries
    • reduce batch size
    • accept that you will be occasionally throttled, and wait out your debt-payback period
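Server-side retries on the MongoDB API, for example, are enabled as an account-level capability. Assuming the Azure CLI, it looks roughly like this (account and resource-group names are placeholders):

```shell
# Enable server-side retries: Cosmos DB retries throttled requests itself
# instead of returning 16500/429 to the client (MongoDB API accounts only).
az cosmosdb update \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --capabilities EnableMongo DisableRateLimitingResponses
```

Note that `--capabilities` replaces the account's full capability list, so any existing capabilities (such as `EnableMongo`) must be repeated in the command.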