
I’m using Cosmos DB with the MongoDB API, and I currently use a single database instance to save IoT time-series data from multiple machines, with a dedicated collection per machine.

Recently, our Azure Function (which processes and saves the machine data) has been failing to create new collections with the following error:

Your account is currently configured with a total throughput limit of 3200 RU/s. 
This operation failed because it would have increased the total throughput to 3600 RU/s. 
See https://aka.ms/cosmos-tp-limit for more information.

Unfortunately, this doesn’t make sense to me, since:

  1. I initially configured the database to use "Database-level throughput".
  2. The database already has 25 collections (which would require 10,000 RU/s according to the logic above – but currently only 3,200 RU/s are provisioned).

It’s also weird that I can’t even manually create a new collection for this database:
[screenshot: error when manually creating a collection in the portal]

So my 2 questions are:

  1. Where can I check (and, if necessary, set) the following setting for an existing database?
    [screenshot: the database-level throughput setting]
  2. If the above setting should already be set: why is my Azure Function no longer able to create new collections? Neither the code nor the database settings have changed.

I tried turning off the Azure Function to make sure it wasn’t hogging all of the Cosmos DB’s RU/s.
Still, even with the Azure Function turned off, I wasn’t able to add more collections.

2 Answers


  1. Chosen as BEST ANSWER

    I found the solution to the problem in the documentation:

    Containers in a shared throughput database share the throughput (RU/s) allocated to that database. With standard (manual) provisioned throughput, you can have up to 25 containers with a minimum of 400 RU/s on the database. With autoscale provisioned throughput, you can have up to 25 containers in a database with autoscale minimum 1000 RU/s (scales between 100 - 1000 RU/s).
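    The rule quoted above can be sketched as a quick check. This is purely illustrative: the 25-container cap and the 400 RU/s minimum come from the quoted documentation (standard/manual provisioned throughput), not from any actual service API.

    ```python
    # Illustrative sketch of the shared-throughput rule quoted above
    # (standard/manual provisioned throughput, values from the docs).
    MAX_SHARED_CONTAINERS = 25  # containers that may share the database's RU/s
    MIN_DEDICATED_RU = 400      # minimum RU/s for a container with dedicated throughput

    def needs_dedicated_throughput(existing_containers: int) -> bool:
        """Once 25 containers already share the database throughput,
        any new container must bring its own RU/s."""
        return existing_containers >= MAX_SHARED_CONTAINERS

    # The asker already has 25 collections, so collection #26
    # needs at least 400 RU/s of its own (and therefore carries cost):
    print(needs_dedicated_throughput(25))  # True
    print(needs_dedicated_throughput(24))  # False
    ```

    This explains why the error appeared only now: the first 25 collections were free to create under the database’s shared throughput, and the 26th was the first one to require dedicated RU/s.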

    This was added in February 2020.


  2. It looks like a total throughput limit was set. This is a cost-management feature that enforces that no more than a set number of RU/s (in this case 3,200) is used by all billable resources – databases or collections – created within the account. Typically, a FinOps person or an enterprise admin sets such a limit to prevent unexpected costs or exceeding a budget.

    You are using shared database throughput and creating containers under the shared-throughput database. In that case, the containers themselves do not incur cost; the database does. In other words, adding an extra container to a shared-throughput database does not increase cost. Dozens or hundreds of containers can be created under any database, but only up to 25 containers can share the throughput of the database containing them. The 26th container, and any additional ones, must have their own dedicated throughput and will carry cost.

    If you are trying to create the 26th container, that container will need its own RU/s and, as mentioned, will carry cost. However, given the total throughput limit set on the account, you are unable to create it, because the sum of all RU/s used in the account would exceed that limit.
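    The failing creation is essentially arithmetic. A hypothetical sketch of the check described above, using the numbers from the error message (this is not the actual service logic):

    ```python
    # Hypothetical sketch of the cost-management check described above,
    # using the numbers from the error message.
    TOTAL_THROUGHPUT_LIMIT = 3200  # account-level cap set under Cost Management (RU/s)
    current_account_ru = 3200      # the shared-throughput database already uses it all
    new_container_ru = 400         # minimum dedicated RU/s for the 26th container

    would_be = current_account_ru + new_container_ru
    if would_be > TOTAL_THROUGHPUT_LIMIT:
        # Mirrors the error: "would have increased the total throughput to 3600 RU/s"
        print(f"Rejected: operation would raise total throughput to {would_be} RU/s "
              f"(limit {TOTAL_THROUGHPUT_LIMIT} RU/s)")
    ```

    This matches the error message exactly: 3,200 RU/s already provisioned plus the 400 RU/s minimum for the new dedicated-throughput container gives the 3,600 RU/s the service refused.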

    You can raise this self-imposed limit, or remove it altogether, by navigating to your Cosmos DB account and opening the Cost Management tab. Here’s a screenshot:

    [screenshot: Cost Management tab]
