
I am trying to understand any limits (either exact or practical) that exist on the number of partitions for a single Postgres database, and any performance concerns that arise as you approach these limits. To this end, please answer the following related questions:

 

  1. What is the maximum total number of partitions a database can hold?

  2. If I am planning to have more than 10,000 partitions in my database, is there anything I should be concerned about?

  3. Is there a number of partitions at which concerns arise solely because of the sheer number of partitions and, if so, approximately where does that threshold lie?

2 Answers


    1. About 1,431,650,303. That’s the limit on how many relations a PostgreSQL database can hold; you need to subtract the built-in ones, but that still leaves roughly 1.4 billion.
    2. Nothing beyond what you would have to be concerned about when preparing to accommodate 10,000 independent tables.
    3. That number is only speculated about in the wider context of how many relations are too many, and 10,000 is nowhere near the speculated range. Here’s @Laurenz Albe casually testing on a table with 66,000 partitions.
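    As a quick sanity check against that ceiling (this assumes a live connection to the database in question), you can count how many relations already exist, since every table, index, sequence, view, and partition gets one row in the pg_class catalog:

    ```sql
    -- One row per relation in the current database, built-ins included.
    SELECT count(*) FROM pg_class;
    ```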

    It’s best to test. You can use a simple PL/pgSQL loop to spawn 10,000 partitions in seconds, and a single generate_series() call can then populate all of them just as quickly. After that, stress-test with pgbench and see whether the performance is satisfactory.
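    A sketch of that setup (the table name part_test and the column names are made up for illustration); hash partitioning keeps the loop trivial because there are no range boundaries to compute:

    ```sql
    -- Hypothetical test table, hash-partitioned so one loop can
    -- create all 10,000 partitions with the same statement shape.
    CREATE TABLE part_test (id bigint, payload text)
        PARTITION BY HASH (id);

    DO $$
    BEGIN
        FOR i IN 0..9999 LOOP
            EXECUTE format(
                'CREATE TABLE part_test_%s PARTITION OF part_test
                     FOR VALUES WITH (MODULUS 10000, REMAINDER %s)',
                i, i);
        END LOOP;
    END $$;

    -- A single generate_series() spreads rows across every partition.
    INSERT INTO part_test
    SELECT g, 'row ' || g
    FROM generate_series(1, 1000000) AS g;
    ```

    From there you can point pgbench at a custom script that queries part_test (pgbench -f your_script.sql) and compare throughput against an unpartitioned baseline.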

    Don’t look for exact limits. The biggest problem is that the performance will suck in direct proportion to the number of partitions involved. How much suckiness you can tolerate depends on your use case.

    To give you a data point: I once created an empty table with 10,000 partitions, and selecting all 0 rows took 0.2 seconds. And that was a trivial query.
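    To reproduce that kind of measurement yourself (the table name here is hypothetical), EXPLAIN in ANALYZE mode reports planning time and execution time separately, which matters because much of the per-partition overhead shows up at planning time:

    ```sql
    -- Hypothetical partitioned table; watch the "Planning Time" line
    -- grow as the number of partitions the planner must consider grows.
    EXPLAIN (ANALYZE)
    SELECT * FROM my_partitioned_table WHERE some_column = 42;
    ```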

    You will start to suffer long before you hit any hard limits.
