I am trying to understand any limits (either exact or practical) that exist on the number of partitions for a single Postgres database, and any performance concerns that arise as you approach these limits. To this end, please answer the following related questions:
- What is the maximum total number of partitions a database can hold?
- If I am planning to have more than 10,000 partitions in my database, is there anything I should be concerned about?
- Is there any number of partitions at which concerns arise solely because of the sheer number of partitions and, if so, approximately what number of partitions do these concerns arise at?
2 Answers
1,431,650,303. That's the limit of how many relations PostgreSQL can hold, but you need to subtract the ones that are built-in. Still, about 1.4 billion.

It's best to test. You can use a simple PL/pgSQL loop to spawn 10,000 partitions in seconds, then a single generate_series() can populate all of them in seconds. After that, stress-test with pgbench and see if its performance is satisfactory.
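For illustration, here is a minimal sketch of that approach. The table name part_test, the partition names, and the range layout are made up for this example; adjust them to your own schema.

-- a range-partitioned parent table
CREATE TABLE part_test (id bigint NOT NULL, payload text) PARTITION BY RANGE (id);

-- spawn 10,000 partitions in a PL/pgSQL loop
DO $$
BEGIN
    FOR i IN 0..9999 LOOP
        EXECUTE format(
            'CREATE TABLE part_test_%s PARTITION OF part_test FOR VALUES FROM (%s) TO (%s)',
            i, i * 1000, (i + 1) * 1000);
    END LOOP;
END
$$;

-- a single generate_series() populates all of them: 10 million rows, 1,000 per partition
INSERT INTO part_test (id, payload)
SELECT g, 'row ' || g
FROM generate_series(0, 9999999) AS g;

With the data in place, put a representative query into a file and run it through pgbench, for example something like pgbench -n -f query.sql -c 10 -T 60, and watch the latency and throughput figures.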
Don't look for exact limits. The biggest problem is that the performance will suck in direct proportion to the number of partitions involved. How much suckiness you can tolerate depends on your use case.

To give you a data point: I once created an empty table with 10,000 partitions, and selecting all 0 rows took 0.2 seconds. And that was a trivial query.
You will start to suffer long before you hit any hard limits.
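If you want to reproduce that kind of measurement yourself, a rough sketch (assuming an empty partitioned table with many partitions, such as part_test from the example above before it is populated) is to time a trivial query in psql; with no WHERE clause there is no pruning, so the planner and executor still have to touch every partition:

\timing on
EXPLAIN (ANALYZE) SELECT * FROM part_test;  -- planning time grows with the partition count
SELECT count(*) FROM part_test;             -- returns 0, yet still opens every partition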