When using DynamoDB, it used to be that you had to take care to distribute items evenly between partition keys and make sure that no key was accessed significantly more than others. That was (is?) because read and write capacity is split between partitions, and if one partition gets “hot” the application gets throttled even though there is capacity left elsewhere.
Now there is “adaptive capacity”, and according to this blog post it is even instant and free now.
Does this mean I basically do not have to care anymore if my workload is unbalanced?
It seems to me that this is the case, but nowhere does it explicitly say so.
Can I just forget everything I learned about evenly distributing my partition keys because it no longer matters?
2 Answers
No, adaptive capacity can help you to evenly distribute the load across the keyspace but it’s not going to fix a badly designed schema.
For example, adaptive capacity cannot fix a hot key issue, where you try to write to a single item more than 1000 times per second.
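The usual fundamental for that case is write sharding: spread the logical item over several physical items so no single item takes all the writes. A minimal sketch of the idea, assuming boto3, a hypothetical table named "Counters" with partition key "pk", and an assumed shard count of 10 (none of these are from the answer above):

```python
import random
import boto3

NUM_SHARDS = 10  # assumed; size to your peak write rate

table = boto3.resource("dynamodb").Table("Counters")  # hypothetical table

def increment_hot_counter(counter_id: str) -> None:
    """Write to a randomly chosen shard of the logical counter."""
    shard = random.randrange(NUM_SHARDS)
    table.update_item(
        Key={"pk": f"{counter_id}#{shard}"},   # e.g. "page-views#3"
        UpdateExpression="ADD hits :one",
        ExpressionAttributeValues={":one": 1},
    )

def read_hot_counter(counter_id: str) -> int:
    """Sum all shards to reconstruct the logical counter value."""
    total = 0
    for shard in range(NUM_SHARDS):
        item = table.get_item(Key={"pk": f"{counter_id}#{shard}"}).get("Item", {})
        total += int(item.get("hits", 0))
    return total
```

The trade-off is that reads now have to fan out over the shards, which is why this is a schema-design decision and not something adaptive capacity can do for you.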
Moreover, depending on the sort key defined for the schema, adaptive capacity may not be able to help. For example, if you use a monotonically increasing timestamp as the sort key, all new writes land at the tail of the same item collection, so the newest partition stays hot no matter how capacity is shifted (see the sketch below).
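One common way around that is to keep the timestamp as the sort key but salt the partition key so new writes spread across several partitions. A rough sketch, again with assumed names ("Events" table, "pk"/"sk" attributes, 4 shards), not taken from the answer:

```python
import random
import time
import boto3
from boto3.dynamodb.conditions import Key

NUM_SHARDS = 4  # assumed; pick based on expected write throughput

table = boto3.resource("dynamodb").Table("Events")  # hypothetical table

def put_event(device_id: str, payload: dict) -> None:
    """Salt the partition key so same-source writes spread over shards."""
    shard = random.randrange(NUM_SHARDS)
    table.put_item(Item={
        "pk": f"{device_id}#{shard}",     # partition key spread across shards
        "sk": int(time.time() * 1000),    # monotonic timestamp stays the sort key
        **payload,
    })

def recent_events(device_id: str, since_ms: int) -> list:
    """Reads fan out over the shards and merge the results by timestamp."""
    items = []
    for shard in range(NUM_SHARDS):
        resp = table.query(
            KeyConditionExpression=Key("pk").eq(f"{device_id}#{shard}")
            & Key("sk").gte(since_ms)
        )
        items.extend(resp["Items"])
    return sorted(items, key=lambda i: i["sk"])
```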
In summary, don’t disregard NoSQL fundamentals, and look at adaptive capacity as an added benefit should your access patterns be slightly skewed.
The doc states fairly explicitly: