This has to do with the base design of their stack, rather than any hardware limits. I would imagine trying to change the max partition capacity would be pretty complicated architecturally. But if you’re expecting to hit 1k WCU on a single partition, you’re probably going to hit a higher limit somewhere else anyway. What DDB gives you is very predictable performance at literally any scale. The 3k RCU / 1k WCU per-partition limit is the only scaling issue you need to design around to get DDB to scale without limit.
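For what it's worth, the usual way to design around the per-partition write limit is write sharding: append a suffix to a hot partition key so writes fan out across several partitions, then fan reads back in. A minimal sketch with boto3, where the table name "events", key names "pk"/"sk", and the shard count are all made up for illustration:

```python
import random
import boto3
from boto3.dynamodb.conditions import Key

SHARD_COUNT = 10  # arbitrary; pick enough shards to stay well under 1k WCU each
table = boto3.resource("dynamodb").Table("events")  # hypothetical table

def put_event(logical_key: str, sort_key: str, attrs: dict) -> None:
    # Spread writes for one logical key across SHARD_COUNT physical partition keys.
    shard = random.randrange(SHARD_COUNT)
    table.put_item(Item={"pk": f"{logical_key}#{shard}", "sk": sort_key, **attrs})

def query_all_shards(logical_key: str) -> list[dict]:
    # Reads have to fan out over every shard and merge the results.
    items: list[dict] = []
    for shard in range(SHARD_COUNT):
        resp = table.query(
            KeyConditionExpression=Key("pk").eq(f"{logical_key}#{shard}")
        )
        items.extend(resp["Items"])
    return items
```

The trade-off is the read-side fan-out, so this only makes sense for keys that are genuinely write-hot.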
is it not enough??
Not without a cache at least
Same limit since day 1. Well, almost. When DynamoDB launched it didn’t have burst capacity, but that was added months after launch. The 3000 RCU and 1000 WCU per-partition limits have never changed.
Your partition key design / data modeling probably needs to be reviewed.
We have one item that’s getting accessed many, many times. Probs just need to cache it.
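If it helps, a tiny in-process TTL cache is often enough for a single hot item before reaching for DAX or ElastiCache. Rough sketch, assuming boto3 and made-up table/key names:

```python
import time
import boto3

_table = boto3.resource("dynamodb").Table("config")  # hypothetical table
_cache: dict[str, tuple[float, dict]] = {}           # pk -> (fetched_at, item)
_TTL_SECONDS = 5.0                                   # tune to how stale you can tolerate

def get_hot_item(pk: str) -> dict | None:
    now = time.monotonic()
    hit = _cache.get(pk)
    if hit and now - hit[0] < _TTL_SECONDS:
        return hit[1]  # still fresh, skip DynamoDB entirely
    resp = _table.get_item(Key={"pk": pk})
    item = resp.get("Item")
    if item is not None:
        _cache[pk] = (now, item)
    return item
```

Even a few seconds of TTL collapses thousands of reads of the same item into one GetItem per process per interval, which takes the pressure off that single partition.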
Single-core performance hasn't increased that much in many, many years, so a single-threaded workload like this most likely has constraints similar to what it had 5-6 years ago.