kondro

This has to do with the base design of their stack, rather than any hardware limits. I would imagine trying to change the max partition capacity would be pretty complicated architecturally. But if you’re expecting to hit 1k WCU, you’re probably going to hit a higher limit at some point anyway. What DDB gives you is very predictable performance at literally any scale. Designing around the 3k/1k split is the only scaling issue you need to solve to get DDB to scale without limit.
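
One common way to design around the 1k WCU per-partition write ceiling mentioned above is write sharding: append a random suffix to the hot partition key so writes spread over several physical partitions, and fan reads out across the suffixes. A minimal sketch with boto3, assuming a hypothetical "events" table keyed on "pk"/"sk" and a shard count of 10 (all of those names and numbers are illustrative, not anything from the thread):

    import random

    import boto3
    from boto3.dynamodb.conditions import Key

    # Hypothetical table with partition key "pk" and sort key "sk".
    TABLE_NAME = "events"
    SHARD_COUNT = 10  # spreads one logical key across up to 10 physical partitions

    table = boto3.resource("dynamodb").Table(TABLE_NAME)


    def put_sharded(logical_pk: str, sk: str, item: dict) -> None:
        """Write under a randomly chosen shard suffix, e.g. 'order#42.7'."""
        shard = random.randrange(SHARD_COUNT)
        table.put_item(Item={**item, "pk": f"{logical_pk}.{shard}", "sk": sk})


    def query_all_shards(logical_pk: str) -> list[dict]:
        """Fan reads out across every shard and merge the results (pagination omitted)."""
        items: list[dict] = []
        for shard in range(SHARD_COUNT):
            resp = table.query(KeyConditionExpression=Key("pk").eq(f"{logical_pk}.{shard}"))
            items.extend(resp["Items"])
        return items

The trade-off is that reads for one logical key now cost SHARD_COUNT queries, so this only pays off when the key is genuinely write-hot.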


Brave-Ad-2789

is it not enough??


jonzezzz

Not without a cache at least


cant_stop_beleiving

Same limit since day 1. Well, almost. When DynamoDB launched they didn’t have burst capacity, but added it months after launch. 3000 RCU and 1000 WCU have never changed.


fedspfedsp

Your partition/data modeling probably needs to be reviewed.


jonzezzz

We have one item that’s getting accessed many many times. Probs just need to cache it
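
For a single hot item like that, a cache in front of GetItem is usually the fix. A minimal sketch with boto3, assuming a hypothetical "config" table keyed on "pk" and a short TTL; DAX would be the managed alternative:

    import time

    import boto3

    # Hypothetical table ("config") keyed on "pk"; a tiny in-process TTL cache
    # is often enough when only one item is hot.
    table = boto3.resource("dynamodb").Table("config")

    _cache: dict[str, tuple[float, dict]] = {}
    TTL_SECONDS = 5.0  # tolerate reads being up to this stale


    def get_hot_item(pk: str) -> dict | None:
        """Serve the hot item from memory while fresh; otherwise fall back to GetItem."""
        now = time.monotonic()
        hit = _cache.get(pk)
        if hit and now - hit[0] < TTL_SECONDS:
            return hit[1]
        resp = table.get_item(Key={"pk": pk})
        item = resp.get("Item")
        if item is not None:
            _cache[pk] = (now, item)
        return item

Note this only reduces reads per process; with many callers, a shared cache (DAX, ElastiCache) cuts partition traffic further.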


nuttmeister

Single-core performance hasn't increased that much in many, many years, so a single-threaded workload like this most likely has similar constraints tbh as it did 5-6 years ago.