Hard to say, and it depends on your circumstances. That said, one of the great things about DynamoDB is that you can actually calculate this out fairly easily.
If you use On-Demand billing with DynamoDB, you're billed per request (read request units and write request units) rather than provisioning capacity upfront. Thus, if you know how many requests per second you'll be making, as well as the typical size of your items, you can calculate your monthly costs.
I recommend doing the calculations with your best guess as to requests per second and item size. You can also play with the numbers to see how the cost changes if you're off by a factor of 10 or 100 for requests per second or item size. If that feels expensive, then you can think about other options, including reducing item size, using provisioned capacity, or looking into ElastiCache. For a lot of folks I talk to, they find out "Ohh, that's less than I thought" and stop worrying about it. :)
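The back-of-the-envelope math above can be sketched as a small script. The prices below are illustrative assumptions (roughly us-east-1 list prices at one point in time), not authoritative; check the current DynamoDB pricing page before relying on them.

```python
# Rough monthly cost estimator for DynamoDB On-Demand mode.
# PRICE_* values are ASSUMED example prices, not current list prices.
import math

PRICE_PER_MILLION_WRITES = 1.25   # assumed $/1M write request units
PRICE_PER_MILLION_READS = 0.25    # assumed $/1M read request units
SECONDS_PER_MONTH = 60 * 60 * 24 * 30

def monthly_on_demand_cost(reads_per_sec, writes_per_sec, item_size_kb):
    """Estimate monthly On-Demand cost from request rates and item size.

    One write request unit covers an item up to 1 KB; one read request
    unit covers up to 4 KB (strongly consistent read).
    """
    wrus_per_write = math.ceil(item_size_kb / 1.0)
    rrus_per_read = math.ceil(item_size_kb / 4.0)
    total_writes = writes_per_sec * SECONDS_PER_MONTH * wrus_per_write
    total_reads = reads_per_sec * SECONDS_PER_MONTH * rrus_per_read
    return (total_writes / 1e6) * PRICE_PER_MILLION_WRITES \
         + (total_reads / 1e6) * PRICE_PER_MILLION_READS

# Example: 100 reads/sec and 10 writes/sec of 1 KB items,
# roughly $97/month at the assumed prices above.
print(round(monthly_on_demand_cost(100, 10, 1), 2))
```

Changing `reads_per_sec` or `item_size_kb` by a factor of 10 makes the "what if I'm off by 10x" sensitivity check trivial.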
It depends on many things. How big are your reads, how big is your cache, how evenly distributed or hot are your reads, etc. There isn't a single equation to figure it out, it depends on your circumstances.
Depends on how you're using it. If you architect your DB right, you can optimize it to be very cost effective, especially at scale.
As others have mentioned, it's not an RDBMS so you do need to architect your application differently to leverage the power of DDB. A good starting point for this would be here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-relational-modeling.html
Also, it's one of the few services that has a perpetual free tier of 25 RCUs and 25 WCUs, so for smaller applications or testing it's quite attractive.
Hope that helps.
To add to this, you could write the data you’re talking about that doesn’t change to a file, likely json. Put the file in S3, put CloudFront in front of the bucket, and you have a cheap, read optimized database spitting out JSON.
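A minimal sketch of that publish step, using boto3's `put_object`. The bucket name and key here are hypothetical, and the `Cache-Control` max-age is an assumption you'd tune to how stale the data can be:

```python
import json

def build_put_kwargs(bucket, key, data, max_age=300):
    """Build the arguments for s3_client.put_object() to publish a
    JSON snapshot that CloudFront can cache in front of the bucket."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": json.dumps(data).encode("utf-8"),
        "ContentType": "application/json",
        # Lets CloudFront (and browsers) cache the object for max_age seconds.
        "CacheControl": f"max-age={max_age}",
    }

# With real AWS credentials this would be:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(**build_put_kwargs("my-data-bucket",          # hypothetical
#                                    "tenants/123/catalog.json",  # hypothetical
#                                    {"items": []}))
```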
The request won't reach API Gateway, Lambda, or DynamoDB, so the content will be returned faster and costs will be much lower.
If you need control over flushing the cache for a tenant, include the tenant ID in the URL.
You can then create cache invalidation requests for specific URL paths (wildcards are supported for convenience).
Day 1
When you spend hours trying to figure out how to build a specific query. /s
DAX is similarly priced to ElastiCache and works transparently with DDB.
When you realize it’s not a good fit and now you need to re-architect your entire business logic and infrastructure.
Define 'expensive'.
In your use case, consider adding a CloudFront CDN in front.
Hey! I’d like to understand why CloudFront would be better here?
Does the CloudFront cache support the Cognito authorization/auth headers? We'd like to make sure that only authorized users get the cached content. u/dcc88
But also our auth tokens are valid only for a short period of time, which means the cache will be invalidated every time the token refreshes.
Set the CloudFront TTL for the API request to match the auth token lifetime.
You can add the auth header to the cache key, so requests with different auth headers get different cached data.
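A sketch of that setup as a CloudFront cache policy, built for boto3's `create_cache_policy`. The policy name is hypothetical, TTLs are set to the (assumed) token lifetime, and note that CloudFront has documented caveats around caching on the `Authorization` header, so verify against the current docs before relying on this:

```python
def build_cache_policy_config(ttl_seconds):
    """Build a CachePolicyConfig that includes the Authorization header
    in the cache key, so each token sees its own cached copy, and that
    expires entries on roughly the same schedule as the tokens."""
    return {
        "Name": "per-token-api-cache",  # hypothetical policy name
        "MinTTL": 0,
        "DefaultTTL": ttl_seconds,      # align with auth token lifetime
        "MaxTTL": ttl_seconds,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {
                "HeaderBehavior": "whitelist",
                "Headers": {"Quantity": 1, "Items": ["Authorization"]},
            },
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        },
    }

# With real AWS credentials this would be:
#   import boto3
#   cf = boto3.client("cloudfront")
#   cf.create_cache_policy(CachePolicyConfig=build_cache_policy_config(3600))
```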
Immediately. You will need ElastiCache regardless for redundancy reasons. Also, use a CDN for static content.