It would be really great to get more context on what a DPU is for pricing: https://aws.amazon.com/rds/aurora/pricing/
I understand that AWS did one TPC-C 95/5 read/write benchmark and got 700k transactions for 100k DPUs, but that’s not nearly enough context.
There needs to be either a selection of other benchmark-based pricing (especially for a more balanced 50/50 read/write load), actual information on how a DPU is calculated, or a way to return DPUs per query executed, not just an aggregate CloudWatch figure.
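For what it's worth, that single data point only supports back-of-envelope math along these lines (the per-million-DPU price below is an assumption, check the pricing page for your region):

    # Back-of-envelope from the one published data point (TPC-C at 95/5 read/write):
    # ~700k transactions consumed ~100k DPUs.
    transactions = 700_000
    dpus = 100_000
    dpu_per_txn = dpus / transactions                  # ~0.14 DPU per transaction

    # Assumed list price per million DPUs; check the pricing page for your region.
    usd_per_million_dpu = 8.00

    usd_per_million_txn = dpu_per_txn * usd_per_million_dpu
    print(f"{dpu_per_txn:.3f} DPU/txn -> ~${usd_per_million_txn:.2f} per million transactions")

Which is exactly the problem: that figure tells you nothing about a write-heavy workload, because the weighting of reads, writes and compute inside a DPU isn't published.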
We were promised DSQL pricing similar to DynamoDB, and insofar as it's truly serverless (i.e. no committed pricing) they've succeeded, but one of the best parts of DynamoDB is absolute certainty on cost, even if that cost can sometimes be high.
> one of the best parts of DynamoDB is absolute certainty on cost
That depends on whether it's On-Demand or Provisioned, even with the On-Demand limits they recently added.
You still have absolute certainty: read or write x amount of data and it will consume exactly y RCUs/WCUs.
It then just becomes a modeling problem, allowing you to determine your costs up front during design. That's one of the most powerful features of the truly serverless products in AWS, in my opinion.
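As a sketch of what that modeling looks like (the capacity-unit size rules are DynamoDB's documented ones; the workload numbers and on-demand prices are placeholders, substitute whatever the current pricing page says):

    import math

    # DynamoDB's documented capacity-unit rules:
    #   1 write unit = one write of up to 1 KB
    #   1 read unit  = one strongly consistent read of up to 4 KB
    #                  (an eventually consistent read costs half)
    def write_units(item_kb: float) -> int:
        return math.ceil(item_kb / 1.0)

    def read_units(item_kb: float, consistent: bool = True) -> float:
        units = math.ceil(item_kb / 4.0)
        return units if consistent else units / 2

    # Hypothetical workload, modeled entirely up front, no optimizer in the loop.
    writes = 50_000_000    # 50M writes of 2 KB items per month
    reads = 200_000_000    # 200M eventually consistent reads of 6 KB items per month

    wru = writes * write_units(2.0)
    rru = reads * read_units(6.0, consistent=False)

    # Placeholder on-demand prices per million request units; use current list prices.
    usd_per_m_wru, usd_per_m_rru = 1.25, 0.25

    cost = wru / 1e6 * usd_per_m_wru + rru / 1e6 * usd_per_m_rru
    print(f"{wru/1e6:.0f}M write units + {rru/1e6:.0f}M read units -> ~${cost:,.0f}/month")

Every read and write maps to a known number of units, so you can price a design before writing a line of application code.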
Absolute certainty is challenging with a cost-based optimizer in the mix; DDB doesn't face that problem. That said, the cost of some query patterns in DDB shifts into your application layer, so you may not have quite the cost certainty you imagine?
Would you be willing to pay more for certainty? E.g. rent the full server at peak + 20% and run it at 15% utilization some of the time? Provisioned capacity or pre-committed spend seem like reasonable, if perhaps more costly, ways to get certainty.
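Rough, purely illustrative numbers for that trade-off, using your peak + 20% / 15% utilization figures:

    # Purely illustrative: pay-per-use vs. reserving headroom for certainty.
    peak = 100.0                 # whatever unit of capacity your peak requires
    avg_utilization = 0.15       # fraction of peak actually used, on average
    headroom = 1.20              # reserve peak + 20%

    pay_per_use = peak * avg_utilization     # serverless-style bill
    reserved = peak * headroom               # always-on reservation
    # Ignores that reserved capacity is usually cheaper per unit than on-demand.
    print(f"Certainty premium: ~{reserved / pay_per_use:.0f}x")

At 15% utilization that's roughly an 8x premium for the predictable bill, so the question is how much that predictability is really worth.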