DynamoDB Saved Us Thousands and Cost Us Months
The tech lead on my team at AWS once wrote an internal memo: DynamoDB should be our default database. The reasoning was hard to argue with. Serverless. Cheap. Blazingly fast. Scalable. Simple APIs. No maintenance windows. Schemaless. I remember thinking to myself, “Well, Mr. Tech Lead, why did you need to write a memo and schedule a meeting just to state something this obvious?”
The memo (and I) weren’t wrong about DynamoDB’s strengths. We were wrong about making any database a default.
“It’s Serverless”
The traditional case for DynamoDB-as-default went something like this: with RDS, you’re managing servers. You pick an instance size and hope it’s right. You schedule maintenance windows and pray nothing breaks. You configure failover, manage connection pools, plan for capacity months ahead. With DynamoDB, all of that disappears. You create a table and start writing to it. AWS handles the rest.
That was the pitch, and our tech lead bought it. None of these problems had actually bitten the team; it was a preemptive argument, made at the height of the serverless hype cycle. The fear of managing servers outweighed the cost of not having a relational engine.
That argument was strong in 2020. It’s weaker now.
Aurora Serverless v2 is not RDS. There is no instance to pick. There is no maintenance window to schedule. There is no failover to configure manually. You set a minimum and maximum ACU, and Aurora scales compute automatically based on load. At idle, it sits at 0.5 ACUs, about $43/month, or pauses entirely to zero, where you pay only for storage. Reads and writes go through the same endpoint. AWS handles patching, backups, and replication. You get a relational engine, ACID transactions, and strong consistency by default, with the same operational model that made DynamoDB attractive in the first place.
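For a sense of how small that operational surface is, here’s roughly what it looks like in the AWS CDK. This is a minimal sketch, not a production setup; the construct names are mine, and the exact engine version enum depends on your aws-cdk-lib release (scaling to zero requires a recent engine version):

```typescript
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as rds from "aws-cdk-lib/aws-rds";

const app = new cdk.App();
const stack = new cdk.Stack(app, "AuroraStack");

const vpc = new ec2.Vpc(stack, "Vpc", { maxAzs: 2 });

// Aurora Serverless v2: no instance size to pick, no maintenance window
// to babysit. Compute scales between the ACU bounds you set.
new rds.DatabaseCluster(stack, "Cluster", {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_16_4,
  }),
  vpc,
  serverlessV2MinCapacity: 0, // pause to zero at idle; pay only for storage
  serverlessV2MaxCapacity: 8, // cap on automatic scale-up
  writer: rds.ClusterInstance.serverlessV2("writer"),
});
```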
The Engineering Cost
DynamoDB’s infrastructure cost is genuinely low. There are no instance costs, and on-demand mode is pay-per-request. In some cases it comes out 50–75% cheaper than RDS. That part of the memo was right.
But infrastructure cost is a rounding error next to engineering cost. And DynamoDB-as-default made engineering expensive.
Follow the best practices and the absurdity compounds.
You adopt single-table design. Everything goes in one table. Your schema has 30+ fields, but thankfully DynamoDB is “schemaless”, per AWS. Your partition key becomes objectId#recordType#lifecycleStatus#version, because that’s the best practice for keys in single-table designs. But now you can’t look up a record by objectId alone, because the partition key is a composite. So you create a GSI to support that access pattern. But GSIs only support eventually consistent reads; there is no option for strong consistency on a GSI. So the access pattern that single-table design forced you into now routes through an index that cannot guarantee you’re reading current data.
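Here’s that dead end in code. A sketch with hypothetical names (app-table, objectId-index), using the v3 SDK’s document client:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// The base table needs the full composite key. If all you have is the
// objectId, you can't construct it, so you're pushed onto a GSI.
async function getRecordByObjectId(objectId: string) {
  return ddb.send(
    new QueryCommand({
      TableName: "app-table",
      IndexName: "objectId-index", // GSI keyed on objectId alone
      KeyConditionExpression: "objectId = :id",
      ExpressionAttributeValues: { ":id": objectId },
      // ConsistentRead: true  <-- not an option: DynamoDB rejects strongly
      // consistent reads on a GSI with a ValidationException.
    })
  );
}
```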
We adopted the best practices. They prevented us from delivering the capability we needed.
Our canaries failed dozens of times because of this. Lifecycle state transitions needed strong consistency. The state machine would read a stale lifecycle status from the GSI and act on it. Our integration tests had await waitForMs(EVENTUAL_CONSISTENCY_WAIT_TIME) scattered throughout them. When a test failed, sometimes the fix was just duplicating that line, adding another wait.
And when two waits weren’t enough, we added another.
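The pattern, reconstructed from memory rather than copied from our repo (waitForMs and the two helpers are stand-ins; the runner is Jest):

```typescript
// Hypothetical stand-ins for our real helpers.
declare function transitionLifecycle(id: string, status: string): Promise<void>;
declare function getRecordByObjectId(id: string): Promise<{ lifecycleStatus: string }>;

const EVENTUAL_CONSISTENCY_WAIT_TIME = 500; // ms, tuned by superstition
const waitForMs = (ms: number) => new Promise((r) => setTimeout(r, ms));

test("lifecycle transition is visible", async () => {
  const recordId = "obj-123";
  await transitionLifecycle(recordId, "ACTIVE");
  await waitForMs(EVENTUAL_CONSISTENCY_WAIT_TIME);
  await waitForMs(EVENTUAL_CONSISTENCY_WAIT_TIME); // added after a flaky run
  await waitForMs(EVENTUAL_CONSISTENCY_WAIT_TIME); // and again
  const record = await getRecordByObjectId(recordId);
  expect(record.lifecycleStatus).toBe("ACTIVE"); // still flaked sometimes
});
```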
With Aurora or RDS, this problem does not exist. Reads from the writer instance are read-after-write consistent by default. You don’t opt into strong consistency. It’s just how relational databases work.
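The same lifecycle check against a Postgres writer endpoint, sketched with node-postgres and a hypothetical records table:

```typescript
import { Client } from "pg";

// Against the writer endpoint, an UPDATE followed by a SELECT sees the new
// value. No waits, no retry loops, no consistency flags.
async function transitionAndVerify(client: Client, id: string): Promise<string> {
  await client.query(
    "UPDATE records SET lifecycle_status = $1 WHERE object_id = $2",
    ["ACTIVE", id]
  );
  const { rows } = await client.query(
    "SELECT lifecycle_status FROM records WHERE object_id = $1",
    [id]
  );
  return rows[0].lifecycle_status; // always "ACTIVE": read-after-write
}
```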
Every design doc that modified the data model included hours of calculation to make sure theoretical item sizes stayed under DynamoDB’s 400 KB item size limit. That’s engineering time spent on database accounting, not features.
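If you’ve never done this accounting, here’s the flavor of it. A rough estimator I’m sketching from DynamoDB’s sizing rules (attribute name bytes plus value bytes); the real rules have more edge cases around number encoding and nested collections:

```typescript
// Approximate an item's billed size: attribute name bytes + value bytes.
// Strings count as UTF-8; numbers, bools, and null get small fixed sizes;
// maps/lists are crudely approximated via JSON length.
const utf8Bytes = (s: string) => Buffer.byteLength(s, "utf8");

function approxItemBytes(item: Record<string, unknown>): number {
  let total = 0;
  for (const [name, value] of Object.entries(item)) {
    total += utf8Bytes(name);
    if (typeof value === "string") total += utf8Bytes(value);
    else if (typeof value === "number") total += 21; // worst-case number
    else if (typeof value === "boolean" || value === null) total += 1;
    else total += utf8Bytes(JSON.stringify(value)); // crude for maps/lists
  }
  return total;
}

const LIMIT = 400 * 1024; // the 400 KB hard limit per item
console.log(approxItemBytes({ pk: "obj#type#status#v1", payload: "…" }) < LIMIT);
```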
Every write to the base table fans out to each GSI. 5 GSIs means up to 6x the WCU at $1.25 per million write request units on-demand. But the operational cost was worse than the dollar cost. The team had 3 GSIs that nobody was using, for an entire year. It took an engineer two weeks to deprecate them: trace every dependency, confirm nothing is live, coordinate the change, monitor the rollout. Two weeks of engineering time to remove three unused indexes.
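Back-of-envelope, with a hypothetical workload and the worst case where every write touches every index’s projection:

```typescript
// Write amplification at the on-demand price quoted above
// ($1.25 per million write request units; items up to 1 KB = 1 WRU each).
const writesPerMonth = 100_000_000; // hypothetical workload
const pricePerMillionWRU = 1.25;
const gsiCount = 5;

const baseCost = (writesPerMonth / 1_000_000) * pricePerMillionWRU;
const totalCost = baseCost * (1 + gsiCount); // each write fans out to every GSI

console.log(baseCost);  // $125/month for the base table alone
console.log(totalCost); // $750/month once 5 GSIs replicate each write
```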
Meanwhile, SQL is old, universal, and with AI, easier to write than ever.
The infrastructure savings attributable to Dynamo were real. The engineering costs dwarfed them.
Why I Use Dynamo
I’m building a side project right now. A CRUD app with some AI data generation. DynamoDB is perfect for it. Simple data model that won’t evolve. No analysts, no stakeholders, no ETL pipeline. Cheap, scalable if I need it. Global Tables give you multi-region, multi-writer replication with almost no setup, and AWS cut replicated write pricing by 67% in late 2024. Nothing else on AWS makes multi-region this easy.
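The entire multi-region setup, as a CDK sketch (the table name and replica regions are mine):

```typescript
import * as cdk from "aws-cdk-lib";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";

const app = new cdk.App();
const stack = new cdk.Stack(app, "SideProjectStack", {
  env: { region: "us-east-1" }, // replicas require an explicit home region
});

// TableV2 with replicas = a multi-region, multi-writer Global Table.
// This really is the whole setup.
new dynamodb.TableV2(stack, "AppTable", {
  partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
  billing: dynamodb.Billing.onDemand(),
  replicas: [{ region: "eu-west-1" }, { region: "ap-southeast-2" }],
});
```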
I barely think about the database layer anymore. Dynamo is saving me both infrastructure and engineering cost. You need to ask if it’ll do the same for you.
The Takeaway
There is no default database.
The cheapest database is the one your team doesn’t have to think about.