
PostgreSQL vs MySQL in 2026: Pick Your Failure Mode
I’ve watched “database selection” turn into an on-call problem six months later.
PostgreSQL vs MySQL in 2026 doesn’t come down to a checklist of features. It comes down to which kind of pain you can live with under load: lock waits, replication lag, vacuum or purge pressure, connection storms during deploys, or schema changes that block writes at the worst moment.
The 10-minute decision: what will break first?
Pick the database by answering one question.
What gets painful first as your service grows: correctness under concurrency, or operations under churn? Both engines store JSON. Both run OLTP. The split shows up when you mix workloads, evolve schemas every week, and ship code that restarts pods all day.
- Default to PostgreSQL: when you expect complex queries, queryable JSON, database-enforced tenant isolation, or “reporting on prod” creep.
- Default to MySQL (InnoDB): when you can keep the workload strictly OLTP, your team already runs MySQL well, and you will push analytics elsewhere on purpose.
If you cannot describe your workload in one sentence, you probably want PostgreSQL. If you can, and it’s pure CRUD, MySQL stays boring in a good way.
Workload archetypes (the stuff I see teams trip on)
This bit me when a “simple CRUD” service turned into a dashboard factory.
The team started with clean endpoints and tight indexes. Six product requests later, they added rollups, ad-hoc filters, and JSON flags. The database never “got slow” in a benchmark. It got weird at p99 during peak traffic.
- Straight OLTP, simple joins: PostgreSQL usually hits connection pressure and vacuum tuning earlier. MySQL usually stays smooth until query shapes drift into complex joins and reporting.
- OLTP plus analytics-ish reads on the primary: PostgreSQL gives you more SQL tools, but heavy queries can spike latency and push replicas behind. MySQL works if you offload analytics early and actually follow through.
- JSON-heavy where JSON appears in WHERE clauses: PostgreSQL tends to win on queryability, but GIN indexes can bloat and vacuum has to keep up. MySQL tends to push you toward generated columns and “real columns,” which is boring and often correct for OLTP.
- Multi-tenant SaaS: PostgreSQL’s row-level security can enforce tenant rules in the database. MySQL can work if you isolate tenants by schema or database, but app-layer mistakes cause the ugliest incidents.
- High-write ingestion: both engines hate “too many secondary indexes.” PostgreSQL adds vacuum and bloat into the mix. MySQL adds replica lag and secondary index cost under sustained writes.
Concurrency and performance: ignore microbenchmarks
Fast queries don’t save you.
Most greenfield services lose to one of four patterns: 500 pods reconnecting at once, one hot row everybody updates, an index explosion on a write-heavy table, or a “helpful” transaction that holds locks far longer than anyone admits.
Hot rows: fix the pattern before you blame the engine
Hot rows are a data-model problem, not an engine problem.
If you run a global counter, or you update “last_seen” on every request, you will create a tiny little traffic jam inside any relational database. PostgreSQL and MySQL both fight you here. The fix usually lives in your data model, not in a config file.
- Prefer append-only writes: write an event row, aggregate later. Your disks stay busy, but your locks stay calmer.
- Shard counters: keep N counters per tenant and sum them. It looks silly until you see the p99 line flatten.
- Keep transactions tiny: one request, one short transaction. Do not hold locks while you call another service.
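The sharded-counter idea above can be sketched in a few lines. SQLite stands in for PostgreSQL or MySQL here (the pattern is identical in all three), and the table and column names are made up for illustration:

```python
import random
import sqlite3

N_SHARDS = 8  # illustrative; tune to your write concurrency

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE page_views (shard INTEGER PRIMARY KEY, n INTEGER NOT NULL)"
)
conn.executemany(
    "INSERT INTO page_views (shard, n) VALUES (?, 0)",
    [(i,) for i in range(N_SHARDS)],
)

def increment() -> None:
    # Each writer picks a random shard, so concurrent updates spread
    # across N_SHARDS rows instead of piling onto one hot row.
    shard = random.randrange(N_SHARDS)
    conn.execute("UPDATE page_views SET n = n + 1 WHERE shard = ?", (shard,))

def total() -> int:
    # Reads pay a small aggregation cost in exchange for calmer locks.
    return conn.execute("SELECT SUM(n) FROM page_views").fetchone()[0]

for _ in range(1000):
    increment()
print(total())  # 1000
```

Nothing here is engine-specific: the trade is always the same, cheaper writes for a slightly more expensive read.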
Isolation: check defaults, then choose on purpose
Defaults vary more than people admit.
PostgreSQL typically runs READ COMMITTED by default, and InnoDB commonly runs REPEATABLE READ by default, but managed services sometimes change settings. Pick isolation levels per workload, then write tests that try to break your invariants under concurrency. If you do not test it, you do not have it.
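"Write tests that try to break your invariants" is concrete, not hand-waving. Here is a minimal lost-update demonstration, using two autocommit SQLite connections to a shared in-memory database as a stand-in for any relational engine; the schema is invented for illustration:

```python
import sqlite3

# Two clients sharing one database (shared-cache in-memory SQLite).
uri = "file:lostupdate?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True, isolation_level=None)
b = sqlite3.connect(uri, uri=True, isolation_level=None)

a.execute("CREATE TABLE balance (id INTEGER PRIMARY KEY, amount INTEGER)")
a.execute("INSERT INTO balance VALUES (1, 0)")

# Both clients read the current value...
read_a = a.execute("SELECT amount FROM balance WHERE id = 1").fetchone()[0]
read_b = b.execute("SELECT amount FROM balance WHERE id = 1").fetchone()[0]

# ...then each writes back read + 1. One increment silently disappears.
a.execute("UPDATE balance SET amount = ? WHERE id = 1", (read_a + 1,))
b.execute("UPDATE balance SET amount = ? WHERE id = 1", (read_b + 1,))

final = a.execute("SELECT amount FROM balance WHERE id = 1").fetchone()[0]
print(final)  # 1, not 2: a lost update
```

The fixes are the boring ones: write `SET amount = amount + 1` instead of read-modify-write, or lock the row (`SELECT ... FOR UPDATE`), or run at an isolation level that rejects the second write. The point is that a five-minute test like this tells you which behavior you actually have.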
JSON in 2026: “payload” vs “query model”
The thing nobody mentions is the maintenance bill.
Queryable JSON feels cheap on day 1. On day 90, your index grows teeth, autovacuum works overtime, and the team starts arguing about whether that JSON field “really needs to be updated so often.”
- Pick PostgreSQL for queryable JSON: when you filter, join, or aggregate on nested JSON fields and need serious indexing options.
- Pick MySQL when JSON stays a payload: when you store JSON for convenience but promote the few real filter/sort fields into generated or extracted columns early.
My strong opinion: JSON should earn its place in your WHERE clause. Otherwise, keep it as a blob and move on.
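"Promote the real filter fields" can be sketched like this. SQLite stands in for MySQL, the extraction happens in application code, and every name here is illustrative:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE orders (
           id INTEGER PRIMARY KEY,
           payload TEXT NOT NULL,   -- JSON blob, never queried directly
           status TEXT NOT NULL     -- promoted from payload at write time
       )"""
)
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")

def insert_order(order_id: int, payload: dict) -> None:
    # Extract the one field that appears in WHERE clauses and store it
    # as a plain indexed column; the rest of the JSON stays a blob.
    conn.execute(
        "INSERT INTO orders (id, payload, status) VALUES (?, ?, ?)",
        (order_id, json.dumps(payload), payload["status"]),
    )

insert_order(1, {"status": "shipped", "flags": {"gift": True}})
insert_order(2, {"status": "pending", "flags": {}})

row = conn.execute("SELECT id FROM orders WHERE status = 'shipped'").fetchone()
print(row[0])  # 1
```

In MySQL proper you would typically let the database do the extraction with a generated column over the JSON value and index that, which keeps the app honest; the shape of the schema is the same either way.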
Ops: the boring stuff that pages you
Failover rarely fails in the database.
Failover fails in clients. Pools keep dead connections, retries stampede the new primary, and your “idempotent” handler double-charges someone. Practice failover with the real app, not just a database health check.
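The standard antidote to retry stampedes is capped exponential backoff with full jitter. A minimal sketch, with illustrative (not tuned) numbers:

```python
import random

def backoff_delays(attempts: int, base: float = 0.1, cap: float = 5.0) -> list[float]:
    """Full-jitter backoff: each delay is uniform in [0, min(cap, base * 2^n)]."""
    delays = []
    for attempt in range(attempts):
        # Jitter is the whole point: 500 clients that all lost their
        # connection at once must NOT reconnect in lockstep.
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(random.uniform(0, ceiling))
    return delays

print(backoff_delays(5))
```

Most drivers and proxies can be configured to do this for you; the test that matters is killing the primary under load and watching whether your clients spread out or stampede.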
- Replication lag: plan for it. If you need read-your-writes, pin to the primary or build session consistency.
- Schema changes: test them on production-sized data. Some DDL takes locks. Some “online” changes still hurt depending on the exact operation and version.
- Maintenance reality: PostgreSQL needs vacuum to keep up. MySQL needs disciplined indexing and InnoDB tuning. Neither is “set and forget.”
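"Pin to the primary or build session consistency" usually means something as small as this sketch: after a session writes, route its reads to the primary until replicas can plausibly have caught up. The class, the pin window, and the return values are all assumptions for illustration:

```python
import time

class SessionRouter:
    def __init__(self, pin_seconds: float = 2.0):
        # pin_seconds should cover your observed worst-case replica lag.
        self.pin_seconds = pin_seconds
        self._last_write: dict[str, float] = {}

    def record_write(self, session_id: str) -> None:
        self._last_write[session_id] = time.monotonic()

    def route_read(self, session_id: str) -> str:
        # Reads go to a replica unless this session wrote recently.
        wrote_at = self._last_write.get(session_id)
        if wrote_at is not None and time.monotonic() - wrote_at < self.pin_seconds:
            return "primary"
        return "replica"

router = SessionRouter(pin_seconds=2.0)
print(router.route_read("alice"))  # replica
router.record_write("alice")
print(router.route_read("alice"))  # primary
```

A time-based pin is the crude version; GTID or LSN tracking gives you exact read-your-writes, but even the crude version prevents the classic "I just saved and it's gone" bug report.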
What to measure before you commit
Run one realistic load test.
Use production-like concurrency, data size, and a deploy-style reconnect storm. Then measure p95 and p99 latency, lock waits, deadlocks, and replica lag. Watch what breaks first. That’s your answer.
- Workload shape: read/write ratio by endpoint, rows touched per transaction, longest transaction duration.
- Contention: lock wait time, deadlock rate, hot indexes that get hammered by updates.
- Client churn: peak concurrent connections during deploy, pool exhaustion rate, retry storm behavior during failover.
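Computing p95/p99 from your raw load-test samples is trivial; the mistake is averaging instead. A minimal nearest-rank sketch, with fabricated sample data:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    # Nearest-rank percentile: sort, then take index ceil(pct/100 * n) - 1.
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# 90 fast requests, 8 slow ones, 2 ugly outliers (values in ms, invented).
latencies_ms = [12.0] * 90 + [40.0] * 8 + [900.0] * 2

print(percentile(latencies_ms, 95))  # 40.0
print(percentile(latencies_ms, 99))  # 900.0
```

Note how the mean (about 31 ms) hides the 900 ms requests entirely, while p99 puts them right in your face. That is the number your pager cares about.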
Deployment: managed vs Kubernetes (pick your battles)
Managed databases usually win.
If you run Postgres or MySQL inside Kubernetes, you are building a database platform. Operators help, but you still own volume performance, backups that actually restore, split-brain avoidance, and client reconnection behavior during node maintenance. I don’t trust “Kubernetes makes it portable” for stateful data. Incident response eats that story alive.
- High churn apps: cap connections and fail fast. Add a proxy or pooler if you need it.
- Node/Python/Go: pools matter more than driver choice. Forking models matter. Timeouts matter.
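"Cap connections and fail fast" looks like this in any language. The sketch below uses placeholder strings instead of real driver connections, and the cap and timeout are illustrative, not recommendations:

```python
import queue

class BoundedPool:
    def __init__(self, size: int, acquire_timeout: float):
        self._q: queue.Queue = queue.Queue(maxsize=size)
        self.acquire_timeout = acquire_timeout
        for i in range(size):
            self._q.put(f"conn-{i}")  # placeholder connections

    def acquire(self) -> str:
        try:
            # Fail fast: a quick error beats a pile-up of waiters
            # during a deploy-time reconnect storm.
            return self._q.get(timeout=self.acquire_timeout)
        except queue.Empty:
            raise RuntimeError("pool exhausted; shed load instead of queueing")

    def release(self, conn: str) -> None:
        self._q.put(conn)

pool = BoundedPool(size=2, acquire_timeout=0.05)
a = pool.acquire()
b = pool.acquire()
try:
    pool.acquire()  # third caller errors out fast instead of waiting forever
except RuntimeError as exc:
    print(exc)
pool.release(a)
```

Real pools (HikariCP, pgbouncer, your driver's built-in) do this plus health checks; the property worth verifying is the same: when the pool is empty, callers get a fast error, not an unbounded queue.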
Migration and making the choice reversible
Don’t start with a benchmark suite.
Start by keeping the decision reversible. Keep SQL conservative, isolate dialect-specific bits behind a module, and plan for data export. If you bet on PostgreSQL features like RLS or heavy jsonb indexing, migration gets expensive. If you bet on MySQL staying pure OLTP, do not let the workload quietly turn into a mini-warehouse.
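"Isolate dialect-specific bits behind a module" can be as plain as one lookup table that the rest of the codebase goes through. Everything here (table name, the `upsert_sql` helper) is invented for illustration:

```python
# One module owns every engine-specific statement. The rest of the app
# asks for intent ("upsert"), never for syntax.
DIALECTS = {
    "postgresql": {
        "upsert": (
            "INSERT INTO kv (k, v) VALUES (%s, %s) "
            "ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v"
        ),
    },
    "mysql": {
        "upsert": (
            "INSERT INTO kv (k, v) VALUES (%s, %s) "
            "ON DUPLICATE KEY UPDATE v = VALUES(v)"
        ),
    },
}

def upsert_sql(dialect: str) -> str:
    # Swapping engines later touches only this table, not every call site.
    return DIALECTS[dialect]["upsert"]

print(upsert_sql("postgresql"))
```

An ORM or query builder gives you the same boundary with more machinery; either way, the test of reversibility is how many files a grep for engine-specific syntax touches.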
Bottom line
Pick PostgreSQL when you expect your workload to evolve into complex queries, queryable JSON, or database-enforced multi-tenant controls.
Pick MySQL when you can keep it disciplined OLTP, you have MySQL muscle memory, and you will offload analytics instead of pretending you will later.
Answer two questions after one real load test: what breaks first under concurrency, and what will your team debug at 02:00 during failover?