Kubernetes v1.34.3 release notes: what I’d check before upgrading
I do not trust “just a patch.”
I’ve watched “tiny” Kubernetes patch upgrades knock over admission, storage, or a node pool in ways nobody predicted, usually because an addon lagged behind or a rollback plan lived in someone’s head.
TL;DR: should you apply v1.34.3?
Probably yes if you run Kubernetes 1.34.x and you patch regularly.
If you run a managed control plane (EKS, GKE, AKS), you usually wait for your provider’s 1.34.3 roll-out and follow their upgrade path, not upstream kubeadm steps.
- Upgrade soon (most production clusters): You want the latest 1.34 patch for routine risk reduction, but you still stage it first, because CNIs and CSIs love to surprise you.
- Upgrade now (if symptoms match): You see weird admission rejects, storage attach flakiness, or node-level device issues that look like fixes mentioned in the v1.34.3 changelog.
- Wait a bit: You cannot test your CNI/CSI/operator stack in staging, or your provider has not published 1.34.3 yet. Some folks skip canaries for patch releases. I don’t, but I get it.
What actually changed in Kubernetes v1.34.3
This is a patch release, so you should expect targeted fixes, not new features.
The thing that trips people up is the GitHub UI. The release page shows a big commit number, but that number usually means “how far master moved since this tag,” not “how many commits this patch contains.”
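If you want the real number, diff the two tags yourself. A rough sketch, assuming a partial clone of kubernetes/kubernetes with both release tags fetched:

```bash
# Partial clone keeps it small; we only need history, not a working tree.
git clone --filter=blob:none --no-checkout https://github.com/kubernetes/kubernetes.git
cd kubernetes
git log --oneline v1.34.2..v1.34.3 | wc -l   # the actual size of the patch
git log --oneline v1.34.2..v1.34.3           # skim the cherry-pick subjects
```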
Start here and read the real list of cherry-picks:
- Official tag and assets: https://github.com/kubernetes/kubernetes/releases/tag/v1.34.3
- Changelog file for 1.34: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
If you only do one thing: open the v1.34.3 section in CHANGELOG-1.34.md and skim for the components you run hot (apiserver, admission, CSI, kubelet).
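Here's the lazy version of that skim, assuming the raw changelog URL above and the usual "# vX.Y.Z" section headings; tune the grep for your own stack:

```bash
# Pull the 1.34 changelog and read only the v1.34.3 section,
# filtered to the components you care about.
# (Heading format is assumed; adjust the sed patterns if the file layout differs.)
curl -sL https://raw.githubusercontent.com/kubernetes/kubernetes/master/CHANGELOG/CHANGELOG-1.34.md \
  | sed -n '/^# v1\.34\.3$/,/^# v1\.34\.2$/p' \
  | grep -iE 'apiserver|admission|csi|kubelet'
```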
The failure mode I see most on patch upgrades
Admission bites first.
You roll the control plane, someone creates a namespace, and suddenly a policy that “always worked” starts rejecting requests with a blunt “namespace not found,” even though kubectl shows the namespace. I’ve seen teams chase DNS for an hour because the error message smells like eventual consistency, not admission.
I cannot promise v1.34.3 fixes your exact bug.
But if your incidents rhyme with that story, you should scan the v1.34.3 changelog for admission and namespace lookup fixes and decide fast.
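When one of those rejects shows up, I look at what admission actually sits in the create path before I blame DNS. A minimal sketch; the resource kinds are standard, and the label selector assumes kubeadm static pods:

```bash
# What webhook admission sits in the create path, and how each webhook
# fails when its backend is unhappy (Fail vs Ignore).
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations
kubectl get validatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,WEBHOOKS:.webhooks[*].name,FAILURE_POLICY:.webhooks[*].failurePolicy'

# If you use ValidatingAdmissionPolicy, check those too.
kubectl get validatingadmissionpolicies,validatingadmissionpolicybindings

# Webhook call failures usually show up in recent apiserver logs.
kubectl -n kube-system logs -l component=kube-apiserver --tail=200 | grep -i admission
```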
Upgrade checklist (self-managed kubeadm clusters)
Take a backup.
Yes, even for a patch release. The fastest rollback I know still starts with a clean etcd snapshot and a clear “who pushes the button” plan.
- Back up etcd: Take an etcd snapshot and verify you can read it back (sketch after this list). Store it off the node, not in /tmp.
- Confirm version skew: Check your kubelet and kubectl versions match the Kubernetes skew policy for your target. Do not guess.
- Stage it: Apply v1.34.3 to one canary control-plane node and one canary worker node first, then run your smoke tests for 30 minutes.
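A minimal sketch of the backup and skew checks, assuming a kubeadm layout with stacked etcd and certs under /etc/kubernetes/pki/etcd; adjust paths and endpoints for your topology:

```bash
# Snapshot etcd and verify the snapshot is readable before you call it a rollback plan.
SNAP=/var/backups/etcd-$(date +%F).db
sudo ETCDCTL_API=3 etcdctl snapshot save "$SNAP" \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
sudo ETCDCTL_API=3 etcdctl snapshot status "$SNAP" --write-out=table
# Then copy the snapshot off the node.

# Version skew: client, control plane, and the KUBELET VERSION column per node.
kubectl version
kubectl get nodes -o wide
```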
Control plane upgrade flow (high level)
Plan the upgrade.
Run your usual kubeadm plan/apply flow, or your distro’s equivalent, and watch the apiserver and controller-manager logs while you do it. That’s where the truth lives.
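For stock kubeadm, that flow looks roughly like this. Package pins assume the pkgs.k8s.io apt repo; swap in your distro's package manager and hold mechanism:

```bash
# First control-plane node: bump kubeadm, plan, then apply the target patch.
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm='1.34.3-*'
sudo apt-mark hold kubeadm
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.34.3

# Remaining control-plane nodes:
sudo kubeadm upgrade node

# Watch the components that tell you the truth (labels assume kubeadm static pods;
# run each in its own terminal).
kubectl -n kube-system logs -l component=kube-apiserver --tail=100 -f
kubectl -n kube-system logs -l component=kube-controller-manager --tail=100 -f
```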
Worker node flow (high level)
Drain one node.
Upgrade kubelet and any node components your setup requires, restart services, then uncordon. Repeat across the pool. If your pods use emptyDir heavily, be honest about what “--delete-emptydir-data” will do to your night.
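Per node, that's roughly the following, again assuming kubeadm and the pkgs.k8s.io apt packages; substitute your package manager and real node names:

```bash
# Move workloads off the node. --delete-emptydir-data really does throw that data away.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# On the node: refresh kubeadm, kubelet, and kubectl, apply the node config, restart.
sudo apt-mark unhold kubeadm kubelet kubectl
sudo apt-get update && sudo apt-get install -y kubeadm='1.34.3-*' kubelet='1.34.3-*' kubectl='1.34.3-*'
sudo apt-mark hold kubeadm kubelet kubectl
sudo kubeadm upgrade node
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Put it back in rotation and make sure it settles.
kubectl uncordon <node-name>
kubectl get node <node-name>
```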
Post-upgrade validation (the stuff I actually look at)
Run smoke tests.
Not “poke the UI.” I mean quick, ugly checks that catch the common breakages: scheduling, DNS, CNI, CSI, and admission. There's a condensed sketch after the list.
- Cluster basics: Confirm all nodes return Ready and core system pods stop restarting.
- DNS and service routing: Resolve a service name from a fresh pod and hit a ClusterIP endpoint.
- Storage: Create a small PVC, mount it, write a file, delete the pod, recreate it, and confirm the data persists.
- Admission sanity: Create a namespace and a trivial deployment right after. Watch for surprise rejects.
- API latency: Compare apiserver request latency before/after. If your P95 jumps, stop and dig before you roll the rest.
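A condensed version of that list as throwaway commands. The image names and the default StorageClass are my assumptions; use whatever your cluster actually allows to run:

```bash
# Cluster basics: nodes Ready, nothing in kube-system crash-looping.
kubectl get nodes
kubectl get pods -n kube-system

# DNS and service routing from a fresh pod.
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default

# Storage: tiny PVC plus a pod that mounts it and writes a flag file.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smoke-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: smoke-pvc-pod
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo ok > /data/flag && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: smoke-pvc
EOF
# Then delete the pod, re-apply the same Pod manifest, and:
# kubectl exec smoke-pvc-pod -- cat /data/flag   # should still print "ok"

# Admission sanity: a namespace plus a trivial deployment right after it.
kubectl create namespace smoke-admission
kubectl -n smoke-admission create deployment smoke --image=registry.k8s.io/pause:3.9
kubectl -n smoke-admission rollout status deployment/smoke --timeout=120s

# API latency: compare against the same grep taken before the upgrade.
kubectl get --raw /metrics | grep '^apiserver_request_duration_seconds' | head -20

# Cleanup when you're done.
kubectl delete namespace smoke-admission
kubectl delete pod smoke-pvc-pod --ignore-not-found
kubectl delete pvc smoke-pvc
```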
Known issues
I haven’t found a trustworthy “known issues: none” line for this patch yet.
Check the v1.34.3 section in CHANGELOG-1.34.md and the release page notes before you copy that statement into a change ticket.
References
Use the upstream sources, not my paraphrase:
- GitHub release: https://github.com/kubernetes/kubernetes/releases/tag/v1.34.3
- CHANGELOG-1.34.md: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
Other stuff in this release: dependency bumps, some image updates, the usual.