Kubernetes v1.34.3 release notes: what I’d check before upgrading
I do not trust “just a patch.”
I’ve watched “tiny” Kubernetes patch upgrades knock over admission, storage, or a node pool in ways nobody predicted, usually because an addon lagged behind or a rollback plan lived in someone’s head.
TL;DR: should you apply v1.34.3?
Probably yes if you run Kubernetes 1.34.x and you patch regularly.
If you run a managed control plane (EKS, GKE, AKS), you usually wait for your provider’s 1.34.3 roll-out and follow their upgrade path, not upstream kubeadm steps.
- Upgrade soon (most production clusters): You want the latest 1.34 patch for routine risk reduction, but you still stage it first, because CNIs and CSIs love to surprise you.
- Upgrade now (if symptoms match): You see weird admission rejects, storage attach flakiness, or node-level device issues that look like fixes mentioned in the v1.34.3 changelog.
- Wait a bit: You cannot test your CNI/CSI/operator stack in staging, or your provider has not published 1.34.3 yet. Some folks skip canaries for patch releases. I don’t, but I get it.
What actually changed in Kubernetes v1.34.3
This is a patch release, so you should expect targeted fixes, not new features.
The thing that trips people up is the GitHub UI. The release page shows a big commit number, but that number usually means “how far master moved since this tag,” not “how many commits this patch contains.”
Start here and read the real list of cherry-picks:
- Official tag and assets: https://github.com/kubernetes/kubernetes/releases/tag/v1.34.3
- Changelog file for 1.34: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
If you only do one thing: open the v1.34.3 section in CHANGELOG-1.34.md and skim for the components you run hot (apiserver, admission, CSI, kubelet).
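If you want the real cherry-pick count rather than GitHub's "commits to master" counter, you can compare the two patch tags directly. This is a sketch assuming you are cloning upstream fresh; a partial clone keeps it cheap:

```shell
# Count commits actually between the previous patch tag and this one,
# instead of trusting the GitHub "commits to master since this tag" number.
git clone --filter=blob:none --no-checkout https://github.com/kubernetes/kubernetes.git
git -C kubernetes rev-list --count v1.34.2..v1.34.3
```

The number this prints is the set of cherry-picks you should see mirrored in the v1.34.3 section of CHANGELOG-1.34.md.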
The failure mode I see most on patch upgrades
Admission bites first.
You roll the control plane, someone creates a namespace, and suddenly a policy that “always worked” starts rejecting requests with a blunt “namespace not found,” even though kubectl shows the namespace. I’ve seen teams chase DNS for an hour because the error message smells like eventual consistency, not admission.
I cannot promise v1.34.3 fixes your exact bug.
But if your incidents rhyme with that story, you should scan the v1.34.3 changelog for admission and namespace lookup fixes and decide fast.
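If you want a quick probe for that class of bug, create a namespace and immediately create a workload in it. The names here are placeholders, and a clean run does not prove you are safe; a "namespace not found" reject while kubectl shows the namespace is the smell to look for:

```shell
# Probe for the admission/namespace-lookup race: create a namespace,
# then immediately create a workload inside it.
kubectl create namespace admission-probe
kubectl -n admission-probe create deployment probe --image=busybox -- sleep 3600
# If the create above is rejected with "namespace not found" while this
# succeeds, you are likely in admission territory, not DNS.
kubectl get namespace admission-probe
kubectl delete namespace admission-probe
```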
Upgrade checklist (self-managed kubeadm clusters)
Take a backup.
Yes, even for a patch release. The fastest rollback I know still starts with a clean etcd snapshot and a clear “who pushes the button” plan.
- Back up etcd: Take an etcd snapshot and verify you can read it back. Store it off the node, not in /tmp.
- Confirm version skew: Check your kubelet and kubectl versions match the Kubernetes skew policy for your target. Do not guess.
- Stage it: Apply v1.34.3 to one canary control-plane node and one canary worker node first, then run your smoke tests for 30 minutes.
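The backup bullet, sketched for a kubeadm control-plane node. The cert paths are kubeadm defaults and backup-host is a placeholder; adjust both for your layout:

```shell
# Snapshot etcd, assuming kubeadm's default certificate paths.
export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-pre-v1.34.3.db
# Prove the snapshot is readable before you rely on it.
etcdctl snapshot status /var/backups/etcd-pre-v1.34.3.db --write-out=table
# Get it off the node (backup-host is a placeholder).
scp /var/backups/etcd-pre-v1.34.3.db backup-host:/srv/etcd-backups/
```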
Control plane upgrade flow (high level)
Plan the upgrade.
Run your usual kubeadm plan/apply flow, or your distro’s equivalent, and watch the apiserver and controller-manager logs while you do it. That’s where the truth lives.
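One way to watch those logs live during the upgrade, assuming kubeadm static pods with their standard component labels:

```shell
# Tail control-plane logs while the upgrade runs (kubeadm static-pod
# labels assumed; run kubeadm upgrade apply in another terminal).
kubectl -n kube-system logs -f -l component=kube-apiserver --tail=20 &
kubectl -n kube-system logs -f -l component=kube-controller-manager --tail=20 &
wait
```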
Worker node flow (high level)
Drain one node.
Upgrade kubelet and any node components your setup requires, restart services, then uncordon. Repeat across the pool. If your pods lean heavily on emptyDir, be honest about what --delete-emptydir-data will do to your night: drain wipes that data, and it does not come back.
Post-upgrade validation (the stuff I actually look at)
Run smoke tests.
Not “poke the UI.” I mean quick, ugly checks that catch the common breakages: scheduling, DNS, CNI, CSI, and admission.
- Cluster basics: Confirm all nodes return Ready and core system pods stop restarting.
- DNS and service routing: Resolve a service name from a fresh pod and hit a ClusterIP endpoint.
- Storage: Create a small PVC, mount it, write a file, delete the pod, recreate it, and confirm the data persists.
- Admission sanity: Create a namespace and a trivial deployment right after. Watch for surprise rejects.
- API latency: Compare apiserver request latency before/after. If your P95 jumps, stop and dig before you roll the rest.
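A condensed version of the DNS, admission, and latency checks above; pod and namespace names are placeholders, and the storage check stays manual because it depends on your CSI driver:

```shell
# DNS and service routing from a fresh pod.
kubectl run dns-probe --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default.svc.cluster.local
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/dns-probe --timeout=60s
kubectl logs dns-probe
kubectl delete pod dns-probe
# Admission sanity: namespace, then an immediate workload in it.
kubectl create namespace smoke-134
kubectl -n smoke-134 create deployment smoke --image=busybox -- sleep 60
kubectl delete namespace smoke-134
# API latency: eyeball the request-duration histogram before and after.
kubectl get --raw /metrics | grep -m 5 apiserver_request_duration_seconds_bucket
```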
Known issues
I haven’t found a trustworthy “known issues: none” line for this patch yet.
Check the v1.34.3 section in CHANGELOG-1.34.md and the release page notes before you copy that statement into a change ticket.
References
Use the upstream sources, not my paraphrase:
- GitHub release: https://github.com/kubernetes/kubernetes/releases/tag/v1.34.3
- CHANGELOG-1.34.md: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
Everything else in this release is routine: dependency bumps, image updates, the usual.
Keep Reading
- Kubernetes Upgrade Checklist: the runbook for minor version upgrades
- Kubernetes EOL policy explained for on-call humans
- Kubernetes 1.35 release: the stuff that can break your cluster
Pre-upgrade validation
Patch releases feel safe. They are not always safe. Run these before touching production:
# Check current cluster version (--short was removed in recent kubectl, hence the fallback)
kubectl version --short 2>/dev/null || kubectl version
# Verify all nodes are Ready before starting
kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
# Check for any pending pods or unhealthy workloads
kubectl get pods -A --field-selector status.phase!=Running,status.phase!=Succeeded | head -20
# Snapshot current state for rollback comparison
kubectl get deployments -A -o json > /tmp/pre-upgrade-deployments.json
kubectl get daemonsets -A -o json > /tmp/pre-upgrade-daemonsets.json
Upgrade procedure
For managed Kubernetes (EKS, GKE, AKS), the control plane upgrade is one command. For self-managed clusters, follow the node-by-node approach:
# kubeadm upgrade (self-managed). If the package is held, apt-mark unhold kubeadm first.
# Quote the version so the shell does not glob the wildcard.
sudo apt-get update && sudo apt-get install -y kubeadm='1.34.3-*'
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.34.3
# Then upgrade kubelet on each node (drain first)
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
# On the node:
sudo apt-get install -y kubelet='1.34.3-*' kubectl='1.34.3-*'
sudo systemctl daemon-reload && sudo systemctl restart kubelet
# Back on control plane:
kubectl uncordon node-1
# Verify the node is back and running the right version
kubectl get nodes
Post-upgrade smoke test
After every patch upgrade, run a quick validation to catch regressions:
# Verify all nodes upgraded
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}: {.status.nodeInfo.kubeletVersion}{"\n"}{end}'
# Check component health (componentstatuses is deprecated but still informative where it works)
kubectl get componentstatuses 2>/dev/null
kubectl get --raw '/readyz?verbose' | grep -E "ok|failed"
# Run a quick workload test (wait returns instead of watching forever)
kubectl run upgrade-test --image=busybox --restart=Never -- sleep 30
kubectl wait --for=condition=Ready pod/upgrade-test --timeout=60s
kubectl delete pod upgrade-test
# Check for any new warnings in events
kubectl get events -A --sort-by='.lastTimestamp' | tail -20 | grep -i warning