
Kubernetes 1.32 End of Life: Migration Playbook for February 28, 2026

Kubernetes 1.32 reaches end of life on February 28, 2026. Here's your complete migration playbook with kubeadm commands, cloud provider guides (EKS, GKE, AKS), and a step-by-step upgrade path to 1.34.

Matheus · February 16, 2026 · 6 min read

12 days. That’s how long Kubernetes 1.32 has left before the upstream project stops issuing patches. After February 28, 2026, there are no more security fixes, no more bug patches, no more backports. Version 1.32.12 — released on February 10 — is the last update you will ever get.

If you’re still running 1.32 in production, this is your migration playbook. Not a gentle nudge. A concrete, step-by-step plan to get off a version that’s about to become a liability.



What “End of Life” Actually Means (It’s Worse Than You Think)

Let’s be precise about what happens on March 1st if you’re still on 1.32.

No more CVE patches. When the next Kubernetes vulnerability drops — and it will — the fix will ship for 1.33, 1.34, and 1.35. Not 1.32. You’ll read the advisory, understand exactly how your clusters are exposed, and have no upstream fix to apply.

This isn’t theoretical. Look at what’s already been patched in 1.33.x that 1.32 users are exposed to right now:

  • CVE-2025-5187 (fixed in 1.33.4): Nodes can delete themselves by adding an OwnerReference to their own Node object. An attacker with node-level access can cause cascading disruption by self-destructing nodes in your cluster. This is the kind of bug that makes incident response teams lose sleep.
  • CVE-2025-4563 (fixed in 1.33.2): DRA (Dynamic Resource Allocation) authorization bypass. If you’re using DRA — and more teams are as GPU workloads grow — this one matters.

No more bug fixes. Several nasty bugs were fixed in 1.33.x patches that will never be backported to 1.32:

  • A kubelet watchdog that kills the kubelet during slow container runtime initialization (1.33.7). If you’ve ever seen mysterious kubelet restarts after a node reboot, this might be why.
  • A DRA double-allocation race condition during rapid pod scheduling (1.33.8). You won’t hit this until you do — and when you do, two pods will think they own the same resource.
  • A DaemonSet orphaned pod regression (1.33.6) that can leave ghost pods consuming resources with no controller managing them.

No more compatibility guarantees. Ecosystem tools — Helm, Istio, cert-manager, ArgoCD — will drop 1.32 from their test matrices. You’ll start seeing “unsupported version” warnings, then errors, then silent incompatibilities that only surface at 3 AM.

    Kubernetes 1.32 had a solid run. Originally released December 11, 2024, it received 12 patch releases over ~14 months. That’s the standard lifecycle. But its time is up.


    Where Should You Land?

    You have three supported targets. Here’s the honest comparison:


    |                   | 1.33                     | 1.34                  | 1.35                                 |
    |-------------------|--------------------------|-----------------------|--------------------------------------|
    | EOL               | June 28, 2026            | October 27, 2026      | February 28, 2027                    |
    | Support remaining | ~4 months                | ~8 months             | ~12 months                           |
    | Hops from 1.32    | 1                        | 2                     | 3                                    |
    | Maturity          | Fully battle-tested      | Stable, well-patched  | Current release, still early patches |
    | Risk profile      | Low risk, low runway     | Low risk, good runway | Low risk on paper, less field time   |
    | Recommended for   | “Just get off 1.32 NOW”  | Most production teams | Teams who just upgraded recently     |

    Our recommendation: Target 1.34 for most teams.

    Here’s the reasoning:

    Why not 1.33? It works, it’s stable, and it’s the fewest changes from where you are. But with EOL on June 28, you’d be doing this exact same fire drill in four months. That’s not a migration strategy — that’s procrastination with extra steps.
    Why not 1.35? It’s the current release with the longest support runway. But getting there requires three sequential minor version upgrades (1.32→1.33→1.34→1.35), and the newest release has had less time in the field. Unless you upgraded to 1.34 recently and are just continuing the chain, the extra hop adds risk and downtime for marginal benefit.
    Why 1.34? Two hops (1.32→1.33→1.34), eight months of support, and a version that’s had enough patch releases to shake out the rough edges. You get the major 1.33 features (sidecar containers GA, nftables GA) plus whatever 1.34 brought to the table, and you won’t need to think about upgrading again until late summer.

    The one exception: if you’re in a change-freeze or have a release cycle that makes two hops impossible before February 28, go to 1.33 now and plan the 1.33→1.34 hop for March. Getting off 1.32 is the priority.


    The Upgrade Path: You Cannot Skip Minor Versions


    This is the part where people get burned. Kubernetes does not support skipping minor versions: the control plane can only move one minor version per upgrade, and kubeadm enforces it. There is no shortcut from 1.32 to 1.34. You go through 1.33, you validate, and then you continue.

    Here’s the sequence for a kubeadm-managed cluster:

    Pre-Upgrade Checklist

    Before you touch anything:

    # 1. Confirm your current version
    # (note: the old --short flag was removed from kubectl; plain "version" works)
    kubectl version
    kubectl get nodes -o wide

    # 2. Check for deprecated API usage that will break on upgrade
    #    (install kubent or kubectl-deprecations, or query the metrics endpoint directly)
    kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

    # 3. Verify etcd health
    ETCDCTL_API=3 etcdctl endpoint health \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key

    # 4. Back up etcd (non-negotiable)
    ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-pre-upgrade-$(date +%Y%m%d).db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key

    # 5. Check component version skew
    #    kubelet may be up to three minor versions older than the API server, never newer
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'

    Hop 1: 1.32 → 1.33

    # On the first control plane node:
    # Update kubeadm
    apt-get update && apt-get install -y kubeadm='1.33.*-*'
    # or on RHEL/CentOS:
    # yum install -y kubeadm-'1.33.*'

    # Verify the upgrade plan
    kubeadm upgrade plan

    # Apply the upgrade (first control plane only)
    kubeadm upgrade apply v1.33.8

    # Upgrade kubelet and kubectl
    apt-get install -y kubelet='1.33.*-*' kubectl='1.33.*-*'
    systemctl daemon-reload
    systemctl restart kubelet

    For additional control plane nodes:

    kubeadm upgrade node
    apt-get install -y kubelet='1.33.*-*' kubectl='1.33.*-*'
    systemctl daemon-reload
    systemctl restart kubelet

    For each worker node:

    # From a machine with kubectl access:
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

    # On the worker node:
    apt-get update && apt-get install -y kubeadm='1.33.*-*'
    kubeadm upgrade node
    apt-get install -y kubelet='1.33.*-*'
    systemctl daemon-reload
    systemctl restart kubelet

    # From kubectl:
    kubectl uncordon <node-name>

    Stop here. Validate. Don’t chain upgrades without confirming the cluster is healthy:

    kubectl get nodes    # All nodes Ready?
    kubectl get pods -A  # Any CrashLoopBackOff?
    kubectl get cs       # Component statuses healthy? (deprecated API, but still a quick signal)

    # Run your smoke tests. You have smoke tests, right?

    Hop 2: 1.33 → 1.34

    Repeat the exact same process, substituting 1.34 for 1.33. Same drain-upgrade-uncordon dance. Same validation.

    Version skew during upgrade: Kubernetes’ version skew policy allows kubelets to run up to three minor versions behind the API server (but never ahead of it). So during the 1.33→1.34 upgrade, your 1.33 kubelets keep working with the 1.34 API server while you roll nodes. The hard constraint is the control plane itself: kube-apiserver only supports moving one minor version per upgrade, which is why you can’t skip from 1.32 straight to 1.34.
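
    A quick way to eyeball skew mid-upgrade, as a minimal sketch (assumes jq is installed):

    # Control plane version
    kubectl version -o json | jq -r '.serverVersion.gitVersion'

    # Unique kubelet versions currently running, with node counts
    kubectl get nodes -o jsonpath='{range .items[*]}{.status.nodeInfo.kubeletVersion}{"\n"}{end}' | sort | uniq -c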


    Cloud Provider Timelines: You Might Have More Time (For a Price)

    If you’re running managed Kubernetes, your deadlines are slightly different — but don’t get complacent.

    Amazon EKS

  • Standard support EOL for 1.32: March 23, 2026 (three weeks after upstream)
  • Extended support EOL: March 23, 2027
  • EKS extended support buys you a full extra year, but at a premium: $0.60 per cluster per hour instead of the standard $0.10/hour. That extra $0.50/hour works out to roughly $4,400/year per cluster just for the privilege of staying on 1.32. For a single cluster, maybe. For a fleet, you’re burning budget to avoid an upgrade you’ll have to do anyway.

    # Check your EKS cluster version
    aws eks describe-cluster --name <cluster-name> \
      --query 'cluster.version' --output text

    # Start an EKS upgrade to 1.33
    aws eks update-cluster-version \
      --name <cluster-name> \
      --kubernetes-version 1.33

    # Watch the update status
    aws eks describe-update --name <cluster-name> \
      --update-id <update-id-from-previous-command>

    # Don't forget to update your node groups after!
    aws eks update-nodegroup-version \
      --cluster-name <cluster-name> \
      --nodegroup-name <nodegroup-name>
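
    If you run more than a couple of node groups per cluster, a loop saves typing. A minimal sketch, assuming your CLI credentials can describe and update every node group (update-nodegroup-version defaults to the control plane's version):

    # Roll every node group in the cluster to match the control plane version
    aws eks list-nodegroups --cluster-name <cluster-name> \
      --query 'nodegroups[]' --output text | \
      xargs -n1 -I{} aws eks update-nodegroup-version \
        --cluster-name <cluster-name> --nodegroup-name {}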

    Google GKE

    GKE typically provides 2-4 weeks of grace after upstream EOL before auto-upgrading clusters. If you haven’t set a maintenance window and an upgrade strategy, GKE will upgrade your clusters for you. That sounds convenient until it happens during your traffic peak.

    # Check GKE cluster version
    gcloud container clusters describe <cluster-name> \
      --zone <zone> --format="value(currentMasterVersion)"

    # Initiate upgrade
    gcloud container clusters upgrade <cluster-name> \
      --zone <zone> --master --cluster-version 1.33
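
    And to keep any platform-managed upgrade from landing during your traffic peak, pin a maintenance window first. A minimal sketch; the 03:00 UTC start time is just an example:

    # Set a daily maintenance window (time is UTC)
    gcloud container clusters update <cluster-name> \
      --zone <zone> --maintenance-window=03:00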

    Azure AKS

    AKS follows a similar pattern: roughly 2-4 weeks past upstream EOL, with platform-managed upgrades kicking in after that. AKS’s “long-term support” (LTS) versions are a separate track — 1.32 is not an LTS release, so no special treatment here.

    # Check AKS version
    az aks show --resource-group <rg> --name <cluster-name> \
      --query kubernetesVersion -o tsv

    # Upgrade AKS
    az aks upgrade --resource-group <rg> --name <cluster-name> \
      --kubernetes-version 1.33
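
    It's worth confirming 1.33 is actually offered for your cluster and region before kicking this off:

    # List the upgrade targets AKS currently offers for this cluster
    az aks get-upgrades --resource-group <rg> --name <cluster-name> --output table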

    The bottom line for cloud users: You have a few weeks of buffer. Use that buffer for testing, not for procrastination. Start the upgrade now and use the extra weeks as a safety net, not a crutch.


    What You Gain: 5 Features Worth the Upgrade

    Upgrading isn’t just about escaping EOL. The jump from 1.32 to 1.33 is one of the most feature-rich minor releases in recent Kubernetes history. Here’s what actually matters in production:

    1. Sidecar Containers — GA (KEP-753)

    This is the big one. After years of KEPs, alpha gates, and community debate, native sidecar containers are generally available. Init containers with restartPolicy: Always now have proper lifecycle management: they start before your main containers, stay running alongside them, and shut down after them.

    If you’re running service meshes (Istio, Linkerd), log shippers, or any sidecar-dependent architecture, this eliminates a whole class of race conditions. No more hacks with postStart hooks and sleep loops to ensure your Envoy proxy is ready before your app starts.
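
    Here's the shape of it, as a minimal sketch with a hypothetical log-shipper sidecar (the pod, container, and image names are placeholders):

    # Native sidecar: an init container with restartPolicy: Always starts
    # before the main containers, runs alongside them, and stops after them
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar
    spec:
      initContainers:
      - name: log-shipper              # the sidecar
        image: fluent/fluent-bit:3.0   # placeholder image
        restartPolicy: Always          # this one field makes it a sidecar
      containers:
      - name: app
        image: nginx:1.27
    EOF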

    Watch out: A sidecar startup probe race condition was fixed in 1.33.6. Make sure you’re on 1.33.8 (latest) to avoid it.

    2. nftables Kube-Proxy Backend — GA (KEP-3866)

    The iptables-based kube-proxy is showing its age. nftables is faster, handles large rule sets better, and is the future of Linux packet filtering. With GA in 1.33, it’s production-ready.

    The caveat: This doesn’t mean nftables is the default yet. You still need to opt in. But if you’re running clusters with thousands of Services, the performance difference is measurable — especially rule reload times during Service churn. An iif vs iifname bug in local traffic detection was fixed in 1.33.6, so again: run the latest patch.
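
    Opting in on a kubeadm cluster means flipping the proxy mode in the kube-proxy ConfigMap and restarting the DaemonSet. A minimal sketch, assuming the ConfigMap still carries kubeadm's default mode: "" value; try it in staging first:

    # Switch kube-proxy to the nftables backend
    kubectl -n kube-system get configmap kube-proxy -o yaml | \
      sed 's/mode: ""/mode: "nftables"/' | kubectl apply -f -

    # Restart kube-proxy so the pods pick up the new mode
    kubectl -n kube-system rollout restart daemonset kube-proxy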

    3. In-Place Pod Resource Resize — Beta (KEP-1287)

    Change a pod’s CPU and memory requests/limits without restarting it. It’s beta in 1.33 (the InPlacePodVerticalScaling feature gate, enabled by default), but this is the kind of capability that changes how you think about vertical scaling. No more killing a pod just because it needs 200Mi more memory during a traffic spike.
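
    A resize is a patch against the pod's resize subresource. A minimal sketch; the pod and container names are placeholders:

    # Bump memory on a running pod without restarting it
    kubectl patch pod my-app --subresource resize --patch \
      '{"spec":{"containers":[{"name":"app","resources":{"requests":{"memory":"400Mi"},"limits":{"memory":"400Mi"}}}]}}'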

    4. Topology-Aware Routing — GA (KEP-4444)

    trafficDistribution: PreferClose is now GA. Traffic prefers endpoints in the same zone before crossing zone boundaries. This is pure money in multi-AZ deployments: less cross-zone data transfer, lower latency, better tail percentiles. If you’re on AWS or GCP and not using this, you’re paying an invisible cloud networking tax.
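
    It's a single field on the Service spec. A minimal sketch with placeholder names:

    # Prefer same-zone endpoints before crossing zone boundaries
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-api
    spec:
      selector:
        app: my-api
      ports:
      - port: 80
        targetPort: 8080
      trafficDistribution: PreferClose
    EOF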

    5. Multiple Service CIDRs — GA (KEP-1880)

    You can now dynamically expand your ClusterIP range without cluster recreation. If you’ve ever hit the ceiling on your Service CIDR and had to do gymnastics to work around it, this fixes that permanently. Especially relevant for large multi-tenant clusters.
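
    Expanding the range is now just another object. A minimal sketch; the CIDR is an example, and with GA in 1.33 the API group should be networking.k8s.io/v1:

    # Add a second Service CIDR alongside the cluster default
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: ServiceCIDR
    metadata:
      name: extra-service-cidr
    spec:
      cidrs:
      - 10.112.0.0/16
    EOF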


    Breaking Changes and Gotchas: What to Watch For

    Every upgrade has landmines. Here are the ones that bite in the 1.32→1.33 transition:

    nftables Consideration

    While nftables kube-proxy went GA, the default backend is still iptables in 1.33. However, start planning your migration now. Test nftables in staging. Future versions may change the default, and you don’t want to be scrambling when that happens. The migration guide is essential reading — nftables rule semantics differ from iptables in subtle ways that will break custom NetworkPolicy implementations relying on iptables-specific behavior.

    Deprecated API Removals

    Check for any APIs that were deprecated in earlier releases and removed along your upgrade path. The flowcontrol.apiserver.k8s.io/v1beta3 API group is the cautionary example: it was already removed in 1.32, so anything still referencing it is broken today. Run kubectl-deprecations or kubent before upgrading:

    # Using kubent (kube-no-trouble)
    kubent

    # Or check directly
    kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

    Feature Gate Changes

    Some feature gates that were beta (and on by default) in 1.32 graduated to GA in 1.33, which means the gates are locked and removed. If you were explicitly setting these gates in your kubelet or API server configs, the flags will cause startup errors. Audit your --feature-gates flags before upgrading.
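
    A quick audit, as a minimal sketch for kubeadm-style clusters (paths are the kubeadm defaults; adjust for your setup):

    # Feature gates pinned in control plane static pod manifests
    grep -r 'feature-gates' /etc/kubernetes/manifests/

    # Feature gates in the kubelet config
    grep -A5 'featureGates' /var/lib/kubelet/config.yaml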

    DRA (Dynamic Resource Allocation) Changes

    If you’re using DRA for GPU or custom resource scheduling, be aware of the authorization bypass fix (CVE-2025-4563) and the double-allocation race fix. The fixes are in 1.33.2 and 1.33.8 respectively, so target 1.33.8 as your landing version.


    Your 5-Step Action Plan

    Here’s what to do this week. Not next month. This week.

    Step 1: Audit (Today)

    # Find every cluster still on 1.32
    # For kubeadm clusters:
    

    kubectl version -o json | jq '.serverVersion.minor'

    # For EKS:

    aws eks list-clusters --output text | xargs -I{} \

    aws eks describe-cluster --name {} \

    --query '[cluster.name, cluster.version]' --output text

    # For GKE:

    gcloud container clusters list \

    --format="table(name, currentMasterVersion)"

    # For AKS:

    az aks list --query '[].{name:name, version:kubernetesVersion}' -o table

    Step 2: Test in Staging (This Week)

    Upgrade a non-production cluster to 1.33. Run your full test suite (see our Kubernetes upgrade checklist). Pay special attention to:

  • Service mesh behavior (sidecar lifecycle changes)
  • Network policies (if you plan to test nftables)
  • Any workloads using DRA
  • Custom admission webhooks (API changes can break them silently; see the quick inventory check below)
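
    A fast way to inventory the webhooks you’ll need to re-test:

    # List every admission webhook registered in the cluster
    kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations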
    Step 3: Upgrade Production to 1.33 (Week of Feb 23)

    Follow the kubeadm or cloud provider upgrade steps above. Target 1.33.8 — it has the latest security and bug fixes.

    Step 4: Validate and Soak (1 Week)

    Run 1.33 in production for at least a few days. Monitor:

    # Watch for elevated error rates
    kubectl get events --sort-by='.lastTimestamp' -A | head -50

    # Check component health (componentstatuses is deprecated, but still a quick signal)
    kubectl get componentstatuses

    # Monitor pod restarts (a spike means something broke)
    kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount' | tail -20

    Step 5: Continue to 1.34 (Early March)

    Once 1.33 is stable, repeat the process for 1.34. This is your final destination — 8 months of support runway, the features you need, and a stable foundation.


    The Clock Is Ticking

    February 28 is not a soft deadline. It’s the day your clusters become unpatched infrastructure. Every day after that, your attack surface grows and your ecosystem compatibility shrinks.

    The upgrade from 1.32 to 1.33 (and then 1.34) is well-trodden ground. Thousands of clusters have made this jump. The tooling works. The docs are solid. The features are worth it.

    What’s not worth it is explaining to your security team in April why you’re running a Kubernetes version with known, unpatched CVEs because the upgrade “wasn’t prioritized.”

    Start today. Your future on-call self will thank you.




    Track Kubernetes version health, EOL dates, and upgrade paths in real-time at ReleaseRun. We monitor the releases so you don’t miss a deadline.
