Kubernetes EOL policy explained for on-call humans
I’ve watched “Kubernetes nearing end of support” turn into a Friday night outage more than once.
The alert sounds polite, but it usually means you waited too long, and now you have to upgrade under pressure. Here’s what Kubernetes “support” and “end of life” actually mean, how the upstream clock works, where managed providers bend the rules, and the one upgrade habit that keeps this from becoming a quarterly fire drill.
The only two numbers that matter: N-2 and ~14 months
Keep it simple.
Upstream Kubernetes only actively maintains the newest three minor versions. People call that “N-2 support” because if today’s newest minor is N, upstream still patches N-1 and N-2, and that’s it.
- N-2 support: Upstream keeps release branches open for three minor versions at a time. If you run older than that, upstream stops shipping fixes for your line.
- ~14 months per minor: Each minor version’s patch series runs for roughly 14 months, with a “maintenance mode” window at the end. After that, upstream treats the branch as EOL.
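If it helps to see that arithmetic as code, here's a minimal sketch of the N-2 rule. The version numbers are examples, not a claim about what's current today.

```python
# Minimal sketch of the N-2 rule: given the newest minor, list the lines
# upstream still patches. Version strings here are illustrative.

def supported_minors(newest: str, window: int = 3) -> list[str]:
    """Return the minor lines upstream actively maintains (N, N-1, N-2)."""
    major, minor = (int(x) for x in newest.split("."))
    return [f"{major}.{minor - i}" for i in range(window)]

print(supported_minors("1.34"))            # ['1.34', '1.33', '1.32']
print("1.29" in supported_minors("1.34"))  # False: outside upstream support
```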
Some folks treat N-2 like a law of physics.
I don’t. I treat it like a budget. If you cannot afford minor upgrades, you’re paying that cost somewhere else, usually as risk, stress, and ugly catch-up projects.
What “supported” means (and what it does not mean)
This bit confuses teams.
In upstream Kubernetes, “supported” mostly means “release managers still accept backports to this branch and cut patch releases.” Support does not mean your cluster stays safe by magic. You still need to apply those patches, and you still need to keep your own add-ons working.
- Supported upstream: You can reasonably expect upstream patches, backports, and release activity on that minor line.
- Not supported upstream: You might still run fine, but you should not expect upstream to ship fixes for your branch, even if a new CVE shows up.
Standard support vs maintenance mode vs EOL (the part you plan around)
Maintenance mode feels like purgatory.
I’ve seen teams ignore it because “we still get patches,” then get stuck when the branch hits EOL and their provider starts nagging them weekly. Upstream splits the patch series into a normal period, then a short tail where only high-importance fixes tend to land, then the line ends.
- Standard support (about 12 months): Regular patch releases and normal backport flow.
- Maintenance mode (about 2 months): Release managers focus on high-importance fixes, think CVEs and critical core bugs.
- EOL (after about 14 months total): Expect no more upstream patch releases for that minor line. Plan like you are on your own.
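If you want that timeline as something a script can reason about, here's a rough phase calculator built on the ~12 + ~2 month split above. The month helper and the example release date are mine; real dates belong in the upstream release schedule, not hardcoded guesses.

```python
# Rough phase calculator for a minor line, assuming ~12 months of standard
# support plus ~2 months of maintenance mode. Confirm real dates against the
# upstream release schedule.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Naive month arithmetic, good enough for planning."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))

def phase(release_date: date, today: date | None = None) -> str:
    today = today or date.today()
    if today < add_months(release_date, 12):
        return "standard support"
    if today < add_months(release_date, 14):
        return "maintenance mode"
    return "EOL"

# Hypothetical example: a minor released on 2024-04-15.
print(phase(date(2024, 4, 15), today=date(2025, 5, 1)))  # maintenance mode
```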
So.
If your calendar says you have “months,” you probably have less usable time than you think, because you still need test cycles, a change window, and a rollback plan.
How often do Kubernetes patch releases happen?
Patches feel boring.
That’s exactly why they work. Kubernetes patch releases usually land on a monthly cadence, and early in a minor release you often see patches arrive faster. Teams that skip patches end up “saving time” and then spending it later during incident response. I don’t trust “we’ll patch next quarter” from any production platform team.
- Most months: Expect a patch release rhythm you can plan around.
- When things go sideways: Critical bugs and security issues can trigger out-of-band patches, which is why you want a patch pipeline that does not require a summit meeting.
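One way to make the boring patches visible is to compare each cluster against the newest patch on its minor line. This sketch assumes the dl.k8s.io version markers that the official kubectl install docs use; double-check the URL pattern before wiring it into anything.

```python
# Tiny check: is a cluster behind the newest patch published for its minor
# line? Assumes the dl.k8s.io "stable-<minor>.txt" markers; verify before use.
import urllib.request

def latest_patch(minor: str) -> str:
    url = f"https://dl.k8s.io/release/stable-{minor}.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode().strip()  # e.g. "v1.32.9"

def behind(cluster_version: str) -> bool:
    # cluster_version like "v1.32.4"; its minor line is "1.32"
    minor = ".".join(cluster_version.lstrip("v").split(".")[:2])
    return cluster_version != latest_patch(minor)

print(behind("v1.32.4"))  # True if a newer 1.32 patch has shipped
```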
N-2 support in plain English (no math jokes)
Here’s the translation.
If the newest Kubernetes minor is 1.34, upstream focuses on 1.34, 1.33, and 1.32. When 1.35 ships, 1.32 drops off the active set. That drop-off matters because you lose the upstream backport path that feeds patch releases.
If you wait long enough, you stop doing “a minor upgrade” and start doing “a multi-minor rescue mission.” That’s when the weird API removals and add-on incompatibilities stack up.
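To put a number on the rescue mission, here's a toy hop counter, assuming the usual one-minor-at-a-time control plane upgrade path. Each hop carries its own testing, change window, and deprecated-API review. Versions are illustrative.

```python
# How skipped releases turn into sequential upgrade hops. Control planes move
# one minor at a time, so the list below is the work you deferred.

def upgrade_hops(current: str, target: str) -> list[str]:
    major, cur = (int(x) for x in current.split("."))
    _, tgt = (int(x) for x in target.split("."))
    return [f"{major}.{m}" for m in range(cur + 1, tgt + 1)]

print(upgrade_hops("1.29", "1.34"))  # ['1.30', '1.31', '1.32', '1.33', '1.34']
```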
Upstream support vs EKS, GKE, AKS, OpenShift, and friends
This is where teams get burned.
Upstream policy tells you what the Kubernetes project supports. Your managed provider might keep a version available longer, backport fixes privately, or force upgrades on their schedule. Treat upstream as baseline truth, then read your provider’s policy like it’s a contract, because it is.
- Provider timelines differ: Your cluster can run “fine” while still sitting outside upstream support.
- Providers enforce upgrades: Some will nag you. Some will block cluster operations. Some will eventually upgrade you whether you like it or not.
I haven’t tested every provider’s edge cases recently.
But I have seen the same pattern across clouds: teams assume “managed” means “someone else handles lifecycle.” Then the first forced upgrade email hits, and suddenly everyone cares about version skew and deprecated APIs.
The hidden half of “support”: API deprecations (the stuff that breaks upgrades)
API removals bite harder than EOL.
I’ve watched perfectly healthy clusters fail an upgrade because one old Helm chart still shipped a removed beta API. The control plane comes up, the API server starts rejecting the old apiVersions, and you’re staring at errors that feel obvious in hindsight.
- Deprecation happens on a clock: Kubernetes deprecates and eventually stops serving older API versions. Beta APIs have specific time and release-count rules (roughly nine months or three releases after deprecation, whichever is longer).
- Upgrades fail in a boring way: Your manifests apply fine on the old cluster, then the new API server rejects them because it no longer serves that version.
Does anyone actually read these changelogs line by line?
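Probably not, which is why I'd rather have a script flag the obvious cases. Here's a crude scan over rendered manifests; the removal map below is a tiny hand-picked sample, so treat the official deprecated API migration guide as the source of truth, not this dict.

```python
# Crude pre-upgrade scan: find apiVersions in rendered manifests that the
# target release no longer serves. REMOVED_IN is a small sample, not a
# complete list.
import pathlib
import re

REMOVED_IN = {  # apiVersion -> first minor that stops serving it (sample only)
    "policy/v1beta1": 25,        # PodSecurityPolicy
    "batch/v1beta1": 25,         # CronJob
    "autoscaling/v2beta2": 26,   # HorizontalPodAutoscaler
}

def scan(manifest_dir: str, target_minor: int) -> list[tuple[str, str]]:
    hits = []
    for path in pathlib.Path(manifest_dir).rglob("*.y*ml"):
        text = path.read_text(errors="ignore")
        for api in re.findall(r"^apiVersion:\s*(\S+)", text, flags=re.M):
            if api in REMOVED_IN and REMOVED_IN[api] <= target_minor:
                hits.append((str(path), api))
    return hits

# Hypothetical directory of rendered Helm/Kustomize output.
for path, api in scan("./rendered-manifests", target_minor=32):
    print(f"{path}: {api} is not served on 1.32+")
```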
A practical upgrade rule that keeps you out of trouble
Schedule it.
If you want calm upgrades, do three things consistently. Stay inside the upstream supported minor set, patch on a steady cadence, and budget at least one minor upgrade per year so you never hit that multi-minor catch-up wall.
- Stay within N, N-1, N-2: If you drift older, treat it like a production risk, not a backlog item.
- Patch regularly: Patches carry security and stability fixes. Treat them like brushing your teeth.
- Do at least one minor upgrade per year: The patch series only runs for about 14 months end to end, so yearly upgrades keep you inside the window.
FAQ
People ask the same questions every time.
Here are the answers I end up typing in Slack during upgrade season.
- How long is a Kubernetes version supported? Upstream actively maintains the newest three minor versions at a time, and each minor’s patch series runs for roughly 14 months including a short maintenance mode tail.
- What does Kubernetes maintenance mode mean? It’s the last stretch of a minor’s patch lifecycle where upstream focuses on high-importance fixes, then the branch reaches EOL.
- How often are Kubernetes patch releases? Typically monthly, sometimes faster early in a minor’s life, with occasional out-of-band releases for critical issues.
- Does EOL mean my cluster stops working? No. Your cluster keeps running. You just stop getting the upstream patch stream that normally carries security and critical fixes.
- Is Kubernetes support the same as EKS or GKE support? No. Use upstream policy as your baseline, then follow your provider’s version support policy for real enforcement and timelines.
Anyway.
Bookmark the upstream releases page, check where your clusters sit, and put upgrade dates on the calendar before the EOL email shows up again.
How to build an upgrade calendar that does not lie to you
Most teams have an “upgrade plan” that lives in a spreadsheet nobody opens. Here is how to build one that actually triggers action.
- Track three dates per cluster: (1) Current version release date, (2) upstream EOL date (14 months after release for recent versions), (3) your internal upgrade deadline (set this 2 months before EOL to leave buffer for testing and rollback). Put these in your team calendar as recurring reminders, not in a wiki page.
- Subscribe to release announcements: Watch the kubernetes/kubernetes repo on GitHub for releases. The sig-release mailing list posts schedules months in advance. ReleaseRun tracks these for you if you want a curated feed instead of raw GitHub notifications.
- Automate the nag: Set up a Slack or PagerDuty alert that fires when your cluster version is within 3 months of EOL. If you use a managed provider (EKS, GKE, AKS), check their support windows too, because their dates rarely line up exactly with upstream EOL.
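Here's a minimal version of that nag: derive a rough upstream EOL from the release date, set the internal deadline two months earlier, and warn on anything inside the three-month window. Cluster names and dates are placeholders; send the output wherever your alerts already go.

```python
# Minimal "automate the nag": per cluster, derive a rough upstream EOL
# (~14 months after release), an internal deadline ~2 months before that,
# and warn when EOL is within ~3 months. Names and dates are placeholders.
from datetime import date, timedelta

CLUSTERS = {  # cluster -> (minor line, upstream release date), example data
    "payments-prod": ("1.31", date(2024, 8, 1)),
    "batch-staging": ("1.33", date(2025, 4, 1)),
}

def check(today: date | None = None) -> None:
    today = today or date.today()
    for name, (minor, released) in CLUSTERS.items():
        eol = released + timedelta(days=14 * 30)   # rough 14-month window
        deadline = eol - timedelta(days=60)        # internal: EOL minus ~2 months
        if today >= eol - timedelta(days=90):      # ~3-month warning window
            print(f"{name}: {minor} hits EOL around {eol}; upgrade by {deadline}")

check()
```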
Managed Kubernetes EOL: the provider adds another clock
Upstream Kubernetes EOL is one deadline. Your cloud provider has a different one. And they do not always align.
- GKE: Supports versions longer than upstream. Regular channel gets patches for ~14 months, Extended channel up to 24 months. But GKE auto-upgrades you if you do not act. This sounds helpful until an auto-upgrade breaks your admission webhook at 3 a.m.
- EKS: Standard support runs ~14 months from when the version lands on EKS. Extended support adds 12 more months but costs $0.60 per cluster hour, which works out to roughly $440 a month per cluster just to avoid upgrading. If you run the clock out completely, AWS auto-upgrades your control plane to a still-supported version.
- AKS: Follows upstream closely. N-2 minor versions supported, plus a long-term support (LTS) option on the Premium tier for selected versions. AKS stops issuing patches for expired versions and will eventually force-upgrade your control plane (but not your nodes, which creates the version skew problem I check for below).
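That control-plane-versus-node skew is easy to spot before it bites. Here's a sketch that shells out to kubectl and compares the API server's minor version with each node's kubelet; it assumes your kubeconfig already points at the cluster you care about.

```python
# Quick skew check: how far is each node's kubelet behind the API server?
import json
import subprocess

def kubectl_json(*args: str) -> dict:
    out = subprocess.run(["kubectl", *args, "-o", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

def minor(version: str) -> int:
    # Handles strings like "v1.31.4" or "v1.31.4-eks-xxxx".
    return int(version.lstrip("v").split(".")[1])

server = minor(kubectl_json("version")["serverVersion"]["gitVersion"])
for node in kubectl_json("get", "nodes")["items"]:
    kubelet = minor(node["status"]["nodeInfo"]["kubeletVersion"])
    if kubelet < server:
        name = node["metadata"]["name"]
        print(f"{name}: kubelet is {server - kubelet} minor(s) behind the API server")
```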
The common trap: teams assume their managed provider handles everything. It does not. Provider EOL means “we stop patching CVEs for this version.” If you stay on an expired version, you are running unpatched Kubernetes in production with a false sense of security.