
Kubernetes v1.36.0 preview: watch cache fixes, AppArmor removed


Jack Pauley March 16, 2026 6 min read

Platform version release preview: you know that charming failure mode where the control plane comes back “Ready” and your controllers still behave like the API server is half-asleep? Or the one where your security posture depends on a deprecated annotation that everybody swore they’d clean up “next sprint”?

Kubernetes v1.36.0 is queued for 2026-04-22. Inside the 45‑day window, that’s close enough that you should stop treating it like a future-you problem. The theme here isn’t shiny features; it’s maintainers paying down architectural debt by killing legacy behavior and tightening startup guarantees.

Watch cache initialization: less roulette during apiserver startup

The apiserver now defaults to initializing the watch cache via a PostStartHook before declaring itself ready. The concrete artifact is kubernetes/kubernetes PR #135777, with the KEP-side alignment in kubernetes/enhancements PR #5744. The maintainer summary is blunt: fewer subtle races and more reliable watch behavior when the control plane restarts.

So what? Anything built on informers and watches (read: everything) gets fewer “works on my cluster” startup flakes. Expect fewer controller retry storms after apiserver restarts and fewer heisenbugs where a component starts watching before the cache is actually usable. The flip side: if you accidentally wrote automation that depends on that race (don’t laugh, I’ve seen worse), 1.36 will flush it out.
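If your automation kicks things off the moment the apiserver answers, the fix is to gate on the readiness endpoint rather than racing it. A minimal sketch, assuming a `kubectl` context pointed at the cluster; the `/readyz` endpoint is real and aggregates the PostStartHook checks, but the retry count and sleep interval here are arbitrary choices:

```shell
# Sketch: wait for the apiserver's actual readiness signal after a restart.
# Under the new gating, /readyz should not report ok until the watch cache
# PostStartHook has completed.
wait_for_apiserver() {
  local tries=${1:-30}
  for i in $(seq 1 "$tries"); do
    # --raw hits the apiserver's aggregated readiness endpoint directly
    if kubectl get --raw '/readyz' >/dev/null 2>&1; then
      echo "apiserver ready after ${i} attempt(s)"
      return 0
    fi
    sleep 2
  done
  echo "apiserver not ready after ${tries} attempts" >&2
  return 1
}
```

Call it before the automation that used to flake, e.g. `wait_for_apiserver 60 && kubectl rollout restart deployment/my-controller` (deployment name hypothetical). For a per-check breakdown, `kubectl get --raw '/readyz?verbose'` lists the individual poststarthook checks.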

AppArmor annotation support: stop pretending you’ll migrate later

AppArmor moved from the legacy annotation path (container.apparmor.security.beta.kubernetes.io/*) to securityContext.appArmorProfile back in v1.30. Kubernetes has been dragging the corpse of the annotation along with warnings and partial enforcement for multiple releases. The ecosystem already tracks the end-state as “removed by 1.36” (see downstream impact threads), and upstream issues show the exact warning + baseline policy interactions people are hitting today.

So what? Two nasty gotchas:

  • Policy engines and admission hooks that still validate the annotation will become security theater. Your “baseline” might look enforced while your pods run effectively unconfined.
  • Manifests generated by old tooling (or copied from tribal-wiki YAML) will start failing in the worst way: not on merge, but on deploy. Enjoy the 2 AM pager.

API lifecycle reality check: resource.k8s.io/v1beta1 gets deleted in 1.36

If you’re using the resource.k8s.io/v1beta1 API surface, the removal in 1.36 is not theoretical—vendors are already documenting it as a hard deletion with the obvious migration to v1beta2.

So what? CI that applies old API versions will hard-fail after the cluster upgrade (“no matches for kind” / API not served). Existing stored objects may survive in-place depending on conversion rules, but your deployment pipeline won’t. That’s the part that breaks prod: not the running state, the change pipeline.
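Since the change pipeline is the thing that breaks, the cheapest defense is a pre-merge grep over rendered manifests. A minimal sketch; the function name and directory layout are illustrative, not any standard tooling:

```shell
# Sketch: fail the pipeline before the cluster does. Scans a manifest
# tree for the group/version that 1.36 stops serving.
check_removed_apis() {
  local dir=$1
  local hits
  # || true so a no-match grep (exit 1) doesn't kill a `set -e` pipeline
  hits=$(grep -rn 'apiVersion: resource.k8s.io/v1beta1' "$dir" || true)
  if [ -n "$hits" ]; then
    echo "$hits"
    echo "resource.k8s.io/v1beta1 is removed in 1.36; migrate to v1beta2" >&2
    return 1
  fi
  return 0
}
```

Pair it with `kubectl api-versions | grep resource.k8s.io` against a 1.36 test cluster to confirm which versions are actually still served.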

Kubernetes has been quietly shifting from “eventually consistent glue code” toward “control plane with explicit startup and lifecycle guarantees.” The watch-cache work is an example: maintainers are spending effort on correctness and determinism rather than adding another half-baked alpha knob.

On the security side, AppArmor is the predictable maturation curve: first you get a hacky annotation, then a real field, then years of compatibility baggage, then finally a deletion. The industry trend is clear—clusters are becoming multi-tenant by default, policy engines are everywhere, and the old ‘it’s just YAML’ excuse doesn’t survive an audit. Removing annotation-based config also removes a whole class of “policy reads field A, runtime reads field B” footguns.

Test it now (don’t wait for your cloud provider to roll it out on a Tuesday):

# Spin up a 1.36 beta cluster with kind (adjust image tag as betas/rcs land)
kind create cluster --name k136 --image kindest/node:v1.36.0-beta.0

# Or validate against an existing staging cluster
# (recent kubectl removed the --short flag; short output is the default now):
kubectl version

AppArmor migration checklist:

# Find deprecated AppArmor annotations in repo
rg -n "container.apparmor.security.beta.kubernetes.io" .

# Example: replace annotation usage with securityContext.appArmorProfile
# (actual profile type/name depends on your node setup)
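To make the replacement concrete, here are the two manifest shapes side by side, written out as heredocs so the diff is greppable. The pod and container names are made up; `RuntimeDefault` is one of the real profile types (alongside `Localhost` and `Unconfined`), but pick whatever matches your node setup:

```shell
# Sketch: legacy annotation form vs. the v1.30+ securityContext field.
cat > /tmp/before.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  containers:
  - name: app
    image: nginx
EOF

cat > /tmp/after.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      appArmorProfile:
        type: RuntimeDefault
EOF

# The migrated manifest should carry the field and not the annotation
grep -q 'appArmorProfile' /tmp/after.yaml
! grep -q 'container.apparmor.security.beta.kubernetes.io' /tmp/after.yaml
```

Those last two greps are the shape of the CI gate: assert the field is present, assert the annotation is gone.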

Red Flags (grep these in apiserver + admission + kubelet logs):

  • “deprecated since v1.30; use the “appArmorProfile” field instead” — you still have annotation emitters in the chain.
  • PodSecurity baseline/restricted violations referencing AppArmor annotations — your policy layer is still keying off legacy paths.
  • “no matches for kind” against resource.k8s.io/v1beta1 — your deploy pipeline is applying removed APIs.
  • Controller restarts / informer relist spikes immediately after apiserver restarts — you’ve got watch assumptions worth re-testing under the new startup gating.
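The first three red flags are just strings, so they can be swept mechanically over logs you've already collected. A minimal sketch, assuming a directory of dumped log files (from `kubectl logs`, a log-aggregator export, whatever); the grep patterns are abbreviated stand-ins for the fuller messages above:

```shell
# Sketch: sweep collected log files for the red-flag strings.
# $1 is a directory of dumped logs; returns nonzero if anything matched.
scan_red_flags() {
  local dir=$1 found=0
  for pat in \
    'deprecated since v1.30' \
    'PodSecurity' \
    'no matches for kind' ; do
    if grep -rn "$pat" "$dir"; then
      found=1
    fi
  done
  return $found
}
```

Run it against staging log dumps before the upgrade, not after the pager goes off.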
