
Kubernetes Gateway API vs Ingress: A 2025 Decision Framework That Won’t Surprise You in Prod

Jack Pauley · January 1, 2026
Last Updated: 2026-01-02

I’ve watched “just add an annotation” turn into a 200-line Ingress nobody wants to touch.

Ingress still routes traffic fine. It also hits a ceiling the moment you run shared edge infrastructure, multiple teams, and any real policy story beyond “hope the controller behaves.” Gateway API fixes the ownership model first, then the routing model.

Our Methodology

This guide is based on a mix of sources and hands-on validation: the official Kubernetes and Gateway API specifications and documentation, controller project docs and conformance matrices, relevant GitHub issues and commit history for popular controllers, discussions from community SIGs and mailing lists, and targeted staging tests that exercise routing, TLS, rewrites, and policy behaviors across multiple controllers. Recommendations emphasize observable, testable behaviors rather than marketing claims.

What changed by 2025: Ingress stayed small, Gateway API grew up

Ingress gives you a stable, minimal HTTP(S) entrypoint in networking.k8s.io/v1.

That small spec helps you start fast, then it forces you into controller annotations or controller-specific CRDs for rewrites, auth, rate limits, weird TLS edge cases, and half the stuff your security team asks for.

Gateway API lives under gateway.networking.k8s.io.

The thing nobody mentions is that it’s less about “new YAML” and more about “who owns what.” Gateway API splits responsibilities the way real orgs work, even when teams share the same load balancer.

  • GatewayClass: Declares which controller implements the data plane, similar in spirit to how StorageClass points to a storage backend.
  • Gateway: The shared edge object. It holds listeners, addresses, and TLS termination so app teams do not fight over cert config.
  • Routes (HTTPRoute, GRPCRoute, TCPRoute): App-owned routing rules that attach to a Gateway via parentRefs.
  • Policy attachment: The direction of travel for auth, rate limits, retries, timeouts. In practice, you still rely on controller policy CRDs for a lot of this.
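
A minimal sketch of how those pieces fit together. Every name here (example-class, shared-edge, infra, team-a, app-svc) is a placeholder, and controllerName must match whatever your controller's docs specify:

```yaml
# Platform team: declare the implementation and the shared edge.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller  # vendor-specific value
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-edge
  namespace: infra                # platform-owned namespace
spec:
  gatewayClassName: example-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# App team: attach routing rules to the shared Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-a               # app-owned namespace
spec:
  parentRefs:
    - name: shared-edge
      namespace: infra            # cross-namespace attachment, gated by allowedRoutes
  rules:
    - backendRefs:
        - name: app-svc
          port: 8080
```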

Ignore the GitHub commit count. It’s a vanity metric. Read the controller’s conformance results and check the features you actually need.

Decision framework: pick based on org constraints, not taste

Here’s the fork in the road.

If you run one team, one cluster, and you just need host and path routing, Ingress stays boring and effective. If you run shared edge for multiple teams, Ingress becomes a merge-conflict factory with a side hustle in production incidents.

Multi-tenancy and ownership boundaries

This bit me when we tried to run “platform-owned TLS” and “team-owned routing” in the same setup.

Ingress never standardized who can attach what. Gateway API does, and that alone can justify the switch if you run multi-tenant clusters.

  • Choose Gateway API: The platform team owns Gateways. App teams own Routes. You control cross-namespace attachment with allowedRoutes and related access patterns, as sketched after this list.
  • Choose Ingress: Each team runs its own ingress controller per namespace or per cluster, or you accept a centralized Ingress object that everybody edits.
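
A minimal sketch of that boundary, assuming a platform-owned Gateway and a made-up edge-access label; your controller may layer extra rules on top:

```yaml
# Listener fragment on the platform-owned Gateway: only namespaces
# carrying the (hypothetical) edge-access label may attach routes.
listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            edge-access: "granted"
```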

L7 routing needs (rewrites, header mods, traffic splits)

Most “Ingress vs Gateway” debates hide a simpler truth: you already run a pile of controller-specific behavior.

If you depend on rewrites, header changes, weighted backends, or structured status and conditions for debugging, Gateway API’s core resources usually read cleaner than “annotations and hope.” You still need to test behavior on your controller because semantics vary.

Portability, but the annoying kind

Gateway API improves portability at the routing layer.

It does not make OIDC, WAF, or rate limiting magically portable. Those features often land as vendor policy CRDs. So you still make an implementation bet, you just make it in a more obvious place.

Feature comparison (what you get without vendor hacks)

I don’t trust “known issues: none” from any project.

Assume partial parity. Treat “supports Gateway API” as a matrix: core conformance, extended features, and whatever policy CRDs your controller ships.

  • Basic host and path routing: Ingress and HTTPRoute both handle this. You will not get fired for choosing either.
  • Delegated routing across namespaces: Gateway API models this directly. Ingress usually fakes it through process and conventions.
  • Weighted backends: Gateway API supports weights on backendRefs. Ingress often needs controller-specific canary features.
  • Status and debuggability: Gateway API exposes route attachment and listener status in a way operators can actually use at 02:00.
  • Non-HTTP: Gateway API defines route types beyond HTTP. Your controller might not fully implement them, so validate before you promise anything.

Mapping common Ingress patterns to Gateway API (and where it still breaks)

Most migrations fail on the “small” stuff.

You translate YAML, then behavior changes. Path matching edges, URL-encoding, timeouts, and header handling love to ruin your week.

Host + path routing: Ingress to HTTPRoute

Ingress usually starts here and stays here until it doesn’t.

HTTPRoute expresses the same intent, but you attach it to a shared Gateway, which matters once you run more than one team.

  • Ingress pattern: One Ingress, multiple paths, controller handles routing.
  • Gateway API pattern: One shared Gateway, team-owned HTTPRoutes attach with parentRefs. Both shapes are sketched below.
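
A before-and-after using placeholder names (app-svc, shared-edge, infra, team-a). The intent translates one-to-one; the ownership does not:

```yaml
# Before: one Ingress, picked up by the controller via ingressClassName.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: example-class
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: app-svc
                port:
                  number: 8080
---
# After: the same intent as a team-owned HTTPRoute on a shared Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app
  namespace: team-a
spec:
  parentRefs:
    - name: shared-edge
      namespace: infra
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: app-svc
          port: 8080
```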

TLS termination: move it to the Gateway

This is where platform teams finally breathe.

Put TLS config on the Gateway listener. Let app teams attach routes without touching cert plumbing. Certificate provisioning still depends on your controller and tooling, so test your cert-manager or cloud cert flow before you promise self-service TLS.
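
A sketch of a terminating listener, assuming a platform-managed wildcard TLS Secret named wildcard-example-com:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-edge
  namespace: infra
spec:
  gatewayClassName: example-class
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-example-com  # TLS Secret in the Gateway's namespace
```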

Rewrites and redirects: test semantics, don’t trust intent

URL rewrites look portable until you hit encoded slashes and trailing slash weirdness.

Gateway API uses HTTPRoute filters for rewrites, but controllers differ. Treat rewrites as “must run in staging” work, not “looks right in review” work.
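
For reference, a prefix rewrite as an HTTPRoute filter, with placeholder paths. The field names are from the spec; the behavior around encoded and trailing slashes is what you stage-test:

```yaml
# Rule fragment: rewrite /old/... to /new/... before forwarding.
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /old
    filters:
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplacePrefixMatch
            replacePrefixMatch: /new
    backendRefs:
      - name: app-svc
        port: 8080
```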

Canary traffic splits: weights help, stickiness hurts

Weighted backends make canaries feel easy.

Then someone asks for session affinity, consistent hashing, or header-based routing. Those features often live outside the portable core. Some folks skip canaries for patch releases. I don’t, but I get it.
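
The portable part looks like this (rule fragment, placeholder service names); the sticky parts do not:

```yaml
# 90/10 split across two Service versions.
rules:
  - backendRefs:
      - name: app-v1
        port: 8080
        weight: 90
      - name: app-v2
        port: 8080
        weight: 10
```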

Production gotchas I see first

Migrating YAML does not migrate behavior.

Ingress controllers carry years of defaults: buffering, timeouts, HTTP/2 settings, header normalization, and path matching quirks. Gateway API does not standardize those runtime details.

  • Test ugly requests: Large cookies, large headers, slow clients, slow upstreams, streaming, websockets, and gRPC message size limits.
  • Expect attachment failures early: Teams trip on allowedRoutes, RBAC, and cross-namespace reference rules like ReferenceGrant (a minimal sketch follows this list).
  • Policy sprawl comes back: If you do not standardize policies, “one-off policy CRDs” replace “one-off annotations.” Same mess, new nouns.
  • Observability shifts: Validate how your controller labels metrics by Gateway and HTTPRoute, and how logs identify the owning route/team.
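
For the cross-namespace case, a minimal ReferenceGrant sketch: it lets HTTPRoutes in team-a reference Services in backend-ns (both names hypothetical). Note the resource is still v1beta1:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-team-a-routes
  namespace: backend-ns          # lives in the namespace being referenced
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: team-a
  to:
    - group: ""                  # core API group, i.e. Services
      kind: Service
```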

Write a “Route not attached” runbook before the first team hits it. Make them read status and conditions. It saves days.
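
What that runbook teaches people to read looks roughly like this. The condition types and reason strings come from the spec; the names are placeholders:

```yaml
# HTTPRoute status when a listener's allowedRoutes rejects the attachment.
status:
  parents:
    - parentRef:
        name: shared-edge
        namespace: infra
      controllerName: example.com/gateway-controller
      conditions:
        - type: Accepted
          status: "False"
          reason: NotAllowedByListeners  # fix allowedRoutes or namespace labels
        - type: ResolvedRefs
          status: "True"
          reason: ResolvedRefs
```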

A pragmatic migration plan: dual-run, cut over slowly, keep rollback boring

Do not do a flag day.

I’ve seen teams try. They move everything at once, then they spend 48 hours staring at 404s and arguing about whether DNS caching counts as “a bug.”

Phase 0: inventory what you actually use

Start with facts, not vibes.

Export every Ingress and every annotation. Bucket them into basic routing, TLS, rewrites, auth, rate limiting, canary, and custom snippets. Anything in “custom snippets” should scare you a little.

Phase 1: stand up Gateway API infra with zero user traffic

Make it real in staging first.

Install the CRDs your controller requires. Create a GatewayClass and a Gateway. Wire a test hostname, then lock down who can attach routes with allowedRoutes and whatever cross-namespace reference controls your controller expects.

Phase 2: dual-run behind a shadow hostname

Clone one representative app.

Expose it at something like app-gw.example.com. Validate routing, rewrites, timeouts, metrics, logs, and what happens when the backend dies. This is where you find the “it worked on Ingress” default behavior you forgot you depended on.
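
A sketch of the shadow route, reusing the placeholder names from earlier and a hypothetical cloned backend called app-svc-clone:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-shadow
  namespace: team-a
spec:
  parentRefs:
    - name: shared-edge
      namespace: infra
  hostnames:
    - app-gw.example.com          # shadow hostname, no user traffic
  rules:
    - backendRefs:
        - name: app-svc-clone     # Service in front of the cloned app
          port: 8080
```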

Phase 3: cut over by hostname or path, keep Ingress intact

Move one slice at a time.

Keep the old Ingress definition applied and ready. Prefer DNS cutover with a low TTL, unless your gateway supports safe L7 weighting and you know how to observe it.

Rollback plan (write it down before you need it)

Rollback should feel boring.

Flip DNS back to the Ingress VIP or load balancer. Keep certificates compatible. Trigger rollback on concrete SLO signals you already alert on, like 5xx rate, p95 latency, and auth failure spikes.

How to reduce controller lock-in with Gateway API

Gateway API reduces lock-in at the routing layer.

You can still lock yourself in with policy CRDs, so treat policy like a product you maintain.

  • Keep core routing clean: Put hosts, paths, backends, and basic filters in Gateway and HTTPRoute. Avoid proprietary escape hatches unless you document the exception.
  • Centralize non-portable features: If you need OIDC, WAF, or advanced rate limiting, hide it behind platform-owned policy tiers and a review path.
  • Choose controllers for day-2 operations: Pick based on conformance for the features you rely on, upgrade/rollback story, and the quality of status and metrics.
  • Test policy as code: Run integration tests for your top routes and policies on every controller upgrade. Portable YAML that breaks at runtime is worse than honest vendor config.

Bottom line

If you run shared edge for multiple teams, Gateway API usually wins.

Keep Ingress for legacy edge cases and controller-specific behavior you cannot replace yet. Gateway API points in the right direction. The risk comes from assuming portability without controller-specific testing and guardrails. There’s probably a better way to test this, but I haven’t found one that beats a brutal staging suite and a rollback you can do half-asleep.