Ingress-NGINX Is Dead. Here's How to Migrate Without Panic.

On March 24, 2026, the most-used ingress controller in Kubernetes became read-only. If you’re running ingress-nginx, your cluster didn’t catch fire, but any CVE discovered after that date will never get a patch.
That’s the honest version. Not “immediate action required” and not “don’t worry about it.” The project is gone, the security exposure is real, and the migration is more manageable than the scaremongering articles suggest if you understand what you’re actually dealing with.
About 50% of cloud-native environments were running ingress-nginx when the Kubernetes Steering Committee issued its warning. Chances are, you’re in that 50%, and you’re probably not a platform team with dedicated ops engineers. You’re a small team where one person inherited the cluster.
This ingress-nginx migration guide is for you. By the end, you’ll know exactly how complex your migration is, which controller to pick, and what the actual steps look like. No vendor pitch, no enterprise assumptions.
If you’d rather have someone assess your specific cluster’s complexity, that’s exactly the kind of thing we do asynchronously, no call required. Request a free infrastructure audit →
What Actually Happened to ingress-nginx
An open-source sustainability failure turned into a security crisis.
For years, ingress-nginx was maintained by one or two volunteers working in their spare time. That’s not an exaggeration. Tabitha Sable, a Kubernetes Security Response Committee co-chair, wrote in the official retirement announcement that “SIG Network and the Security Response Committee have exhausted our efforts to find additional support to make Ingress NGINX sustainable.”
The breaking point was CVE-2025-1974, nicknamed IngressNightmare. CVSS score of 9.8. Unauthenticated remote code execution through the admission webhook, affecting an estimated 43% of cloud environments. The root cause traced back to the configuration-snippet annotation, a feature that let users inject arbitrary NGINX directives directly. That feature was the project’s most popular and most dangerous design decision, and fixing it properly would have required a rewrite the team didn’t have capacity for.
The community tried to build a replacement called InGate. That project also got retired before shipping.
You’re left with the kubernetes-retired/ingress-nginx repository: read-only, with a final version that will never receive another security patch. Your running workloads keep running; the next CVE will never be fixed. (You’ll see it called both “retired” and “deprecated” in search results. Same situation, same urgency.)
Quick Clarification: The Ingress API Is Not Deprecated
Before you panic about migrating everything to Gateway API, this distinction matters.
The Ingress API spec, the Kubernetes resource type itself, is not deprecated and has no removal timeline. It’s frozen, meaning no new features, but it’s not going away. Other ingress controllers (Traefik, HAProxy, Kong, and others) still use the Ingress API and work fine.
What’s retired is specifically kubernetes/ingress-nginx, the controller that processed those Ingress resources. That’s a different project from nginx/kubernetes-ingress, the commercial NGINX Ingress Controller maintained by F5. Same NGINX name, completely different codebases.
| Project | Maintainer | Status |
|---|---|---|
| kubernetes/ingress-nginx | Kubernetes community | Retired March 24, 2026 |
| nginx/kubernetes-ingress | F5 / NGINX Inc. | Actively maintained |
| NGINX Gateway Fabric | F5 / NGINX Inc. | Active, pure Gateway API implementation |
Run this to confirm which controller you’re actually running:
kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
If that returns pods, you’re running the retired controller. If it returns nothing, you might be on the F5 product or something else entirely.
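If you want to be certain, inspect the container image rather than just the label. The community controller ships from registry.k8s.io, while F5's product ships from Docker Hub under nginx/. A quick sketch, assuming jq is installed:

```shell
# List controller images across the cluster.
# registry.k8s.io/ingress-nginx/controller => the retired community project
# nginx/nginx-ingress                      => F5's commercial product
kubectl get pods --all-namespaces \
  --selector app.kubernetes.io/name=ingress-nginx -o json \
  | jq -r '.items[].spec.containers[].image' \
  | sort -u
```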
How Much Work Is This Migration, Really?
The honest answer is that it depends almost entirely on how many custom annotations you have, not how many Ingress objects.
I’ve seen this split pretty consistently. A cluster with 12 Ingresses using basic TLS and path routing is a Friday afternoon. A cluster with 8 Ingresses but heavy use of configuration-snippet annotations is a different week entirely. Run these two commands before you estimate anything:
# See what you're working with
kubectl get ingress --all-namespaces
# This is the complexity signal
kubectl get ingress --all-namespaces -o json \
  | jq '.items[].metadata.annotations // {} | keys[]' \
  | grep snippet
If the snippet grep comes back empty, you’re almost certainly in tier 1 or 2. If it returns results, budget more time and read the controller selection section carefully.
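Beyond the snippet check, it helps to see which annotations you actually use and how often, since each distinct annotation is one thing to map onto the new controller. A sketch along the same lines as the commands above, assuming jq is available (the `// {}` guards against Ingresses with no annotations at all):

```shell
# Count how many Ingresses use each nginx annotation, most common first
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[].metadata.annotations // {} | keys[]' \
  | grep 'nginx.ingress.kubernetes.io' \
  | sort | uniq -c | sort -rn
```

A short list dominated by rewrite-target and ssl-redirect is tier 1; a long tail of exotic annotations is where your estimate should grow.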
Simple clusters (a few hours): Basic TLS, path routing, standard rewrites. No custom annotations beyond the basics. ingress2gateway handles this cleanly. This is the majority of small team setups.
Medium complexity (half a day): CORS headers, rate limiting, auth-url, proxy timeouts. ingress2gateway covers most of it. Expect some manual review of the output before applying to production.
Complex clusters (days): configuration-snippet, server-snippet, ModSecurity WAF, custom NGINX modules. No automatic conversion path. These require manual rewrites because arbitrary NGINX directives don’t have a direct Gateway API equivalent.
The encouraging data point: Skyscrapers, a managed services provider, audited 1,097 real Ingress resources across their client clusters and found 89 distinct annotation types. Only 56 of those Ingresses, about 5%, contained raw NGINX snippets. If your cluster looks like most clusters, the complex tier applies to a small fraction of your total Ingresses, not all of them.
One thing that trips up the time estimate: most Helm charts for applications already support Gateway API as an option. Before assuming you need to rewrite manifests, check helm show values <your-chart> and look for gateway or httpRoute options. You might already have a migration path built into your chart.
This migration is forced infrastructure work: it creates toil without adding features. If you want to make sure you’re spending the minimum time on it, the quick infrastructure wins guide has context on where migration work fits relative to other ops priorities.
Gateway API in 60 Seconds
Gateway API replaces the single Ingress resource with a three-layer model. Here’s what those layers are and why they exist:
GatewayClass defines which controller implementation handles traffic. Set once per cluster. Think of it as declaring “Traefik handles our gateways” or “Envoy Gateway handles our gateways.”
Gateway defines the listening configuration: ports, protocols, TLS settings. Usually one per cluster for small teams. This is what cert-manager attaches certificates to.
HTTPRoute defines the actual routing rules for a specific service: which hostnames, which paths, which backend. One per service (or per group of services on the same hostname).
# The three-resource relationship, simplified
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
    - name: my-gateway  # references the Gateway
  hostnames: ["myapp.example.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-app-service
          port: 8080
For a small team wearing all hats, the role separation is purely organizational: you control all three resources. Gateway API hit GA in October 2023 and is now on v1.4 as of October 2025. This isn’t experimental technology; it’s been running production workloads for over two years.
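One practical difference from Ingress: misconfigurations in Gateway API surface in the resource's status conditions rather than as errors on apply. After applying a route like the my-app example above, you can confirm it actually attached to its Gateway. A sketch, assuming jq:

```shell
# Print attachment conditions; Accepted=True and ResolvedRefs=True
# mean the route is bound to its Gateway and its backends resolve
kubectl get httproute my-app -o json \
  | jq -r '.status.parents[].conditions[] | "\(.type)=\(.status)"'
```

If Accepted is False, the usual culprits are a wrong parentRefs name or a Gateway listener that doesn't allow routes from this namespace.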
Which Gateway API Controller Should You Use?
Every article on this topic lists five or six options and hedges. I’ll be more direct.
For most small teams: Traefik or Envoy Gateway. Everything else has a narrower use case.
Traefik (lowest migration effort)
Traefik v3.5+ ships an Ingress NGINX Provider that accepts your existing nginx.ingress.kubernetes.io annotations without touching your Ingress resources. If your Ingresses cover basic TLS, path routing, and no snippets, you deploy Traefik, enable the provider, and the migration is done without rewriting a single manifest.
The caveats: the provider is still experimental. Coverage is roughly 80%, not 100%. Some globally-configured behaviors, like default SSL redirect and certain rate-limiting configurations, don’t have a mapping. Don’t assume it works blindly; test in a non-production namespace first.
Traefik also doesn’t support multiple Gateway resources, so it’s not suitable if you’re planning multi-tenant platform work.
Use Traefik if you want to be done this week and your cluster is relatively clean.
Envoy Gateway (best long-term positioning)
Envoy Gateway is a CNCF project with contributions from Google, Microsoft, and others. It follows Gateway API conformance strictly and is what the ingress2gateway tool’s --emitter envoy-gateway flag outputs to. If you want to migrate once and not revisit this decision, Envoy Gateway is the better long-term choice.
The caveat: it requires a bit more YAML upfront. And there’s a known issue where traffic can see 503 errors during route updates; it’s actively being fixed, but it’s worth knowing about for high-traffic production environments.
Use Envoy Gateway if you’re willing to write the YAML properly and want a clean Gateway API-native setup.
Everything else:
- Cilium: if Cilium is already your CNI, use its built-in Gateway API support and skip adding another component
- Istio: if you’re already running Istio, use it; don’t install it just for ingress
- NGINX Gateway Fabric: only supports one Gateway resource per cluster; this rules it out for any multi-tenant or expanding setup
- Chainguard’s ingress-nginx fork: patched CVE images only, no development; buys 6–12 months but you’d have to migrate from it anyway
Here’s the short version:
| Controller | Best For | Migration Effort | Main Caveat |
|---|---|---|---|
| Traefik | Fastest migration, <50 Ingresses | Low | Provider is experimental |
| Envoy Gateway | Long-term standard | Medium | Known 503 bug during updates |
| Cilium | Teams already on Cilium CNI | Low | CNI-dependent |
| Istio | Teams already running Istio | Low | Overkill if starting fresh |
The Gateway API implementations list has the full conformance table if you want to research further.
Not sure if you have the bandwidth to scope this migration properly right now? We do async infrastructure audits: a Loom walkthrough of your cluster plus a written report with a complexity estimate. No call required →
The Migration: Step by Step
Step 1: Install ingress2gateway and audit your cluster
# Install ingress2gateway 1.0
brew install ingress2gateway
# or via Go
go install sigs.k8s.io/ingress2gateway@v1.0.0
# Run a dry-run audit across all namespaces
ingress2gateway print \
--providers=ingress-nginx \
--all-namespaces \
--output-dir ./gateway-migration
The ingress2gateway 1.0 release post describes it as “a migration assistant, not a one-shot replacement.” That’s accurate. Read every warning the tool generates. Untranslatable configs are flagged explicitly; those are your manual work items.
If you’re migrating to Envoy Gateway, you need the --emitter flag or you’ll get generic Gateway API output that doesn’t match Envoy’s expected format:
# For Envoy Gateway specifically
ingress2gateway print \
--providers=ingress-nginx \
--namespace my-app \
--emitter envoy-gateway \
> gwapi-envoy.yaml
Step 2: Install your new controller alongside ingress-nginx
Both controllers run simultaneously during migration. Each gets its own external IP from your cloud load balancer. There’s no conflict with existing ingress-nginx traffic.
# Example: install Envoy Gateway
helm install eg oci://docker.io/envoyproxy/gateway-helm \
--version v1.3.0 \
-n envoy-gateway-system \
--create-namespace
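Installing the controller alone doesn’t provision a load balancer; you also need a GatewayClass and a Gateway before an external IP appears. A minimal sketch for Envoy Gateway (the controllerName is Envoy Gateway’s documented identifier; resource names and namespace are placeholders to adapt):

```yaml
# GatewayClass: declares that Envoy Gateway handles these Gateways
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
# Gateway: this is what triggers the cloud load balancer and external IP
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway          # placeholder name
  namespace: envoy-gateway-system
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: http
      port: 80
      protocol: HTTP
```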
Step 3: Test in staging before touching production DNS
Apply the ingress2gateway output to a non-production namespace first. Then test against the new controller’s IP directly, bypassing DNS entirely during testing:
# Get the new controller's external IP
kubectl get svc -n envoy-gateway-system
# Test with curl, bypassing DNS
curl -H "Host: myapp.example.com" http://<NEW_CONTROLLER_IP>/healthz
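For HTTPS endpoints, a Host header alone isn’t enough, because TLS needs the right SNI. curl’s --resolve flag pins the hostname to the new IP, so both SNI and the Host header match without touching DNS (substitute your real hostname and IP):

```shell
# Pin myapp.example.com to the new controller's IP for this request only;
# --resolve sets both SNI and Host correctly, unlike a bare Host header
curl --resolve myapp.example.com:443:<NEW_CONTROLLER_IP> \
  https://myapp.example.com/healthz
```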
Step 4: Shift DNS, then watch
Lower your DNS TTL to 60 seconds before cutting over. Switch the A record to the new controller IP. Watch your error rates for 15–30 minutes. Keep ingress-nginx running. Don’t delete it until you’ve seen at least 24–48 hours of clean production traffic on the new controller.
Cross-namespace routing: one more thing to know. If any of your HTTPRoutes reference backends in a different namespace from the Route itself, Gateway API requires a ReferenceGrant resource in the backend’s namespace to permit it. This is a security model change from Ingress. It’s the most common “why isn’t this working?” moment in Gateway API migrations.
# ReferenceGrant: allow HTTPRoute in namespace-a to reach service in namespace-b
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-my-app
  namespace: namespace-b  # lives in the backend's namespace
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: namespace-a
  to:
    - group: ""
      kind: Service
Your Rollback Plan (Have This Ready)
Because you’re running both controllers in parallel, rollback is simple: repoint DNS back to the ingress-nginx service IP.
# ingress-nginx should still be running, confirm here
kubectl get svc -n ingress-nginx
# If something breaks, repoint DNS to this IP
# (via your DNS provider: Route53, Cloudflare, etc.)
ingress-nginx keeps running until you explicitly delete it. There’s no forced cutover. Keep the old Ingress resources in place until you’ve validated the new setup under real production traffic, 24–48 hours minimum, longer if you want the confidence.
The goal is to make the rollback obvious enough that you don’t need to think about it under pressure.
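Part of making rollback obvious is writing down the old address before cutover, not hunting for it mid-incident. A sketch for capturing it, assuming jq (cloud providers expose either an ip or a hostname on the load balancer service):

```shell
# Capture ingress-nginx's external address (IP or hostname) for rollback notes
kubectl get svc -n ingress-nginx -o json \
  | jq -r '.items[]
      | select(.spec.type == "LoadBalancer")
      | .status.loadBalancer.ingress[0] | .ip // .hostname'
```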
cert-manager: What You Need to Update
This is the detail that trips up almost everyone who doesn’t read the fine print.
With Ingress-NGINX, each Ingress object declared its own TLS secret. cert-manager watched Ingress annotations and issued certs at the Ingress level. With Gateway API, TLS is managed at the Gateway listener level, cert-manager watches the Gateway, not the HTTPRoute.
For small teams running a single Gateway in a single namespace, this is actually simpler once it’s set up. The Gateway owns all TLS for the cluster; you configure cert-manager once at the top level rather than per-service.
Update cert-manager to v1.15+ before migrating if you’re behind. Gateway API support improved significantly in that release. Then annotate your Gateway (not the HTTPRoute) to trigger certificate issuance:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # triggers cert issuance
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      hostname: "*.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: my-wildcard-cert  # cert-manager creates this
If your cert and HTTPRoute are in different namespaces, you’ll need a ReferenceGrant here too, same pattern as the cross-namespace routing case above. The cert-manager Gateway API docs have the full reference.
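If you use HTTP-01 challenges, the issuer side changes too: cert-manager needs its gatewayHTTPRoute solver so the challenge route attaches to your Gateway instead of an Ingress. A hedged sketch (issuer name, email, and Gateway reference are placeholders; note that wildcard certs still require DNS-01 instead):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com  # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          gatewayHTTPRoute:  # Gateway API solver, replaces the ingress solver
            parentRefs:
              - name: my-gateway
                namespace: default  # placeholder: your Gateway's namespace
                kind: Gateway
```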
Frequently Asked Questions
What is Kubernetes Gateway API? Gateway API is the successor to the Ingress resource, a more expressive, role-oriented routing API built into Kubernetes. It uses three resources: GatewayClass (which controller handles traffic), Gateway (ports, protocols, TLS), and HTTPRoute (per-service routing rules). It’s been GA since October 2023.
Which Gateway API controller should small teams use? Traefik if you want the fastest migration with minimal YAML changes, its Ingress NGINX Provider accepts most existing annotations natively. Envoy Gateway if you want strict Gateway API conformance and a clean long-term foundation. If Cilium is already your CNI, use its built-in gateway.
Is the Kubernetes Ingress API deprecated? No. The Ingress API spec is frozen (no new features), but it’s not being removed from Kubernetes. Other ingress controllers still work fine against it. What’s retired is specifically the ingress-nginx controller that processed those Ingress resources.
Can I keep running ingress-nginx after March 2026? Technically yes. Existing deployments keep running. But there are no more security patches, any new CVE will never be fixed. If your organization does SOC 2, PCI-DSS, or similar compliance audits, EOL software in the L7 data path is an automatic finding. The urgency is about forward risk, not immediate breakage.
What’s the difference between ingress-nginx and NGINX Ingress Controller?
Different projects. kubernetes/ingress-nginx (the retired one) was a community project maintained by Kubernetes SIG-Network. nginx/kubernetes-ingress is F5’s commercial product, still actively maintained. Same NGINX in the name, completely different codebases. Run kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx to confirm which one you have.
Do I need to migrate everything at once? No. Run both controllers in parallel and migrate service by service. Test each service on the new controller before cutting over DNS. Doing it incrementally is safer and gives you practice before tackling anything complicated.
What does ingress2gateway actually do? It’s an official CLI tool from Kubernetes SIG-Network that converts your existing ingress-nginx Ingress objects into Gateway API YAML. Released as v1.0 on March 20, 2026, with support for 30+ nginx annotations. Treat its output as a starting point, read the warnings it generates before applying anything to production.
I’m running Istio, anything to watch when installing Gateway API CRDs? Yes. Gateway API CRD v1.5.x crashes Istio 1.28 and 1.29. If you’re on either version, pin to Gateway API v1.4.x CRDs until you’ve upgraded Istio first.
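Before pinning or upgrading, you can check which Gateway API CRD bundle is already installed; recent releases annotate their CRDs with a bundle version. A sketch, assuming jq:

```shell
# Print the installed Gateway API CRD bundle version (or a fallback message)
kubectl get crd httproutes.gateway.networking.k8s.io -o json 2>/dev/null \
  | jq -r '.metadata.annotations["gateway.networking.k8s.io/bundle-version"] // "not installed"'
```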
What to Do This Week
The migration complexity scales with your annotation sprawl, not your ingress count. That’s the most useful thing I can tell you before you start estimating the work.
Here’s the short version:
- Run the inventory (15 min): kubectl get ingress --all-namespaces plus the snippet grep above
- Fewer than ~20 Ingresses, no snippet annotations: pick Traefik or Envoy Gateway, run ingress2gateway, migrate this sprint
- Snippet annotations present: run ingress2gateway print and count the warnings; those are your manual rewrites, so scope accordingly
- Don’t delete ingress-nginx until your new controller has handled 24–48 hours of production traffic
This migration is toil, forced work that doesn’t ship product. The goal is to spend the minimum time on it. Your toil reduction roadmap starts with identifying which forced migrations like this one you can time-box and eliminate for good. That’s a separate question from the migration itself, but if you’re not sure whether you need dedicated ops help for this, it’s worth 10 minutes to figure out before you commit a sprint to it.
If your cluster has significant annotation sprawl and you’d rather have someone scope this out, that’s exactly the kind of async infrastructure audit we do. I’ll record a Loom walkthrough of your cluster’s migration complexity and send a written report, no call required.
Related Articles
Kubernetes for SaaS Startups: Do You Actually Need It?
According to the CNCF 2025 Annual Cloud Native Survey, 82% of container users run Kubernetes in production. That stat makes startup CTOs panic. But your 15-person SaaS company is not in the same category as the enterprises driving that number.
Every growing startup hits the Kubernetes question eventually. Your app is getting real traffic, deployments are getting messy, and someone on the team suggests K8s. The anxiety is real because the internet makes it sound like you’re not a serious company until you’re running clusters.
What is Toil in DevOps? (And What It's Actually Costing You)
The deploy script hasn’t changed in eight months. Every Friday afternoon, someone on your team runs it by hand, copying the steps from a Notion doc, pasting commands into the terminal, and watching logs scroll by.
Nobody’s complained. It works. But somewhere in the back of your mind, you know this shouldn’t still be manual.
That’s DevOps toil. And if that scenario sounds familiar, you’ve got more of it than you realize.

