
The End of an Era: Transitioning Away from Ingress NGINX

Thursday, February 12, 2026

[Image: an AI-generated illustration of a crumbling building labeled "Ingress NGINX," draped with a banner reading "Retiring March 2026," with a road leading to a glowing archway labeled "Gateway API," from which labeled routes such as HTTP Route and TCP Route fan out toward a set of servers.]

For many of us, the first time we successfully routed traffic into a Kubernetes cluster, we did it using Ingress NGINX. It was the project that turned a complex networking API into something we could actually use.

However, the Kubernetes community recently announced that Ingress NGINX is officially entering retirement. Maintenance will cease in March 2026.

Here is what you need to know about why this is happening, what comes next, and why this "forced" migration is actually a great opportunity for your infrastructure.

Clarifying Terminology

First, there are some confusing, overlapping terms here. Let's clarify.

  1. Ingress API - Kubernetes introduced the Ingress API as a Generally Available (GA) feature in 2020 with the release of Kubernetes version 1.19. This API is still available in Kubernetes, with no immediate plans for deprecation or removal. However, it is "feature-frozen," meaning it is no longer being actively worked on or updated. The community has instead moved to Gateway API, which we'll talk more about later in this post.
  2. Ingress NGINX - "Ingress" is an API object available by default in Kubernetes, as described above. You can define your ingress needs as an Ingress resource (a minimal example follows this list), but that resource won't actually do anything without a controller. Ingress NGINX is a very popular controller that uses NGINX as a reverse proxy and load balancer, and it will no longer be maintained as of March 2026. As the What You Need To Know blog from the Kubernetes project puts it, "Existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available." However, "there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered."
  3. NGINX Ingress Controller - to make things more confusing, there is another controller called "NGINX Ingress." This controller implements ingress for your Kubernetes resources via NGINX and NGINX Plus, and it is owned and maintained by F5 / NGINX Inc. It will continue to be maintained and available in both its open source and commercial forms.
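
To make the distinction concrete, here is a minimal Ingress resource. The names (web, web-service) and the nginx ingress class are placeholders; on its own, this object is just data in the cluster until a controller such as Ingress NGINX picks it up and configures a proxy for it.

    # A minimal Ingress resource. By itself this is just data stored in the
    # cluster; a controller (such as Ingress NGINX) has to watch it and
    # configure a proxy before any traffic is routed. Names are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      ingressClassName: nginx          # selects the Ingress NGINX controller
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-service  # placeholder backend Service
                    port:
                      number: 80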

In this blog post, we are going to talk about "Ingress NGINX," the controller being retired. We will also talk about "Ingress," or the "Ingress API," which is still around but feature-frozen.

What Problem Did Ingress NGINX Solve?

In the early days of Kubernetes, getting external traffic to your pods was a nightmare. You either had to use expensive, cloud-specific LoadBalancers for every single service or manage complex NodePorts.

While the Kubernetes Ingress API was introduced as a standard specification for Layer 7 (HTTP/HTTPS) routing, it was inherently limited: it was designed for a simpler time in Kubernetes' history and offered minimal features. Advanced routing, traffic splitting, and non-HTTP protocols were not natively supported by the API.

Ingress NGINX solved this problem by serving as a robust Ingress controller that executed the API's rules. Leveraging the widely adopted NGINX reverse proxy, the controller provided a powerful, provider-agnostic entry point for cluster traffic. It was able to:

  • Consolidate multiple services under a single IP address.
  • Provide robust Layer 7 capabilities, including SSL/TLS termination and basic load balancing.
  • Use familiar NGINX configuration logic inside a cloud-native environment.
  • Extend the basic Ingress API to support advanced features, such as rate limiting, custom headers, and sophisticated traffic management, by allowing users to inject familiar, raw NGINX configuration logic using custom nginx.ingress.kubernetes.io annotations (often called "snippets").

This flexibility, achieved by translating standard Ingress objects into feature-rich NGINX configurations, made Ingress NGINX the de-facto controller and the "Swiss Army Knife" of Kubernetes networking.
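
As a rough illustration of that extension mechanism, the sketch below shows a standard Ingress decorated with a few Ingress NGINX annotations, including a raw configuration "snippet." The hostnames, service names, and annotation values are made up for the example; consult the controller's documentation for the exact behavior of each annotation.

    # Illustrative only: an Ingress extended with Ingress NGINX annotations.
    # The annotations are controller-specific; the "configuration-snippet"
    # injects raw NGINX directives, the mechanism discussed in the next section.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
        nginx.ingress.kubernetes.io/limit-rps: "10"
        nginx.ingress.kubernetes.io/configuration-snippet: |
          more_set_headers "X-Request-Id: $req_id";
    spec:
      ingressClassName: nginx
      rules:
        - host: example.com
          http:
            paths:
              - path: /app
                pathType: Prefix
                backend:
                  service:
                    name: web-service
                    port:
                      number: 80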

Why is it Retiring?

If it's so popular, why kill it? The very flexibility that made it so popular also (at least partially) led to its demise. The announcement points to two primary "silent killers":

  • The "Snippet" Security Debt: Ingress NGINX gained popularity through its flexibility, specifically "snippets" that let users inject raw NGINX config via annotations. Today, these are viewed as major security risks, as they can allow for configuration injection attacks. Fixing this architectural "feature" has become an insurmountable task.
  • The Maintainership Gap: Despite having millions of users, the project was sustained by only one or two people working in their spare time. In an industry where security vulnerabilities move fast, "best-effort" maintenance isn't enough to protect the global ecosystem.

Time for Gateway API

The retirement of this popular controller opens up an opportunity to transition to the Gateway API. While the Ingress API in Kubernetes is not going anywhere (only the Ingress NGINX controller is being retired), development on the API is frozen, and there are reasons for that.

Think of Gateway API as "Ingress 2.0." While the Ingress API is a single, limited resource, Gateway API is role-oriented. It separates the concerns of the Infrastructure Provider (who sets up the load balancer), the Cluster Operator (who defines policies), and the Application Developer (who routes the traffic).
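
A minimal sketch of that split, with placeholder names (infra, app-team, shared-gateway, example-gateway-class): the cluster operator owns the Gateway and decides which namespaces may attach routes to it, while an application team owns its HTTPRoute.

    # Sketch of the role split. A GatewayClass (not shown) comes from the
    # infrastructure provider; the cluster operator manages the Gateway; the
    # application team owns the HTTPRoute. All names are placeholders.
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: shared-gateway
      namespace: infra
    spec:
      gatewayClassName: example-gateway-class   # supplied by your provider
      listeners:
        - name: http
          protocol: HTTP
          port: 80
          allowedRoutes:
            namespaces:
              from: All        # operator decides who may attach routes
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: web
      namespace: app-team
    spec:
      parentRefs:
        - name: shared-gateway
          namespace: infra
      rules:
        - matches:
            - path:
                type: PathPrefix
                value: /
          backendRefs:
            - name: web-service
              port: 80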

For the Kubernetes Podcast from Google, we've interviewed Kubernetes maintainers working on Gateway API (like in this episode featuring Lior Lieberman), and they tell a great story about why it was developed. In the early days of Kubernetes, the maintainers and contributors weren't sure exactly what users would need from ingress management for workloads running on Kubernetes. The early Ingress object was an attempt to address the problems the maintainers thought users would need to solve, and they didn't get it all right. The annotations Ingress NGINX supported on top of the Ingress API helped cover the many gaps in the Kubernetes API, but those annotations tied you to Ingress NGINX. Those gaps have now been largely closed by Gateway API, and the API is supported by many conformant implementations, so you can have confidence in its portability.

An important feature of Gateway API's design is that it is an API standard defined by the community but implemented by your infrastructure or networking solution provider. Networking ultimately boils down to cables transmitting signals between machines, and what kind of machines you have and how they're connected has a big impact on the types of ingress capabilities available to you, or at least on how they're actually implemented. Gateway API provides a standard set of capabilities that you can access in a standardized way, while allowing for the reality of different networking implementations across providers. It's meant to help you get the most out of your infrastructure, regardless of what that infrastructure actually is.

How Gateway API solves the old problems with Ingress NGINX:

  • Security by Design: No more "configuration snippets." Features are built into the API natively, reducing the risk of accidental misconfiguration.
  • Standardization: Unlike the old Ingress API, which required custom annotations for almost everything (like traffic splitting), Gateway API builds these features into the spec, offering greater portability (see the traffic-splitting sketch after this list).
  • Extensibility: It is designed to handle more than just HTTP—it brings the same power to TCP, UDP, and gRPC.
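
For example, a canary-style traffic split, which previously required controller-specific annotations, can be expressed directly in an HTTPRoute with weighted backends. The services and weights below are illustrative:

    # Traffic splitting with core Gateway API: weighted backendRefs on an
    # HTTPRoute, no controller-specific annotations. Names and weights are
    # illustrative.
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: web-canary
    spec:
      parentRefs:
        - name: shared-gateway
          namespace: infra
      rules:
        - backendRefs:
            - name: web-v1
              port: 80
              weight: 90   # 90% of requests go to the stable version
            - name: web-v2
              port: 80
              weight: 10   # 10% go to the canary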

The Challenges of Transitioning

Migration is rarely "plug and play." Users moving away from Ingress NGINX should prepare for:

  • Annotation Mapping: Most of your nginx.ingress.kubernetes.io annotations won't work on new controllers. You'll need to map them to the new Gateway API HTTPRoute logic (see the rewrite example after this list).
  • Learning Curve: Gateway API has more objects to manage (Gateways, GatewayClasses, Routes). It takes a moment to wrap your head around the hierarchy, but it was designed that way based on experience: these objects should help you manage your workloads' ingress needs more efficiently.
  • Feature Parity: If you rely on very specific, obscure NGINX modules, you'll need to verify that your new controller (be it Envoy-based like Emissary or Cilium, or a different NGINX-based provider) supports them.
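
To give a feel for that mapping work, here is one common case: behavior similar to the nginx.ingress.kubernetes.io/rewrite-target annotation expressed as a standard HTTPRoute URLRewrite filter. This is a sketch with placeholder names, and rewrite semantics can differ subtly between controllers, so verify it against the implementation you choose.

    # One common annotation-mapping case: path rewriting. Roughly comparable
    # to nginx.ingress.kubernetes.io/rewrite-target, but expressed as a
    # standard HTTPRoute filter. Verify semantics against your controller.
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: web-rewrite
    spec:
      parentRefs:
        - name: shared-gateway
          namespace: infra
      rules:
        - matches:
            - path:
                type: PathPrefix
                value: /app
          filters:
            - type: URLRewrite
              urlRewrite:
                path:
                  type: ReplacePrefixMatch
                  replacePrefixMatch: /   # strip the /app prefix before forwarding
          backendRefs:
            - name: web-service
              port: 80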

Why It's Worth It

The retirement of Ingress NGINX is not just a chore; it is a forcing function for adopting more sustainable architecture. By migrating to Gateway API, you gain:

  • Stability and Active Development: Gateway API is a Generally Available (GA) networking standard that has maintained a "standard channel" without a single breaking change or API version deprecation for over two years. Unlike many Ingress controllers where development has largely paused, most Gateway controllers are far more actively maintained and continue to add new features like CORS and timeouts (see the timeouts sketch after this list).
  • Portability: Choosing a different Ingress controller might seem easier, but if you rely on Ingress-NGINX annotations, you will likely have to migrate to another set of implementation-specific annotations. Gateway API provides more portable features directly in the core API and ensures a consistent experience across different implementations. When you select an implementation that is conformant with the latest v1.4 release, you can be confident that the behavior of these features will be consistent.
  • Future-Proof Extensibility: While Gateway API supports many more features than the core Ingress API, if you find a needed feature missing, an implementation is likely to provide a similar or equivalent feature as an implementation-specific extension. For example, GKE Gateway and Envoy Gateway extend the API with their own custom policies.
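
For instance, request timeouts, which used to require controller-specific annotations, can now be set in the HTTPRoute spec itself in recent Gateway API releases. The sketch below assumes your implementation supports the timeouts field; check its documentation before relying on it.

    # Request timeouts declared on the HTTPRoute rule itself (supported in
    # recent Gateway API releases; confirm your implementation supports the
    # timeouts field). Names and durations are illustrative.
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: web-timeouts
    spec:
      parentRefs:
        - name: shared-gateway
          namespace: infra
      rules:
        - backendRefs:
            - name: web-service
              port: 80
          timeouts:
            request: 10s          # end-to-end budget for the whole request
            backendRequest: 5s    # budget for each attempt to the backend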

Next Steps

Start your migration planning today to capitalize on the opportunity and meet the deadline.

  1. Audit Your Usage: Run kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx to see where you are still using the legacy controller.
  2. Utilize Automation: Check out the ingress2gateway project. A lot of work is going into this tool to make the migration experience better, including adding support for the most widely used Ingress-NGINX annotations.
  3. Experiment and Provide Feedback: Give Gateway API a try! Start a PoC with a conformant Gateway API implementation (like GKE Gateway, Cilium, or Envoy Gateway). The community welcomes help and feedback on ingress2gateway and encourages users to share feedback on what Gateway API is getting right and wrong.
  4. Adhere to the Timeline: You have until March 2026 before the security updates stop. Start your migration planning sooner rather than later!

For more details on migrating from Ingress to Gateway API, refer to our documentation.
