The End of an Era: Transitioning Away from Ingress NGINX

Thursday, February 12, 2026

An AI-generated image depicting an old, crumbling building labeled Ingress NGINX with a banner reading Retiring March 2026. An equally crumbling road leads away from the building to a glowing new archway topped with the Kubernetes logo and the words Gateway API. Light paths of varying colors, with labels like HTTP Route and TCP Route, lead away from the archway, some off the side of the frame and some to a set of servers, several of which have clouds with up or down arrows above them, indicating cloud traffic moving in and out.

For many of us, the first time we successfully routed traffic into a Kubernetes cluster, we did it using Ingress NGINX. It was the project that turned a complex networking API into something we could actually use.

However, the Kubernetes community recently announced that Ingress NGINX is officially entering retirement. Maintenance will cease in March 2026.

Here is what you need to know about why this is happening, what comes next, and why this "forced" migration is actually a great opportunity for your infrastructure.

Clarifying Terminology

First, there are some confusing, overlapping terms here. Let's clarify.

  1. Ingress API - Kubernetes introduced the Ingress API as a Generally Available (GA) feature in 2020 with the release of Kubernetes version 1.19. This API is still available in Kubernetes with no immediate plans for deprecation or removal. However, it is "feature-frozen," meaning it is no longer being actively worked on or updated. The community has instead moved to Gateway API, which we'll talk more about later in this post.
  2. Ingress NGINX - "Ingress" is an API object available by default in Kubernetes, as described above. You can define your ingress needs as an Ingress resource, but that resource won't actually do anything without a controller. Ingress NGINX is a very popular controller that uses NGINX as a reverse proxy and load balancer. It will no longer be maintained as of March 2026. As the What You Need To Know blog from the Kubernetes project puts it, "Existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available." However, "there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered." A minimal example of the resource/controller split follows this list.
  3. NGINX Ingress Controller - To make things more confusing, there is another controller called "NGINX Ingress." This controller implements ingress for your Kubernetes resources via NGINX and NGINX Plus, and is owned and maintained by F5 / NGINX Inc. It will continue to be maintained and available in both its open source and commercial forms.
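
To make the distinction concrete, here is a minimal sketch of that resource/controller split, written with the Kubernetes Go types in k8s.io/api/networking/v1 (the object names and hostname are hypothetical). The object only declares the desired routing; nothing happens until a controller, selected here by IngressClassName, acts on it.

package example

import (
  networkingv1 "k8s.io/api/networking/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ptr returns a pointer to v; a small helper for optional fields.
func ptr[T any](v T) *T { return &v }

// A minimal Ingress: route HTTP traffic for example.com/ to the "web"
// Service on port 80. IngressClassName selects which controller
// (Ingress NGINX's default class is "nginx") should act on it.
var ingress = &networkingv1.Ingress{
  ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
  Spec: networkingv1.IngressSpec{
    IngressClassName: ptr("nginx"),
    Rules: []networkingv1.IngressRule{{
      Host: "example.com",
      IngressRuleValue: networkingv1.IngressRuleValue{
        HTTP: &networkingv1.HTTPIngressRuleValue{
          Paths: []networkingv1.HTTPIngressPath{{
            Path:     "/",
            PathType: ptr(networkingv1.PathTypePrefix),
            Backend: networkingv1.IngressBackend{
              Service: &networkingv1.IngressServiceBackend{
                Name: "web",
                Port: networkingv1.ServiceBackendPort{Number: 80},
              },
            },
          }},
        },
      },
    }},
  },
}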

In this blog post, we are going to talk about "Ingress NGINX," the controller being retired. We will also talk about "Ingress" or the "Ingress API," which is still around, but feature-frozen.

What Problem Did Ingress NGINX Solve?

In the early days of Kubernetes, getting external traffic to your pods was a nightmare. You either had to use expensive, cloud-specific LoadBalancers for every single service or manage complex NodePorts.

While the Kubernetes Ingress API was introduced as a standard specification for Layer 7 routing (HTTP/HTTPS), it was inherently limited: designed for a simpler time in Kubernetes' history, it offered minimal features. Advanced routing, traffic splitting, and non-HTTP protocols were not natively supported by the API.

Ingress NGINX solved this problem by serving as a robust Ingress controller that executed the API's rules. Leveraging the widely adopted NGINX reverse proxy, the controller provided a powerful, provider-agnostic entry point for cluster traffic. It was able to:

  • Consolidate multiple services under a single IP address.
  • Provide robust Layer 7 capabilities, including SSL/TLS termination and basic load balancing.
  • Use familiar NGINX configuration logic inside a cloud-native environment.
  • Extend the basic Ingress API to support advanced features, such as rate limiting, custom headers, and sophisticated traffic management, by allowing users to inject familiar, raw NGINX configuration logic using custom nginx.ingress.kubernetes.io annotations (often called "snippets").

This flexibility, achieved by translating standard Ingress objects into feature-rich NGINX configurations, made Ingress NGINX the de-facto controller and the "Swiss Army Knife" of Kubernetes networking.
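
To make "snippets" concrete, here is a sketch of one attached to an Ingress object's metadata, reusing the metav1 import from the sketch above. The annotation key is the real one Ingress NGINX recognizes; the object name and the injected directive are hypothetical.

// A snippet annotation splices raw NGINX directives into the
// controller's generated configuration. Flexible, but it is
// effectively arbitrary config injection.
var meta = metav1.ObjectMeta{
  Name: "web",
  Annotations: map[string]string{
    "nginx.ingress.kubernetes.io/configuration-snippet": `add_header X-Served-By ingress-nginx;`,
  },
}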

Why is it Retiring?

If it's so popular, why kill it? The very flexibility that made it so popular also (at least partially) led to its demise. The announcement points to two primary "silent killers":

  • The "Snippet" Security Debt: Ingress NGINX gained popularity through its flexibility, specifically "snippets" that let users inject raw NGINX config via annotations. Today, these are viewed as major security risks, as they can allow for configuration injection attacks. Fixing this architectural "feature" has become an insurmountable task.
  • The Maintainership Gap: Despite having millions of users, the project was sustained by only one or two people working in their spare time. In an industry where security vulnerabilities move fast, "best-effort" maintenance isn't enough to protect the global ecosystem.

Time for Gateway API

The retirement of the popular Ingress NGINX controller opens up an opportunity to transition to the Gateway API. While the Ingress API in Kubernetes is not going anywhere (only this particular controller for it is), development on the API is frozen, and there are reasons for that.

Think of Gateway API as "Ingress 2.0." While the Ingress API is a single, limited resource, Gateway API is role-oriented. It separates the concerns of the Infrastructure Provider (who sets up the LB), the Cluster Operator (who defines policies), and the Application Developer (who routes the traffic).
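
Here is a minimal sketch of that role separation using the Gateway API Go types in sigs.k8s.io/gateway-api/apis/v1 (the class, gateway, and service names are hypothetical, and ptr is the same pointer helper as in the earlier sketch).

import (
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
)

// Owned by the cluster operator: a Gateway asks the infrastructure
// provider's GatewayClass for an HTTP listener on port 80.
var gw = &gatewayv1.Gateway{
  ObjectMeta: metav1.ObjectMeta{Name: "shared-gateway", Namespace: "infra"},
  Spec: gatewayv1.GatewaySpec{
    GatewayClassName: "example-lb-class",
    Listeners: []gatewayv1.Listener{{
      Name:     "http",
      Port:     80,
      Protocol: gatewayv1.HTTPProtocolType,
    }},
  },
}

// Owned by the application developer: an HTTPRoute attaches to the
// operator's Gateway and routes traffic to the app's Service.
var route = &gatewayv1.HTTPRoute{
  ObjectMeta: metav1.ObjectMeta{Name: "store", Namespace: "apps"},
  Spec: gatewayv1.HTTPRouteSpec{
    CommonRouteSpec: gatewayv1.CommonRouteSpec{
      ParentRefs: []gatewayv1.ParentReference{{
        Name:      "shared-gateway",
        Namespace: ptr(gatewayv1.Namespace("infra")),
      }},
    },
    Rules: []gatewayv1.HTTPRouteRule{{
      BackendRefs: []gatewayv1.HTTPBackendRef{{
        BackendRef: gatewayv1.BackendRef{
          BackendObjectReference: gatewayv1.BackendObjectReference{
            Name: "store-svc",
            Port: ptr(gatewayv1.PortNumber(8080)),
          },
        },
      }},
    }},
  },
}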

For the Kubernetes Podcast from Google, we've interviewed Kubernetes maintainers working on Gateway API (like in this episode featuring Lior Lieberman), and they tell a great story about why it was developed. In the early days of Kubernetes, the maintainers and contributors weren't sure exactly what users would need with regard to ingress management for workloads running on Kubernetes. The early Kubernetes Ingress object was an attempt to address the problems the maintainers thought users would need to solve, and they didn't get it all right. The annotations Ingress NGINX supported on top of the Ingress API helped cover the many gaps in the Kubernetes API, but those annotations tied you to Ingress NGINX. The gaps have now been largely closed by Gateway API, and the API is supported by many conformant implementations, so you can have confidence in its portability.

An important feature of Gateway API's design is that it is an API standard defined by the community but implemented by your infrastructure or networking solution provider. Networking ultimately boils down to cables transmitting electrical signals between machines. What kind of machines, and how they're connected, has a big impact on the types of ingress capabilities available to you, or at least on how they're actually implemented. Gateway API provides a standard set of capabilities that you can access in a standardized way, while allowing for the reality of different networking implementations across providers. It's meant to help you get the most out of your infrastructure, regardless of what that infrastructure actually is.

How Gateway API solves the old problems with Ingress NGINX:

  • Security by Design: No more "configuration snippets." Features are built into the API natively, reducing the risk of accidental misconfiguration.
  • Standardization: Unlike the old Ingress API, which required custom annotations for almost everything (like traffic splitting), Gateway API builds these features into the spec, offering greater portability.
  • Extensibility: It is designed to handle more than just HTTP—it brings the same power to TCP, UDP, and gRPC.

The Challenges of Transitioning

Migration is rarely "click and play." Users moving away from Ingress NGINX should prepare for:

  • Annotation Mapping: Most of your nginx.ingress.kubernetes.io annotations won't work on new controllers. You'll need to map these to the new Gateway API HTTPRoute logic (see the sketch after this list).
  • Learning Curve: Gateway API has more objects to manage (Gateways, GatewayClasses, Routes). It takes a moment to wrap your head around the hierarchy, but it was designed that way based on experience: these objects should help you manage your workloads' ingress needs more efficiently.
  • Feature Parity: If you rely on very specific, obscure NGINX modules, you'll need to verify that your new controller (be it Envoy-based like Emissary or Cilium, or a different NGINX-based provider) supports them.
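
As promised above, here is what annotation mapping can look like in practice. The canary traffic split that Ingress NGINX expressed through nginx.ingress.kubernetes.io/canary and canary-weight annotations becomes first-class weighted backendRefs on an HTTPRoute rule. A minimal sketch with hypothetical service names, reusing the Gateway API types and ptr helper from the earlier example:

// A 90/10 traffic split expressed directly in the HTTPRoute spec,
// with no controller-specific annotations involved.
var canaryRule = gatewayv1.HTTPRouteRule{
  BackendRefs: []gatewayv1.HTTPBackendRef{
    {BackendRef: gatewayv1.BackendRef{
      BackendObjectReference: gatewayv1.BackendObjectReference{
        Name: "store-v1",
        Port: ptr(gatewayv1.PortNumber(8080)),
      },
      Weight: ptr(int32(90)),
    }},
    {BackendRef: gatewayv1.BackendRef{
      BackendObjectReference: gatewayv1.BackendObjectReference{
        Name: "store-v2",
        Port: ptr(gatewayv1.PortNumber(8080)),
      },
      Weight: ptr(int32(10)),
    }},
  },
}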

Why It's Worth It

The retirement of Ingress NGINX is not just a chore; it is a forcing function for adopting more sustainable architecture. By migrating to Gateway API, you gain:

  • Stability and Active Development: Gateway API is a General Availability (GA) networking standard that has maintained a "standard channel" without a single breaking change or API version deprecation for over two years. Unlike many Ingress controllers where development has largely paused, most Gateway controllers are far more actively maintained and continue to add new features like CORS and timeouts.
  • Portability: Choosing a different Ingress controller might seem easier, but if you rely on Ingress NGINX annotations, you will likely have to migrate to another set of implementation-specific annotations. Gateway API provides more portable features directly in the core API and ensures a consistent experience across different implementations. When you select an implementation that is conformant with the latest v1.4 release, you can be confident that the behavior of these features will be consistent.
  • Future-Proof Extensibility: While Gateway API supports many more features than the core Ingress API, if you find a needed feature missing, an implementation is likely to provide a similar or equivalent feature as an implementation-specific extension. For example, GKE Gateway and Envoy Gateway extend the API with their own custom policies.

Next Steps

Start your migration planning today to capitalize on the opportunity and meet the deadline.

  1. Audit Your Usage: Run kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx to see where you are still using the legacy controller.
  2. Utilize Automation: Check out the ingress2gateway project. A lot of work is going into this tool to make the migration experience better, including adding support for the most widely used Ingress NGINX annotations. A sample invocation follows this list.
  3. Experiment and Provide Feedback: Give Gateway API a try! Start a PoC with a conformant Gateway API implementation (like GKE Gateway, Cilium, or Envoy Gateway). The community welcomes help and feedback on ingress2gateway and encourages users to share feedback on what Gateway API is getting right and wrong.
  4. Adhere to the Timeline: You have until March 2026 before the security updates stop. Start your migration planning sooner rather than later!
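
For step 2, ingress2gateway can read your existing Ingress resources and print roughly equivalent Gateway API resources for you to review before applying. The invocation below is a sketch; flags and supported providers may vary by release, so check the project's README.

# Read Ingress resources from the current cluster context and emit
# Gateway API equivalents for review.
ingress2gateway print --providers=ingress-nginx > gateway-resources.yaml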

For more details on migrating from Ingress to Gateway API, refer to our documentation.

This Week in Open Source #14

Friday, February 6, 2026

This Week in Open Source for February 06, 2026

A look around the world of open source

Here we are at the beginning of February, and the world of open source is navigating a fascinating landscape of innovation and challenge. The main focus of many articles this week is the evolving relationship between AI and software maintenance. But open source is about more than just the code; it's about the people and the spirit of collaboration. With that in mind, we look at the Open Gaming Collective, which is pushing Linux gaming further, and at the SLSA framework, which is foundational to software security.

Dive in to see what's happening this week in open source!

Upcoming Events

  • February 24 - 25: The Linux Foundation Member Summit is happening in Napa, California. It is the annual gathering for Linux Foundation members that fosters collaboration, innovation, and partnerships among the leading projects and organizations working to drive digital transformation with open source technologies.
  • March 5 - 8: SCALE 23x is happening in Pasadena, California. It is North America's largest community-run open source conference and includes four days of sessions, workshops, and community activities focused on open source, security, DevOps, cloud native, and more.
  • March 9 - 10: FOSSASIA Summit 2026 is happening in Bangkok, Thailand. It will be a two-day hybrid event that showcases the latest in open technologies, fostering collaboration across enterprises, developers, educators, and communities.

Open Source Reads and Links

  • [Article] Curl shutters bug bounty program to remove incentive for submitting AI slop - The maintainer of popular open-source data transfer tool cURL has ended the project's bug bounty program after maintainers struggled to assess a flood of AI-generated contributions.
  • [Article] Vibe Coding Is Killing Open Source Software, Researchers Argue - So much open source software is utilized when people vibe code with LLMs. However, vibe coders don't give back, according to research. What can be done to make vibe coders understand the importance of the open source ecosystem and giving back?
  • [Blog] AI Slopageddon and the OSS Maintainers - AI-generated low-quality code, called "AI slop," is overwhelming open source maintainers and harming collaboration. Some projects have banned AI contributions, while others require disclosure and careful review to manage the problem. How can we make changes when platforms benefit from AI tools but often ignore the burden this puts on maintainers?
  • [Paper] Will It Survive? Deciphering the Fate of AI-Generated Code in Open Source - AI-generated code lasts longer in open-source projects than human-written code. It is changed less often but has more bug fixes and security updates. Predicting when AI code will be modified is hard because many outside factors affect it.
  • [Article] Open Gaming Collective (OGC) formed to push Linux gaming even further - On the fun side of open source, the Open Gaming Collective is a new group uniting many Linux gaming projects to work together. They will share important tools and kernel patches to make Linux gaming better and less fragmented. Bazzite and other members will use OGC's shared improvements for better hardware support and gaming experiences.
  • [Blog] Supply Chain Robots, Electric Sheep, and SLSA - Securing the software supply chain is crucial to protect against attacks that can compromise code and build systems. SLSA is a practical framework that helps organizations improve supply chain security step-by-step by verifying source code and build integrity. A good read to understand this aspect of software security.

As we like to say, "a community is a garden, not a building; it requires tending, not just construction".

How is your team tending to your open source "garden" this month? We'd love to hear your stories! Share them on our @GoogleOSS X account or our @opensource.google Bluesky account.

ZetaSQL is being renamed to GoogleSQL

Tuesday, February 3, 2026

AI Generated image of the word ZetaSQL followed by a double arrow then the word GoogleSQL.

We're excited to announce a small but significant change: the open-source project known as ZetaSQL has been officially renamed to GoogleSQL (https://github.com/google/googlesql). This move unifies the name of our powerful SQL dialect, analysis, and parsing libraries under a single, consistent banner, whether you're using it within Google's cloud and internal services or as part of the open-source community.

For years, GoogleSQL has been the standard SQL dialect across many Google services like BigQuery and Spanner. Originally, while we called the language component GoogleSQL internally, we weren't using that name to describe the dialect in our public-facing products. Since then, we've started using the GoogleSQL name in our public-facing products and documentation, to emphasize that it's the same shared dialect across products.

Now, we're renaming the open source package too, to emphasize that it supports the same SQL dialect used in BigQuery, Spanner, and other products. The goal of open sourcing our work was always to allow developers outside of Google to leverage the same robust and compliant SQL foundation. With the name change, we aim to reduce confusion and make it easier for everyone to find and discuss the same great technology. Whether you're an internal engineer, a Google Cloud customer, or an open-source developer, you're using GoogleSQL.

This is primarily a branding change. The technology, features, and the team behind it remain the same. The open-source repository will continue to thrive, now proudly bearing the GoogleSQL name. We believe this unification will strengthen the GoogleSQL ecosystem, making it more accessible and understandable for our growing community of users and contributors.

We're enthusiastic about this next chapter for GoogleSQL in the open-source world and look forward to continued collaboration and innovation with the community.

This Week in Open Source #13

Friday, January 23, 2026

This Week in Open Source for January 23, 2026

A look around the world of open source

Can you believe we're already wrapping up the first month of the year? The open source ecosystem is buzzing with activity, from the upcoming community gatherings at FOSDEM in Brussels to new conversations around AI standards and cloud flexibility.

Google Open Source believes that "a community is a garden, not a building". It requires constant tending to thrive. This week, we're looking at how we can all contribute to that growth—whether it's by securing the software supply chain, standardizing AI agents, or simply learning from the legends of our field like Linus Torvalds.

Dive in to see what's happening this week in open source!

Upcoming Events

  • January 29: CHAOSScon Europe 2026 is co-located with FOSDEM in Brussels, Belgium. This conference revolves around discussing open source project health, CHAOSS updates, use cases, and hands-on workshops for developers, community managers, project managers, and anyone interested in measuring open source project health. It also shares insights from the CHAOSS context working groups including OSPOs, University Open Source, and Open Source in Science and Research.
  • January 31 - February 1: FOSDEM 2026 is happening at the Université Libre de Bruxelles in Brussels, Belgium. It is a free event for software developers to meet, share ideas and collaborate. Every year, thousands of developers of free and open source software from all over the world gather at the event in Brussels.
  • February 24 - 25: The Linux Foundation Member Summit is happening in Napa, California. It is the annual gathering for Linux Foundation members that fosters collaboration, innovation, and partnerships among the leading projects and organizations working to drive digital transformation with open source technologies.
  • March 5 - 8: SCALE 23x is happening in Pasadena, California. It is North America's largest community-run open source conference and includes four days of sessions, workshops, and community activities focused on open source, security, DevOps, cloud native, and more.
  • March 9 - 10: FOSSASIA Summit 2026 is happening in Bangkok, Thailand. It will be a two-day hybrid event that showcases the latest in open technologies, fostering collaboration across enterprises, developers, educators, and communities.

Open Source Reads and Links

  • [Article] The state of trusted open source - This review of the state of trusted open source report goes over many statistics. One of the interesting ones is that vulnerabilities most often hide in the smaller dependencies of the larger projects we might be focused on. What does this mean for your approach to security? How should various open source communities deal with this?
  • [Blog] Software Heritage Archive recognized as a digital public good - As the Software Heritage Archive celebrates its 10th anniversary, the Archive has scaled to protect over 27 billion unique source files, even solving the "2PB problem" by deploying protocols that compressed 78TB of graph data into a 3TB research dataset. This ensures that humanity's executable history remains a global commons rather than a proprietary secret, aligning with our belief at Google that Code is for today, Open Source is forever.
  • [Blog] Agent Definition Language: The open standard AI agents have been missing - The Agent Definition Language (ADL) creates a clear, shared way to describe AI agents so they work well across different systems. This helps teams understand what agents do, how they behave, and how to govern them safely. As an open standard, ADL makes AI agents easier to build, review, and share in the open-source community.
  • [Blog] AI Agent Engineering in Go with the Google ADK - AI, agents, and the related protocols touch on many open source projects. This post gives you a technical hands-on with the Agent Starter Pack. By following it you'll learn how to build, test, and securely deploy a Go AI agent using Google Cloud services.
  • [Article] How Kubernetes Broke the AWS Cloud Monopoly - Before Kubernetes, companies felt locked into AWS because of its unique APIs. Kubernetes allowed apps to run on any cloud, giving users more choice and helping other cloud providers grow. This has made multi-cloud the way forward for many enterprises. Are you utilizing a multi-cloud strategy? Has Kubernetes helped you get there?
  • [Article] Even Linux Creator Linus Torvalds is Using AI to Code in 2026 - Opinions vary on where and whether AI is useful. One place it has shown great benefit is as a tool for writing code. It seems Linus Torvalds has started using it to assist with part of his AudioNoise side project. What a good way to find out how AI can best work for oneself. How have you been using AI with your code?

What exciting open source events and news are you hearing about? Let us know on our @GoogleOSS X account or our new @opensource.google Bluesky account.

A JSON schema package for Go

Wednesday, January 21, 2026

JSON Schema is a specification for describing JSON values that has become a critical part of LLM infrastructure. We recently released github.com/google/jsonschema-go/jsonschema, a comprehensive JSON Schema package for Go. We use it in the official Go SDK for MCP and expect it to become the canonical JSON Schema package for Google's Go SDKs that work with LLMs.

JSON Schema has been around for many years. Why are we doing this now, and what do LLMs have to do with it?

JSON is a flexible way to describe values. A JSON value can be null, a string, a number, a boolean, a list of values, or a mapping from strings to values. In programming language terms, JSON is dynamically typed. For example, a JSON array can contain a mix of strings, numbers, or any other JSON value. That flexibility can be quite powerful, but sometimes it's useful to constrain it. Think of JSON Schema as a type system for JSON, although its expressiveness goes well beyond typical type systems. You can write a JSON schema that requires all array elements to be strings, as you could in a typical programming language type system, but you can also constrain the length of the array or insist that its first three elements are strings of length at least five while the remaining elements are numbers.
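
That last constraint maps directly onto the package's Schema struct, which we tour below. Here is a sketch; it assumes the PrefixItems, Items, and MinLength fields mirror the JSON Schema draft 2020-12 keywords of the same names.

// An array whose first three elements are strings of length at least
// five, and whose remaining elements are numbers.
var arraySchema = &jsonschema.Schema{
  Type: "array",
  PrefixItems: []*jsonschema.Schema{
    {Type: "string", MinLength: jsonschema.Ptr(5)},
    {Type: "string", MinLength: jsonschema.Ptr(5)},
    {Type: "string", MinLength: jsonschema.Ptr(5)},
  },
  Items: &jsonschema.Schema{Type: "number"},
}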

The ability to describe the shape of JSON values like that has always been useful, but it is vital when trying to coax JSON values out of LLMs, whose output is notoriously hard to constrain. JSON Schema provides an expressive and precise way to tell an LLM how its JSON output should look. That's particularly useful for generating inputs to tools, which are usually ordinary functions with precise requirements on their input. It also turns out to be useful to describe a tool's output to the LLM. So frameworks like MCP use JSON Schema to specify both the inputs to and outputs from tools. JSON Schema has become the lingua franca for defining structured interactions with LLMs.

Requirements for a JSON Schema package

Before writing our own package, we took a careful look at the existing JSON Schema packages; we didn't want to reinvent the wheel. But we couldn't find one that had all the features that we felt were important:

  1. Schema creation: A clear, easy-to-use Go API to build schemas in code.
  2. Serialization: A way to convert a schema to and from its JSON representation.
  3. Validation: A way to check whether a given JSON value conforms to a schema.
  4. Inference: A way to generate a JSON Schema from an existing Go type.

We looked at a number of existing packages, but it didn't seem feasible to cobble together what we needed from multiple packages, so we decided to write our own.

A Tour of jsonschema-go

A simple, open Schema struct

At the core of the package is a straightforward Go struct that directly represents the JSON Schema specification. This open design means you can create complex schemas by writing a struct literal:

var schema = &jsonschema.Schema{
  Type:        "object",
  Description: "A simple person schema",
  Properties: map[string]*jsonschema.Schema{
    "name": {Type: "string"},
    "age": {Type: "integer", Minimum: jsonschema.Ptr(0.0)},
  },
  Required: []string{"name"},
}

A Schema will marshal to a valid JSON value representing the schema, and any JSON value representing a schema can be unmarshalled into a Schema.

The Schema struct defines fields for all standard JSON Schema keywords that are defined in popular specification drafts. To handle additional keywords not present in the specification, Schema includes an Extra field of type map[string]any.
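
For example, round-tripping the person schema defined above through the standard encoding/json package works as you'd expect:

data, err := json.Marshal(schema) // the schema's JSON representation
if err != nil {
  return err
}
var s jsonschema.Schema
if err := json.Unmarshal(data, &s); err != nil {
  return err
}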

Validation and resolution

Before using a schema to validate JSON values, the schema itself must be validated, and its references to other schemas must be followed so that those schemas can themselves be checked. We call this process resolution. Calling Resolve on a Schema returns a jsonschema.Resolved, an opaque representation of a valid schema optimized for validation. Resolved.Validate accepts almost any value that can be obtained from calling json.Unmarshal: null, basic types like strings and numbers, []any, and map[string]any. It returns an error describing all the ways in which the value fails to satisfy the schema.

rs, err := schema.Resolve(nil)
if err != nil {
  return err
}
err = rs.Validate(map[string]any{"name": "John Doe", "age": 20})
if err != nil {
  fmt.Printf("validation failed: %v\n", err)
}

Originally, Validate accepted a Go struct. We removed that feature because it is not possible to validate some schemas against a struct. For example, if a struct field has a non-pointer type, there is no way to determine whether the corresponding key was present in the original JSON, so there is no way to enforce the required keyword.
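
If you do want to validate a Go struct, one workaround is to round-trip it through encoding/json first; this validates the JSON that the struct actually marshals to. A hypothetical sketch, where p is your struct value and rs is a Resolved schema as above:

// Marshal the struct, then unmarshal into the generic form
// (map[string]any and friends) that Validate accepts.
data, err := json.Marshal(p)
if err != nil {
  return err
}
var v any
if err := json.Unmarshal(data, &v); err != nil {
  return err
}
err = rs.Validate(v)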

Inference from Go types

While it's always possible to create a schema by constructing a Schema value, it's often convenient to create one from a Go value, typically a struct. This operation, which we call inference, is provided by the functions For and ForType. Here is For in action:

type Person struct {
    Name string `json:"name" jsonschema:"person's full name"`
    Age  int    `json:"age,omitzero"`
}

schema, err := jsonschema.For[Person](nil)

/* schema is:
{
    "type": "object",
    "required": ["name"],
    "properties": {
        "age":  {"type": "integer"},
        "name": {
            "type": "string",
            "description": "person's full name"
        }
    },
    "additionalProperties": false
}
*/

For gets information from struct field tags. As this example shows, it uses the name in the json tag as the property name, and interprets omitzero or omitempty to mean that a field is optional. It also looks for a jsonschema tag to get property descriptions. (We considered adding support for other keywords to the jsonschema tag as some other packages do, but that quickly gets complicated. We left an escape hatch in case we decide to support other keywords in the future.)

ForType works the same way, but takes a reflect.Type. It's useful when the type is known only at runtime.
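
A minimal sketch, assuming ForType takes the same options parameter as For:

// Build a schema for a type known only at runtime.
t := reflect.TypeOf(Person{})
schema, err := jsonschema.ForType(t, nil)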

A foundation for the Go community

By providing a high-quality JSON Schema package, we aim to strengthen the entire Go ecosystem for AI applications (and, indeed, any application that needs to validate JSON). This library is already a critical dependency for Google's own AI SDKs, and we're committed to its long-term health. We welcome external contributions, whether they are bug reports, bug fixes, performance enhancements, or support for additional JSON Schema drafts. Before beginning work, please file an issue on our issue tracker.
