Kubernetes: Efficient Multi-Zone Networking with Topology Aware Routing

Wednesday, November 18, 2020

Topology Aware Routing of Services, a feature first introduced as alpha in the Kubernetes 1.17 release, aims to solve an often overlooked issue with Kubernetes Services: they are not region aware.

Kubernetes services provide a uniform, durable, and easy-to-use method of accessing a variety of different backend applications, most commonly an app exposed by a set of pods. Kubernetes does this by reserving a static virtual IP and DNS name that are unique to the service throughout the cluster, and using them as a simple load balancer in front of the backing pods.
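For example, a client running in any pod can reach the service through that stable DNS name. Here is a minimal Go sketch; the service name my-app-web and the default namespace are assumptions borrowed from the example manifest later in this post:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Inside the cluster, a service is reachable at a stable DNS name of the
	// form <service>.<namespace>.svc.cluster.local, regardless of which pods
	// currently back it. The service proxy picks a backend for the connection.
	resp, err := http.Get("http://my-app-web.default.svc.cluster.local/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}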

While this model is great for small clusters or applications, it can start to break down a bit if you have thousands of nodes, your cluster spans multiple regions, or your application is latency sensitive. By default, every endpoint in a service has an equal chance of being selected as the destination, so even when you are accessing a service that has a backend in your own zone, there is a high probability you will be directed to a pod in a completely separate zone, possibly even a separate region. That cross-zone hop is exactly what Topology Awareness intends to solve.

The Topology Aware Routing of Services feature added topologyKeys as a new field on service objects. It lets you define an ordered list of node labels that are used to route traffic closer to where it originated.

Example Service with topologyKeys


apiVersion: v1
kind: Service
metadata:
  name: my-app-web
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  topologyKeys:
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"

In this example, the service uses two of the standard topology labels for its preferences. It signals that when kube-proxy routes traffic for this service, it should only route to pods in the same zone as the originating node, or, failing that, the same region.
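To make that behavior concrete, here is a rough Go sketch of the filtering step. It is not the actual kube-proxy code; the Endpoint type and the filterByTopology helper are illustrative only:

package main

import "fmt"

// Endpoint is a simplified stand-in for a service endpoint plus the topology
// labels of the node it runs on.
type Endpoint struct {
	IP       string
	Topology map[string]string
}

// filterByTopology walks the service's topologyKeys in order and returns the
// endpoints that match the first key whose value on the local node is also
// present on at least one endpoint. A trailing "*" key matches everything.
// If nothing matches, no endpoints are returned and the traffic has nowhere
// to go.
func filterByTopology(topologyKeys []string, nodeLabels map[string]string, endpoints []Endpoint) []Endpoint {
	for _, key := range topologyKeys {
		if key == "*" {
			return endpoints
		}
		nodeValue, ok := nodeLabels[key]
		if !ok {
			continue
		}
		var matched []Endpoint
		for _, ep := range endpoints {
			if ep.Topology[key] == nodeValue {
				matched = append(matched, ep)
			}
		}
		if len(matched) > 0 {
			return matched
		}
	}
	return nil
}

func main() {
	node := map[string]string{
		"topology.kubernetes.io/zone":   "us-east1-b",
		"topology.kubernetes.io/region": "us-east1",
	}
	endpoints := []Endpoint{
		{IP: "10.0.1.10", Topology: map[string]string{"topology.kubernetes.io/zone": "us-east1-b", "topology.kubernetes.io/region": "us-east1"}},
		{IP: "10.0.2.10", Topology: map[string]string{"topology.kubernetes.io/zone": "us-east1-c", "topology.kubernetes.io/region": "us-east1"}},
	}
	keys := []string{"topology.kubernetes.io/zone", "topology.kubernetes.io/region"}
	// Only the same-zone endpoint (10.0.1.10) is returned.
	fmt.Println(filterByTopology(keys, node, endpoints))
}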

This is great! Traffic remains “close” to where it originated, avoiding unnecessary cross-zone latency.

While topologyKeys is available as alpha in 1.17, it hasn’t yet graduated to the next stage because the first pass at building topology-aware routing surfaced many challenges and scalability issues.

Each node in the cluster now has to manage a potentially complex ruleset for every service, and those rules require more frequent updating. In clusters with thousands of pods or thousands of nodes, this solution quickly becomes untenable.

Another pain point with this implementation depends on how your application is distributed across a zone or region: it is quite possible that a single pod ends up receiving ALL of the traffic for that zone or region. The preference list does not account for whether the pods on the receiving end can actually handle that load, which could potentially cause an outage.

These problems have led the Kubernetes Network Special Interest Group (SIG Network) to do a full re-evaluation of how to approach the Topology Awareness implementation.

What’s Planned for Topology Aware Routing?
The new design is intended to handle the routing of services automatically, load-balancing each service across a minimum number of the closest possible endpoints. It does this by applying an algorithm driven by two of the standard topology labels, topology.kubernetes.io/region and topology.kubernetes.io/zone, without requiring you to specify them via topologyKeys at the service level.

This algorithm works by establishing a dynamic threshold for a service: it calculates an expected number of endpoints per zone, then builds a list of available endpoints for that service, prioritizing the ones in the same zone. If there are not enough same-zone endpoints to meet the expected number, it adds endpoints from other zones until the expected number is reached. This subset of endpoints is then passed to the nodes within that zone.
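The following Go sketch illustrates that subsetting logic. It is not the actual Kubernetes implementation; in particular, deriving the expected count from a per-zone weight (such as each zone's node count) is an assumption made purely for illustration:

package main

import (
	"fmt"
	"math"
)

// zoneSubset returns the endpoints that nodes in the given zone should use:
// prefer the endpoints already in that zone, then top up from other zones
// until the zone's expected share of endpoints is reached.
func zoneSubset(zone string, endpointsByZone map[string][]string, zoneWeight map[string]float64) []string {
	total := 0
	totalWeight := 0.0
	for z, eps := range endpointsByZone {
		total += len(eps)
		totalWeight += zoneWeight[z]
	}
	// Expected number of endpoints for this zone, proportional to its weight
	// (an assumption made for this sketch).
	expected := int(math.Ceil(float64(total) * zoneWeight[zone] / totalWeight))

	// Start with the endpoints that are already local to this zone.
	subset := append([]string{}, endpointsByZone[zone]...)

	// If that falls short, borrow endpoints from other zones.
	for z, eps := range endpointsByZone {
		if z == zone {
			continue
		}
		for _, ep := range eps {
			if len(subset) >= expected {
				return subset
			}
			subset = append(subset, ep)
		}
	}
	return subset
}

func main() {
	endpointsByZone := map[string][]string{
		"us-east1-b": {"10.0.1.10", "10.0.1.11", "10.0.1.12"},
		"us-east1-c": {"10.0.2.10"},
	}
	// Hypothetical weights, e.g. proportional to each zone's node count.
	weights := map[string]float64{"us-east1-b": 2, "us-east1-c": 2}
	// us-east1-c has only one local endpoint, so it borrows one from us-east1-b.
	fmt.Println(zoneSubset("us-east1-c", endpointsByZone, weights))
}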

These nodes no longer have to maintain the complex set of rules they had in the first iteration; they now manage only the small subset of endpoints for each service. This is less flexible than its predecessor, but it drastically reduces the performance overhead while still covering the majority of use cases. A big win for everyone.

This new approach is slated to land as alpha in the 1.21 release in the first part of 2021. If Topology Aware Routing would be of value to you, please consider taking the time to test it when it becomes available. Early feedback is highly appreciated and helps shape the direction of the feature.

Until then, if you’d like to learn more about Service Topology, EndpointSlices, and the various algorithms that have been evaluated for service routing, check out Rob Scott’s presentation, Improving Network Efficiency with Topology Aware Routing, on November 19th at KubeCon + CloudNativeCon North America.



By Bob Killen, Program Manager – Google Open Source Programs Office