How Google uses Census internally

Wednesday, March 7, 2018

This post is the first in a series about OpenCensus, a set of open source instrumentation libraries based on what we use inside Google. This series will cover the benefits of OpenCensus for developers and vendors, Google’s interest in open sourcing instrumentation tools, how to get started with OpenCensus, and our long-term vision.

If you’re new to distributed tracing and metrics, we recommend Adrian Cole’s excellent talk on the subject: Observability Three Ways.

Gaining Observability into Planet-Scale Computing

Google adopted or invented new technologies, including distributed tracing (Dapper) and metrics processing, in order to operate some of the world’s largest web services. However, building analysis systems didn’t solve the difficult problem of instrumenting and extracting data from production services. This is what Census was created to do.

The Census project provides uniform instrumentation across most Google services, capturing trace spans, app-level metrics, and other metadata like log correlations from production applications. One of the biggest benefits of uniform instrumentation to developers inside of Google is that it’s almost entirely automatic: any service that uses gRPC automatically collects and exports basic traces and metrics.
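To give a feel for how little setup that implies, here is a minimal sketch of the same idea using the open source opencensus-go gRPC plugin; the port and the omitted service registration are placeholders, not part of any Google-internal code:

```go
package main

import (
	"log"
	"net"

	"go.opencensus.io/plugin/ocgrpc"
	"go.opencensus.io/stats/view"
	"google.golang.org/grpc"
)

func main() {
	// Register the default gRPC server views so per-RPC latency,
	// byte-count, and request-count metrics are aggregated.
	if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
		log.Fatalf("failed to register gRPC views: %v", err)
	}

	// Attaching the OpenCensus stats handler is the only change the
	// server needs; traces and basic metrics are collected per RPC.
	srv := grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{}))

	lis, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	// Service registration (e.g. pb.RegisterFooServer(srv, ...)) would go here.
	log.Fatal(srv.Serve(lis))
}
```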

OpenCensus offers these capabilities to developers everywhere. Today we’re sharing how we use distributed tracing and metrics inside of Google.

Incident Management

When latency problems or new errors crop up in a highly distributed environment, visibility into what’s happening is critical. For example, when the latency of a service crosses expected boundaries, we can view distributed traces in Dapper to find where things are slowing down. Or when a request is returning an error, we can look at the chain of calls that led to the error and examine the metadata captured during a trace (typically logs or trace annotations). This is effectively a bigger stack trace. In rare cases, we enable custom trigger-based sampling, which allows us to focus on specific kinds of requests.
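As a rough illustration of spans, annotations, and sampling with the open source opencensus-go trace API, here is a minimal sketch; the handler, span names, and sampling rate are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	"go.opencensus.io/trace"
)

// handleRequest is a hypothetical handler: the spans record where time is
// spent, and the annotation acts like a log line correlated with the trace.
func handleRequest(ctx context.Context, userID string) {
	ctx, span := trace.StartSpan(ctx, "frontend.HandleRequest")
	defer span.End()

	if err := callBackend(ctx, userID); err != nil {
		span.Annotate([]trace.Attribute{
			trace.StringAttribute("user", userID),
		}, "backend call failed")
		span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
	}
}

func callBackend(ctx context.Context, userID string) error {
	_, span := trace.StartSpan(ctx, "backend.Lookup")
	defer span.End()
	return fmt.Errorf("example failure for %s", userID)
}

func main() {
	// Sample a small fraction of requests by default; tracing stays cheap
	// because most requests are never recorded.
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.ProbabilitySampler(1e-4)})
	handleRequest(context.Background(), "user-123")
}
```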

Once we know there’s a production issue, we can use Census data to determine the regions, services, and scope (one customer vs. many) of a given problem. We use service-specific diagnostics pages, called “z-pages,” to monitor problems and the results of any solutions we deploy. These pages are hosted locally on each service and provide a firehose view of recent requests, stats, and other performance-related information.
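OpenCensus includes a z-pages package as well. A minimal sketch in Go of serving the /rpcz and /tracez pages on a local diagnostics port (the address is a placeholder):

```go
package main

import (
	"log"
	"net/http"

	"go.opencensus.io/zpages"
)

func main() {
	// Mount the z-page handlers (/debug/rpcz and /debug/tracez) on a
	// local diagnostics port. Each process serves its own recent
	// request stats and sampled traces.
	mux := http.NewServeMux()
	zpages.Handle(mux, "/debug")
	log.Fatal(http.ListenAndServe("127.0.0.1:8081", mux))
}
```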

Performance Optimization

At Google’s scale, we need to be able to instrument and attribute costs for services. We use Census to help us answer questions like:
  • How much CPU time does my query consume?
  • Does my feature consume more storage resources than before?
  • What is the cost of a particular user operation at a particular layer of the stack?
  • What is the total cost of a particular user operation across all layers of the stack?
We’re obsessed with reducing the tail latency of all services, so we’ve built sophisticated analysis systems that process traces and metrics captured by Census to identify regressions and other anomalies.
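Cost attribution like this comes down to recording measurements in a context that carries tags for the dimensions you care about. A minimal sketch with the open source opencensus-go stats and tag APIs; the measure name, tag keys, and values are hypothetical:

```go
package main

import (
	"context"
	"log"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

// Hypothetical measure and tag keys: bytes of storage consumed per
// operation, broken down by feature and by layer of the stack.
var (
	storageBytes  = stats.Int64("example.com/measures/storage_bytes", "Bytes written", stats.UnitBytes)
	keyFeature, _ = tag.NewKey("feature")
	keyLayer, _   = tag.NewKey("layer")
)

func main() {
	// The view aggregates raw measurements; the tag keys become the
	// breakdown dimensions, so cost can be attributed per feature and layer.
	if err := view.Register(&view.View{
		Name:        "example.com/views/storage_bytes",
		Description: "Storage bytes by feature and layer",
		Measure:     storageBytes,
		TagKeys:     []tag.Key{keyFeature, keyLayer},
		Aggregation: view.Sum(),
	}); err != nil {
		log.Fatal(err)
	}

	// Record a measurement in a context tagged with the feature and layer.
	ctx, _ := tag.New(context.Background(),
		tag.Insert(keyFeature, "photo-upload"),
		tag.Insert(keyLayer, "storage"),
	)
	stats.Record(ctx, storageBytes.M(4096))
}
```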

Quality of Service

Google also improves performance dynamically depending on the source and type of traffic. Using Census tags, we can direct traffic to more appropriate shards, or apply techniques like load shedding and rate limiting.
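Because tags propagate with the request context, a downstream service can read them and make quality-of-service decisions. This is a hypothetical sketch using the opencensus-go tag API; the tag key, values, and shedding policy are illustrative only:

```go
package main

import (
	"context"
	"fmt"

	"go.opencensus.io/tag"
)

// keyTrafficClass is a hypothetical tag set at the edge (e.g. "batch" vs.
// "interactive") and propagated with the request context.
var keyTrafficClass, _ = tag.NewKey("traffic_class")

// shouldShed sketches a load-shedding decision driven by the propagated tag:
// when overloaded, drop batch traffic before interactive traffic.
func shouldShed(ctx context.Context, overloaded bool) bool {
	if m := tag.FromContext(ctx); m != nil {
		if class, ok := m.Value(keyTrafficClass); ok && class == "batch" {
			return overloaded
		}
	}
	return false
}

func main() {
	ctx, _ := tag.New(context.Background(), tag.Insert(keyTrafficClass, "batch"))
	fmt.Println(shouldShed(ctx, true)) // true: batch traffic is shed under load
}
```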

Next week we’ll discuss Google’s motivations for open sourcing Census, then we’ll shift the focus back onto the open source project itself.

By Pritam Shah and Morgan McLean, Census team