Code Search for Google open source projects

Wednesday, April 1, 2020

We are pleased to launch Code Search for Google open source projects. Code Search is one of Google’s most popular internal tools, and now we have a version (same binary, different flags) targeted to open source communities.

Googlers use Code Search every day to help understand the codebase: they search for half-remembered functions and usages; jump through the codebase to figure out what calls the function they are viewing; and try to identify when and why a particular line of code changed.

The Code Search tool gives a rich code browsing experience. For example, the blame button shows which user last changed each line and you can display history on the same page as the file contents. In addition, it supports a powerful search language and, for some repositories, cross-references.

Suggest-as-you-type in any search box annotates suggestions with the type of code object, the repository and the path, helping users find what they want faster.


The search language supports regular expressions and a number of helpful search atoms. A user looking for a function foo in Go code, instead of sifting through thousands of results that merely contain foo, can search for lang:go function:foo to limit results to Go files where foo is a function rather than, say, a struct or a word in a comment.

Another example is finding a file using only part of its name. The query file:KytheURI.java goes directly to the file, since there is only one such file.
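Putting these together, a few illustrative queries built from the atoms above:

    foo                      matches of foo anywhere in the code
    lang:go function:foo     foo defined as a function, restricted to Go files
    file:KytheURI.java       jump to a file by (part of) its name
    foo(bar|baz)             regular expressions are supported as well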

See the quick reference for more information.

In addition to text search, some of the open source repositories have cross-references powered by Kythe. Kythe is a Google open source project that includes tools to help understand code. Project owners instrument a build of their repository to output compilation information for Kythe. Kythe tools convert this data to a graph. This graph connects definitions to declarations and code references to the abstract objects they represent (described by a graph schema). Google then runs an internal pipeline that combines these graphs for the different languages, prunes unnecessary pieces, and optimizes the result for serving cross-references. The whole process runs several times per day to keep the data fresh.

Open source communities use a broader set of build systems than Google. In order to support cross-references, Kythe added drop-in support for Bazel, CMake, Maven, and Go. Projects using other build systems can use Kythe-provided wrappers for clang and javac to instrument their builds; these are used by Chromium and Android AOSP to provide compilation information for Kythe.

Because Kythe is based on the build, Kythe cross-references include links to files generated as part of the build process, such as Java files generated for AutoValues (example here) or protos. For repositories where cross-references are enabled, clicking on a symbol will take you to a definition of that symbol.

Clicking on the definition of a symbol will open a cross-reference panel, showing all the places where that symbol is referenced. For example, clicking on toVName below, we can see the places that reference this method. One of the callers is parseVName, and clicking on that shows the callers of that method.



At this time, we only provide search on a selected set of repositories, but we plan to add more over time.

We are also investing in making the application keyboard-navigable and usable with a screen reader.

We hope you find this tool useful!

By Kris Hildrum, Code Search Team

Kpt: Packaging up your Kubernetes configuration with git and YAML since 2014

Tuesday, March 31, 2020

Kubernetes configuration manifests have become an industry standard for deploying both custom and off-the-shelf applications (as well as for infrastructure). Manifests are combined into bundles to create higher-level deployable systems as well as reusable blueprints (such as a product offering, off-the-shelf software, or a customizable starting point for a new application).

However, most teams lack the expertise or desire to create bespoke bundles of configuration from scratch and instead either: 1) fork them from another bundle, or 2) use a packaging solution that generates manifests from code.

Teams quickly discover they need to customize, validate, audit, and re-publish their forked or generated bundles for their environment. Most packaging solutions to date are tightly coupled to some format written as code (e.g., templates or DSLs). This introduces a number of challenges when trying to extend, build on top of, or integrate them with other systems. For example, how does one update a forked template from upstream, or how does one apply custom validation?

Packaging is the foundation of building reusable components, but it also incurs a productivity tax on the users of those components.

Today we’d like to introduce kpt, an OSS tool for Kubernetes packaging, which uses a standard format to bundle, publish, customize, update, and apply configuration manifests.

Kpt is built around an “as data” architecture: it bundles Kubernetes resource configuration in a format that both humans and machines can read and write (a minimal sketch of a package follows the list below). The ability for tools to read and write the package contents using standardized data structures enables powerful new capabilities:
  • Any existing directory in a Git repo with configuration files can be used as a kpt package.
  • Packages can be arbitrarily customized and later pull in updates from upstream by merging them.
  • Tools and automation can perform high-level operations by transforming and validating package data on behalf of users or systems.
  • Organizations can develop their own tools and automation which operate against the package data.
  • Existing tools and automation that work with resource configuration “just work” with kpt.
  • Existing solutions that generate configuration (e.g. from templates or DSLs) can emit kpt packages which enable the above capabilities for them.
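
To make “as data” concrete, here is a minimal sketch of what a kpt package looks like on disk: an ordinary directory of Kubernetes YAML plus a Kptfile that records where the package came from. The repository URL and version are hypothetical, and the exact Kptfile schema may vary between kpt releases.

    app/
      Kptfile          # package metadata maintained by kpt
      deploy.yaml      # plain Kubernetes resource configuration

    # Kptfile (illustrative)
    apiVersion: kpt.dev/v1alpha1
    kind: Kptfile
    metadata:
      name: app
    upstream:
      type: git
      git:
        repo: https://github.com/example-org/blueprints   # hypothetical
        directory: /app
        ref: v0.1.0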

Example workflow with kpt

Now that we’ve established the benefits of using kpt to manage your packages of Kubernetes config, let’s walk through how an enterprise might leverage kpt to package, share, and use their best practices for Kubernetes across the organization.


First, a team within the organization may build and contribute to a repository of best practices (pictured in blue) for managing a certain type of application, for example a microservice (called “app”). As the best practices are developed within an organization, downstream teams will want to consume and modify configuration blueprints based on them. These blueprints provide a blessed starting point which adheres to organization policies and conventions.

The downstream team will get their own copy of a package by downloading it to their local filesystem (pictured in red) using kpt pkg get. This clones the git subdirectory, recording upstream metadata so that it can be updated later.
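
For example, assuming a hypothetical blueprint repository, fetching the “app” package might look like this:

    # Clone the app subdirectory at tag v0.1.0 into ./app
    # (repository URL and tag are hypothetical)
    kpt pkg get https://github.com/example-org/blueprints.git/app@v0.1.0 app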

They may decide to update the number of replicas to fit their scaling requirements or may need to alter part of the image field to be the image name for their app. They can directly modify the configuration using a text editor (as would be done before). Alternatively, the package may define setters, allowing fields to be set programmatically using kpt cfg set. Setters streamline workflows by providing user and automation friendly commands to perform common operations.
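
Assuming the package defines a setter named replicas (setter names are chosen by the package author), that workflow might look like:

    # Discover which setters the package exposes
    kpt cfg list-setters app/
    # Update the replica count; kpt edits the matching fields in place
    kpt cfg set app/ replicas 3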

Once the modifications have been made to the local filesystem, the team will commit and push their package to an app repository owned by them. From there, a CI/CD pipeline will kick off and the deployment process will begin. As a final customization before the package is deployed to the cluster, the CI/CD pipeline will inject the digest of the image it just built into the image field (using kpt cfg set). When the image digest has been set, the CI/CD pipeline can send the manifests to the cluster using kpt live apply. Kpt live operates like kubectl apply, providing additional functionality to prune resources deleted from the configuration and block on rollout completion (reporting status of the rollout back to the user).
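
A sketch of those final pipeline steps, again assuming a hypothetical image setter and a digest produced by the build:

    # Inject the digest of the image the pipeline just built
    kpt cfg set app/ image gcr.io/example-project/app@sha256:<digest>
    # Apply to the cluster; kpt live apply also prunes resources that were
    # deleted from the configuration and can block on rollout completion
    kpt live apply app/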

Now that we’ve walked through how you might use kpt in your organization, we’d love it if you’d try it out, read the docs, or contribute.

One more thing

There’s still a lot to the story we didn’t cover here. Expect to hear more from us about:
  • Using kpt with GitOps
  • Building custom logic with functions
  • Writing effective blueprints with kpt and kustomize

By Phillip Wittrock, Software Engineer and Vic Iglesias, Cloud Solutions Architect

OpenTelemetry is now beta!

Monday, March 30, 2020

OpenTelemetry and OpenCensus have been a critical part of our goal of making platforms like Kubernetes more observable and more manageable. This has been a multi-year journey for us, from creating OpenCensus and growing it into a core part of major web services’ observability stack, to our announcement of OpenTelemetry last year and the rapid growth of the OpenTelemetry community.

Beta is a big milestone for OpenTelemetry, as developers can now use the SDKs, integrations, and Collector to capture distributed traces and metrics from their applications and send them to backends like Prometheus, Jaeger, Cloud Monitoring, Cloud Trace, and others for analysis. This is a great time to try out OpenTelemetry and get involved in the observability community, whether you’re looking to improve visibility into your production services, give your users performance data from client libraries that you maintain, or join a rapidly growing open source project!

To learn more, please read our official community announcement, which is copied below:

Co-authored by maintainers, community contributors, and members of the OpenTelemetry governance committee.

OpenTelemetry has just begun its first wave of beta releases, starting with the Collector and the Erlang, Go, Java, JavaScript, and Python SDKs, followed by the .NET SDK and Java auto-instrumentation agent. This means that you can begin integrating OpenTelemetry into your applications and client libraries to capture app-level metrics and distributed traces.

If you’re not already familiar with OpenTelemetry, the project provides a single set of language-specific APIs, SDKs, agents, and other components that you can use to collect distributed traces, metrics, and related metadata from your applications. In addition to its core capabilities, much of OpenTelemetry’s utility comes from integrations for HTTP and RPC libraries, storage clients, etc. that allow developers to capture critical observability data from their applications with almost zero effort. After capturing these signals, each OpenTelemetry component can export them to your backends of choice, including Prometheus, Jaeger, Zipkin, Azure Monitor, Dynatrace, Google Cloud Monitoring + Trace, Honeycomb, Lightstep, New Relic, and Splunk.
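
As a small illustration, here is a minimal tracing setup using the OpenTelemetry Python SDK; note that it uses present-day API names, which may differ from the beta-era releases this announcement describes, and simply prints finished spans to the console:

    # pip install opentelemetry-api opentelemetry-sdk
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Wire up the SDK: export every finished span to stdout
    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        SimpleSpanProcessor(ConsoleSpanExporter())
    )

    tracer = trace.get_tracer(__name__)

    # A parent span with a nested child span
    with tracer.start_as_current_span("parent-operation"):
        with tracer.start_as_current_span("child-operation"):
            print("doing traced work")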

This first beta release includes:
  • APIs and SDKs for Erlang, Go, Java, JavaScript, and Python, which include the interfaces and implementations that you need to define and create distributed traces and metrics, manage sampling and context propagation, etc. The .NET API + SDK will follow shortly.
  • Language-specific API integrations for at least one popular HTTP framework, gRPC, and at least one popular storage client, which can be enabled with one line of code, and will automatically capture relevant traces and metrics and handle context propagation.
  • Language-specific exporters that allow SDKs to send captured traces and metrics to any supported backends.
  • The OpenTelemetry Collector, which can receive data from OpenTelemetry SDKs and other sources, and then export this telemetry to any supported backend (see the configuration sketch after this list).
  • Auto-Instrumentation for Java that captures telemetry from 47 Java libraries and frameworks without requiring any modification to your application.
  • Documentation for each component, including getting-started guides.
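
To make the Collector item above concrete, here is a minimal sketch of a Collector pipeline configuration: telemetry flows from receivers to exporters. It reflects the present-day configuration format with an OTLP receiver and logging exporter; details may differ from the beta releases.

    # collector-config.yaml (illustrative)
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      logging:    # write received telemetry to the Collector's own log
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]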
As these and subsequent OpenTelemetry components enter beta (requirements and release plan), we are declaring that they are ready to start integrating with. This means that service developers can begin to include OpenTelemetry in their applications and that maintainers of storage, RPC, etc. clients should start testing the OpenTelemetry APIs to provide better observability to their users.

However, this does come with some caveats:
  • Each OpenTelemetry component will likely undergo several beta releases in the coming weeks — this is simply the first.
  • While functional, beta components have not gone through thorough testing or benchmarking and they are not intended for production workloads.
  • While we aim to avoid any major changes to the OpenTelemetry APIs between beta and GA release candidates, we cannot guarantee that there will not be any changes during this period.
  • Some functionality is still missing from the first beta and will be added in subsequent releases; this is documented in each component’s GitHub repository.
In the coming weeks, you can expect additional beta releases from the first wave of OpenTelemetry components and others. In particular, we expect the API + SDK for .NET and the Java auto-instrumentation agent to be ready soon. Eventually, components will reach a level of maturity and testing where we’ll feel confident in naming them a release candidate (RC), after which we will not make any breaking changes to the APIs for that component.

This beta milestone is a huge accomplishment for the OpenTelemetry community, and every contributor should be proud of the fact that OpenTelemetry is now working and ready to integrate with. This is a great opportunity for the maintainers of client libraries to begin integrating with the OpenTelemetry APIs, for end-users to start integrating it into their services, and for anyone interested in contributing to join our rapidly growing community by joining our mailing lists, Gitter chats, and the monthly community meeting!

By Morgan McLean, Product Manager