Posts from 2021

Basis Universal Textures - Khronos Ratification and <model-viewer> Support

Thursday, February 18, 2021

In 2019, Google partnered with Binomial to open source the Basis Universal texture codec, with the goal of making high-quality textures more efficient for network transmission and graphics processing unit (GPU) memory usage. The Basis Universal texture format is 6-8 times smaller than JPEG on the GPU, yet similar in storage size to JPEG—making it a great alternative to current GPU compression methods, which are inefficient and don't work across platforms. The format is intended for a variety of use cases: games, virtual and augmented reality, maps, photos, small videos, and more.

Over the past year, several exciting developments have made Basis Universal more useful. A new high-quality mode was introduced, allowing the codec to use the highest-quality formats modern GPUs support, finally bringing the web up to modern GPU texture standards—with cross-platform support. Additionally, the Basis encoder now has an option to build a WebAssembly version, allowing innovative web applications to output to the super-compressed format. Lastly, the Khronos Group has announced and ratified the Basis Universal texture extension to the glTF format, allowing compressed assets to be shipped and displayed everywhere in a KTX 2.0 container. This will have a profound impact on how models are distributed via the web and will advance applications like eCommerce, making it easy to take advantage of 3D content on any platform.
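
As a rough sketch of how these features surface on the command line (assuming the basisu encoder built from the GitHub repo; the -uastc and -ktx2 flags are our reading of its options, so treat them as assumptions):

# Default ETC1S mode: maximum compression (assumed basisu CLI).
basisu wood.png

# Assumed flags: new high-quality UASTC mode, written to a KTX 2.0
# container for use with glTF.
basisu -uastc -ktx2 wood.png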

In addition to these new features, developers worldwide have been making it easier to take advantage of Basis Universal. <model-viewer> has just added support for glTF files with universal textures, making it as easy as two lines of JavaScript to have beautiful, interactive 3D models on your page. In the coming months, the <model-viewer> editor will add support for encoding to universal textures. Additionally, 3D engines like Three.js, Babylon.js, Godot, Archilogic, and PlayCanvas have added support for Basis Universal, with more engine support coming. Basis Universal is already in applications many use every day.
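
To make the "two lines" concrete, here is a minimal sketch of such a page (the unpkg CDN path and model file name are illustrative assumptions, not from the post):

# Write a minimal page that embeds a glTF model with universal textures
# (CDN URL and model name are placeholders).
cat > index.html <<'EOF'
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
<model-viewer src="my-model.glb" camera-controls auto-rotate></model-viewer>
EOF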

We look forward to seeing Basis Universal adoption soar as it has never been easier to distribute 3D assets. Check out the code and demo on GitHub, let us know what you think, and how you plan to use it!

By Stephanie Hurlburt, Binomial and Jamieson Brettle, Chrome Media

A new resource for coordinated vulnerability disclosure in open source projects

Wednesday, February 17, 2021

One of the joys of open source is the freedom it gives you to create: contributors get to build the projects they want how they want; it’s up to them. Of course, blank slates don’t come with directions, which makes more niche areas of software development and management a challenge for contributors. Vulnerability disclosure is one of those areas.

Google doesn't restrict its open source work to one team; instead, we teach any and all Googlers about open source: how to release, how to contribute, how to use, and, in general, how to be a good open source citizen. This approach scales well and gives people the knowledge to be lifelong open source community members. It includes sharing knowledge about open source security, a topic that isn't new but one that is finally getting the industry attention it deserves.

The intimidating blank slate and a lack of time for contributors to develop policies means many open source projects have no documented vulnerability reporting information, much less a plan for how to handle and disclose a reported vulnerability. We recently updated our guidance for coordinated vulnerability disclosure in open source projects that come out of Google and have published it in hopes that other projects will find this helpful for their project security practices.

The new guide has three sections.

It's a myth that if a project hasn't received a vulnerability report yet, it doesn't need a disclosure policy. It's also a myth that you need to be "a security person" to implement a vulnerability disclosure policy. A successful coordinated vulnerability disclosure frequently comes down to good process management and clear, thoughtful communication. You don't have to be an expert in operating system capabilities to understand how a reporter manipulated them to cause an account privilege escalation through your project. A predetermined policy, some templates, and a well-executed runbook will take you through discovering, patching, and disclosing most kinds of vulnerabilities.

Coordinated Vulnerability Disclosure in Open Source Projects

Vulnerability disclosure is part of the "Fix" stage of the Know, Prevent, Fix framework we recently proposed for open source vulnerability management. In today's industry, with all of our supply chain dependencies, improving security in even one open source project can have a multiplying effect. Vulnerability disclosure is a key aspect of that overall security posture. Our hope is that projects will take this guide, remix and adapt it to their projects, and share their changes with others so we can collectively increase our open source security.

By Anne Bertucio, Google Open Source

Updates on the Tsunami Security Scanning Engine

Wednesday, February 10, 2021


Several months ago, we open sourced the Tsunami security scanner: a false-positive-free infrastructure scanning engine focusing on high-severity, actively exploited vulnerabilities. Today, we are releasing the first major update for Tsunami.

In the last few months, we have done a lot of work in the background to prepare Tsunami for the next step and focused on the following:
  • Vulnerability research: In order to keep Tsunami's detection capabilities up to date, we kicked off various projects to research the exploitation of vulnerabilities in the wild. We will soon publish more information about our initiatives in this space—stay tuned.
  • New detection capabilities: Based on our research, we have added 15 new detector plugins to Tsunami for actively exploited vulnerabilities.
  • Continuous Integration pipeline for our open-source builds: We set up a CI/CD pipeline that automatically mirrors and tests changes between our internal version management system and the open source repository. This will enable us to easily merge internal and external contributions.
  • Test bed for end-to-end testing: This summer we hosted an intern (Yuxin Wu), who built and open sourced a test bed for Tsunami. The test bed can automatically deploy arbitrary versions of off-the-shelf software based on Docker images. We are using it to automatically check whether a Tsunami detector works for all vulnerable versions of a given piece of software and keeps functioning for future versions.
  • Web application fingerprinting: We added web application fingerprinting capabilities to Tsunami. Tsunami now detects popular off-the-shelf web applications. This information can be used for more precise and less intrusive vulnerability verification. Furthermore, it enables security teams to build a software inventory based on Tsunami scans. We'll keep working on refining our fingerprinting approach and extending our fingerprinting database.

Today, we are releasing the new detectors and the fingerprinting capabilities. You can find the new detectors and the web fingerprinter in our plugin repository.

If you are adopting Tsunami within your organization and if you have questions or would like to contribute, feel free to contact us at any time at tsunami-scanner@google.com.

By Guoli Ma, Claudio Criscione & Sebastian Lekies, Vulnerability Management Team

The 2021 Season of Docs application for organizations is open!

Tuesday, February 9, 2021

Season of docs icon

Google Open Source is delighted to announce Season of Docs 2021!

The 2019 Season of Docs brought together open source organizations and technical writers to create 44 successful documentation projects. In 2020, we had 64 successful standard-length technical writing projects and are still awaiting long-running project results.

In 2021, the Season of Docs program will continue to support better documentation in open source and provide opportunities for skilled technical writers to gain open source experience. In addition, building on what we’ve learned from the successful 2019 and 2020 projects, we’re expanding our focus to include learning about effective metrics for evaluating open source documentation.

What are the 2021 program changes?

Season of Docs 2021 will allow open source organizations to apply for a grant based on their documentation needs. If selected, open source organizations will use their grant to hire a technical writer directly to complete their documentation project. Organizations will have up to six months to complete their documentation project. Keep reading for more information about the organization application or visit the Season of Docs site.

Technical writers interested in working with accepted open source organizations will be able to share their contact information via the Season of Docs GitHub repository; or they may submit proposals directly to the organizations and will not need to submit a formal application through Season of Docs.

Participating organizations will help broaden our understanding of effective documentation practices and metrics in open source by submitting a final case study upon completion of the program. The project case study will outline the problem the documentation project was intended to solve, what metrics were used to judge the effectiveness of the documentation, and what the organization learned for the future. All the project case studies will be published on the Season of Docs site at the end of the program.

How does it work?

  • February 9 – March 26: Open source organizations apply to take part in Season of Docs.
  • April 16: Google publishes the list of accepted organizations, along with their project proposals; doc development can begin.
  • June 16: Organization administrators begin to submit monthly evaluations to report on the status of their project.
  • November 30: Organization administrators submit their case study and final project evaluation.
  • December 14: Google publishes the 2021 case studies and aggregate project data.
  • May 2, 2022: Organizations begin to participate in post-program followup surveys.

See the timeline for details.

Organization applications

Organization applications are now open! The deadline to apply is March 26, 2021 at 18:00 UTC.

To apply, first read the guidelines for creating an organization application on the Season of Docs website.

Take a look at the examples of project ideas, then create a project proposal based on your open source project’s actual documentation needs. Your goal is to attract technical writers to your organization, making them feel comfortable about approaching the organization and excited about what they can achieve.

Organizations can submit their applications here: http://goo.gle/3qVxArQ. Organization applications close on March 26th at 18:00 UTC.

Technical writers interested in participating in the 2021 Season of Docs should read our guide for technical writers on the Season of Docs website.

If you have any questions about the program, please email us at season-of-docs@google.com.

Join us

Explore the Season of Docs website at g.co/seasonofdocs to learn more about participating in the program. Use our logo and other promotional resources to spread the word. Check out the timeline and FAQ, and get ready to apply!

By Kassandra Dhillon and Erin McKean, Google Open Source Programs Office

Google joins the Rust Foundation

Monday, February 8, 2021

Droidstacean (Rust mascot Ferris, with Android mascot colors/aspects) by Ivan Lozano, based on a design by Karen Rustad Tölva.
Rust is a systems programming language that combines low-level control over performance with modern language features and a focus on memory safety. Memory safety has been an enduring challenge for software developers, particularly those working on systems programs. Google has begun using Rust in settings where memory safety and performance are key considerations, including in key Android systems.

The Rust Core Team recently completed its work to build a new home for Rust: the Rust Foundation. Building on Google's longstanding investment in C/C++ and its compilers and toolchains, we are delighted to announce our membership in the Rust Foundation. We look forward to participating more in the Rust community, in particular working across the industry on key issues including interoperability with C++, coordinating security reviews, decreasing the costs of crate updates, and continuing to grow our investments in existing Rust projects.

Memory safety security defects frequently threaten device safety, especially for applications and operating systems. For example, on Android, we’ve found that more than half of the security vulnerabilities we addressed in 2019 resulted from memory safety bugs. And this is despite significant efforts from Google and other contributors to the Android Open Source Project to either invest in or invent a variety of technologies, including AddressSanitizer, improved memory allocators, and numerous fuzzers and other code checking tools. Rust has proven effective at providing an additional layer of protection beyond even these tools in many other settings, including browsers, games, and even key libraries. We are excited to expand both our usage of Rust at Google and our contributions to the Rust Foundation and Rust ecosystem.

Today, some examples of projects where Google is either already using Rust or contributing to the Rust ecosystem include:
  • Operating system modules in Android, including Bluetooth and Keystore 2.0
  • Low-level projects, such as the crosvm virtual machine monitor (an alternative to QEMU) and drivers used in ChromeOS
  • Contributing to open source projects that we use and that use Rust, such as the Mercurial source code control system
  • Firmware for FIDO security key support
And there are many additional projects evaluating the use of Rust for new libraries or products.
We are also excited to support key Rust projects and their maintainers, such as:
  • Adding Rust code to curl
  • Working with ISRG to add a Rust TLS module to the Apache HTTP Server Project
We can’t wait to work across the industry to contribute to and support existing projects and libraries as well as help build out key areas such as C++ interoperability and security review.

By Lars Bergstrom, Director of Engineering, Android Platform Programming Languages

Writing fuzz tests with ease using Bazel

We are announcing Bazel support for developing and testing fuzz tests, with OSS-Fuzz integration, through the new rules_fuzzing Bazel library.

Fuzzing is an effective, well-known testing technique for finding security and stability bugs in software. But writing and testing fuzz tests can be tedious. Developers typically need to:
  • Implement a fuzz driver function, which exercises the API under test;
  • Build the code with the proper instrumentation (such as AddressSanitizer);
  • Link it with one of the available fuzzing engine libraries (libFuzzer, AFL++, Honggfuzz, etc.) that provide the core test generation logic;
  • Run the fuzz test binary with the right set of flags (e.g., to specify corpora or dictionaries);
  • Package the fuzz test and its resources for consumption by fuzzing infrastructures, such as OSS-Fuzz.
Unfortunately, build systems don't traditionally offer any support beyond the core primitives of producing executables, so projects adopting fuzzing often end up reimplementing fuzz test recipes.
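
For a sense of that boilerplate, a hand-rolled libFuzzer recipe looks roughly like this sketch (the file and dictionary names are illustrative):

# Build with ASan plus the libFuzzer engine in one step (illustrative names).
clang++ -fsanitize=address,fuzzer my_fuzz_test.cc -o my_fuzz_test

# Run against a seed corpus directory, with a dictionary.
./my_fuzz_test -dict=my.dict corpus/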

Bazel is a versatile and extensible build system, focused on scalable, reliable, and reproducible builds. Originally designed to scale to Google's entire monolithic repository, it now underpins large enterprises and key open source Internet infrastructure projects.

We are pleased to announce that projects using Bazel can get advanced fuzzing support through the new rules_fuzzing extension library. The new fuzzing rules take care of all the boilerplate needed to build and run fuzz tests. Developers simply write the fuzz driver code and define a build target for it (example driver and target for RE2). Fuzz tests can be built and run using a number of fuzzing engines provided out-of-the-box, such as libFuzzer and Honggfuzz, as well as sanitizers. The rule library also provides the ability to define additional fuzzing engines.

You can integrate the fuzzing library with around 10 LOC in your Bazel WORKSPACE file. Defining a fuzz test in Bazel is as easy as writing the following in your BUILD file:

load("@rules_fuzzing//fuzzing:cc_deps.bzl", "cc_fuzz_test")

cc_fuzz_test(
    name = "my_fuzz_test",
    srcs = ["my_fuzz_test.cc"],
    deps = [":my_library"],
)


You can easily test the fuzzer locally by invoking its launcher:

$ bazel run --config=asan-libfuzzer //:my_fuzz_test_run

To improve the effectiveness of test case generation, fuzz tests also support seed corpora and dictionaries, through additional rule attributes. They will automatically be validated and included in fuzz test runs. Fuzz tests also serve as regression tests on the seed corpus. For example, you can add previously found and fixed crashes to the corpus and have them tested in your CI workflows:

$ bazel test --config=asan-replay //:my_fuzz_test

The fuzzing rules provide built-in support for OSS-Fuzz, our continuous fuzzing service for open source projects. The OSS-Fuzz support drastically simplifies a project's integration build scripts by automatically packaging the fuzz test and its dependencies using the expected OSS-Fuzz structure.

The Envoy Proxy project is one of the early adopters of the fuzzing rules library. As a large, mature C++ codebase, Envoy had maintained its own custom implementation of fuzzing support for the more than 50 fuzz targets it has written so far. By switching to the new Bazel fuzzing rules, Envoy's fuzz targets automatically gained new features, such as local running and testing tools and support for multiple fuzzing engines. At the same time, Envoy simplified its OSS-Fuzz integration scripts. Moreover, it will automatically gain future functionality (e.g., more effective fuzzing engines, better coverage tracking, improved corpus management) as the Bazel fuzzing rules library evolves.

The Bazel rules for fuzzing draw from Google's experience providing effective fuzzing tools to our internal developers. We hope the new Bazel support for fuzzing will lower the barrier to fuzzing adoption in open source communities, further increasing the security and reliability of many projects. To learn more about integrating the fuzzing rules into your project, take a look at the Getting Started section in the documentation.

By Stefan Bucur, Software Analysis, Asra Ali, Envoy, and Abhishek Arya, OSS-Fuzz – Google

Launching OSV - Better vulnerability triage for open source

Friday, February 5, 2021

Open Source Vulnerabilities logo


We are excited to launch OSV (Open Source Vulnerabilities), our first step towards improving vulnerability triage for developers and consumers of open source software. The goal of OSV is to provide precise data on where a vulnerability was introduced and where it was fixed, thereby helping consumers of open source software accurately identify whether they are impacted and then make security fixes as quickly as possible. We have started OSV with a data set of fuzzing vulnerabilities found by the OSS-Fuzz service. The OSV project evolved from our recent efforts to improve vulnerability management in open source (the "Know, Prevent, Fix" framework).

Vulnerability management can be painful for both consumers and maintainers of open source software, with tedious manual work involved in many cases.

For consumers of open source software, it is often difficult to map a vulnerability, such as a Common Vulnerabilities and Exposures (CVE) entry, to the package versions they are using. This stems from the fact that versioning schemes in existing vulnerability standards (such as Common Platform Enumeration (CPE)) do not map well to actual open source versioning schemes, which are typically versions/tags and commit hashes. The result is missed vulnerabilities that affect downstream consumers.

Similarly, it is time consuming for maintainers to determine an accurate list of affected versions or commits across all their branches for downstream consumers after a vulnerability is fixed, in addition to the process required for publication. Unfortunately, many open source projects, including ones that are critical to modern infrastructure, are under resourced and overworked. Maintainers don't always have the bandwidth to create and publish thorough, accurate information about their vulnerabilities even if they want to.

These challenges result in open source consumers not incorporating important security fixes promptly. OSV aims to:
  1. Reduce the work required by maintainers to publish vulnerabilities, and
  2. Improve the accuracy of vulnerability queries for downstream consumers by providing precise vulnerability metadata in an easy-to-query database (complementing existing vulnerability databases).

Automation

OSV aims to simplify the vulnerability reporting process for an open source package maintainer by accurately determining the list of affected versions and commits. This requires providing the commits that introduced and fixed the bug. If that information is not available, OSV instead requires a reproduction test case and steps to generate an application build, and then performs bisection to find these commits in an automated fashion. OSV takes care of the rest of the analysis to figure out impacted commit ranges (accounting for cherry picks) and versions/tags.
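
This automated bisection is conceptually similar to driving git bisect with the reproduction test case; a hand-rolled equivalent might look like the following sketch (repro.sh is a hypothetical script that builds the package and exits non-zero when the test case still crashes):

# Mark a known-bad and a known-good commit, then let git home in on the
# commit that introduced the bug (commit names are placeholders).
git bisect start <bad-commit> <good-commit>
git bisect run ./repro.sh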

How OSV works


OSV automates the triage workflow for an open source package consumer by providing an API to query for vulnerabilities. A typical OSV workflow for a package consumer looks like the picture above:
  1. A package consumer sends a query to OSV with a package version or commit hash as input.
    curl -X POST -d \
    '{"commit": "6879efc2c1596d11a6a6ad296f80063b558d5e0f"}' \
    'https://api.osv.dev/v1/query?key=$API_KEY'

     curl -X POST -d \
      '{"version": "1.0.0", "package": {"name": "pkg", "ecosystem": "pypi"}}' \
      'https://api.osv.dev/v1/query?key=$API_KEY'
  2. OSV looks up the set of vulnerabilities affecting that particular version and returns a list of vulnerabilities impacting the package. The vulnerability metadata is returned in a machine-readable JSON format.
  3. The package consumer uses this information to either cherry-pick security fixes (based on precise fix metadata) or update to a later version.

Ongoing work

OSV currently provides access to thousands of vulnerabilities from 380+ critical OSS projects integrated with OSS-Fuzz. We are planning to work with open source communities to extend OSV with data from various language ecosystems (e.g., npm, PyPI) and to work out a pipeline for package maintainers to submit vulnerabilities with minimal work.

Our goal with OSV is to rethink and promote better, scalable vulnerability tracking for open source. In an ideal world, vulnerability management would be done closer to the actual open source development process, aided by automated infrastructure. Projects that depend on open source should be promptly notified, and fixes taken up quickly, when a vulnerability is reported.

You can access the OSV website and documentation at https://osv.dev. You can explore the open source repo or contribute to the project on GitHub, and join the mailing list to stay up to date with OSV and share your thoughts on vulnerability tracking. 

By Oliver Chang and Kim Lewandowski, Google Security Team

Know, Prevent, Fix: A framework for shifting the discussion around vulnerabilities in open source

Wednesday, February 3, 2021

Executive Summary:
The security of open source software has rightfully garnered the industry’s attention, but solutions require consensus about the challenges and cooperation in the execution. The problem is complex and there are many facets to cover: supply chain, dependency management, identity, and build pipelines. Solutions come faster when the problem is well-framed; we propose a framework (“Know, Prevent, Fix”) for how the industry can think about vulnerabilities in open source and concrete areas to address first, including:
  • Consensus on metadata and identity standards: We need consensus on fundamentals to tackle these complex problems as an industry. Agreements on metadata details and identities will enable automation, reduce the effort required to update software, and minimize the impact of vulnerabilities.
  • Increased transparency and review for critical software: For software that is critical to security, we need to agree on development processes that ensure sufficient review, avoid unilateral changes, and transparently lead to well-defined, verifiable official versions.
The following framework and goals are proposed with the intention of sparking industry-wide discussion and progress on the security of open source software.

Google Summer of Code 2021 is open for mentor organization applications!

Friday, January 29, 2021

GSoC logo
With the new year comes the start of our 17th edition of Google Summer of Code (GSoC)! Right now open source projects and organizations can apply to participate as mentoring organizations for the students in the 2021 program. GSoC is a global program that draws student developers (18 years old and over) from around the world to contribute to open source projects. This year, from June 7th to August 16th, each student will spend 10 weeks working on a coding project with the support of volunteer mentors from participating open source organizations.

Does your open source project want to learn more about becoming a mentoring organization? Visit the program site and read the mentor guide to learn what it means to be a mentor organization, how to prepare your community (hint: have plenty of enthusiastic mentors!), how to create appropriate project ideas (~175-hour projects for the student), and tips for preparing your application.

We welcome all types of organizations and are very eager to involve first-time organizations with a 2021 goal of accepting 40 new orgs. We encourage veteran organizations to refer other organizations they think would be a good fit to participate in GSoC as well.

Last year, 1,106 students completed the program under the guidance of over 2,000 mentors from 198 open source organizations. Many types of open source organizations are involved in GSoC, from small and medium-sized open source organizations to larger umbrella organizations with many sub-projects under them (Python Software Foundation, Apache Software Foundation, etc.). Some organizations are relatively young (less than 2 years old), while others have been around for 20+ years.

You can apply to be a mentoring organization for GSoC starting today on the program site. The deadline to apply is February 19th at 19:00 UTC. We will publicly announce the organizations chosen for GSoC 2021 on March 9th.

Please visit the program site for more information on how to apply and review the detailed timeline of important deadlines. We also encourage you to check out the Mentor Guide and our short video on why open source projects want to be a part of the GSoC program.

Good luck to all open source mentoring organization applicants!

By Stephanie Taylor, Google Open Source

The Future of Tilt Brush

Tuesday, January 26, 2021

Tilt Brush by Google

Open Sourcing Tilt Brush

Tilt Brush, Google's virtual reality painting application, has collaborated with amazing creators over the years, many of whom were part of our Artist in Residence Program. We take tremendous pride in all of those collaborations, and the best part has been watching our community learn from each other and develop their abilities over the years.

As we continue to build helpful and immersive AR experiences, we want to continue supporting the artists using Tilt Brush by putting it in your hands. This means open sourcing Tilt Brush, allowing everyone to learn how we built the project, and encouraging them to take it in directions that are near and dear to them.

Tilt Brush launched on the SteamVR platform for the HTC Vive VR headset in April 2016. It went on to help users create their artwork on every major VR platform, including the Oculus Rift, Windows Mixed Reality, Valve Index, PlayStation VR, and Oculus Quest VR headsets. Tilt Brush won dozens of awards, including the Unity Awards 2015: Best VR Experience, the Cannes Lions 2017 Gold Lion in Innovation, and the Oculus Quest award for Best of 2019: VR Creativity Tool of the Year, and was often featured on The Tonight Show Starring Jimmy Fallon. As we look back on Tilt Brush, we’re proud of what this creative application has achieved, and excited for where the community will take it.

What’s Included

The open source archive of the Tilt Brush code can be found at: https://github.com/googlevr/tilt-brush

Please note that it is not an actively developed product, and no pull requests will be accepted. You can use, distribute, and modify the Tilt Brush code in accordance with the Apache 2.0 License under which it is released.

In order to be able to release the Tilt Brush code as open source, there were a few things we had to change or remove due to licensing restrictions. In almost all cases, we documented the process for adding those features back in our comprehensive build guide. ‘Out of the box’, the code in the archive will compile a working version of Tilt Brush, requiring you only to add the SteamVR Unity SDK.

The currently published version of Tilt Brush will always remain available in digital stores for users with supported VR headsets. If you're interested in creating your own Tilt Brush experience, please review the build guide and visit our GitHub repo to access the source code.

Cheers, and happy painting from the Tilt Brush team!

By Tim Aidley, Software Engineer, and Jon Corralejo, Program Manager – Tilt Brush

Assess the security of Google Kubernetes Engine (GKE) with InSpec for GCP

Monday, January 25, 2021

We are excited to announce that the GKE CIS 1.1.0 Benchmark InSpec profile is now available on GitHub under an open source license. It allows you to validate the security posture of your Google Kubernetes Engine (GKE) clusters using Chef InSpec™ by assessing their compliance against the Center for Internet Security (CIS) 1.1.0 benchmark for GKE.

Validating security compliance of GKE

GKE is a popular platform for running containerized applications. Many organizations have selected GKE for its scalability, self-healing, observability, and integrations with other services on Google Cloud. Developer agility is one of the most compelling arguments for moving to a microservices architecture on Kubernetes, but introducing configuration changes at a faster pace demands security checks as part of the development lifecycle.

Validating the security settings of your GKE cluster is a complex challenge and requires an analysis of multiple layers within your Cloud infrastructure:
  • GKE is a managed service on GCP, with controls to tweak the cluster's behaviour that have an impact on its security posture. These Cloud resource settings can be configured and audited via Infrastructure-as-Code (IaC) frameworks such as Terraform, the gcloud command line (see the quick check after this list), or the Google Cloud Console.
  • Application workloads are deployed on GKE by interacting with the Kubernetes (K8S) API. Kubernetes resources such as pods, deployments, and services are often deployed from YAML templates using the command line tool kubectl.
  • Kubernetes uses configuration files (such as the kube-proxy and kubelet files), typically in YAML format, which are stored on the nodes' file system.
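
As a quick illustration of the GCP-resource layer, a single cluster setting can be checked by hand with gcloud before automating it (the cluster and zone names are illustrative):

# Inspect one cluster attribute that a GCP-level control might audit
# (cluster name, zone, and field are placeholders, not from the profile).
gcloud container clusters describe inspec-cluster \
    --zone us-central1-a \
    --format="value(loggingService)"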

InSpec for auditing GKE

InSpec is a popular DevSecOps framework that checks the configuration state of resources in virtual machines and containers, as well as on cloud providers such as Google Cloud, AWS, and Microsoft Azure. The InSpec GCP resource pack 1.8 (InSpec-GCP) provides a consistent way to audit GCP resources and can be used to validate the attributes of a GKE cluster against a desired state declared in code. We previously released a blog post on how to validate your Google Cloud resources with InSpec-GCP against compliance profiles such as the CIS 1.1.0 benchmark for GCP.

While you can use the InSpec-GCP resource pack to define InSpec controls that validate resources against the Google Cloud API, it does not directly allow you to validate configurations of the other relevant layers, such as Kubernetes resources and config files on the nodes. Luckily, the challenge of auditing Kubernetes resources with InSpec has already been solved by the inspec-k8s resource pack. Further, files on nodes can be audited using remote access via SSH. Altogether, we can validate the security posture of GKE holistically using the inspec-gcp and inspec-k8s resource packs, plus controls using the InSpec file resource executed in an SSH session.
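
In InSpec terms, each layer is simply a different target. A hedged sketch of the three invocations might look like this (the k8s:// target syntax in particular is our assumption, and the profile paths and host names are illustrative):

# One InSpec run per layer (placeholder names; k8s:// syntax is assumed).
inspec exec inspec-gke-cis-gcp -t gcp:// --input-file inputs.yml
inspec exec inspec-gke-cis-k8s -t k8s:// --input-file inputs.yml
inspec exec inspec-gke-cis-ssh -t ssh://<user>@<node> -i <private-key> --input-file inputs.yml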

Running the CIS for GKE compliance profile with InSpec

With the GKE CIS 1.1.0 Benchmark InSpec Profile we have implemented the security controls to validate a GKE cluster against the recommended settings on GCP resource level, Kubernetes API level and file system level. The repository is split into three profiles (inspec-gke-cis-gcp, inspec-gke-cis-k8s and inspec-gke-cis-ssh), since each profile requires a different “target”, or -t parameter when run using the InSpec command line. For ease of use, a wrapper script run_profiles.sh has been provided in the root directory of the repository with the purpose of running all three profiles and storing the reports in the dedicated folder reports.
The script requires the cluster name (-c), ssh username (-u), private key file for ssh authentication (-k), cluster region or zone (-r or -z) and InSpec input file as required by the inspec.yml files in each profile (-i). As an example, the following line will run all three profiles to validate the compliance of cluster inspec-cluster in zone us-central1-a:

./run_profiles.sh -c inspec-cluster \
                           -u konrad \
                           -k /home/konrad/.ssh/google_compute_engine \
                           -z us-central1-a \
                           -i inputs.yml
Running InSpec profile inspec-gke-cis-gcp ...

Profile: InSpec GKE CIS 1.1 Benchmark (inspec-gke-cis-gcp)
Version: 0.1.0
Target: gcp://<service account used for InSpec>

<lots of InSpec output omitted>

Profile Summary: 16 successful controls, 10 control failures, 2 controls skipped
Test Summary: 18 successful, 11 failures, 2 skipped
Stored report in reports/inspec-gke-cis-gcp_report.
Running InSpec profile inspec-gke-cis-k8s …

Profile: InSpec GKE CIS 1.1 Benchmark (inspec-gke-cis-k8s)
Version: 0.1.0
Target: kubernetes://<IP address of K8S endpoint>:443

<lots of InSpec output omitted>

Profile Summary: 9 successful controls, 1 control failure, 0 controls skipped
Test Summary: 9 successful, 1 failure, 0 skipped
Stored report in reports/inspec-gke-cis-k8s_report.
Running InSpec profile inspec-gke-cis-ssh on node <cluster node 1> ...

Profile: InSpec GKE CIS 1.1 Benchmark (inspec-gke-cis-ssh)
Version: 0.1.0
Target: ssh://<username>@<cluster node 1>:22

<lots of InSpec output omitted>

Profile Summary: 10 successful controls, 5 control failures, 1 control skipped
Test Summary: 12 successful, 6 failures, 1 skipped
Stored report in reports/inspec-gke-cis-ssh_<cluster node 1>_report.


Analyze your scan reports

Once the wrapper script has completed successfully, you should analyze the JSON or HTML reports to validate the compliance of your GKE cluster. One way to perform the analysis is to upload the collection of JSON reports from a single run (in the reports folder) to Heimdall Lite (GitHub), an open source InSpec visualization tool by the MITRE Corporation. An example of a compliance dashboard is shown below:
Scan Reports dashboard


Try it yourself and run the GKE CIS 1.1.0 Benchmark InSpec profile in your Google Cloud environment! Clone the repository and follow the CLI example in the Readme file to run the InSpec profiles against your GKE clusters. We also encourage you to report any issues you find on GitHub, suggest additional features, and contribute to the project using pull requests. Also, you can read our previous blog post on using InSpec-GCP for compliance validations of your GCP environment.

By Bakh Inamov, Security Specialist Engineer and Konrad Schieban, Infrastructure Cloud Consultant

The API Registry API

Friday, January 8, 2021

We’ve found that many organizations are challenged by the increasing number of APIs that they make and use. APIs become harder to track, which can lead to duplication rather than reuse. Also, as APIs expand to cover an ever-broadening set of topics, they can proliferate different design styles, at times creating frustrating inefficiencies.

To address this, we’ve designed the Registry API, an experimental approach to organizing information about APIs. The Registry API allows teams to upload and share machine-readable descriptions of APIs that are in use and in development. These descriptions include API specifications in standard formats like OpenAPI, the Google API Discovery Service Format, and the Protocol Buffers Language.

An organized collection of API descriptions can be the foundation for a wide range of tools and services that make APIs better and easier to use.
  • Linters verify that APIs follow standard patterns
  • Documentation generators provide documentation in consistent, easy-to-read, accessible formats
  • Code generators produce API clients and server scaffolding
  • Searchable online catalogs make everything easier to find
But perhaps most importantly, bringing everything about APIs together into one place can accelerate the consistency of an API portfolio. With organization-wide visibility, many find they need less explicit governance even as their APIs become more standardized and easy to use.

The Registry API is a gRPC service that is formally described by Protocol Buffers and that closely follows the Google API Design Guidelines at aip.dev. The Registry API description is annotated to support gRPC HTTP/JSON transcoding, which allows it to be automatically published as a JSON REST API using a proxy. Proxies also enable gRPC web, which allows gRPC calls to be directly made from browser-based applications, and the project includes an experimental GraphQL interface.
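
As a hedged illustration of that transcoding (the host, port, and resource path below are hypothetical, not the published surface), a registry running behind such a proxy could be queried with plain HTTP:

# Hypothetical JSON/REST call against a locally running registry behind a
# transcoding proxy; the /v1/projects/demo/apis path is an assumption.
curl http://localhost:8080/v1/projects/demo/apis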

We’ve released a reference implementation that can be run locally or deployed in a container with Google Cloud Run or other container-based services. It stores data using the Google Cloud Datastore API or a configurable relational interface layer that currently supports PostgreSQL and SQLite.

Following AIP-181, we’ve set the Registry API’s stability level as "alpha," but our aim is to make it a stable base for API lifecycle applications. We’ve open-sourced our implementation to share progress and gather feedback. Please tell us about your experience if you use it.

By Tim Burks, Tech Lead – Apigee API Lifecycle and Governance

Season of Docs announces results of 2020 program

Wednesday, January 6, 2021

Season of Docs has announced the 2020 program results for standard-length projects. You can view a list of successfully completed technical writing projects on the website along with their final project reports.

Seasons of docs graphic
During the program, technical writers spend a few months working closely with an open source community. They bring their technical writing expertise to improve the project's documentation while the open source projects provide mentors to introduce the technical writers to open source tools, workflows, and the project's technology.

For standard-length technical writing projects in Season of Docs, the doc development phase was September 14, 2020 – November 30, 2020. However, some technical writers applied for a long-running project; the technical writer makes this decision in consultation with the open source organization, based on the expectations for their project. For a long-running project, the doc development phase is September 14, 2020 – March 1, 2021.

64 technical writers successfully completed their standard-length technical writing projects. There are 18 long-running projects in progress that are expected to finish in March.
  • 85% of the mentors had a positive experience and want to mentor again in future Season of Docs cycles
  • 96% of the technical writers had a positive experience
  • 96% plan to continue contributing to open source projects
  • 94% of the technical writers said that Season of Docs helped improve their knowledge of code and/or open source
Take a look at the list of successful projects to see the wide range of subjects covered!

What is next?

The long-running projects are still in progress and will finish in March 2021. Technical writers participating in these long-running projects submit their project reports before March 8th, and the writer and mentor evaluations are due by March 12th. Successfully completed long-running technical writing projects will be published on the results page on March 15, 2021.

If you're excited about Season of Docs, please spread the word with social media posts. See the promotion and press page for images and other promotional materials you can include, and be sure to use the tag #SeasonOfDocs when promoting your project on social media. To include the tech writing and open source communities, add #WriteTheDocs, #techcomm, #TechnicalWriting, and #OpenSource to your posts.

Stay tuned for information about Season of Docs 2021—watch for posts in this blog and sign up for the announcements email list.

By Kassandra Dhillon and Erin McKean, Google Open Source Programs Office