Posts from February 2021

Basis Universal Textures - Khronos Ratification and <model-viewer> Support

Thursday, February 18, 2021

In 2019, Google partnered with Binomial to open source the Basis Universal texture codec, with the goal of making high-quality textures more efficient for network transmission and graphics processing unit (GPU) memory usage. The Basis Universal texture format is 6-8 times smaller than JPEG on the GPU, yet has a similar storage size to JPEG—making it a great alternative to current GPU compression methods that are inefficient and don’t operate cross-platform. The format is intended for a variety of use cases: games, virtual and augmented reality, maps, photos, small videos, and more.

Over the past year, several exciting developments have made Basis Universal more useful. A new high-quality mode was introduced, allowing the codec to use the highest-quality formats modern GPUs support, finally bringing the web up to modern GPU texture standards—with cross-platform support. Additionally, the Basis encoder now has an option to build a WebAssembly version, allowing innovative web applications to take advantage of outputting to the super-compressed format. Lastly, the Khronos Group has announced and ratified the Basis Universal texture extension to the glTF format, allowing compressed assets to be shipped and displayed everywhere in a KTX 2.0 container. This will have a profound impact on how models are distributed via the web and will advance applications like eCommerce, making it easy to take advantage of 3D content on any platform.

In addition to these new features, developers worldwide have been making it easier to take advantage of Basis Universal. <model-viewer> has just added support for glTF files with universal textures, making it as easy as two lines of JavaScript to have beautiful, interactive 3D models on your page; in the coming months, the <model-viewer> editor will add support for encoding to universal textures. Additionally, 3D engines like Three.js, Babylon.js, Godot, Archilogic, and PlayCanvas have added support for Basis Universal, with more engine support coming. Basis Universal is already in applications many people use every day.
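
To make that concrete, here is a minimal sketch of embedding a model with <model-viewer>. The script URL follows the library's published CDN pattern, and Astronaut.glb is a placeholder for any glTF/GLB asset, including one using universal textures:

<!-- Load the <model-viewer> web component. -->
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
<!-- Render an interactive 3D model; Astronaut.glb is a placeholder asset. -->
<model-viewer src="Astronaut.glb" alt="An interactive 3D model" camera-controls auto-rotate></model-viewer>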

We look forward to seeing Basis Universal adoption soar as it has never been easier to distribute 3D assets. Check out the code and demo on GitHub, let us know what you think, and how you plan to use it!

By Stephanie Hurlburt, Binomial and Jamieson Brettle, Chrome Media

A new resource for coordinated vulnerability disclosure in open source projects

Wednesday, February 17, 2021

One of the joys of open source is the freedom it gives you to create: contributors get to build the projects they want how they want; it’s up to them. Of course, blank slates don’t come with directions, which makes more niche areas of software development and management a challenge for contributors. Vulnerability disclosure is one of those areas.

Google doesn’t restrict its open source work to one team; instead, we teach any and all Googlers about open source: how to release, how to contribute, how to use, and, in general, how to be a good open source citizen. This approach scales well and gives people the knowledge to be lifelong open source community members. That includes sharing knowledge about open source security, a topic that isn’t new but is finally getting the industry attention it deserves.

The intimidating blank slate and a lack of time for contributors to develop policies mean many open source projects have no documented vulnerability reporting information, much less a plan for how to handle and disclose a reported vulnerability. We recently updated our guidance for coordinated vulnerability disclosure in open source projects that come out of Google, and we have published it in the hope that other projects will find it helpful for their own security practices.

The new guide has three sections.

It’s a myth that if a project hasn’t received a vulnerability report yet, it doesn’t need a disclosure policy. It’s also a myth that you need to be “a security person” to implement a vulnerability disclosure policy. A successful coordinated vulnerability disclosure frequently comes down to good process management and clear, thoughtful communication. You don’t have to be an expert in operating system capabilities to understand how a reporter manipulated them to cause an account privilege escalation through your project. A predetermined policy, some templates, and a well-executed runbook will take you through discovering, patching, and disclosing most kinds of vulnerabilities.

Coordinated Vulnerability Disclosure in Open Source Projects

Vulnerability disclosure is part of the “Fix” stage of the Know, Prevent, Fix framework we recently proposed for open source vulnerability management. In today’s industry, with all of our supply chain dependencies, improving security in even one open source project can have a multiplying effect. Vulnerability disclosure is a key aspect of that overall security posture. Our hope is that projects will take this guide, remix and adapt it to their projects, and share their changes with others so we can collectively increase our open source security.

By Anne Bertucio, Google Open Source

Updates on the Tsunami Security Scanning Engine

Wednesday, February 10, 2021


Several months ago, we open sourced the Tsunami security scanner: a false-positive-free infrastructure scanning engine focused on high-severity, actively exploited vulnerabilities. Today, we are releasing the first major update for Tsunami.

In the last few months, we have done a lot of work behind the scenes to prepare Tsunami for the next step, focusing on the following:
  • Vulnerability research: In order to keep Tsunami's detection capabilities up to date, we kicked off various projects to research the exploitation of vulnerabilities in the wild. We will soon publish more information about our initiatives in this space—stay tuned.
  • New detection capabilities: Based on our research, we have added 15 new detector plugins to Tsunami for actively exploited vulnerabilities.
  • Continuous Integration pipeline for our open-source builds: We set up a CI/CD pipeline that automatically mirrors and tests changes between our internal version management system and the open source repository. This will enable us to easily merge internal and external contributions.
  • Test bed for end-to-end testing: This summer we hosted an intern (Yuxin Wu), who built and open sourced a test bed for Tsunami. The test bed can automatically deploy arbitrary versions of off-the-shelf software based on Docker images. We are using the test bed to automatically check whether a Tsunami detector works for all vulnerable versions of a piece of software and keeps functioning for future versions.
  • Web application fingerprinting: We added web application fingerprinting capabilities to Tsunami. Tsunami now detects popular off-the-shelf web applications. This information can be used by Tsunami for more precise and less intrusive vulnerability verification. Furthermore, it enables security teams to build a software inventory based on Tsunami scans. We'll keep working on refining our fingerprinting approach and extending our fingerprinting database.

Today, we are releasing the new detectors and the fingerprinting capabilities. You can find the new detectors and the web fingerprinter in our plugin repository.

If you are adopting Tsunami within your organization and have questions or would like to contribute, feel free to contact us at any time at tsunami-scanner@google.com.

By Guoli Ma, Claudio Criscione & Sebastian Lekies, Vulnerability Management Team

The 2021 Season of Docs application for organizations is open!

Tuesday, February 9, 2021

Season of docs icon

Google Open Source is delighted to announce Season of Docs 2021!

The 2019 Season of Docs brought together open source organizations and technical writers to create 44 successful documentation projects. In 2020, we had 64 successful standard-length technical writing projects and are still awaiting long-running project results.

In 2021, the Season of Docs program will continue to support better documentation in open source and provide opportunities for skilled technical writers to gain open source experience. In addition, building on what we’ve learned from the successful 2019 and 2020 projects, we’re expanding our focus to include learning about effective metrics for evaluating open source documentation.

What are the 2021 program changes?

Season of Docs 2021 will allow open source organizations to apply for a grant based on their documentation needs. If selected, open source organizations will use their grant to hire a technical writer directly to complete their documentation project. Organizations will have up to six months to complete their documentation project. Keep reading for more information about the organization application or visit the Season of Docs site.

Technical writers interested in working with accepted open source organizations will be able to share their contact information via the Season of Docs GitHub repository, or they may submit proposals directly to the organizations; they will not need to submit a formal application through Season of Docs.

Participating organizations will help broaden our understanding of effective documentation practices and metrics in open source by submitting a final case study upon completion of the program. The project case study will outline the problem the documentation project was intended to solve, what metrics were used to judge the effectiveness of the documentation, and what the organization learned for the future. All the project case studies will be published on the Season of Docs site at the end of the program.

How does it work?

  • February 9 - March 26: Open source organizations apply to take part in Season of Docs.
  • April 16: Google publishes the list of accepted organizations, along with their project proposals, and doc development can begin.
  • June 16: Organization administrators begin to submit monthly evaluations to report on the status of their project.
  • November 30: Organization administrators submit their case study and final project evaluation.
  • December 14: Google publishes the 2021 case studies and aggregate project data.
  • May 2, 2022: Organizations begin to participate in post-program follow-up surveys.

See the timeline for details.

Organization applications

Organization applications are now open! The deadline to apply is March 26, 2021 at 18:00 UTC.

To apply, first read the guidelines for creating an organization application on the Season of Docs website.

Take a look at the examples of project ideas, then create a project proposal based on your open source project’s actual documentation needs. Your goal is to attract technical writers to your organization, making them feel comfortable about approaching the organization and excited about what they can achieve.

Organizations can submit their applications here: http://goo.gle/3qVxArQ. Organization applications close on March 26th at 18:00 UTC.

Technical writers interested in participating in the 2021 Season of Docs should read our guide for technical writers on the Season of Docs website.

If you have any questions about the program, please email us at season-of-docs@google.com.

Join us

Explore the Season of Docs website at g.co/seasonofdocs to learn more about participating in the program. Use our logo and other promotional resources to spread the word. Check out the timeline and FAQ, and get ready to apply!

By Kassandra Dhillon and Erin McKean, Google Open Source Programs Office

Google joins the Rust Foundation

Monday, February 8, 2021

Droidstacean: Rust mascot Ferris, with Android mascot color/aspects
Droidstacean by Ivan Lozano, based on a design by Karen Rustad Tölva.
Rust is a systems programming language that combines low-level control over performance with modern language features and a focus on memory safety. Memory safety has been an enduring challenge for software developers, particularly those working on systems programs. Google has begun using Rust in settings where memory safety and performance are key considerations, including in key Android systems.

The Rust Core Team recently completed its work to build a new home for Rust: the Rust Foundation. Building on Google’s longstanding investments in C/C++ and its compilers and toolchains, we are delighted to announce our membership in the Rust Foundation. We look forward to participating more in the Rust community, in particular working across the industry on key issues including interoperability with C++, coordinating security reviews and decreasing the costs of crate updates, and continuing to grow our investments in existing Rust projects.

Memory safety security defects frequently threaten device safety, especially for applications and operating systems. For example, on Android, we’ve found that more than half of the security vulnerabilities we addressed in 2019 resulted from memory safety bugs. And this is despite significant efforts from Google and other contributors to the Android Open Source Project to either invest in or invent a variety of technologies, including AddressSanitizer, improved memory allocators, and numerous fuzzers and other code checking tools. Rust has proven effective at providing an additional layer of protection beyond even these tools in many other settings, including browsers, games, and even key libraries. We are excited to expand both our usage of Rust at Google and our contributions to the Rust Foundation and Rust ecosystem.
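
As a minimal illustration of the kind of bug Rust's borrow checker rules out at compile time, the sketch below creates a reference that would outlive the data it points to, the sort of defect that becomes a use-after-free in C or C++:

fn main() {
    let dangling;
    {
        let s = String::from("hello");
        dangling = &s; // error[E0597]: `s` does not live long enough
    } // `s` is dropped here, so `dangling` would point to freed memory
    println!("{}", dangling); // in C/C++, this would be a use-after-free
}

Rust rejects this program outright, turning a latent runtime vulnerability into a compile-time error.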

Today, some examples of projects where Google is either already using Rust or contributing to the Rust ecosystem include:
  • Operating system modules in Android, including Bluetooth and Keystore 2.0
  • Low-level projects, such as the crosvm virtual machine monitor and drivers (an alternative to QEMU) used in ChromeOS
  • Contributing to open source projects that we use and that use Rust, such as the Mercurial source code control system
  • Firmware for FIDO security key support
And there are many additional projects that are evaluating the use of Rust for new libraries or products.

We are also excited to support key Rust projects and their maintainers, such as:
  • Adding Rust code to curl
  • Working with ISRG to add a Rust TLS module to the Apache HTTP Server Project
We can’t wait to work across the industry to contribute to and support existing projects and libraries as well as help build out key areas such as C++ interoperability and security review.

By Lars Bergstrom, Director of Engineering, Android Platform Programming Languages

Writing fuzz tests with ease using Bazel

We are announcing Bazel support for developing and testing fuzz tests, with OSS-Fuzz integration, through the new rules_fuzzing Bazel library.

Fuzzing is an effective, well-known testing technique for finding security and stability bugs in software. But writing and testing fuzz tests can be tedious. Developers typically need to:
  • Implement a fuzz driver function, which exercises the API under test;
  • Build the code with the proper instrumentation (such as AddressSanitizer);
  • Link it with one of the available fuzzing engine libraries (libFuzzer, AFL++, Honggfuzz, etc.) that provide the core test generation logic;
  • Run the fuzz test binary with the right set of flags (e.g., to specify corpora or dictionaries);
  • Package the fuzz test and its resources for consumption by fuzzing infrastructures, such as OSS-Fuzz.
Unfortunately, build systems don't traditionally offer any support beyond the core primitives of producing executables, so projects adopting fuzzing often end up reimplementing fuzz test recipes.

Bazel is a versatile and extensible build system, focused on scalable, reliable, and reproducible builds. Originally designed to scale to Google's entire monolithic repository, it now underpins large enterprises and key open source Internet infrastructure projects.

We are pleased to announce that projects using Bazel can get advanced fuzzing support through the new rules_fuzzing extension library. The new fuzzing rules take care of all the boilerplate needed to build and run fuzz tests. Developers simply write the fuzz driver code and define a build target for it (example driver and target for RE2). Fuzz tests can be built and run using a number of fuzzing engines provided out-of-the-box, such as libFuzzer and Honggfuzz, as well as sanitizers. The rule library also provides the ability to define additional fuzzing engines.
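
For example, a libFuzzer-style driver is simply a function that passes the engine-generated bytes to the API under test. The sketch below assumes a hypothetical my_library.h exposing a MyLibraryParse function:

// my_fuzz_test.cc: a minimal libFuzzer-style fuzz driver.
#include <cstddef>
#include <cstdint>

#include "my_library.h"  // hypothetical library under test

// The fuzzing engine calls this repeatedly with generated inputs.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  MyLibraryParse(data, size);  // hypothetical API under test
  return 0;
}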

You can integrate the fuzzing library with around 10 LOC in your Bazel WORKSPACE file (a sketch of that setup follows the example below). Defining a fuzz test in Bazel is as easy as writing the following in your BUILD file:

load("@rules_fuzzing//fuzzing:cc_deps.bzl, "cc_fuzz_test")
cc_fuzz_test(
   name = "my_fuzz_test",
   srcs = ["my_fuzz_test.cc"],
   deps = [":my_library"],
)
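
For the WORKSPACE side, the setup looks roughly like the sketch below, which mirrors the pattern in the rules_fuzzing documentation; the release URL and checksum are placeholders, so consult the library's README for the current values:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_fuzzing",
    # Placeholder URL and checksum; see the rules_fuzzing README
    # for the current release.
    urls = ["https://github.com/bazelbuild/rules_fuzzing/archive/..."],
    sha256 = "...",
)

load("@rules_fuzzing//fuzzing:repositories.bzl", "rules_fuzzing_dependencies")

rules_fuzzing_dependencies()

load("@rules_fuzzing//fuzzing:init.bzl", "rules_fuzzing_init")

rules_fuzzing_init()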


You can easily test the fuzzer locally by invoking its launcher:

$ bazel run --config=asan-libfuzzer //:my_fuzz_test_run

To improve the effectiveness of test case generation, fuzz tests also support seed corpora and dictionaries, through additional rule attributes. They will automatically be validated and included in fuzz test runs. Fuzz tests also serve as regression tests on the seed corpus. For example, you can add previously found and fixed crashes to the corpus and have them tested in your CI workflows:

$ bazel test --config=asan-replay //:my_fuzz_test

The fuzzing rules provide built-in support for OSS-Fuzz, our continuous fuzzing service for open source projects. The OSS-Fuzz support drastically simplifies writing the build scripts needed for project integration by automatically packaging the fuzz test and its dependencies using the expected OSS-Fuzz structure.

The Envoy Proxy project is one of the early adopters of the fuzzing rules library. As a large, mature C++ codebase, Envoy had maintained its own custom implementation of fuzzing support for the more than 50 fuzz targets written so far. By switching to the new Bazel fuzzing rules, Envoy's fuzz targets automatically gained new features, such as local running and testing tools and support for multiple fuzzing engines. At the same time, Envoy simplified its OSS-Fuzz integration scripts. Moreover, it will automatically gain future functionality (e.g., more effective fuzzing engines, better coverage tracking, improved corpus management) as the Bazel fuzzing rules library evolves.

The Bazel rules for fuzzing draw from Google's experience providing effective fuzzing tools to our internal developers. We hope the new Bazel support for fuzzing will lower the barrier to fuzzing adoption in open source communities, further increasing the security and reliability of many projects. To learn more about integrating the fuzzing rules into your project, take a look at the Getting Started section in the documentation.

By Stefan Bucur, Software Analysis, Asra Ali, Envoy, and Abhishek Arya, OSS-Fuzz – Google

Launching OSV - Better vulnerability triage for open source

Friday, February 5, 2021

Open Source Vulnerabilities logo


We are excited to launch OSV (Open Source Vulnerabilities), our first step towards improving vulnerability triage for developers and consumers of open source software. The goal of OSV is to provide precise data on where a vulnerability was introduced and where it got fixed, thereby helping consumers of open source software accurately identify whether they are impacted and then make security fixes as quickly as possible. We have started OSV with a data set of fuzzing vulnerabilities found by the OSS-Fuzz service. The OSV project evolved from our recent efforts to improve vulnerability management in open source (the "Know, Prevent, Fix" framework).

Vulnerability management can be painful for both consumers and maintainers of open source software, with tedious manual work involved in many cases.

For consumers of open source software, it is often difficult to map a vulnerability such as a Common Vulnerabilities and Exposures (CVE) entry to the package versions they are using. This comes from the fact that versioning schemes in existing vulnerability standards (such as Common Platform Enumeration (CPE)) do not map well with the actual open source versioning schemes, which are typically versions/tags and commit hashes. The result is missed vulnerabilities that affect downstream consumers.

Similarly, it is time consuming for maintainers to determine an accurate list of affected versions or commits across all their branches for downstream consumers after a vulnerability is fixed, in addition to the process required for publication. Unfortunately, many open source projects, including ones that are critical to modern infrastructure, are under-resourced and overworked. Maintainers don't always have the bandwidth to create and publish thorough, accurate information about their vulnerabilities even if they want to.

These challenges result in open source consumers not incorporating important security fixes promptly. OSV aims to:
  1. Reduce the work required by maintainers to publish vulnerabilities, and
  2. Improve the accuracy of vulnerability queries for downstream consumers by providing precise vulnerability metadata in an easy-to-query database (complementing existing vulnerability databases).

Automation

OSV aims to simplify the vulnerability reporting process for an open source package maintainer by accurately determining the list of affected versions and commits. This requires providing both the commits that introduce and fix the bugs. If that information is not available, OSV requires providing a reproduction test case and steps to generate an application build, and then it performs bisection to find these commits in an automated fashion. OSV takes care of the rest of the analysis to figure out impacted commit ranges (accounting for cherry picks) and versions/tags.

How OSV works


OSV automates the triage workflow for an open source package consumer by providing an API to query for vulnerabilities. A typical OSV workflow for a package consumer looks like the following:
  1. A package consumer sends a query to OSV with a package version or commit hash as input.
    curl -X POST -d \
    '{"commit": "6879efc2c1596d11a6a6ad296f80063b558d5e0f"}' \
    'https://api.osv.dev/v1/query?key=$API_KEY'

     curl -X POST -d \
      '{"version": "1.0.0", "package": {"name": "pkg", "ecosystem""pypi"}' \
      'https://api.osv.dev/v1/query?key=$API_KEY'
  2. OSV looks up the set of vulnerabilities affecting that particular version and returns a list of vulnerabilities impacting the package. The vulnerability metadata is returned in a machine-readable JSON format (roughly sketched after this list).
  3. The package consumer uses this information to either cherry-pick security fixes (based on precise fix metadata) or update to a later version.
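
To give a feel for the response, a result is shaped roughly like the following sketch. This is an illustration only, with hypothetical values; the authoritative response schema lives in the OSV documentation at https://osv.dev:

{
  "vulns": [
    {
      "id": "OSV-2020-111",
      "summary": "Heap-buffer-overflow in some_function",
      "affects": {
        "ranges": [
          {"type": "GIT", "introduced": "<introducing commit>", "fixed": "<fixing commit>"}
        ]
      }
    }
  ]
}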

Ongoing work

OSV currently provides access to thousands of vulnerabilities from 380+ critical OSS projects integrated with OSS-Fuzz. We are planning to work with open source communities to extend OSV with data from various language ecosystems (e.g., npm, PyPI) and to work out a pipeline for package maintainers to submit vulnerabilities with minimal work.

Our goal with OSV is to rethink and promote better, scalable vulnerability tracking for open source. In an ideal world, vulnerability management should be done closer to the actual open source development process, aided by automated infrastructure. Projects that depend on open source should be promptly notified when a vulnerability is reported, and fixes should be taken up quickly.

You can access the OSV website and documentation at https://osv.dev. You can explore the open source repo or contribute to the project on GitHub, and join the mailing list to stay up to date with OSV and share your thoughts on vulnerability tracking. 

By Oliver Chang and Kim Lewandowski, Google Security Team

Know, Prevent, Fix: A framework for shifting the discussion around vulnerabilities in open source

Wednesday, February 3, 2021

Executive Summary:
The security of open source software has rightfully garnered the industry’s attention, but solutions require consensus about the challenges and cooperation in the execution. The problem is complex and there are many facets to cover: supply chain, dependency management, identity, and build pipelines. Solutions come faster when the problem is well-framed; we propose a framework (“Know, Prevent, Fix”) for how the industry can think about vulnerabilities in open source and concrete areas to address first, including:
  • Consensus on metadata and identity standards: We need consensus on fundamentals to tackle these complex problems as an industry. Agreements on metadata details and identities will enable automation, reduce the effort required to update software, and minimize the impact of vulnerabilities.
  • Increased transparency and review for critical software: For software that is critical to security, we need to agree on development processes that ensure sufficient review, avoid unilateral changes, and transparently lead to well-defined, verifiable official versions.
The following framework and goals are proposed with the intention of sparking industry-wide discussion and progress on the security of open source software.
