opensource.google.com

Posts from 2022

Season of Docs 2022 program results

Wednesday, December 14, 2022

Season of Docs is a Google program that provides support for open source projects to improve their documentation and gives professional technical writers an opportunity to gain experience in open source. We’re delighted to announce the 2022 program results!

From April 14 to November 14, 2022, selected open source organizations worked with their chosen technical writers to complete their documentation projects.

  • 30 open source organizations finished their projects
  • 93% of organizations had a positive experience
  • 90% of organizations felt their documentation project was successful

Take a look at the list of completed projects to see the wide range of subjects covered!

We’d also like to share that the 2021 case study report has been published on the website. The results are based on the three post-program follow-up surveys sent to the organizations to determine whether or not their initial metrics had been met. A few highlights from the report include:

  • A diverse range of open source projects participated in the 2021 program: languages, Python ecosystem projects, education, climate, machine learning, fintech, robotics, developer tools, documentation tools.
  • Most projects focused on creating documentation to reduce maintainer burden by cutting down on issues and questions, and/or to increase project participation by users or contributors.
  • 18 projects reported they were still working with their technical writer (four technical writers are participating in a paid role).

Looking forward to Season of Docs 2023? Stay tuned and watch for posts on the Google Open Source blog and sign up for the announcements email list. For organizations and technical writers interested in applying for next year’s program, check out the guides, the FAQ, and the accepted project proposals from 2022 and previous seasons.

If you’re excited about participating, please share your experience on social media. See the promotion and press page for images and other promotional materials you can include, and be sure to use the tag #SeasonOfDocs when promoting your project. To reach the tech writing and open source communities, add #WriteTheDocs, #techcomm, #TechnicalWriting, and #OpenSource to your posts.

By Romina Vicente and Erin McKean – Google Open Source Programs Office

Google Announces OpenChain ISO/IEC 5230:2020 Conformant Program

Wednesday, December 7, 2022

Google is proud to be an OpenChain Governing Board member. As an early adopter of the first generation of OpenChain, we are announcing formal adoption of ISO/IEC 5230, the International Standard for open source license compliance.

The OpenChain Project maintains ISO/IEC 5230, the International Standard for open source compliance. This allows companies of all sizes and in all sectors to adopt the key requirements of a quality open source compliance program. This is an open standard and all parties are welcome to engage with the OpenChain community, to share their knowledge, and to contribute to the future of the standard.

Google has been at the forefront of open source development and the compliant use of open source from its earliest days. The Google Open Source Programs Office prides itself on bringing the best of open source to Google and the best of Google to open source. Responsible use of open source includes respecting developers through compliant use of their code. Google’s participation in the OpenChain project is an important part of supporting industry maturity and predictability in open source compliance.

Google previously announced its conformance with OpenChain 1.2 and collaborated with the OpenChain Project on the creation of both open source compliance program standards.

By Hilary Richardson and Sonal Bhoraniya – Google Open Source Programs Office Licensing & Compliance Advisors

GSoC 2022: It’s a wrap!

Tuesday, December 6, 2022

We just wrapped up the final projects for Google Summer of Code 2022 and want to share some highlights from this year’s program. We are pleased to announce that a total of 1,054 GSoC contributors successfully completed the 2022 cycle.

2022 saw some considerable changes to the Google Summer of Code program. Let’s start with some stats around those three major changes:

    • The standard 12-week project length was used by 71.2% of contributors, 19.21% spent between 13–18 weeks on their project, and 9.33% took advantage of the 19–22 week project lengths. It is clear from feedback written by mentors and contributors alike that the option for extended project lengths was a hit with participants.
    • GSoC 2022 allowed both medium-size (~175 hours) and large-size (~350 hours) projects. For 2022, 47% of the contributor projects were medium while 53% were large projects.
    • This year, the program was open to participants other than students for the first time, and 10.4% of the accepted GSoC contributors were non-students.

In the final weeks of the program we asked contributors and mentors questions about their experiences with the program this year. Here are some of the key takeaways from the participants:

Favorite part of GSoC 2022

There were a few themes that rose to the top when contributors were asked what their favorite part of the program was:

  1. Getting involved in their organization’s open source community with folks from all around the world and their amazing mentors.
  2. Learning new skills (programming languages, skills, new technologies) and learning more about open source communities.
  3. Contributing to a meaningful community and project.
  4. Learning from experienced and thoughtful developers (their mentors and their whole community).

Improved programming skills

96% of contributors think that GSoC helped their programming skills. The most common responses to how GSoC improved their skills were:

  • Improving the quality of their code through feedback from mentors, collaboration and learning more about the importance of code reviews.
  • Gaining confidence in their coding skills and knowledge about best practices. Learning how to write more efficient code and to meet the org’s coding standards.
  • The ability to read and understand real, complex codebases, and learning how to integrate their code with other developers’ code.

Most challenging parts of GSoC

And the most common struggles included:
  • Managing their time effectively with many other commitments.
  • Initial days starting with the organization, understanding the codebase, and sometimes learning a new programming language along the way.
  • Communicating with mentors and community members in different time zones and collaborating remotely.

Additional fun stats from GSoC Contributors

  • 99% of GSoC contributors would recommend their GSoC mentors
  • 98% of GSoC contributors plan to continue working with their GSoC organization
  • 99% of GSoC contributors plan to continue working on open source
  • 35% of GSoC contributors said GSoC has already helped them get a job or internship
  • 84% of GSoC contributors said they would consider being a mentor
  • 95% of GSoC contributors said they would apply to GSoC again

We know that’s a lot of numbers to read through, but folks ask us for more information and feedback on GSoC each year, and we hope these additional details on the 2022 program deliver. Every mentor and GSoC contributor took the time to fill in their evaluations and give us great written feedback on how the program affected them, so we wanted to highlight it here.

As we look forward to Google Summer of Code 2023, we want to thank all of our mentors, organization administrators, and contributors for a successful and smooth GSoC 2022. Thank you all for the time and energy you put in to make open source communities stronger and healthier.

Remember GSoC 2023 will be open for organization applications from January 23–February 7, 2023. We will announce the 2023 accepted GSoC organizations February 22 on the program site: g.co/gsoc. GSoC contributor applications will be open March 20–April 4, 2023.

By Stephanie Taylor, Program Manager – Google Open Source

Open sourcing the attention center model

Thursday, December 1, 2022

When you look at an image, which parts do you pay attention to first? Would a machine be able to learn this? We provide a machine learning model that can be used to do just that. Why is it useful? The latest generation image format (JPEG XL) supports serving the parts that you pay attention to first, which results in an improved user experience: images will appear to load faster. And the model is not limited to encoding JPEG XL images; it can be used whenever we need to know where a human would look first.

An open source attention center model

What regions in an image will attract the majority of human visual attention first? We trained a model, called the attention center model, to predict such a region given an image; it is now open sourced. In addition to the model, we provide a script to use it in combination with the JPEG XL encoder: google/attention-center.

Some example predictions of our attention center model are shown in the following figure, where the green dot is the predicted attention center point for the image. Note that in the “two parrots” image both parrots’ heads are visually important, so the attention center point will be in the middle.

Four example images shown in quadrants: a red door with a brass doorknob (top left); a smiling girl with a painted face, ribbons in her hair, and a colorful sweater (top right); a teal-shuttered, cathedral-style window in a sand-colored stucco wall with pink and red hibiscus in the foreground (bottom left); and a blue and yellow macaw next to a red and green macaw (bottom right).
Images are from the Kodak image data set: http://r0k.us/graphics/kodak/

The model is 2MB and in the TensorFlow Lite format. It takes an RGB image as input and outputs a 2D point, which is the predicted center of human attention on the image. That predicted center is where operations (decoding and displaying, in the JPEG XL case) should start. This allows the most visually salient/important regions to be processed as early as possible. Check out the code and continue to build upon it!
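
If you want to poke at the model directly, the following is a minimal sketch of running it with the TensorFlow Lite Python interpreter. The model file name, input scaling, and preprocessing here are assumptions; check the google/attention-center repository for the exact details.

```python
# Minimal sketch: predict an attention center with the TFLite interpreter.
# The model path and the /255 normalization are assumptions, not the
# official preprocessing; see google/attention-center for specifics.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="attention_center.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the RGB image to whatever input shape the model declares.
_, height, width, _ = input_details["shape"]
image = Image.open("photo.jpg").convert("RGB").resize((width, height))
pixels = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details["index"], pixels)
interpreter.invoke()
center_x, center_y = interpreter.get_tensor(output_details["index"])[0]
print(f"Predicted attention center: ({center_x:.1f}, {center_y:.1f})")
```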

Attention center ground-truth data

To train a model to predict the attention center, we first need some attention center ground-truth data. Given an image, attention points can either be collected by eye trackers [1] or approximated by mouse clicks on a blurry version of the image [2]. We first apply temporal filtering to those attention points and keep only the initial ones, and then apply spatial filtering to remove noise (e.g., random gazes). We then compute the center of the remaining attention points as the attention center ground truth. An example of this process for obtaining the ground truth is illustrated below.

Five images in a row illustrating the process for a photo of a person standing on a rock by the ocean: the original image, the gaze/attention points, the points after temporal filtering, the points after spatial filtering, and the resulting attention center.
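
As a rough, self-contained illustration of those filtering steps, the pipeline could look something like the sketch below. The attention points and thresholds are made up for the example and are not the values used for the actual dataset.

```python
# Rough illustration of the ground-truth pipeline described above.
# Attention points are (timestamp_seconds, x, y); thresholds are arbitrary.
import numpy as np

points = np.array([
    [0.1, 410, 220], [0.2, 405, 230], [0.3, 900, 40],   # (900, 40) is a stray gaze
    [0.4, 398, 226], [2.5, 120, 500],                    # late point, dropped below
])

# Temporal filtering: keep only the initial attention points.
initial = points[points[:, 0] <= 1.0][:, 1:]

# Spatial filtering: drop points far from the median (e.g., random gazes).
median = np.median(initial, axis=0)
keep = np.linalg.norm(initial - median, axis=1) < 150
filtered = initial[keep]

# The ground-truth attention center is the center of the remaining points.
center = filtered.mean(axis=0)
print("Ground-truth attention center:", center)
```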

Attention center model architecture

The attention center model is a deep neural net which takes an image as input and uses a pre-trained classification network, e.g., ResNet or MobileNet, as the backbone. The outputs of several intermediate layers of the backbone network are used as input to the attention center prediction module. These intermediate layers contain different information: shallow layers often carry low-level information such as intensity, color, and texture, while deeper layers usually carry higher-level, more semantic information such as shape and objects. All of it is useful for attention prediction. The attention center prediction module applies convolution, deconvolution, and/or resizing operators, together with aggregation and a sigmoid function, to generate a weighting map for the attention center. An operator (the Einstein summation operator in our case) is then applied to compute the (gravity) center from the weighting map. The L2 norm between the predicted attention center and the ground-truth attention center is used as the training loss.
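
To make that last step concrete, here is a small NumPy sketch of turning a weighting map into a center with an Einstein summation and computing the L2 training loss. The shapes and values are purely illustrative, not those of the real model.

```python
# Illustrative only: compute a (gravity) center from a weighting map via
# einsum, then an L2 loss against a ground-truth center. Shapes are made up.
import numpy as np

h, w = 32, 32
logits = np.random.randn(h, w)
weights = 1.0 / (1.0 + np.exp(-logits))      # sigmoid weighting map
weights /= weights.sum()                      # normalize to a distribution

ys, xs = np.arange(h, dtype=np.float64), np.arange(w, dtype=np.float64)
center_y = np.einsum("hw,h->", weights, ys)   # weighted mean of row indices
center_x = np.einsum("hw,w->", weights, xs)   # weighted mean of column indices
predicted = np.array([center_x, center_y])

ground_truth = np.array([14.0, 19.0])
loss = np.linalg.norm(predicted - ground_truth)   # L2 training loss
print(predicted, loss)
```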


Progressive JPEG XL images with attention center model

JPEG XL is a new image format that allows the user to encode images in a way that ensures the more interesting parts come first. This has the advantage that when viewing images transferred over the web, we can already display the attention-grabbing part of the image, i.e., the parts where the user looks first; ideally, by the time the user looks elsewhere, the rest of the image has already arrived and been decoded. The post “Using Saliency in progressive JPEG XL images” on the Google Open Source Blog illustrates how this works in principle. In short, in JPEG XL the image is divided into square groups (typically of size 256 x 256), and the JPEG XL encoder will choose a starting group in the image and then grow concentric squares around that group. It was this need for figuring out where the attention center of an image is that led us to open source the attention center model, together with a script to use it in combination with the JPEG XL encoder. Progressive decoding of JPEG XL images has recently been added to Chrome, starting from version 107. At the moment, JPEG XL is behind an experimental flag, which can be enabled by going to chrome://flags and searching for “jxl”.
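As a sketch of how the pieces could fit together, the snippet below passes a predicted center to the JPEG XL encoder through a subprocess call. The cjxl flag names shown are assumptions to verify against your cjxl build; the script in google/attention-center automates this step.

```python
# Sketch only: encode a progressive JPEG XL image that starts decoding near a
# predicted attention center. The cjxl flag names below are assumptions;
# verify them against `cjxl --help` or use the google/attention-center script.
import subprocess

center_x, center_y = 412, 305   # e.g., output of the attention center model

subprocess.run(
    [
        "cjxl", "input.png", "output.jxl",
        "--group_order", "1",            # center-first group ordering (assumed flag)
        "--center_x", str(center_x),     # assumed flag name
        "--center_y", str(center_y),     # assumed flag name
    ],
    check=True,
)
```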

To try out how partially loaded progressive JPEG XL images look, you can go to https://google.github.io/attention-center/.

By Moritz Firsching, Junfeng He, and Zoltan Szabadka – Google Research

References

[1] Valliappan, Nachiappan, Na Dai, Ethan Steinberg, Junfeng He, Kantwon Rogers, Venky Ramachandran, Pingmei Xu et al. "Accelerating eye movement research via accurate and affordable smartphone eye tracking." Nature communications 11, no. 1 (2020): 1-12.

[2] Jiang, Ming, Shengsheng Huang, Juanyong Duan, and Qi Zhao. "Salicon: Saliency in context." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1072-1080. 2015.

Explore the new Learn Kubernetes with Google website!

Thursday, November 17, 2022

As Kubernetes has become a mainstream global technology, with 96% of organizations surveyed by the CNCF using or evaluating Kubernetes for production use, it is now estimated that 31% of backend developers worldwide are Kubernetes developers. Adding to the growing popularity, the CNCF’s 2021 annual report also listed close to 60 enhancements to the Kubernetes project by special interest and working groups. With so much information in the ecosystem, how can Kubernetes developers stay on top of the latest developments and learn what to prioritize to best support their infrastructure?

The new website Learn Kubernetes with Google brings together under one roof the guidance of Kubernetes experts—both from Google and across the industry—to communicate the latest trends in building your Kubernetes infrastructure. You can access knowledge in two formats.

One option is to participate in scheduled live events, which consist of virtual panels that allow you to ask experts questions via a Q&A forum. Virtual panels last an hour and happen quarterly. So far, we’ve hosted panels on building a multi-cluster infrastructure, the Dockershim deprecation, bringing High Performance Computing (HPC) to Kubernetes, and securing your services with Istio on Kubernetes. The other option is to pick one of the multiple on-demand series available. Each series is made up of several 5–10 minute episodes that you can go through at your leisure. They cover different topics, including the Kubernetes Gateway API, the MCS API, batch workloads, and getting started with Kubernetes. You can use the search bar at the top right of the website to look up specific topics.
As the cloud native ecosystem becomes increasingly complex, this website will continue to offer evergreen content for Kubernetes developers and users. We recently launched a new content category for ecosystem projects, which started by covering how to run Istio on Kubernetes. Soon, we will also launch a content category for developer tools, starting with Minikube.

Join the hundreds of developers who are already part of the Learn Kubernetes with Google community! Bookmark the website, sign up for an event today, and be sure to check back regularly for new content.

By María Cruz, Program Manager – Google Open Source Programs Office

Get ready for Google Summer of Code 2023!

Thursday, November 10, 2022

We are thrilled to announce the 2023 Google Summer of Code (GSoC) program and share the timeline with you to get involved! 2023 will be our 19th consecutive year of hosting GSoC and we could not be more excited to welcome more organizations, mentors, and new contributors into the program.

With just three weeks left in the 2022 program, we had an exciting year with 958 GSoC contributors completing their projects with 198 open source organizations.

Our 2022 contributors and mentors have given us extensive feedback and we are keeping the big changes we made this year, with one adjustment around eligibility described below.
  • Increased flexibility in project lengths (10–22 weeks, not a set 12 weeks for everyone) allowed many people to participate without feeling rushed as they wrapped up their projects. We have 109 GSoC contributors wrapping up their projects over the next three weeks.
  • Choice of project time commitment: there are now two options, medium at ~175 hours or large at ~350 hours, chosen by 47% and 53% of GSoC contributors, respectively.
  • Our most talked about change was GSoC being open to contributors new to open source software development (and not just to students anymore). For 2023, we are expanding the program to be open to students and to beginners in open source software development.
We are excited to launch the 2023 GSoC program and to continue to help grow the open source community. GSoC’s mission of bringing new contributors into open source communities is centered around mentorship and collaboration. We are so grateful for all the folks that continue to contribute, mentor, and get involved in open source communities year after year.

Interested in applying to the Google Summer of Code Program?

Open Source Organizations
Check out our website to learn what it means to be a participating organization. Watch our new GSoC Org Highlight videos and get inspired about projects that contributors have worked on in the past.

Think you have what it takes to participate as a mentor organization? Take a look through our mentor guide to learn what it means to be part of Google Summer of Code, how to prepare your community, gather excited mentors, and create achievable project ideas, along with tips for applying. We welcome all types of open source organizations and encourage you to apply—it is especially exciting for us to welcome new orgs into the program and we hope you are inspired to get involved with our growing community.

Want to be a GSoC Contributor?
Are you new to open source development or a student? Are you eager to gain experience on real-world software development projects that will be used by thousands or millions of people? It is never too early to start thinking about what kind of open source organization you’d like to learn more about and how the application process works!

Watch our new ‘Introduction to GSoC’ video to see a quick overview of the program. Read through our contributor guide for important tips from past participants on preparing your proposal, what to think about if you wish to apply for the program, and everything you wanted to know about the program. We also hope you’re inspired by checking out the nearly 200 organizations that participated in 2022 and the 1,000+ projects that have been completed so far!

We encourage you to explore our website for other resources and continue to check for more information about the 2023 program.

You are welcome and encouraged to share information about the 2023 GSoC program with your friends, family, colleagues, and anyone you think may be interested in joining our community. We are excited to welcome many more contributors and mentoring organizations in the new year!

By Stephanie Taylor, Program Manager, and Perry Burnham, Associate Program Manager for the Google Open Source Programs Office

Google funds open source silicon manufacturing shuttles for GlobalFoundries PDK

Monday, October 31, 2022

In August, we released the Process Design Kit (PDK) for the GlobalFoundries 180nm MCU technology platform under the Apache 2.0 license. This open source PDK, resulting from our ongoing pathfinding partnership with GlobalFoundries, offers open source silicon designers new capabilities for high volume production, affordability, and more voltage options by including the following:
  • Digital standard cell libraries (7-track and 9-track)
  • Low (3.3V), medium (5V, 6V), and high (10V) voltage devices
  • SRAM macros (64x8, 128x8, 256x8, 512x8)
  • I/O and primitive (resistors, capacitors, transistors, eFuses) cell libraries
Following the announcement about GlobalFoundries joining Google’s open source silicon initiative, we are now sponsoring a series of no-cost OpenMPW shuttle runs for the GF180MCU PDK in the coming months.


Those shuttles will leverage the existing OpenMPW shuttle infrastructure based on the OpenLane automated design flow with the same Caravel harness and the Efabless platform for project submissions.

Each shuttle run will select 40 projects based on the following criteria:
  • Design sources must be released publicly under an open source license.
  • Projects must be reproducible from design sources and the GF180MCU PDK.
  • Projects must be submitted within the shuttle deadline (projects submitted earlier get additional chances to be selected).
  • Projects must pass the pre-manufacturing checks.
The first shuttle, GF-MPW-0, will be a test shuttle, with submissions open from Oct. 31, 2022 to Dec. 5, 2022. It will be used to validate, together with the community, the integration of the new PDK with the open source silicon toolchain and the Caravel harness; further shuttles will have a longer project application window and improved testing.

We encourage you to re-submit your previous OpenMPW shuttle projects to this shuttle as a way to validate their portability across open source PDKs:
  • Go to developers.google.com/silicon.
  • Navigate to the "Create a new Project" link.
  • Follow the instructions to integrate your project into the latest version of the caravel_user_project template.
  • Make sure you select the right variant of the GF180MCU PDK (5LM_1TM_9K) by exporting the environment variable PDK=gf180mcuC in your workspace before running any commands.
  • Submit your project for manufacturing on the Efabless platform.
We’re excited to see designers and researchers leverage this program, both by porting existing projects previously submitted to OpenMPW shuttles and by designing new projects that target the GF180MCU PDK, as we find paths together to research and advance the silicon ecosystem.

By Ethan Mahintorabi, Software Engineer and Johan Euphrosine, Developer Programs Engineer – Hardware Toolchains Team, and Aaron Cunningham, Technical Program Manager – Google Open Source Programs Office

Sigstore project announces general availability and v1.0 releases

Tuesday, October 25, 2022


Today, the Sigstore community announced the general availability of their free, community-operated certificate authority and transparency log services. In addition, two of Sigstore’s foundational projects, Fulcio and Rekor, published v1.0 releases denoting a commitment to API stability. Google is proud to celebrate these open source community milestones. 🎉

Sigstore is a standard for signing, verifying, and protecting open source software. With increased industry attention being given to software supply chain security, including the recent Executive Order on Cybersecurity, the ability to know and trust where software comes from has never been more important. Sigstore simplifies and automates the complex parts of digitally signing software—making this more accessible and trustworthy than ever before.

Beginning in 2020 as an open source collaboration between Red Hat and Google, the Sigstore project has grown into a vendor-neutral, community-operated and community-designed project that is part of the Open Source Security Foundation (OpenSSF). The ecosystem has also continued to grow, spanning multiple package managers and language ecosystems; if you download a new release of an open source project like Python or Kubernetes, you’ll see that it has been signed with Sigstore.

Google is an active, contributing member of the Sigstore community. In addition to upstream code contributions, Google has contributed in several other ways.
We are part of a larger open source community helping develop and run Sigstore, and welcome new adopters and contributors! To learn more about getting started using Sigstore, the project documentation helps guide you through the process of signing and verifying your software. To get started contributing, several individual repositories within the Sigstore GitHub organization use “good first issue” labels to give a hint of approachable tasks. The project maintains a Slack community (use the invite to join) and regularly holds community meetings.
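
To give a flavor of what signing looks like in practice, here is a minimal sketch using the Sigstore Python client’s command line (installed with pip install sigstore). The exact commands, flags, and identities shown are illustrative assumptions; the project documentation remains the authoritative guide.

```python
# Minimal sketch: sign and verify an artifact by shelling out to the Sigstore
# Python client. Command names, options, and the identity values below are
# assumptions for illustration; follow the Sigstore docs for real workflows.
import subprocess

artifact = "release.tar.gz"

# Sign: opens an OIDC flow and writes a Sigstore bundle alongside the artifact.
subprocess.run(["sigstore", "sign", artifact], check=True)

# Verify: check the signature against the signing identity (assumed flags).
subprocess.run(
    [
        "sigstore", "verify", "identity", artifact,
        "--cert-identity", "release-bot@example.com",
        "--cert-oidc-issuer", "https://accounts.google.com",
    ],
    check=True,
)
```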

By Dave Lester – Google Open Source Programs Office, and Bob Callaway – Google Open Source Security Team

Kubeflow applies to become a CNCF incubating project

Monday, October 24, 2022

Google has pioneered AI and ML and has a history of innovative technology donations to the open source community (e.g. TensorFlow and Jax). Google is also the initial developer and largest contributor to Kubernetes, and brings with it a wealth of experience to the project and its community. Building an ML Platform on our state-of-the-art Google Kubernetes Engine (GKE), we have learned best practices from our users, and in 2017, we used that experience to create and open source the Kubeflow project.

In May 2020, with the v1.0 release, Kubeflow reached maturity across a core set of its stable applications. During that year, we also graduated Kubeflow Serving as an independent project, KServe, which is now incubating in Linux Foundation AI & Data.

Today, Kubeflow has developed into an end-to-end, extendable ML platform, with multiple distinct components to address specific stages of the ML lifecycle: model development (Kubeflow Notebooks), model training (Kubeflow Pipelines and Kubeflow Training Operator), model serving (KServe), and automated machine learning (Katib).

The Kubeflow project now has close to 200 contributors from over 30 organizations, and the Kubeflow community has hosted several summits and contributor meetups across the world. The broader Kubeflow ecosystem includes a number of distributions across multiple cloud service providers and on-prem environments. Kubeflow’s powerful development experience helps data scientists build, train, and deploy their ML models, enabling enterprise ML operation teams to deploy and scale advanced workflows on a variety of infrastructures.

Google’s application for Kubeflow to become a CNCF incubating project is the next big milestone for the Kubeflow community, and we’re thrilled to see how developers will continue to build and innovate in ML using this project.

What's next? The pull request we’ve opened today to join the CNCF as an incubating project is only the first step. Google and the Kubeflow community will work with the CNCF and its Technical Oversight Committee (TOC) to meet the incubation stage requirements. While the due diligence and eventual TOC decision can take a few months, the Kubeflow project will continue developing and releasing throughout this process.

If Kubeflow is accepted into the CNCF, the project’s assets will be transferred to the CNCF, including the source code, trademark, website, and other collaboration and social media accounts. At Google, we believe that using open source comes with a responsibility to contribute, maintain, and improve those projects. In that spirit, we will continue supporting the Kubeflow project and work with the community towards the next level of innovation.

Thanks to everyone who has contributed to Kubeflow over the years! We are excited for what lies ahead for the Kubeflow community.

By Thea Lamkin, Senior Program Manager and Mark Chmarny, Senior Technical Program Manager – Google Open Source

ko applies to become a CNCF sandbox project

Tuesday, October 18, 2022

Back in 2018, the team at Google working on Knative needed a faster way to iterate on Kubernetes controllers. They created a new tool dedicated to deploying Go applications to Kubernetes without having to worry about container images. That tool has proven to be indispensable to the Knative community, so in March 2019, Google released it as a stand-alone open source project named ko.

Since then, ko has gained in popularity as a simple, fast, and secure container image builder for Go applications. More recently, the ko community has added, among many other features, multi-platform support and automatic SBOM generation. Today, like the original team at Google, many open source and enterprise development teams depend on ko to improve their developer productivity. The ko project is also increasingly used as a solution for a number of build use cases and is being integrated into a variety of third-party CI/CD tools.

At Google, we believe that using open source comes with a responsibility to contribute, sustain, and improve the projects that make our ecosystem better. To support the next phase of community-driven innovation, enable net-new adoption patterns, and to further raise the bar in the container tool industry, we are excited to announce today that we have submitted ko as a sandbox project to the Cloud Native Computing Foundation (CNCF).

This step begins the process of transferring the ko trademark, IP, and code to CNCF. We are excited to see how the broader open source community will continue innovating with ko.

By Mark Chmarny – Google Open Source Programs Office

Announcing KataOS and Sparrow

Friday, October 14, 2022

As we find ourselves increasingly surrounded by smart devices that collect and process information from their environment, it's more important now than ever that we have a simple solution to build verifiably secure systems for embedded hardware. If the devices around us can't be mathematically proven to keep data secure, then the personally-identifiable data they collect—such as images of people and recordings of their voices—could be accessible to malicious software.

Unfortunately, system security is often treated as a software feature that can be added to existing systems or solved with an extra piece of ASIC hardware, but this generally is not good enough. Our team in Google Research has set out to solve this problem by building a provably secure platform that's optimized for embedded devices that run ML applications. This is an ongoing project with plenty left to do, but we're excited to share some early details and invite others to collaborate on the platform so we can all build intelligent ambient systems that have security built in by default.

To begin collaborating with others, we've open sourced several components for our secure operating system, called KataOS, on GitHub, as well as partnered with Antmicro on their Renode simulator and related frameworks. As the foundation for this new operating system, we chose seL4 as the microkernel because it puts security front and center; it is mathematically proven secure, with guaranteed confidentiality, integrity, and availability. Through the seL4 CAmkES framework, we're also able to provide statically-defined and analyzable system components. KataOS provides a verifiably-secure platform that protects the user's privacy because it is logically impossible for applications to breach the kernel's hardware security protections and the system components are verifiably secure. KataOS is also implemented almost entirely in Rust, which provides a strong starting point for software security, since it eliminates entire classes of bugs, such as off-by-one errors and buffer overflows.

The current GitHub release includes most of the KataOS core pieces, including the frameworks we use for Rust (such as the sel4-sys crate, which provides seL4 syscall APIs), an alternate rootserver written in Rust (needed for dynamic system-wide memory management), and the kernel modifications to seL4 that can reclaim the memory used by the rootserver. And we've collaborated with Antmicro to enable GDB debugging and simulation for our target hardware with Renode.

Internally, KataOS is also able to dynamically load and run third-party applications built outside of the CAmkES framework. At the moment, the code on GitHub does not include the required components to run these applications, but we hope to publish these features in the near future.

To prove out a secure ambient system in its entirety, we're also building a reference implementation for KataOS called Sparrow, which combines KataOS with a secured hardware platform. So in addition to the logically secure operating system kernel, Sparrow includes a logically secure root of trust built with OpenTitan on a RISC-V architecture. However, for our initial release, we're targeting a more standard 64-bit ARM platform running in simulation with QEMU.

Our goal is to open source all of Sparrow, including all hardware and software designs. For now, we're just getting started with an early release of KataOS on GitHub. So this is just the beginning, and we hope you will join us in building a future where intelligent ambient ML systems are always trustworthy.

By Sam, Scott, and June – AmbiML Developers

Flutter SLSA Progress & Identity and Access Management through Infrastructure As Code

Tuesday, October 4, 2022

We are excited to announce several new achievements in Dart and Flutter's mission to harden security. We have achieved Supply Chain Levels for Software Artifacts (SLSA) Level 2 security on Flutter’s Cocoon application, reduced our Identity and Access Management permissions to the minimum required access, and implemented Infrastructure-as-Code to manage permissions for some of our applications. These achievements follow our recent success enabling Allstar and Security Scorecards.

Highlights

Achieving Flutter’s Cocoon SLSA Level 2: The Cocoon application provides continuous integration orchestration for Flutter infrastructure. Cocoon also helps integrate several CI services with GitHub and provides tools to make GitHub development easier. Achieving SLSA Level 2 for Cocoon means we have addressed all the security concerns of Levels 1 and 2 across the application. Under SLSA Level 2, Cocoon has “extra resistance to specific threats” to its supply chain. The Google Open Source Security team has audited and validated our achievement of SLSA Level 2 for Cocoon.


Implementing Identity & Access Management (IAM) via Infrastructure-as-Code: We have implemented additional security hardening features by onboarding docs-flutter-dev, master-docs-flutter-dev, and flutter-dashboard to use Identity and Access Management through an Infrastructure-as-Code system. These projects host applications, provide public documentation for Flutter, and contain a dashboard website for Flutter build status.

Using our Infrastructure-as-Code approach, security permission changes require code changes, ensuring approval is granted before the change is made. This also means that changes to security permissions are audited through source control and contain associated reasoning for the change. Existing IAM roles for these applications have been pared so that the applications follow the Principle of Least Privilege.

Advantages

  • Achieving SLSA Level 2 for Cocoon means we have addressed all the security concerns of levels 1 and 2 across the application. Under SLSA Level 2, Cocoon has “extra resistance to specific threats” to its supply chain.
  • Provenance is now generated for both the flutter-dashboard and auto-submit artifacts through Cocoon’s automated build process. Provenance on these artifacts shows proof of their code source and tamper-proof build evidence. This work helps harden the security of the multiple tools used during the Cocoon build process: Google Cloud Platform, Cloud Build, App Engine, and Artifact Registry.
  • Overall we addressed 83% of all SLSA requirements across all levels for the Cocoon application. We have identified the work across the application which will need to be completed for each level and category of SLSA compliance. Because of this, we know we are well positioned to continue future work toward SLSA Level 4.

Learnings and Best Practices

  1. Relatively small changes to the Cocoon application’s build process significantly increased the security of its supply chain. Google Cloud Build made this simple, since provenance metadata is created automatically during the Cloud Build process.
  2. Regulating IAM permissions through code changes adds many additional benefits and can make granting first time access simpler.
  3. Upgrading the SLSA level of an application sometimes requires varying efforts depending on the different factors of the application build process. Working towards SLSA level 4 will likely necessitate different configuration and code changes than required for SLSA level 2.

Coming Soon

This is just the beginning of the Flutter and Dart journey toward greater SLSA accomplishments. We plan to apply our learnings to more applications, begin work toward SLSA Level 2 and beyond for more complex repositories like flutter/flutter, and achieve an even higher level of SLSA compliance for the Cocoon application.

References

Supply Chain Levels for Software Artifacts (SLSA) is a security framework which outlines levels of supply chain security for an application as a checklist.

By Jesse Seales, Software Engineer – Dart and Flutter Security Working Group

Announcing the second group of Open Source Peer Bonus winners in 2022

Monday, October 3, 2022

We’re excited to announce our second group of Open Source Peer Bonus winners in 2022! 
The Google Open Source Peer Bonus program is designed to recognize external open source contributors nominated by Googlers for their open source contributions. This cycle, we are pleased to announce a total of 141 winners across 110+ projects, residing in 36 countries.

All open source contributors external to Google are eligible to be nominated. Whether you’re a software engineer, technical writer, community advocate, mentor, user experience designer, security expert, or educator, you can be nominated for a peer bonus.

Our awards often come as a surprise to some while also providing motivation to others to responsibly contribute to open source. Learn more about what the Google Open Source Peer Bonus program means to our winners from this cycle:

“It was a very nice surprise to receive the Open Source Peer Bonus notification. I hope it can help lift contributors off, not only for their code contributions but for community contributions too.” – Oriol Abril Pla, ArviZ, PyMC

“The Kubernetes and CNCF ecosystem is massive. So, there are tons of opportunities to carve out your own niche in them. One of my key goals has been to make the project(s) more secure than how they were when I joined them. These awards are a welcome sprinkle of motivation to keep being a responsible open source contributor.” – Pushkar Joglekar, Kubernetes and CNCF

“I’m very pleased and proud to receive a Google Open Source Peer Bonus award. I was nominated for my contributions to The Good Docs Project where we are creating technical writing templates to help other projects create high-quality documentation. I’m passionate about the work we’re doing there, and have been hanging around the project since its inception in 2019. This is a friendly, inclusive community creating a safe space for folk to dip their toe into open source. We are global, and new folk are always welcome.” – Felicity Brand, The Good Docs Project

“I've been actively working on open source projects since my time at NIST with the FDS project starting in 2006. More recently with The Good Docs Project (TGDP) since 2020. It's been a very rewarding experience to contribute to TGDP, with such an amazing diversity of participants, perspectives and interests involved. To be given recognition through the OSPB program was a pleasant and unexpected surprise. While it's not at all what I am participating in the project for, it feels great to have someone else in the project bring my name up for this award. Thank you to TGDP and to Google for this honor.” – Bryan Klein, The Good Docs Project

“The Open Source Peer Bonus program is more than an appreciation for our contribution to the open source world. It encourages people to share their talent. To be the hero of the ones who are benefiting from your work, put your codes in the open source world.” – Nan YE, Orange Innovation China

“The TFX team and community is by far the most responsive, helpful and knowledgeable open-source project that I have worked on. It's a great feeling to be a part of the democratizing of productionised ML workflows, and being officially recognised on your efforts and contributions is the cherry on top.” – Jens Wiren, Analytical Impact Solutions

“The HTTP Archive team is welcoming to contributors and happily showed me the ropes until I got going. The project is invaluable to the web community, and working on the Web Almanac allowed me to work with domain experts on several topics, including Performance, JavaScript, and Third Parties.” – Kevin Farrugia, HTTP Archive

“Participating in these projects has been a great learning experience and has given me the opportunity to connect with a lot of great people. I am humble and grateful for the recognition and appreciation this program gives to the contributions made to these projects.” – Ole Markus With, kOps/etcdadm

“Google has been very generous in recognising VertFlow, which is a tool still in its infancy after the idea popped into my head a few months ago in conversation with a Google Cloud Customer Engineer. I hope this will encourage users to adopt VertFlow to reduce their carbon footprint when using GCP.” – Jack Lockyer-Stevens, VertFlow

Below is the list of current winners who gave us permission to thank them publicly:

Project | Winner
abap2xlsx | Gregor Wolf
ABC A System for Sequential Synthesis and Verification | Alan Mishchenko
Accelerated HW Synthesis | Zihao Li
Agones | Daniel Oliveira
Android, Pithus, Exodus Privacy, PiRogue, Frida | Esther Onfroy
AndroidX Jetpack | Michał Zieliński
Angular | Dario Piotrowicz
Angular Language Service | Ivan Wan
Apache Airflow | Elad Kalif
Apache Beam | Alex Van Boxel
Apache Beam | Austin Bennett
Apache Beam | Moritz Mack
Apache Hop | Matt Casters
aroman | Avi Romanoff
ArviZ and PyMC | Oriol Abril Pla
Babel | Nicolò Ribaudo
Bazel | Fabian Meumertzheim
Beam | Alex Kosolapov
Blockly | Johnny Oshika
BRLTTY | Dave Mielke
Bun | Jarred Sumner
cargo-make | Sagie Gur-Ari
Chrome DevTools Frontend | Percy Ley
Chromium | Juba Borgohain
Chromium | David Sanders
Chromium | Amos Lim
ClangBuiltLinux | Nathan Chancellor
cloud-data-quality | Amandeep Singh
CNCF | Ragashree M C
Contibuting.today Open Source meetup | Floor Drees
CoreDNS and Kubernetes | Chris O'Haver
cpu_features | Mykola Hohsadze
DartPad | Tim Maffett
dbus | Simon McVittie
Dill | Mike McKerns
distroless | Ole-Martin Bratteng
Don't kill my app and merge to Google Android CTS | Petr Nálevka
ecma262 | Richard Gibson
Firebase Admin .NET SDK | Levi Muriuki
Firebase Admin Node.js SDK | Igor Savin
Firebase Admin Node.js SDK | Aras Abbasi
Firebase Apple SDK | Mike Hardy
Firebase Apple SDK | Jake Krog
Firebase Apple SDK | Alex Zchut
Firebase Arduino Client Library for ESP8266 and ESP32 | Suwatchai Klakerdpol
Firebase Crashlytics | Sergio Campamá
firebase-ios-sdk | Fumito Ito
firebase-ios-sdk | Tito Ciuro
firebase-js-sdk | Andi Pätzold
fish-shell | Peter Ammon
Flashrom | Thomas Heijligen
Flashrom | Felix Singer
FreeCAD | Lei Zheng
Fuchsia | Alexander Popov
Git | Jorawar Singh
git and openssh | Fabian Stelzer
GNU Guix | Ludovic Courtès
GNU Mes | Janneke Nieuwenhuizen
go-clean-arch | Iman Tumorang
golang/protobuf | Cassondra Foesch
google-cloud-pricing-cost-calculator | Nils Knieling
gopls | Ruslan Nigmatullin
GrapheneOS | Daniel Micay
GSYVideoPlayer | Asher Guo
Hello World gRPC-Gateway | Rajiv Singh
Lichess | Thibault Duplessis
JRuby | Charles Nutter
Keras | Sayak Paul
KernelWireguard | Jason Donenfeld
Knative | Mahamed Ali
Knative | Gabriel Freites
Kubernetes, CNCF | Pushkar Joglekar
Kubernetes (kOps, etcdadm etc) | Ciprian Hacman
Kubernetes (particularly kOps / etcdadm) | Ole Markus With
Kubernetes (particularly kOps / etcdadm) | Peter Rifel
Kubernetes Gateway API | Keith Mattix
KUnit/Linux kernel | Shuah Khan
Leaflet | Volodymyr Agafonkin
libyuv | Yuan Tong
lnav | Tim Stack
Log4J | Ralph Goers
Magit | Jonas Bernoulli
medium_stats | Oliver Tosky
Mockk | Oleksii Pylypenko
moja global | Harsh Bardhan Mishra
mvt (Mobile Verification Toolkit) | Claudio Guarnieri
OSS educator and collaborator | José Luis Chiquete
notcurses | nick black
Nudge | Erik Gomez
OpenSSF Allstar | Yori Yano
Oppia | Om Khandade
Oppia | Chantel Chan
OR-Tools | Xiang Chen
pcileech (and LeechCore subproject) | Ulf Frisk
Project Jupyter | Min Ragan-Kelley
Protocol Buffers | Yannic Bonenberger
pyinfra | Nick Mills-Barrett
PyPI | Jack Lockyer-Stevens
PyTorch / XLA | Ronghang Hu
QGIS | Nyall Dawson
react-native-firebase | Minsik Kim
Rich, Textualize | Will McGugan
Rust for Linux | Björn Roy Baron
sableangle | Miki Huang
Samba | David Mulder
Scorecards | Varun Sharma
Scorecards | Naveen Srinivasan
SimpleWebAuthn | Matthew Miller
SLSA | Michael Lieberman
Spock | Leonard Brünings
SQLAlchemy | Michael Bayer
stage0 | Jeremiah Orians
styler | Lorenz Walthert
Surelog | Alain Dargelas
Svelte | Rich Harris
TC39 | Jordan Harband
Tekton | Parth Patel
Tekton | Andrew Bayer
TensorFlow | Stefano Fabri
TensorFlow | Jason Zaman
TensorFlow Lite Examples - Android | Nan Ye
TFX | Ukjae Jeong
TFX | Jens Wiren
TFX-Addons | Gerard Casas Saez
TFX-Addons | Hannes Hapke
TFX-BSL | Martin Bomio
tfx-helper | Tomasz Mackowiak
The Good Docs Project | Aaron Peters
The Good Docs Project | Felicity Brand
The Good Docs Project | Ian Nguyen
The Good Docs Project | Bryan Klein
The Good Docs Project | Serena Jolley
Tow-Boot | Samuel Dionne-Riel
Trivy | Teppei Fukuda
TUF, CNCF | Marina Moore
V8 | Ao Wang
ViSQOL | Feargus O'Gorman
W3C WebGPU standard | Mehmet Oguz Derin
wdi5 | Volker Buzek
Web Almanac | Kevin Farrugia
WebRTC | Byoungchan Lee

Congratulations to our winners above and thank you for your open source contributions. We look forward to your continued support and efforts in the open source communities. Additionally, thank you to all of the Googlers who submitted nominations and our review committee members for reviewing nominations.

By Joe Sylvanovich – Google Open Source Programs Office