
Dart and Flutter enable Allstar and Security Scorecards

Tuesday, June 21, 2022

We are thrilled to announce that Dart and Flutter have enabled Allstar and Security Scorecards on their open source repositories. This achievement marks the first milestone in our journey towards SLSA compliance to secure builds and releases from supply chain attacks.

Allstar is a GitHub app that provides automated continuous enforcement of security checks such as the OpenSSF Security Scorecards. With Allstar, owners can check for security policy adherence, set desired enforcement actions, and continuously implement those enforcement actions when triggered by a setting or file change in the org or repo.

Security Scorecards is an automated tool that assesses several key heuristics ("checks") associated with software security and assigns each check a score of 0-10. These scores can be used to evaluate the security posture of the project and help assess the risks introduced by dependencies.

Scorecards have been enabled on the following open source repositories, prioritized by their criticality score.

Flutter
  • github.com/flutter/flutter
  • github.com/flutter/engine
  • github.com/flutter/plugins
  • github.com/flutter/packages
  • github.com/flutter/samples
  • github.com/flutter/website
  • github.com/flutter/flutter-intellij
  • github.com/flutter/gallery
  • github.com/flutter/codelabs

Dart
  • github.com/dart-lang/linter
  • github.com/dart-lang/sdk
  • github.com/dart-lang/dartdoc
  • github.com/dart-lang/site-www
  • github.com/dart-lang/test

With these security scanning tools, the Dart and Flutter teams have found and resolved more than 200 high- and medium-severity security findings. The issues fall into the following categories:
  • Pinned Dependencies: The project should pin its dependencies. A "pinned dependency" is a dependency that is explicitly set to a specific hash instead of allowing a mutable version or range of versions. This reduces several security risks related to dependencies.
  • Token Permissions: The project's automated workflow tokens should be set to read-only by default. This follows the principle of least privilege.
  • Branch Protection: The GitHub project's default and release branches should be protected with GitHub's branch protection settings. Branch protection allows maintainers to define rules that enforce certain workflows for branches, such as requiring review or passing certain status checks.
  • Code Review: The project should enforce a code review before pull requests (merge requests) are merged.
  • Dependency update tool: A dependency update tool should be used by the project to identify and update outdated and insecure dependencies.
  • Binary-Artifacts: The project should not have generated executable (binary) artifacts in the source repository. Binary artifacts embedded in the project cannot be reviewed, so obsolete or maliciously subverted executables could slip into the source code.
Additionally, the Dart and Flutter teams have an aligned vulnerability management process. Details of these processes can be found on our respective developer sites at https://dart.dev/security and https://docs.flutter.dev/security. The internal process the teams use to handle vulnerabilities can be found on the Flutter GitHub wiki.

Learnings and Best Practices

  1. Allstar and Scorecards allowed Dart and Flutter to quickly identify opportunities to improve security across hundreds of repositories, triggering the removal of binaries, standardizing branch protection, and enforcing code reviews.
  2. Standardizing third-party dependency management and running vulnerability scanning were identified as the next milestones in the Dart and Flutter journey to improve their overall security posture.
  3. It is safer to not embed binary artifacts in your code. However, there are cases when this is unavoidable.
  4. It is important to track your dependencies and to keep them up to date using tools like Dependabot.

By Khyati Mehta, Technical Program Manager – Dart-Flutter

DC-SCM compatible FPGA based open source BMC hardware platform

Friday, June 3, 2022

Open source software is omnipresent in the server and cloud world, and is giving rise to impressive successes in the SaaS space where useful products can be rapidly created from open source components: operating systems, container runtimes, frameworks for device management, monitoring and data pipelining, workload execution, etc.

Mirroring this trend in software, cloud providers and users are increasingly looking at building their servers using open source hardware, collaborating in initiatives such as the Open Compute Project or OpenPOWER.

Together with Google, Antmicro has developed two open source, FPGA-based Baseboard Management Controller (BMC) hardware platforms compliant with OCP’s DC-SCM 1.0 specification to help increase the security, configurability, and flexibility of server management and monitoring infrastructure. These have since been adopted by OpenPOWER’s LibreBMC workgroup as the base hardware platform.

The DC-SCM spec

The Datacenter-ready Secure Control Module (DC-SCM) specification aims to move common server management, security and control features from a typical motherboard into a module designed in a normalized form factor, which can be used across various datacenter platforms.

Currently rolling out in the first DC-SCM-compliant servers, the spec will help cloud providers share costs and risks and increase reuse of the critical BMC component. Coupling it with a fully open source implementation based on popular, inexpensive FPGA platforms will not only allow for more configurability and tighter integration between hardware and software, but also tap into the momentum behind the broader open source hardware community via groups like CHIPS Alliance, OpenPOWER, and RISC-V.

The hardware

Antmicro has developed two implementations of the DC-SCM-compatible BMC. Both designs meet the Open Compute Project's DC-SCM 1.0 specification in the 90 x 120 mm Horizontal Form Factor.

The BMC's role is central to the server's faultless operation: it is responsible for monitoring the system while preventing and mitigating failures, essentially acting as an external watchdog.

To provide this functionality in line with the spec's requirements, the module offers a feature-packed Secure Control Interface for communication with the host platform, including:
  • PCI Express
  • USB
  • QSPI
  • SGPIO
  • NCSI
  • multiple I2C, I3C and UART channels
The unique property of Antmicro's implementation of the standard is merging DC-SCM's central Baseboard Management Controller and the usual programmable SCM CPLD block into one powerful FPGA. This solution was chosen to enable greater design flexibility, allowing remote updates of DC-SCM peripherals and, most notably, placing OpenPOWER or RISC-V IP cores as the central processing units of the module.

One of our designs is based on the Xilinx Artix-7, whereas the other features a Lattice ECP5; both are low-cost, open-source-friendly FPGAs supported by the open source F4PGA toolchain project.

The FPGA is complemented by 512MB of DDR3 memory and 16GB of eMMC flash, as well as a dedicated Gigabit Ethernet interface. To ensure the security of the Datacenter-ready Secure Control Module, external cryptographic modules, a Root of Trust (RoT) and a Trusted Platform Module (TPM), can be connected. This will allow future integration with open hardware Root of Trust projects such as OpenTitan and the implementation of various boot flow and authentication approaches.

Use in LibreBMC / OpenPOWER

Open source, configurable hardware platforms based on FPGA, open tooling and standards can make BMC more flexible for tomorrow's challenges. Antmicro’s DC-SCM boards have been adopted by the LibreBMC Workgroup, operating under the umbrella of the OpenPOWER Foundation, in a push to build a complete, fully transparent BMC solution. The workgroup, with participation from IBM, Google and Antmicro, among others, will be involved with creating FPGA gateware and software needed to make the hardware fully operational in real-world server solutions.

Variants involving both Linux (in its default open source BMC distribution, OpenBMC) and Zephyr RTOS, as well as both POWER and RISC-V cores, are planned, and thanks to the flexibility of FPGAs, all of those options will be just one gateware update away. Of course, both the gateware and software will be open source as well.

If you're looking to develop a secure and transparent DC-SCM spec-compatible BMC solution, reach out to contact@antmicro.com to see how you can collaborate with partners such as Antmicro, Google, and IBM around this open source FPGA-based hardware platform.

By Peter Katarzynski – Antmicro

Vectorized and performance-portable Quicksort

Thursday, June 2, 2022

Today we're sharing open source code that can sort arrays of numbers about ten times as fast as the C++ std::sort, and outperforms state of the art architecture-specific algorithms, while being portable across all modern CPU architectures. Below we discuss how we achieved this.

First, some background. There is a recent trend towards columnar databases that consecutively store all values from a particular column, as opposed to storing all fields of a record or "row" before those of the next record. This can be faster to filter or sort, which are key building blocks for SQL queries; thus we focus on this data layout.

Given that sorting has been heavily studied, how can we possibly find a 10x speedup? The answer lies in SIMD/vector instructions. These carry out operations on multiple independent elements in a single instruction—for example, operating on 16 float32 at once when using the AVX-512 instruction set, or four on Arm NEON:
[Image: the Summit supercomputer]
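To make the "16 float32 at once" idea concrete, here is a minimal portable-SIMD sketch using Highway (the library behind the sorter described below); the function name and the remainder handling are our own illustration, not code from the sorter:

    #include <cstddef>

    #include "hwy/highway.h"

    namespace hn = hwy::HWY_NAMESPACE;

    // Adds two arrays element-wise, one full SIMD vector per loop iteration.
    // A vector holds 16 float32 lanes on AVX-512 and four on Arm NEON, yet
    // the source code is identical for both.
    void AddArrays(const float* a, const float* b, float* out, size_t n) {
      const hn::ScalableTag<float> d;     // descriptor for the widest float vector
      const size_t lanes = hn::Lanes(d);  // 16 on AVX-512, 4 on NEON
      size_t i = 0;
      for (; i + lanes <= n; i += lanes) {
        const auto va = hn::LoadU(d, a + i);  // load `lanes` independent elements
        const auto vb = hn::LoadU(d, b + i);
        hn::StoreU(hn::Add(va, vb), d, out + i);
      }
      for (; i < n; ++i) out[i] = a[i] + b[i];  // scalar remainder
    }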

If you are already familiar with SIMD, you may have heard of it being used in supercomputers, linear algebra for machine learning applications, video processing, or image codecs such as JPEG XL. But if SIMD operations only involve independent elements, how can we use them for sorting, which involves re-arranging adjacent array elements?

Imagine we have some special way to sort, for instance, 256-element arrays. Then, the Quicksort algorithm for sorting a larger array consists of partitioning it into two sub-arrays: those less than a "pivot" value (ideally the median), and all others; then recursing until a sub-array is at most 256 elements large, and using our special method to sort those. Partitioning accounts for most of the CPU time, so if we can speed it up using SIMD, we have a fast sort.

Happily, modern instruction sets (Arm SVE, RISC-V V, x86 AVX-512) include a special instruction suitable for partitioning. Given a separate input of yes/no values (whether an element is less than the pivot), this "compress-store" instruction stores to consecutive memory only the elements whose corresponding input is "yes". We can then logically negate the yes/no values and apply the instruction again to write the elements to the other partition. This strategy has been used in an AVX-512-specific Quicksort. But what about other instruction sets such as AVX2 that don't have compress-store? Previous work has shown how to emulate this instruction using permute instructions.
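To sketch how such a partition step might look in portable code, here is a hedged example using Highway's compress-store op (Highway is introduced in the next paragraph); pivot selection, in-place operation, and vqsort's actual optimizations are omitted, and the function and buffer names are hypothetical:

    #include <cstddef>

    #include "hwy/highway.h"

    namespace hn = hwy::HWY_NAMESPACE;

    // Partitions in[0, n) around `pivot`: elements less than the pivot are
    // packed into out_lo, the rest into out_hi. Returns the low-side count.
    size_t PartitionByPivot(const float* in, size_t n, float pivot,
                            float* out_lo, float* out_hi) {
      const hn::ScalableTag<float> d;
      const size_t lanes = hn::Lanes(d);
      const auto vpivot = hn::Set(d, pivot);
      size_t lo = 0, hi = 0, i = 0;
      for (; i + lanes <= n; i += lanes) {
        const auto v = hn::LoadU(d, in + i);
        const auto is_lo = hn::Lt(v, vpivot);  // per-lane yes/no mask
        // Compress-store writes only the lanes whose mask is "yes" to
        // consecutive memory and returns how many lanes were written.
        lo += hn::CompressStore(v, is_lo, d, out_lo + lo);
        hi += hn::CompressStore(v, hn::Ge(v, vpivot), d, out_hi + hi);
      }
      for (; i < n; ++i) {  // scalar handling of the leftover tail
        if (in[i] < pivot) out_lo[lo++] = in[i]; else out_hi[hi++] = in[i];
      }
      return lo;
    }

On instruction sets with compress-store, the CompressStore calls map to that instruction; elsewhere, Highway falls back to the permute-based emulation mentioned above.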

We build on these techniques to achieve the first vectorized Quicksort that is portable to six instruction sets across three architectures, and in fact outperforms prior architecture-specific sorts. Our implementation uses Highway's portable SIMD functions, so we do not have to re-implement about 3,000 lines of C++ for each platform. Highway uses compress-store when available and otherwise the equivalent permute instructions. In contrast to the previous state of the art—which was also specific to 32-bit integers—we support a full range of 16-128 bit inputs.

Despite our single portable implementation, we reach record-setting speeds on AVX2, AVX-512 (Intel Skylake), and Arm NEON (Apple M1). For one million 32/64/128-bit numbers, our code running on Apple M1 can produce sorted output at rates of 499/471/466 MB/s. On a 3 GHz Skylake with AVX-512, the speeds are 1123/1119/1120 MB/s. Interestingly, AVX-512 is 1.4-1.6 times as fast as AVX2 - a worthwhile speedup for zero additional effort (Highway checks what instructions are available on the CPU and uses the best available ones). When running on AVX2, we measure 798 MB/s, whereas the prior state of the art optimized for AVX2 only manages 699 MB/s. By comparison, the standard library reaches 58/128/117 MB/s on the same CPU, so we have managed a 9-19x speedup depending on the type of numbers.

Previously, sorting has been considered expensive. We are interested to see what new applications and capabilities will be unlocked by being able to sort at 1 GB/s on a single CPU core. The Apache2-licensed source code is available on GitHub (feel free to open an issue if you have any questions or comments) and our paper offers a detailed explanation and evaluation of the implementation (including the special case for 256 elements).
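For readers who just want to call the sorter rather than study its internals, a minimal usage sketch might look like the following; it assumes the VQSort entry point declared in Highway's hwy/contrib/sort/vqsort.h, so please check the repository for the exact, current API:

    #include <cstdint>
    #include <random>
    #include <vector>

    #include "hwy/contrib/sort/vqsort.h"  // vectorized Quicksort ("vqsort")

    int main() {
      // One million 64-bit keys with random values.
      std::vector<int64_t> keys(1000 * 1000);
      std::mt19937_64 rng(42);
      for (auto& k : keys) k = static_cast<int64_t>(rng());

      // Sorts in place; the library picks the best SIMD instructions
      // available on the current CPU (e.g. AVX-512, AVX2, or NEON).
      hwy::VQSort(keys.data(), keys.size(), hwy::SortAscending());
      return 0;
    }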

By Jan Wassenberg – Brain Computer Architecture Research

Build Open Silicon with Google

Wednesday, June 1, 2022

[Image: silicon design render]
TL;DR: the Google Hardware Toolchains team is launching a new developer portal, developers.google.com/silicon, to help the developer community get started with its Open MPW shuttle program, which allows anyone to submit open source integrated circuit designs to be manufactured at no cost.

Since November 2020, when SkyWater Technology announced their partnership with Google to open source their Process Design Kit (PDK) for the SKY130 process node, the Hardware Toolchains team here at Google has been on a journey to make building open silicon accessible to all developers. Having access to an open source and manufacturable PDK changes the status quo in the custom silicon design industry and academia:
  • Designers are now free to start their projects liberated from NDAs and usage restrictions
  • Researchers are able to make their research reproducible by their fellow peers
  • Open source EDA tools can integrate deeply with the manufacturing process
Together we've built a community of more than 3,000 members, where hardware designers and software developers alike can contribute in their own way to advance the state of the art of open silicon design.

Between the opposing trends of Moore's law coming to an end and the exponential growth of connected devices (IoT), there is a real need to find more sustainable ways to scale computing. We need to go beyond cramming more transistors into smaller areas and move toward more efficient, dedicated hardware accelerators. Given the recent global chip supply chain struggles, with the lead time for popular ICs sometimes going over a year, we need to do this by leveraging more of the existing global foundry capacity that provides access to older, proven process node technologies.

Mature process nodes like SKY130 (a 130nm technology) offer a great way to prototype IoT applications that often need to balance cost and power with performance and leverage a mix of analog blocks and digital logic in their designs. They offer a faster turnaround than bleeding-edge process nodes for a fraction of the price, reducing the temporal and financial cost of making the right mistakes necessary to converge on the optimal design.

By combining open access to PDKs with recent advancements in open source ASIC toolchains like OpenROAD and OpenLane, and higher-level synthesis toolchains like XLS, we are getting one step closer to bringing software-like development methodology and fast iteration cycles to the silicon design world.

Free and open source licensing, community collaboration, and fast iteration transformed the way we all develop software. We believe we are at the edge of a similar revolution for custom accelerator development, where hardware designers compete by building on each other's work rather than reinventing the wheel.

Towards this goal, we've been sponsoring a series of Open MPW shuttles on the Efabless platform, allowing around 250 open source projects to manufacture their own silicon. 

[Images: zoomed view of an MPW silicon wafer; MPW chip dies]

With the last MPW-5 shuttle, which closed in March this year, we've seen a record level of engagement, with 75 open silicon projects from 19 different countries submitted for inclusion.

Each project gets a fixed 2.92mm x 3.52mm user area and 38 I/O pins in a predefined harness in which to harden its design. It's also provided with the necessary test infrastructure to validate chip specifications and behavior before being submitted for tapeout.

We've seen a wide variety of design submissions to previous editions of the shuttle.
[Image: floor plan for an MPW-5 submission]

Our partner Efabless announced that the next MPW-6 shuttle will accept open source project submissions until Monday, June 8, 2022. We can't wait to see the variety of projects the open silicon community creates, building on top of the corpus of open source designs that grows steadily with each Open MPW shuttle.

To help you get on board with future shuttles, we created a new developer portal that provides pointers for getting started with the various tools of the open silicon ecosystem, so make sure to check out the portal and start your open silicon journey!

[Image: website preview]



By Johan Euphrosine – Google Hardware Toolchains


GSoC 2022 accepted Contributors announced!

Friday, May 20, 2022

May is here and we’re pleased to announce the Google Summer of Code (GSoC) Contributors for 2022. Our 196 mentoring organizations have spent the last few weeks making the difficult decisions on which applicants they will be mentoring this year as GSoC Contributors.

Some notable results from this year’s application period:
  • Over 4,000 applicants from 96 countries
  • 5,155 proposals submitted
  • 1,209 GSoC contributors accepted from 62 countries
  • 1,882 mentors and organization administrators
For the next few weeks our GSoC 2022 Contributors will be actively engaging with their new open source community and learning the ins and outs of how their new community works. Mentors will help guide them through the documentation and processes the community uses, as well as help the GSoC Contributors plan their milestones and projects for the summer. This Community Bonding period helps familiarize the GSoC Contributors with the languages and tools they will need to successfully complete their projects. Coding begins June 13th and for most folks will wrap up September 5th; however, this year GSoC Contributors can request a longer coding period, wrapping up their projects by mid-November.

Thank you to all the applicants who reached out to our mentoring organizations to learn more about the work they do and for the time they spent crafting their project proposals. We hope you all learned more about open source and maybe even found a community you want to contribute to outside of GSoC. Staying connected with the community or reaching out to other organizations is a great way to set the stage for future opportunities. Open source communities are always looking for new, excited contributors to bring fresh perspectives and ideas to the table. We hope you connect with an open source community or apply to a future GSoC.

There are many changes to this 18th year of GSoC and we are excited to see how our GSoC Contributors and mentoring organizations take advantage of these adjustments. A big thank you to all our mentors and organization administrators who make this program so special.

GSoC Contributors—have fun this summer and keep learning! Your mentors and community members have dozens, and in some cases hundreds, of years of experience; let them share their knowledge with you and help you become awesome open source contributors!

By Stephanie Taylor, Google Open Source