opensource.google.com

Showing posts with label community. Show all posts

BazelCon 2024: A celebration of community and the launch of Bazel 8

Friday, December 13, 2024


The Bazel community celebrated a landmark year at BazelCon 2024. With a record-breaking 330+ attendees, 125+ talk proposal submissions, and a renewed focus on community-driven development, BazelCon marked a significant step forward for the build system and its users.


BazelCon 2024: Key highlights

A cross section of the audience facing the stage at BazelCon 2024

The 8th annual build conference was held at the Computer History Museum in Mountain View, CA, on October 14 - 15, 2024. This was the first BazelCon not solely organized by Google; instead, it was organized by The Linux Foundation together with sponsors Google, BuildBuddy, EngFlow, NativeLink, AspectBuild, Gradle, Modus Create, and VirtusLab. The conference welcomed build enthusiasts from around the world to explore the latest advancements in build technologies, share learnings, and connect with each other.

The conference kicked off with an opening keynote delivered by Mícheál Ó Foghlú and Tobias Werth (Google), Alex Eagle (Aspect Build Systems), Helen Altshuler (EngFlow), and Chuck Grindel (Reveal Technology). The keynote highlighted the vital role of community contributions and charted a course for a future where Bazel thrives through shared stewardship.


Following the keynote, John Field and Tobias Werth (Engineering Managers at Google) delivered a state-of-the-union address, celebrating the year's top contributors and highlighting key achievements within the Bazel ecosystem.


Over the course of the conference, members of the Bazel community showcased their expertise and shared key insights through a series of live presentations. Some highlights include:

  • Spotify's compelling Bazel adoption journey
  • EngFlow's insightful post-mortems on remote execution
  • Explorations of cutting-edge features like BuildBuddy's "Remote Bazel"

Take a look at our playlist of BazelCon 2024 Talks at your convenience.

In addition to main stage talks, BazelCon provided ample opportunities for attendees to connect and collaborate. Birds of a Feather sessions fostered lively discussions on topics ranging from generating SBOMs using Bazel, to IDE integrations, to external dependency management, allowing community members to provide direct feedback and shape the future of Bazel. Make sure to check out the raw BazelCon '24 Birds of a Feather notes from these sessions.

BazelCon 2024 also served as the launchpad for Bazel 8, a long-term support (LTS) release that brings significant enhancements to modularity, performance, and dependency management.

Bazel 8 logo

What’s new in Bazel 8?

  • Starlark-powered modularity: Many core language rules traditionally shipped with Bazel are now Starlarkified and split into their own modules, including all Android, Java, Protocol Buffers, Python, and shell rules.
  • WORKSPACE deprecation: The legacy WORKSPACE mechanism for external dependency management is disabled by default in Bazel 8, and is slated for removal in Bazel 9. Bzlmod, the default since Bazel 7, is the recommended solution going forward.
  • Symbolic macros: Bazel 8 introduces a new way to write macros for build files, addressing many pitfalls and footguns of legacy macros. Symbolic macros offer better visibility encapsulation, type safety, and are amenable to lazy evaluation, coming in a future Bazel release.
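With WORKSPACE on the way out, external dependencies are declared in a MODULE.bazel file at the workspace root instead. A minimal sketch of what that looks like (the module name, dependency versions, and repo name below are illustrative, not prescriptive):

```starlark
# MODULE.bazel -- Bzlmod replaces the legacy WORKSPACE mechanism.
module(
    name = "my_project",  # hypothetical module name
    version = "1.0.0",
)

# Declare direct dependencies; transitive dependencies are resolved
# automatically from the registry, unlike with WORKSPACE.
bazel_dep(name = "rules_python", version = "0.31.0")
bazel_dep(name = "protobuf", version = "27.0", repo_name = "com_google_protobuf")
```

Compared to WORKSPACE, there is no need to manually stitch together transitive `http_archive` calls; version resolution is handled for you.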

Read the full release notes for Bazel 8.


Stay connected with the Bazel community

We extend our gratitude to everyone who contributed to the success of BazelCon 2024! We look forward to seeing you again next year.

To stay informed about the latest developments in the Bazel world, connect with us through our community channels.

We encourage you to share your Bazel projects and experiences with us at product@bazel.build. We're always excited to hear from you!

By Keerthana Kumar and Xudong Yang, on behalf of the Google Bazel Team

OpenXLA Dev Lab 2024: Building Groundbreaking ML Systems Together

Thursday, May 9, 2024


AMD, Arm, AWS, Google, NVIDIA, Intel, Tesla, SambaNova, and more come together to crack the code for colossal AI workloads

As AI models grow increasingly complex and compute-intensive, the need for efficient, scalable, and hardware-agnostic infrastructure has never been greater. OpenXLA is a deep learning compiler framework that makes it easy to speed up and massively scale AI models on a wide range of hardware types—from GPUs and CPUs to specialized chips like Google TPUs and AWS Trainium. It is compatible with popular modeling frameworks—JAX, PyTorch, and TensorFlow—and delivers leading performance. OpenXLA is the acceleration infrastructure of choice for global-scale AI-powered products like Amazon.com Search, Google Gemini, Waymo self-driving vehicles, and x.AI's Grok.


The OpenXLA Dev Lab

On April 25th, the OpenXLA Dev Lab played host to over 100 expert ML practitioners from 10 countries, representing industry leaders like AMD, Arm, AWS, ByteDance, Cerebras, Cruise, Google, NVIDIA, Intel, Tesla, SambaNova, and more. The full-day event, tailored to AI hardware vendors and infrastructure engineers, broke the mold of previous OpenXLA Summits by focusing purely on “Lab Sessions”, akin to office hours for developers, and hands-on Tutorials. The energy of the event was palpable as developers worked side-by-side, learning and collaborating on both practical challenges and exciting possibilities for AI infrastructure.

World map showing where developers come from across countries to the OpenXLA Dev Lab
Figure 1: Developers from around the world congregated at the OpenXLA Dev Lab.

The Dev Lab was all about three key things:

  • Educate and Empower: Teach developers how to implement OpenXLA's essential workflows and advanced features through hands-on tutorials.
  • Offer Expert Guidance: Provide personalized office hours led by OpenXLA experts to help developers refine their ideas and contributions.
  • Foster Community: Encourage collaboration, knowledge-sharing, and lasting connections among the brilliant minds in the OpenXLA community.

Tutorials

The Tutorials included:

Integrating an AI Compiler & Runtime into PJRT

  • Learn how PJRT connects ML frameworks to AI accelerators, standardizing their interaction for easy model deployment on diverse hardware.
  • Explore the PJRT C API for framework-hardware communication.
  • Implement a PJRT Plugin, a Python package that implements the C API.
  • Discover plugin examples for Apple Metal, CUDA, Intel GPU, and TPU.

Led by Jieying Luo and Skye Wanderman-Milne


Extracting StableHLO Graphs + Intro to StableHLO Quantizer

  • Learn to export StableHLO from JAX, PyTorch, and TensorFlow using static/dynamic shapes and SavedModel format.
  • Hack along with the tutorial using the JAX, PyTorch, and TensorFlow Colab notebooks provided on OpenXLA.org.
  • Simplify quantization with StableHLO Quantizer, a framework- and device-agnostic tool.
  • Explore streamlined parameter selection and model rewriting for lower precision.

Led by Kevin Gleason, Jen Ha, and Xing Liu
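As a taste of the export workflow covered in this tutorial, here is a minimal sketch of extracting StableHLO from a JAX function via the jit lowering path (the toy function and input shape are made up for illustration):

```python
# Hedged sketch: export the StableHLO IR for a small JAX computation.
import jax
import jax.numpy as jnp

def layer(x):
    # A toy computation standing in for a real model.
    return jnp.tanh(x) * 2.0

# Lower the jitted function for a concrete input shape, then request
# the IR in the StableHLO dialect.
lowered = jax.jit(layer).lower(jnp.ones((2, 4), dtype=jnp.float32))
stablehlo_module = lowered.compiler_ir(dialect="stablehlo")
print(stablehlo_module)  # MLIR module text containing stablehlo.* ops
```

The Colab notebooks on OpenXLA.org walk through the analogous PyTorch and TensorFlow paths, including dynamic shapes and SavedModel export.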


Optimizing PyTorch/XLA Auto-sharding for Your Hardware

  • Discover this experimental feature that automates distributing large-scale PyTorch models across XLA devices.
  • Learn how it partitions and distributes models for out-of-the-box performance without manual intervention.
  • Explore future directions such as customizable cost models for different hardware.

Led by Yeounoh Chung and Pratik Fegade


Optimizing Compute and Communication Scheduling with XLA

  • Scale ML models on multiple GPUs with SPMD partitioning, collective communication, and HLO optimizations.
  • Explore tensor parallelism, the latency hiding scheduler, and pipeline parallelism.
  • Learn collective optimizations and pipeline parallelism for efficient large-scale training.

Led by Frederik Gossen, TJ Xu, and Abhinav Goel


Lab Sessions

Lab Sessions featured use case-specific office hours for AMD, Arm, AWS, ByteDance, Intel, NVIDIA, SambaNova, Tesla, and more. OpenXLA engineers were on hand to provide development teams with dedicated support and walk through specific pain points and designs. In addition, Informational Roundtables that covered broader topics like GPU ML Performance Optimization, JAX, and PyTorch-XLA GPU were available for those without specific use cases. This approach led to productive exchanges and fine-grained exploration of critical contribution areas for ML hardware vendors.

four photos of participants and vendors at OpenXLA Dev Lab

Don’t just take our word for it – here’s some of the feedback we received from developers:

"OpenXLA is awesome, and it's great to see the community interest around it. We're excited about the potential of PJRT and StableHLO to improve the portability of ML workloads onto novel hardware such as ours. We appreciate the support that we have been getting." 
      — Mark Gottscho, Senior Manager and Technical Lead at SambaNova
"Today I learned a lot about Shardy and about some of the bugs I found in the GSPMD partitioner, and I got to learn a lot of cool stuff." 
      — Patrick Toulme, Machine Learning Engineer at AWS
“I learned a lot, a lot about how XLA is making tremendous progress in building their community.” 
      — Tejash Shah, Product Manager at NVIDIA
“Loved the format this year - please continue … lots of learning, lots of interactive sessions. It was great!” 
      — Om Thakkar, AI Software Engineer at Intel

Technical Innovations and The Bold Road Ahead

The event kicked off with a keynote by Robert Hundt, Distinguished Engineer at Google, who outlined OpenXLA's ambitious plans for 2024, particularly three major areas of focus:

  • Large-scale training
  • GPU and PyTorch compute performance
  • Modularity and extensibility

Empowering Large-Scale Training

OpenXLA is introducing powerful features to enable model training at record-breaking scales. One of the most notable additions is Shardy, a tool coming soon to OpenXLA that automates and optimizes how large AI workloads are divided across multiple processing units, ensuring efficient use of resources and faster time to solution. Building on the success of its predecessor, SPMD, Shardy empowers developers with even more fine-grained control over partitioning decisions, all while maintaining the productivity benefits that SPMD is known for.

Diagram of sharding representation with a simple rank 2 tensor and 4 devices.
Figure 2: Sharding representation example with a simple rank 2 tensor and 4 devices.
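To make the picture in Figure 2 concrete, here is a plain-Python sketch of the underlying idea: a rank 2 tensor split across a 2x2 mesh of four devices. The mesh layout and device ids are illustrative only; Shardy's actual sharding representation is considerably richer.

```python
# Hedged illustration: shard a rank 2 "tensor" (nested lists) across
# a 2x2 mesh of four hypothetical devices.
tensor = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 values
mesh = [[0, 1],
        [2, 3]]  # device ids arranged in a 2x2 mesh

shards = {}
for mi, mesh_row in enumerate(mesh):
    for mj, device in enumerate(mesh_row):
        # Each device owns a contiguous 2x2 block of the tensor.
        shards[device] = [row[mj * 2:(mj + 1) * 2]
                          for row in tensor[mi * 2:(mi + 1) * 2]]

print(shards[0])  # device 0 holds the top-left block: [[0, 1], [4, 5]]
```

A real partitioner must also decide which tensor axes map to which mesh axes and insert the collectives needed to keep the computation correct, which is exactly where fine-grained control over partitioning decisions pays off.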

In addition to Shardy, developers can expect a suite of features designed to optimize computation and communication overlap, including:

  • Automatic profile-guided latency estimation
  • Collective pipelining
  • Heuristics-based collective combiners

These innovations will enable developers to push the boundaries of large-scale training and achieve unprecedented performance and efficiency.


OpenXLA Delivers on TorchBench Performance

OpenXLA has also made significant strides in enhancing performance, particularly on GPUs with key PyTorch-based generative AI models. PyTorch-XLA GPU is now neck and neck with TorchInductor for TorchBench Full Graph Models and has a TorchBench pass rate within 5% of TorchInductor.

A bar graph showing a performance comparison of TorchInductor vs. PyTorch-XLA GPU on Google Cloud NVIDIA H100 GPUs
Figure 3: Performance comparison of TorchInductor vs. PyTorch-XLA GPU on Google Cloud NVIDIA H100 GPUs. "Full graph models" represent all TorchBench models that can be fully represented by StableHLO.

Behind these impressive gains lies XLA GPU's global cost model, a game-changer for developers. In essence, this cost model acts as a sophisticated decision-making system, intelligently determining how to best optimize computations for specific hardware. The cost model delivers state-of-the-art performance through a priority-based queue for fusion decisions and is highly extensible, allowing third-party developers to seamlessly integrate their backend infrastructure for both general-purpose and specialized accelerators. The cost model's adaptability ensures that computation optimizations are tailored to specific accelerator architectures, while less suitable computations can be offloaded to the host or other accelerators.
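The priority-queue idea behind those fusion decisions can be sketched in a few lines. The candidate pairs, benefit scores, and stopping rule below are entirely made up for illustration; the real XLA cost model is far more detailed and re-scores candidates as the graph changes.

```python
# Hedged sketch of priority-based fusion: repeatedly fuse the candidate
# pair with the largest estimated benefit first.
import heapq

# Hypothetical fusion candidates: (producer, consumer) -> estimated benefit.
benefits = {("mul", "add"): 5.0, ("add", "relu"): 3.0, ("load", "mul"): 1.0}

# heapq is a min-heap, so push negated benefits to pop the best first.
queue = [(-b, pair) for pair, b in benefits.items()]
heapq.heapify(queue)

fused = []
while queue:
    neg_benefit, pair = heapq.heappop(queue)
    if -neg_benefit <= 0:
        break  # remaining candidates are not worth fusing
    fused.append(pair)  # a real pass would mutate the graph and re-score here

print(fused)  # highest-benefit candidates come out first
```

The extensibility point from the text maps onto this sketch naturally: a third-party backend would supply its own benefit estimates for its accelerator, while the queue-driven decision loop stays the same.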

OpenXLA is also breaking new ground with novel kernel programming languages, Pallas and Mosaic, which empower developers to write highly optimized code for specialized hardware. Mosaic demonstrates remarkable efficiency in programming key AI accelerators, surpassing widely used libraries in GPU code generation efficiency for models with 64, 128, and 256 Q head sizes, as evidenced by its enhanced utilization of TensorCores.

A bar graph showing a performance comparison of Flash Attention vs. Mosaic GPU on NVIDIA H100 GPUs
Figure 4: Performance comparison of Flash Attention vs. Mosaic GPU on NVIDIA H100 GPUs.

Modular and Extensible AI Development

In addition to performance enhancements, OpenXLA is committed to making the entire stack more modular and extensible. Several initiatives planned for 2024 include:

  • Strengthening module interface contracts
  • Enhancing code sharing between platforms
  • Enabling a shared high-level compiler flow through runtime configuration and component registries

A flow diagram showing modules and subcomponents of the OpenXLA stack.
Figure 5: Modules and subcomponents of the OpenXLA stack.

These improvements will make it easier for developers to build upon and extend OpenXLA.

Alibaba's success with PyTorch XLA FSDP within their TorchAcc framework is a prime example of the benefits of OpenXLA's modularity and extensibility. By leveraging these features, Alibaba achieved state-of-the-art performance for the LLaMa 2 13B model, surpassing the previous benchmark set by Megatron. This demonstrates the power of the developer community in extending OpenXLA to push the boundaries of AI development.

A bar graph showing a performance comparison of TorchAcc and Megatron for LLaMa 2 13B at different numbers of GPUs.
Figure 6: Performance comparison of TorchAcc and Megatron for LLaMa 2 13B at different numbers of GPUs.

Join the OpenXLA Community

If you missed the Dev Lab, don't worry! You can still access StableHLO walkthroughs on openxla.org, as well as the GitHub Gist for the PJRT session. Additionally, the recorded keynote and tutorials are available on our YouTube channel. Explore these resources and join our global community – whether you're an AI systems expert, model developer, student, or just starting out, there's a place for you in our innovative ecosystem.

four photos of participants and vendors at OpenXLA Dev Lab

Acknowledgements

Adam Paszke, Allen Hutchison, Amin Vahdat, Andrew Leaver, Andy Davis, Artem Belevich, Abhinav Goel, Bart Chrzaszcz, Benjamin Kramer, Berkin Ilbeyi, Bill Jia, Cyril Bortolato, David Dunleavy, Eugene Zhulenev, Florian Reichl, Frederik Gossen, George Karpenkov, Gunhyun Park, Han Qi, Jack Cao, Jacques Pienaar, Jaesung Chung, Jen Ha, Jianting Cao, Jieying Luo, Jiewen Tan, Jini Khetan, Kevin Gleason, Kyle Lucke, Kuy Mainwaring, Lauren Clemens, Manfei Bai, Marisa Miranda, Michael Levesque-Dion, Milad Mohammadi, Nisha Miriam Johnson, Penporn Koanantakool, Puneith Kaul, Robert Hundt, Sandeep Dasgupta, Sayce Falk, Shauheen Zahirazami, Skye Wanderman-Milne, Yeounoh Chung, Pratik Fegade, Peter Hawkins, Vaibhav Singh, Tamás Danyluk, Thomas Joerg, TJ Xu, and Tom Natan

By James Rubin, Aditi Joshi, and Elliot English – on behalf of the OpenXLA Project

Get ready for Google I/O: Program lineup revealed

Wednesday, May 1, 2024


Developers, get ready! Google I/O is just around the corner, kicking off live from Mountain View with the Google keynote on Tuesday, May 14 at 10 am PT, followed by the Developer keynote at 1:30 pm PT.

But the learning doesn’t stop there. Mark your calendars for May 16 at 8 am PT when we’ll be releasing over 150 technical deep dives, demos, codelabs, and more on-demand. If you register online, you can start building your 'My I/O' agenda today.

Here's a sneak peek at some of the exciting highlights from the I/O program preview:

Unlocking the power of AI: The Gemini era unlocks a new frontier for developers. We'll showcase the newest features in the Gemini API, Google AI Studio, and Gemma. Discover cutting-edge pre-trained models from Kaggle, and delve into Google's open-source libraries like Keras and JAX.

Android: A developer's playground: Get the latest updates on everything Android! We'll cover groundbreaking advancements in generative AI, the highly anticipated Android 15, innovative form factors, and the latest tools and libraries in the Jetpack and Compose ecosystem. Plus, discover how to optimize performance and streamline your development workflow.

Building beautiful and functional web experiences: We’ll cover Baseline updates, a revolutionary tool that empowers developers with a clear understanding of web features and API interoperability. With Baseline, you'll have access to real-time information on popular developer resource sites like MDN, Can I Use, and web.dev.

The future of ChromeOS: Get a glimpse into the exciting future of ChromeOS. We'll discuss the developer-centric investments we're making in distribution, app capabilities, and operating system integrations. Discover how our partners are shaping the future of Chromebooks and delivering world-class user experiences.

This is just a taste of what's in store at Google I/O. Stay tuned for more updates, and get ready to be a part of the future.

Don't forget to mark your calendars and register for Google I/O today!

Posted by Timothy Jordan – Director, Developer Relations and Open Source

Google Summer of Code 2024 Mentor Organization Applications Now Open

Monday, January 22, 2024

We are excited to announce that open source projects and organizations can now apply to participate as mentor organizations in the 2024 Google Summer of Code (GSoC) program. Applications for organizations will close on February 6, 2024 at 18:00 UTC.

We are celebrating a big milestone as we head into our 20th year of Google Summer of Code this year! In 2024 we are adding a third project size option which you can read more about in our announcement blog post.

Does your open source project want to learn more about becoming a mentor organization? Visit the program site and read the mentor guide to learn what it means to be a mentor organization and how to prepare your community (hint: have plenty of excited, dedicated mentors and well thought out project ideas!).

We welcome all types of organizations and are very eager to involve first-time mentor orgs in GSoC. We encourage new organizations to get a referral from an experienced organization that thinks they would be a good fit to participate in GSoC.

The open source projects that participate in GSoC as mentor organizations span many fields including those doing interesting work in AI/ML, security, cloud, development tools, science, medicine, data, media, and more! Projects can range from being relatively new (about 2 years old) to well established projects that started over 20 years ago. We welcome open source projects big, small, and everything in between.

This year we are looking to bring more open source projects in the AI/ML field into GSoC 2024. If your project is in the artificial intelligence or machine learning fields please chat with your community and see if you would be interested in applying to GSoC 2024.

One thing to remember is that open source projects wishing to apply need to have a solid community; the goal of GSoC is to bring new contributors into established and welcoming communities. While you don't need 50+ community members, a community of only three people is too small.

You can apply to be a mentor organization for GSoC starting today on the program site. The deadline to apply is February 6, 2024 at 18:00 UTC. We will publicly announce the organizations chosen for GSoC 2024 on February 21st.

Please visit the program site for more information on how to apply and review the detailed timeline for important deadlines. We also encourage you to check out the Mentor Guide, our ‘Intro to Google Summer of Code’ video, and our short video on why open source projects are excited to be a part of the GSoC program.

Good luck to all open source mentor organization applicants!

By Stephanie Taylor, Program Manager – Google Open Source Programs Office

Google Open Source Peer Bonus program announces second group of 2023 winners

Thursday, December 14, 2023



We are excited to announce the second group of winners for the 2023 Google Open Source Peer Bonus Program! This program recognizes external open source contributors who have been nominated by Googlers for their exceptional contributions to open source projects.

The Google Open Source Peer Bonus Program is a key part of Google's ongoing commitment to open source software. By supporting the development and growth of open source projects, Google is fostering a more collaborative and innovative software ecosystem that benefits everyone.

This cycle, the Open Source Peer Bonus Program received 163 nominations, and the winners come from 35 different countries around the world, reflecting the program's global reach and the immense impact of open source software. Community collaboration is a key driver of innovation and progress, and we are honored to be able to support and celebrate the contributions of these talented individuals from around the world through this program.

We would like to extend our congratulations to the winners! Included below are those who have agreed to be named publicly.

Winner – Open Source Project

Tim Dettmers – 8-bit CUDA functions for PyTorch
Odin Asbjørnsen – Accompanist
Lazarus Akelo – Android FHIR
Khyati Vyas – Android FHIR
Fikri Milano – Android FHIR
Veyndan Stuart – AndroidX
Alex Van Boxel – Apache Beam
Dezső Biczó – Apigee Edge Drupal module
Felix Yan – Arch Linux
Gerlof Langeveld – atop
Fabian Meumertzheim – Bazel
Keith Smiley – Bazel
Andre Brisco – Bazel Build Rules for Rust
Cecil Curry – beartype
Paul Marcombes – bigfunctions
Lucas Yuji Yoshimine – Camposer
Anita Ihuman – CHAOSS
Jesper van den Ende – Chrome DevTools
Aboobacker MK – CircuitVerse.org
Aaron Ballman – Clang
Alejandra González – Clippy
Catherine Flores – Clippy
Rajasekhar Kategaru – Compose Actors
Olivier Charrez – comprehensive-rust
John O'Reilly – Confetti
James DeFelice – container-storage-interface
Akihiro Suda – containerd, runc, OCI specs, Docker, Kubernetes
Neil Bowers – CPAN
Aleksandr Mikhalitsyn – CRIU
Daniel Stenberg – curl
Ryosuke TOKUAMI – Dataform
Salvatore Bonaccorso – Debian
Moritz Muehlenhoff – Debian
Sylvestre Ledru – Debian, LLVM
Andreas Deininger – Docsy
Róbert Fekete – Docsy
David Sherret – dprint
Justin Grant – ECMAScript Time Zone Canonicalization Proposal
Chris White – EditorConfig
Charles Schlosser – Eigen
Daniel Roe – Elk - Mastodon Client
Christopher Quadflieg – FakerJS
Ostap Taran – Firebase Apple SDK
Frederik Seiffert – Firebase C++ SDK
Juraj Čarnogurský – firebase-tools
Callum Moffat – Flutter
Anton Borries – Flutter
Tomasz Gucio – Flutter
Chinmoy Chakraborty – Flutter
Daniil Lipatkin – Flutter
Tobias Löfstrand – Flutter go_router package
Ole André Vadla Ravnås – Frida
Jaeyoon Choi – Fuchsia
Jeuk Kim – Fuchsia
Dongjin Kim – Fuchsia
Seokhwan Kim – Fuchsia
Marcel Böhme – FuzzBench
Md Awsafur Rahman – GCViT-tf, TransUNet-tf, Kaggle
Qiusheng Wu – GEEMap
Karsten Ohme – GlobalPlatform
Sacha Chua – GNU Emacs
Austen Novis – Goblet
Tiago Temporin – Golang
Josh van Leeuwen – Google Certificate Authority Service Issuer for cert-manager
Dustin Walker – google-cloud-go
Parth Patel – GUAC
Kevin Conner – GUAC
Dejan Bosanac – GUAC
Jendrik Johannes – Guava
Chao Sun – Hive, Spark
Sean Eddy – hmmer
Paulus Schoutsen – Home Assistant
Timo Lassmann – Kalign
Stephen Augustus – Kubernetes
Vyom Yadav – Kubernetes
Meha Bhalodiya – Kubernetes
Madhav Jivrajani – Kubernetes
Priyanka Saggu – Kubernetes
DANIEL FINNERAN – kubeVIP
Junfeng Li – LanguageClient-neovim
Andrea Fioraldi – LibAFL
Dongjia Zhang – LibAFL
Addison Crump – LibAFL
Yuan Tong – libavif
Gustavo A. R. Silva – Linux kernel
Mathieu Desnoyers – Linux kernel
Nathan Chancellor – Linux Kernel, LLVM
Gábor Horváth – LLVM / Clang
Martin Donath – Material for MkDocs
Jussi Pakkanen – Meson Build System
Amos Wenger – Mevi
Anders F Björklund – minikube
Maksim Levental – MLIR
Andrzej Warzynski – MLIR, IREE
Arnaud Ferraris – Mobian
Rui Ueyama – mold
Ryan Lahfa – nixpkgs
Simon Marquis – Now in Android
William Cheng – OpenAPI Generator
Kim O'Sullivan – OpenFIPS201
Yigakpoa Laura Ikpae – Oppia
Aanuoluwapo Adeoti – Oppia
Philippe Antoine – oss-fuzz
Tornike Kurdadze – Pinput
Andrey Sitnik – Postcss (and others: Autoprefixer, postcss, browserslist, logux)
Marc Gravell – protobuf-net
Jean Abou Samra – Pygments
Qiming Sun – PySCF
Trey Hunner – Python
Will Constable – PyTorch/XLA
Jay Berkenbilt – qpdf
Ahmed El-Helw – Quran App for Android
Jan Gorecki – Reproducible benchmark of database-like ops
Ralf Jung – Rust
Frank Steffahn – Rust, ICU4X
Bhaarat Krishnan – Serverless Web APIs Workshop
Maximilian Keppeler – Sheets-Compose-Dialogs
Cory LaViska – Shoelace
Carlos Panato – Sigstore
Keith Zantow – spdx/tools-golang
Hayley Patton – Steel Bank Common Lisp
Qamar Safadi – Sunflower
Victor Julien – Suricata
Eyoel Defare – textfield_tags
Giedrius Statkevičius – Thanos
Michael Park – The Good Docs Project
Douglas Theobald – Theseus
David Blevins – Tomee
Anthony Fu – Vitest
Ryuta Mizuno – Volcago
Nicolò Ribaudo – WHATWG HTML Living Standard; ECMAScript Language Specification
Antoine Martin – xpra
Toru Komatsu – youki

We are incredibly proud of all of the nominees for their outstanding contributions to open source, and we look forward to seeing even more amazing contributions in the years to come. An additional thanks to Maria Tabak, who has helped lay the groundwork for and manage this program for the past 5 years!

By Mike Bufano, Google Open Source Peer Bonus Program Lead
