Secure-by-design firmware development with Wasefire

Tuesday, November 18, 2025

Improving firmware development

Building firmware for embedded devices—like microcontrollers and IoT hardware—is hard. It is complex, it requires deep expertise, and, most importantly, it is prone to security bugs. One of the key challenges is the limited resources available on these devices: constrained processing power, memory, and storage capacity. These constraints put robust security measures at odds with performance and functionality. Insecure IoT devices are then recruited by cyber criminals into botnets to perform DDoS attacks, steal information, and act as proxies to evade detection (e.g., the Mirai botnet).

Today, we introduce a new framework that makes it easier to build and maintain safer embedded systems: Wasefire.

Wasefire simplifies the development process and incorporates security best practices by default. This lets developers create secure firmware without extensive security expertise, focusing only on the business logic they want to implement. To this end, Wasefire provides, for each supported device, a platform on which device-agnostic, sandboxed applets can run. Wasefire currently supports the nRF52840 DK, nRF52840 Dongle, nRF52840 MDK Dongle, and OpenTitan Earlgrey. There is also a Host platform for testing without embedded devices.

A Wasefire platform abstracts the hardware so Wasefire applets are portable

The platform is written in Rust for its performance and built-in memory safety. Embedded devices are one of the four target domains of the Rust 2018 roadmap, so today it is quite simple to write embedded code in Rust, or even to integrate Rust into existing embedded code.

The platform expects the applets to be written in—or more realistically, compiled to—WebAssembly for its simplicity, portability, and security. WebAssembly is a binary instruction format for a stack-based virtual machine. It is designed for high-performance applications on the web (hence its name) but it also supports non-web environments. Fun fact: Wasefire uses WebAssembly in both environments: the main usage is non-web for the virtual machine to run applets, but the web interface of the Host platform also relies on WebAssembly.

Incidentally, WebAssembly is another of the four target domains of the Rust 2018 roadmap. This means that writing applets in Rust and compiling them to WebAssembly is very simple. For this reason, Rust is the primary language for writing Wasefire applets, and starting a new project takes only a few steps with the Wasefire tooling (see the project documentation for the exact commands).
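
To give a feel for what an applet looks like, here is a minimal "hello world" in the spirit of the examples shipped with the Wasefire repository. Treat it as a sketch: the applet!() and debug! macros follow the Wasefire applet prelude, but exact names and signatures may differ between versions.

    #![no_std]
    // The applet! macro wires the applet into the Wasefire runtime and prelude.
    wasefire::applet!();

    // The platform calls main() once the applet is loaded.
    fn main() {
        // debug! sends a log line through the platform's debugging facility.
        debug!("hello world");
    }

The applet crate is then compiled to WebAssembly (typically the wasm32-unknown-unknown target), producing a module that any Wasefire platform can run.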

WebAssembly on microcontrollers

Running WebAssembly on microcontrollers might seem like overkill if sandboxing were the only goal. But using a virtual machine also provides binary-level portability, like Java Cards. In particular, the same WebAssembly applet can be distributed in binary form and run on multiple platforms.

On a microcontroller, every byte matters. To cater to a variety of needs, Wasefire provides multiple alternatives that balance security, performance, footprint, and portability:

  • WebAssembly applets: Platforms may embed the Wasefire interpreter. This is a custom in-place interpreter for WebAssembly in the style of "A fast in-place interpreter for WebAssembly" with a very small footprint. The main drawback is that it doesn't support computation-heavy applets.
  • Pulley applets: Platforms may embed Wasmtime and its Pulley interpreter. WebAssembly was designed for compilation, not interpretation, so WebAssembly interpreters necessarily compromise on either performance or footprint. Pulley, by contrast, was designed for fast interpretation and can be compiled from WebAssembly. The main drawbacks are the larger footprint of this solution and the need for applets to be signed (which is not yet implemented), since Pulley cannot be validated like WebAssembly.
  • Native applets: Platforms may link with an applet compiled as a static library for the target architecture. This solution is only provided as a last resort when no other existing alternative works. The main drawback is that almost all security benefits are nullified and binary-level portability is lost.
  • CHERI applets: This alternative is planned (but not yet started) and would provide the performance and footprint advantage of Native applets while retaining the sandboxing advantage of WebAssembly and Pulley applets. The main drawback is that the target device needs to support CHERI and binary-level portability is lost.

To illustrate this tradeoff, let's look at a few examples from the Wasefire repository:

  • The first example is a button-controlled blinking LED. This applet runs as a WebAssembly applet without any problem.
  • The second example is a FIDO2 security key implemented using the OpenSK library. This applet currently reaches the performance limits of the WebAssembly in-place interpreter. Using a Pulley applet instead improves performance at the cost of larger applet size and memory footprint.
  • The third example is a BLE sniffer. Performance is critical for this applet: the in-place interpreter is too slow and many packets are dropped. Compiled to Pulley, the applet drops no packets even in a noisy BLE environment.

We can summarize the tradeoff in the tables below. The platform size differs between examples because the second and third examples need optional drivers that are disabled by default. The platform is the nRF52840 DK. For the security key, applet performance is measured as the time between a FIDO2 GetInfo request and the last packet of its response. For the BLE sniffer, applet performance is measured as the number of processed packets per second. This metric saturates for Pulley and Native applets, so we only get a lower bound on performance in those cases.

Blinking LED                   WebAssembly   Pulley   Native
Platform size (KiB)            98            299      49
Applet size (KiB)              3.3           12       5.6
Platform memory (KiB)          10            80       5

Security key                   WebAssembly   Pulley   Native
Platform size (KiB)            133           334      80
Applet size (KiB)              125           247      73
Platform memory (KiB)          20            104      9
Applet performance (ms)        1191          60       23

BLE sniffer                    WebAssembly              Pulley                Native
Platform size (KiB)            102                      303                   53
Applet size (KiB)              7.2                      18                    7.6
Platform memory (KiB)          16                       82                    8.8
Applet performance (packet/s)  = 55 (dropping packets)  > 195 (not dropping)  > 195 (not dropping)

Looking forward

Wasefire is still an experimental project. Many features are missing (including security features) and many improvements are planned. For example, the platform currently runs a single applet and provides all the resources this applet asks for. Ultimately, applets would come with a manifest describing which resources they are permitted to use, and those resources would be isolated to that single applet. It would also be possible to run multiple applets concurrently.

The project is open source, so bug reports, feature requests, and pull requests are welcome. It is licensed under Apache-2.0, so commercial use is permitted.

Feel free to give it a try (no hardware needed) and spread the word!

How JAX makes high-performance economics accessible

Tuesday, November 11, 2025

JAX is widely recognized for its power in training large-scale AI models, but its core design as a system for composable function transformations unlocks its potential across a much broader scientific landscape. We're seeing adoption for applications as disparate as AI-driven protein engineering and solving high-order partial differential equations (PDEs). Today, we're excited to highlight another frontier where JAX is making a significant impact: computational economics, where it enables economists to model complex, real-world scenarios that shape national policy.

I recently spoke with economist John Stachurski, a co-founder of QuantEcon and an early advocate for open-source scientific computing. His story of collaborating with the Central Bank of Chile demonstrates how JAX makes achieving high performance easy and accessible. John's journey shows how JAX's intuitive design and abstractions allow domain experts to solve scientific problems without needing to become parallel programming specialists. John shares the story in his own words.


A Tale of Two Implementations: The Central Bank of Chile's Challenge

Due to my work with QuantEcon, I was contacted by the Central Bank of Chile (CBC), which was facing a computational bottleneck with one of their core models. The bank's work is high-stakes; their role is to set monetary policy and act as the lender of last resort during financial crises. Such crises are inherently non-linear, involving self-reinforcing cycles and feedback loops that make them challenging to model and assess.

To better prepare for such crises, the CBC began working on a model originally developed by Javier Bianchi, in which an economic shock worsens the balance sheets of domestic economic agents, reducing collateral and tightening credit constraints. This leads to further deterioration in balance sheets, which again tightens credit constraints, and so on. The result is a downward spiral. The ramifications can be large in a country such as Chile, where economic and political instability are historically linked.

The Problem:

The task of implementing this model was led by talented CBC economist Carlos Rondon. Carlos wrote the first version using a well-known proprietary package for mathematical modeling that has been used extensively by economists over the past few decades. The completed model took 12 hours to run (that is, to generate the prices and quantities implied by a fixed set of parameters) on a $10,000 mainframe with 356 CPUs and a terabyte of RAM. A 12-hour runtime made it almost impossible to calibrate the model and run useful scenarios. A better solution had to be found.

Carlos and I agreed that the problem was rooted in the underlying software package. The issue was that, to avoid using slow loops, all operations needed to be vectorized, so that they could be passed to precompiled binaries generated from Fortran libraries such as LAPACK. However, as users of these traditional vectorization-based environments will know, it is often necessary to generate many intermediate arrays in order to obtain a given output array. When these arrays are high-dimensional, this process is slow and extremely memory intensive. Moreover, while some manual parallelization is possible, truly efficient parallelization is difficult to achieve.
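
As a toy illustration of that pattern (NumPy-style pseudocode, not the CBC model): think of current assets a, income y, and next-period assets a'. Obtaining one output array means materializing broadcasted intermediates whose size grows with every state variable and grid point.

    import numpy as np

    n = 200
    a = np.linspace(0.0, 10.0, n)
    y = np.array([0.5, 1.0, 1.5])
    ap = np.linspace(0.0, 10.0, n)

    # Classic vectorization: broadcast everything into one big intermediate array.
    # Already n * 3 * n values here; with more state variables and finer grids,
    # these intermediates quickly exhaust memory.
    C = np.maximum(a[:, None, None] + y[None, :, None] - ap[None, None, :], 1e-10)
    V = np.log(C)
    print(V.shape)  # (200, 3, 200)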

The JAX Solution:

I flew to Santiago and we began a complete rewrite in JAX. Working side-by-side, we soon found that JAX was exactly the right tool for our task. In only two days we reimplemented the model and, running on a consumer-grade GPU, observed a dramatic improvement in wall-clock time. The algorithm was unchanged, but even a cheap GPU outperformed the industrial server by a factor of a thousand. Now the model was fully operational: fast, clean, and ready for calibration.

There were several factors behind the project's success. First, JAX's elegant functional style allowed us to express the economic model's logic in a way that closely mirrored the underlying mathematics. Second, we fully exploited JAX's vmap by layering it to represent nested for loops. This allowed us to work with functions that operate on scalar values (think of a function that performs the calculations on the inside of a nested for loop), rather than attempting to operate directly on high-dimensional arrays, a process that is inherently error-prone and difficult to visualize.
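
Returning to the toy calculation above, here is a minimal sketch of that layering pattern (again illustrative, not the CBC model): the scalar logic from the inside of the loops is lifted over the grids with nested vmap, with no manually broadcasted intermediates.

    import jax
    import jax.numpy as jnp

    def bellman_term(a, y, ap):
        # Scalar logic, exactly as one would write it inside a nested for loop.
        c = jnp.maximum(a + y - ap, 1e-10)
        return jnp.log(c)

    # Layer vmap once per loop level, innermost loop first.
    f = jax.vmap(bellman_term, in_axes=(None, None, 0))  # over next-period asset choices
    f = jax.vmap(f, in_axes=(None, 0, None))             # over income states
    f = jax.vmap(f, in_axes=(0, None, None))             # over current asset states
    evaluate = jax.jit(f)                                 # compile once; runs on CPU, GPU, or TPU

    a_grid = jnp.linspace(0.0, 10.0, 200)
    y_grid = jnp.array([0.5, 1.0, 1.5])
    ap_grid = jnp.linspace(0.0, 10.0, 200)

    print(evaluate(a_grid, y_grid, ap_grid).shape)        # (200, 3, 200)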

Third, JAX automates parallelization and does it extremely efficiently. We both had experience with manual parallelization prior to using JAX. I even fancied I was good at this task. But, at the end of the day, the majority of our expertise is in economics and mathematics, not computer science. Once we handed parallelization over to JAX's XLA compiler (part of the OpenXLA project), we saw a massive speed-up. Of course, the fact that XLA generates specialised GPU kernels on the fly was a key part of our success.

I have to stress how much I enjoyed completing this project with JAX. First, we could write code on a laptop and then run exactly the same code on any GPU, without changing a single line. Second, for scientific computing, the pairing of an interpreted language like Python with a powerful JIT compiler provides the ideal combination of interactivity and speed. To my mind, everything about the JAX framework and compilers is just right. A functional programming style makes perfect sense in a world where functions are individually JIT-compiled. Once we adopt this paradigm, everything becomes cleaner. Throw in automatic differentiation and NumPy API compatibility and you have a close-to-perfect environment for writing high-performance code for economic modeling.
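
To illustrate that last point with a toy example (a standard CRRA utility function, not code from the project), the same plain-NumPy-style function can be JIT-compiled and differentiated without modification:

    import jax

    def utility(c, gamma=2.0):
        # CRRA utility, written exactly as the math reads.
        return (c ** (1 - gamma) - 1) / (1 - gamma)

    # Compose transformations: differentiate with respect to consumption, then compile.
    marginal_utility = jax.jit(jax.grad(utility))

    print(marginal_utility(2.0))  # c**(-gamma) = 2.0**(-2.0) = 0.25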


Unlocking the Next Generation of Economic Models

John's story captures the essence of JAX's power. By making high performance accessible to researchers, JAX is not just accelerating existing workloads; it's democratizing access to performance and enabling entirely new avenues of research.
As economists build models that incorporate more realistic heterogeneity—such as varying wealth levels, firm sizes, ages, and education—JAX enables them to take full advantage of modern accelerators like GPUs and Google TPUs. JAX's strengths in both scientific computing and deep learning make it the ideal foundation to bridge this gap.

Explore the JAX Scientific Computing Ecosystem

Stories like John's highlight a growing trend: JAX is much more than a framework for building the largest machine learning models on the planet. It is a powerful, general-purpose framework for array-based computing across all sciences which, together with accelerators such as Google TPUs and GPUs, is empowering a new generation of scientific discovery. The JAX team at Google is committed to supporting and growing this vibrant ecosystem, and that starts with hearing directly from you.

  • Share your story: Are you using JAX to tackle a challenging scientific problem? We would love to learn how JAX is accelerating your research.
  • Help guide our roadmap: Are there new features or capabilities that would unlock your next breakthrough? Your feature requests are essential for guiding the evolution of JAX.

Please reach out to the team via GitHub to share your work or discuss what you need from JAX. You can also find documentation, examples, news, events, and more at jaxstack.ai and jax.dev.

Sincere thanks to John Stachurski for sharing his insightful journey with us. We're excited to see how he and other researchers continue to leverage JAX to solve the world's most complex scientific problems.

Unleashing autonomous AI agents: Why Kubernetes needs a new standard for agent execution

The arrival of autonomous AI agents capable of reasoning, planning, and executing actions by generating their own code and interacting with the runtime environment marks a paradigm shift in how applications are built and operated. However, these new capabilities also introduce a fundamental security gap: how to safely allow agents to run untrusted, unverified generated code, perform actions, and access data in runtime environments, especially in mission-critical infrastructure and environments that hold proprietary data.

We are excited to announce a major initiative within the Kubernetes community to address this exact challenge: we are launching Agent Sandbox as a formal subproject of Kubernetes SIG Apps, hosted under kubernetes-sigs/agent-sandbox.

This is more than just a tool; it is designed to standardize and evolve Kubernetes into the most secure and scalable platform for agentic workloads.

The Latency Crisis for Interactive AI

Agent behavior often involves quick, iterative tool calls — checking a file, running a calculation, or querying an API. For security reasons, each of these calls requires its own isolated sandbox.

The challenge is that these sandboxes must be created from scratch, extremely quickly, to ensure isolated environments between executions. Because security and isolation are non-negotiable, the "spin-up" time becomes the critical bottleneck. If the secure execution environment takes too long to spin up, the entire agent application stalls, killing the interactive experience.

The Bottleneck of Massive Throughput

Enterprise platforms require infrastructure that can handle overwhelming scale. Users engaged in complex AI agent workloads demand support for up to tens of thousands of parallel sandboxes, processing thousands of queries per second. To meet this challenge, we are extending Kubernetes' proven capabilities for managing high-capacity, low-latency applications, models, and infrastructure to fit a growing class of single-instance workloads, like AI agent runtimes or dev environments, that require a lightweight, VM-like abstraction. A standardized, controller-based Sandbox API provides a Kubernetes-native solution for these use cases, avoiding the workarounds required today and paving the way for the next generation of cloud-native AI applications.

The Agent Sandbox: A new Agent Standard for Kubernetes

To solve these problems, we are introducing a new, declarative resource focused strictly on the Sandbox primitive, designed from the ground up to be backend-agnostic.
The goal is to provide a persistent, isolated instance for single-container, stateful, singleton workloads, managed entirely through familiar Kubernetes constructs. The core APIs include:

  • Sandbox: The core resource defining the agent sandbox workload for running an isolated instance of the agent's environment.
  • SandboxTemplate: Defines the secure blueprint of a sandbox archetype, including resource limits, base image, and initial security policies.
  • SandboxClaim: A transactional resource allowing users or higher-level frameworks (like ADK or LangChain) to request an execution environment, abstracting away the complex provisioning logic.
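
To make the shape of the API concrete, here is a purely illustrative sketch that creates a Sandbox object with the Kubernetes Python client. The group, version, and spec fields are assumptions for this sketch only and will differ from the actual CRD schema published in kubernetes-sigs/agent-sandbox:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    sandbox = {
        "apiVersion": "agents.x-k8s.io/v1alpha1",  # hypothetical group/version
        "kind": "Sandbox",
        "metadata": {"name": "demo-sandbox"},
        "spec": {  # hypothetical fields, for illustration only
            "podTemplate": {
                "spec": {
                    "runtimeClassName": "gvisor",  # choose an isolation backend
                    "containers": [{"name": "agent", "image": "python:3.12-slim"}],
                }
            }
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="agents.x-k8s.io", version="v1alpha1",
        namespace="default", plural="sandboxes", body=sandbox,
    )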


In addition to the Sandbox primitive, we are also launching additional features that improve the overall user experience:

  • WarmPools — To support fast instance startup, an important part of the usability of agentic sandboxes, we introduced the Warm Pool extension. The Sandbox Warm Pool Orchestrator uses a dedicated CRD to maintain a pool of pre-warmed pods, allowing the Sandbox Controller to claim a ready instance upon creation and reduce cold-start latency to less than one second.
  • Shutdown Time — Since agentic behaviour can be unpredictable, this feature supports clean termination and cleanup of sandboxes: it automates deletion by letting users set an absolute time at which the sandbox terminates.
  • Python API/SDK — For better usability and a developer-friendly interface to programmatically interact with these CRDs, we provide an example SDK that abstracts away Kubernetes complexities with simple Pythonic functions.

The standard is designed to seamlessly support multiple isolation backends, such as gVisor and Kata Containers, allowing developers to choose the technology that best fits their security and performance trade-offs.

The new Agent Sandbox features and implementations are available now in the GitHub repo kubernetes-sigs/agent-sandbox and on our website agent-sandbox.sigs.k8s.io. We invite all developers, partners, and experts to join this critical community effort to define the secure, scalable future of autonomous AI on Kubernetes.

We will be presenting a technical deep dive and officially launching the project at KubeCon Atlanta in November 2025. We hope to see you there!

Announcing Magika 1.0: now faster, smarter, and rebuilt in Rust

Thursday, November 6, 2025

Early last year, we open sourced Magika, Google's AI-powered file type detection system. Magika has seen great adoption by open source communities since that alpha release, with over one million monthly downloads. Today, we are happy to announce the release of Magika 1.0, the first stable version, which introduces new features and a host of major improvements since our last announcement. Here are the highlights:

  • Expanded file type support to more than 200 types (up from ~100).
  • A brand-new, high-performance engine rewritten from the ground up in Rust.
  • A native Rust command-line client for maximum speed and security.
  • Improved accuracy for challenging text-based formats like code and configuration files.
  • A revamped Magika Python and TypeScript module for even easier integrations.

Smarter Detection: Doubling Down on File Types

Magika 1.0 now identifies more than 200 content types, doubling the number of file types supported since the initial release. This isn't just about a bigger number; it unlocks far more granular and useful identification, especially for specialized, modern file types.

Some of the notable new file types detected include:

  • Data Science & ML: We've added support for formats such as Jupyter Notebooks (ipynb), Numpy arrays (npy, npz), PyTorch models (pytorch), ONNX (onnx) files, Apache Parquet (parquet), and HDF5 (h5).
  • Modern Programming & Web: The model now recognizes dozens of languages and frameworks. Key additions include Swift (swift), Kotlin (kotlin), TypeScript (typescript), Dart (dart), Solidity (solidity), WebAssembly (wasm), and Zig (zig).
  • DevOps & Configuration: We've expanded detection for critical infrastructure and build files, such as Dockerfiles (dockerfile), TOML (toml), HashiCorp HCL (hcl), Bazel (bazel) build files, and YARA (yara) rules.
  • Databases & Graphics: We also added support for common formats like SQLite (sqlite) databases, AutoCAD (dwg, dxf) drawings, Adobe Photoshop (psd) files, and modern web fonts (woff, woff2).
  • Enhanced Granularity: Magika is now smarter at differentiating similar formats that might have been grouped together. For example, it can now distinguish:
    • JSONL (jsonl) vs. generic JSON (json)
    • TSV (tsv) vs. CSV (csv)
    • Apple binary plists (applebplist) from regular XML plists (appleplist)
    • C++ (cpp) vs. C (c)
    • JavaScript (javascript) vs. TypeScript (typescript)

Expanding Magika's detection capabilities introduced two significant technical hurdles: data volume and data scarcity.

First, the scale of the data required for training was a key consideration. Our training dataset grew to over 3TB when uncompressed, which required an efficient processing pipeline. To handle this, we leveraged our recently released SedPack dataset library. This tool allows us to stream and decompress this large dataset directly to memory during training, bypassing potential I/O bottlenecks and making the process feasible.

Second, while common file types are plentiful, many of the new, specialized, or legacy formats presented a data scarcity challenge. It is often not feasible to find thousands of real-world samples for every file type. To overcome this, we turned to generative AI. We leveraged Gemini to create a high-quality, synthetic training set by translating existing code and other structured files from one format to another. This technique, combined with advanced data augmentation, allowed us to build a robust training set, ensuring Magika performs reliably even on file types for which public samples are not readily available.

The complete list of all 200+ supported file types is available in our revamped documentation.

Under the Hood: A High-Performance Rust Engine

We completely rewrote Magika's core in Rust to provide native, fast, and memory-safe content identification. This engine is at the heart of the new Magika native command line tool that can safely scan hundreds of files per second.

Output of the new Magika Rust-based command-line tool

Magika can identify hundreds of files per second on a single core and easily scales to thousands per second on modern multi-core CPUs, thanks to the high-performance ONNX Runtime for model inference and Tokio for asynchronous parallel processing. For example, as shown in the chart below, on a MacBook Pro (M4), Magika processes nearly 1,000 files per second.

Getting Started

Ready to try it out? Getting started with the native command-line client is as simple as running a single command:

  • On Linux and MacOS: curl -LsSf https://securityresearch.google/magika/install.sh | sh
  • On Windows (PowerShell): powershell -ExecutionPolicy ByPass -c "irm https://securityresearch.google/magika/install.ps1 | iex"

Alternatively, the new Rust command-line client is also included in the magika Python package, which you can install with: pipx install magika.

For developers looking to integrate Magika as a library into their own applications in Python, JavaScript/TypeScript, Rust, or other languages, head over to our comprehensive developer documentation to get started.
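
As a quick sketch of what library usage looks like in Python (the result field names below follow recent releases of the magika package, but they have changed between versions, so double-check the developer documentation):

    from pathlib import Path
    from magika import Magika

    m = Magika()

    # Identify a file on disk.
    res = m.identify_path(Path("example.py"))
    print(res.output.label, res.score)  # e.g. "python" and the model's confidence

    # Identify raw bytes, e.g. an uploaded payload.
    res = m.identify_bytes(b'{"key": "value"}')
    print(res.output.label)  # e.g. "json"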

What's next

We're incredibly excited to see what you will build using Magika's enhanced file detection capabilities.

We invite you to join the community:

  • Try Magika: Install it and run it on your files, or try it out in our web demo.
  • Integrate Magika into your software: Visit our documentation to get started.
  • Give us a star on GitHub to show your support.
  • Report issues or suggest new file types you'd like to see by opening a feature request.
  • Contribute new features and bindings by opening a pull request.

Thank you to everyone who has contributed, provided feedback, and used Magika over the past year. We can't wait to see what the future holds.

Acknowledgements

Magika's continued success was made possible by the help and support of many people, including: Ange Albertini, Loua Farah, Francois Galilee, Giancarlo Metitieri, Alex Petit-Bianco, Kurt Thomas, Luca Invernizzi, Lenin Simicich, and Amanda Walker.

This Week in Open Source #11

Friday, October 31, 2025

This Week in Open Source for October 31, 2025

A look around the world of open source

Happy Halloween. Here is your treat in the form of news and events from the world of open source.

Upcoming Events

  • November 10 - 13: KubeCon NA is coming to Atlanta, Georgia, along with CloudNativeCon. It brings together adopters and technologists from leading open source and cloud native communities.
  • December 5 - 7: PyLadiesCon is happening online and in multiple languages across many timezones. This event is dedicated to empowerment, learning, and diversity within the Python community!
  • December 8 - 10: Open Source Summit Japan is happening in Tokyo. Open Source Summit is the Linux Foundation's premier event series for open source developers and contributors around the world. If you can make it to Japan, there are many sessions to learn from.

Open Source Reads and Links

  • A new breed of analyzers - AI-powered code analyzers have recently found many real, useful bugs in curl that earlier tools missed. They scanned all source variations without a build and reported high-quality issues like memory leaks and protocol faults. The curl team fixed dozens of them and now works with the reporters to keep improving security.
  • A national recognition; but science and open source are bitter victories - Gaël Varoquaux received France's national order of merit for his work in science, open source, and AI. He celebrates how open tools and collective effort changed the world but warns that economic power can turn those tools to harmful ends. He urges building a collective narrative and economic ambition so science and free software serve a better future for our children. (disponible en français aussi)
  • If Open Source Stops Being Global, It Stops Being Open - Geopolitics is pushing technology toward national control. Open source preserves sovereignty because code is user-controlled and global. Should governments buy and support global open source? If it stops being global, does it stop being open?
  • Vibe Coding Is the New Open Source—in the Worst Way Possible - Developers are using AI-generated "vibe coding" like they used open source, but it can hide insecure or outdated code. AI often produces inconsistent, hard-to-trace code that increases software supply-chain risk. That danger hits small, vulnerable groups hardest and could create widespread security failures.
  • New Open Source Tool from Angular Scores Vibe Code Quality - One of the Angular developers took up the challenge [of evaluating the best LLM for Angular] and vibe-coded a prototype tool that could test how well vibe code works with Angular. That early experiment led to the creation of an open source tool that tests LLM-generated code for frontend development considerations, such as following best practices for a framework, using accessibility best practices and identifying security problems. Called Web Codegen Scorer, the tool is designed to test all of these in vibe-coded applications.

What spooky open source events and news are you being haunted by? Let us know on our @GoogleOSS X account. We will share some of the best on our next This Week in Open Source post.
