The landscape of Machine Learning software libraries and models is evolving rapidly, and to satisfy the ever-increasing demand for memory and compute while managing latency, power and security considerations, hardware must be developed in an iterative process alongside the workloads it is meant to run.
With its open architecture, custom instruction support and flexible vector extensions, the RISC-V ISA offers an unprecedented capacity for such co-design. And by energizing the open hardware ecosystem, RISC-V has supercharged research and innovation into how to improve chipmaking itself to better leverage the methods and suit the needs of software. Initiatives such as Google’s OpenMPW Shuttle show how a more open and software-focused approach to building hardware is key to enabling a new wave of more powerful and transparent ML-focused solutions.
A RISC-V-based ML accelerator with a HW/SW co-design flow
Over the past months, Google Research has joined forces with Antmicro to work on a silicon project that can serve as a template for efficient hardware-software co-design. For their secure ML solution, the Google Research team, supported by Antmicro, has been developing a completely open source, rapid pre-silicon ML development flow using Renode, Antmicro’s open source simulation framework.
This builds on the results of last year’s cooperation, in which Antmicro implemented Renode support for the RISC-V Vector extensions used in the Google team’s RISC-V-based ML accelerator, codenamed Springbok. To provide a more well-rounded developer experience, as part of the project Antmicro is also improving support for the underlying SoC and a range of user-oriented features such as OS-aware debugging, performance optimizations, payload profiling and performance measurement capabilities.
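To give a flavor of the debugging side of that workflow, the snippet below shows the general pattern for attaching GDB to a simulated core from Renode’s monitor; the platform and payload file names are placeholders, and Renode’s documentation describes the full workflow, including the OS-aware features:

    (monitor) mach create
    (machine-0) machine LoadPlatformDescription @platforms/cpus/my_riscv_soc.repl
    (machine-0) sysbus LoadELF @payload.elf
    (machine-0) machine StartGdbServer 3333

With the server listening, you can connect a cross GDB with target remote :3333, set your breakpoints and then issue start in the monitor to begin execution.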
Springbok is part of Google’s AmbiML project, which aims to create an open source ML development ecosystem centered on privacy and security. By using the RISC-V Vector extensions, the Google Research team has a standard yet flexible way to parallelize the matrix multiply-and-accumulate operations that are universal in ML payloads. And thanks to Renode, the team can make informed choices about how exactly to leverage RISC-V’s flexibility, analyzing tradeoffs between speed, complexity and specialization in a practical, iterative fashion. The data generated by Renode, combined with its text-based configuration capabilities, lets them play around with hardware composition and functionality in a matter of minutes, not days.
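As an illustration of those text-based configuration capabilities, a Renode platform description (.repl file) along these lines is enough to put together a dual-core RISC-V machine; note that the names, addresses and cpuType strings below are invented for the example and do not describe the actual Springbok platform:

    clint: IRQControllers.CoreLevelInterruptor @ sysbus 0x02000000
        frequency: 66000000

    cpu0: CPU.RiscV32 @ sysbus
        cpuType: "rv32imac"
        timeProvider: clint

    cpu1: CPU.RiscV32 @ sysbus
        cpuType: "rv32imacv"
        timeProvider: clint
        hartId: 1

    ram: Memory.MappedMemory @ sysbus 0x80000000
        size: 0x10000000

Swapping the vector extension in or out, or rearranging the memory map, is a one-line change in such a file followed by a simulation restart.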
On the ML software side, the ecosystem revolves around IREE, Google’s research project developing an open source ML compiler and runtime for constrained devices, based on LLVM MLIR.
IREE lets you load models from typical ML frameworks such as TensorFlow or TensorFlow Lite and convert them to an intermediate representation (MLIR), which then goes through graph-level optimizations and an LLVM compilation flow to produce a runtime best fitted to a specific target. When it comes to deploying models on target devices, IREE provides APIs for both the C and Python programming languages, as well as a TFLite C API that follows the same conventions as TFLite for model loading, tensor management and inference invocation.
Using these runtimes, the model can be deployed, tested, debugged, benchmarked and executed on the target device or in a simulation environment like Renode.
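As a rough sketch of what the Python path looks like (IREE’s module layout, backend and driver names change between releases, so treat the details here as illustrative rather than canonical):

    import numpy as np
    from iree import compiler as ireec
    from iree import runtime as ireert

    # A trivial MLIR module standing in for a model imported from
    # TensorFlow or TensorFlow Lite.
    MLIR = """
    func.func @abs(%input : tensor<f32>) -> (tensor<f32>) {
      %result = math.absf %input : tensor<f32>
      return %result : tensor<f32>
    }
    """

    # Compile for a host CPU target; cross-compiling for a RISC-V device
    # would use a different target backend plus extra flags.
    flatbuffer = ireec.compile_str(MLIR, target_backends=["llvm-cpu"])

    # Load the compiled module into the runtime and invoke the function.
    config = ireert.Config("local-task")
    ctx = ireert.SystemContext(config=config)
    vm_module = ireert.VmModule.from_flatbuffer(ctx.instance, flatbuffer)
    ctx.add_vm_module(vm_module)
    result = ctx.modules.module["abs"](np.array(-3.0, dtype=np.float32))
    print(result)  # expected: 3.0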
Demoing the flow at Spring 2022 RISC-V Week
In the run-up to the Spring 2022 RISC-V Week in Paris, the first such large open hardware meeting in years, an initial version of the AmbiML bare metal ML flow was released as open source. This includes both the ability to run interactively and an example CI setup using Antmicro’s Renode GitHub Action, showing how such a workflow can be tested automatically on each commit. As a Google Cloud partner, Antmicro is currently working with Google Cloud to make Renode available for massive-scale CI testing and deployments in scenarios similar to this one.
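A minimal workflow file built on that action could look roughly like the sketch below; the action name and inputs follow Antmicro’s renode-test-action README, while the version tags and the Robot Framework test path are made up for the example:

    name: renode-ci
    on: [push]

    jobs:
      tests:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          - uses: antmicro/renode-test-action@v3.0.0
            with:
              renode-version: '1.13.0'
              tests-to-run: 'tests/springbok.robot'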
In a joint talk at the Paris event, Antmicro and Google presented the software co-development flow, together with a demo of a heterogeneous multi-core solution, with one core running the AmbiML Springbok payload and another core running Zephyr.
In the presented scenario, the Springbok core, acting as an ML compute offload unit for the main CPU, executed inference on the MobileNetV1 network and reported the work done to the application core via a RISC-V custom instruction. Adding and modifying custom instructions is trivial in Renode: a handler can be defined in a single line of Python or C#, or even co-simulated in RTL.
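For instance, a new instruction can be wired in from Renode’s monitor with a one-line Python handler, along these lines (the bit pattern here is arbitrary; letters in the pattern mark operand fields, and 0001011 is one of the opcode spaces RISC-V reserves for custom extensions):

    (machine-0) sysbus.cpu InstallCustomInstructionHandlerFromString "bbbbbbbbbbbbbbbbbbbbbbbbb0001011" "cpu.DebugLog('custom instruction executed!')"

The handler body is ordinary Python with access to the CPU state, so instead of logging it can just as well compute a result or signal another core.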
Renode helps ML developers and silicon designers not only run and test their solutions, but also learn more about what their software is actually doing. As part of the Paris demonstration, Antmicro and Google showed how you can count executed instructions and track how often specific opcodes are used to measure how well your solution is performing. These features, accompanied by execution metrics analysis, logging of executed functions and the recently developed execution trace generation, give you great insight into every detail of your emulated ML environment.
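In recent Renode releases these capabilities are exposed as monitor commands along the following lines; the names can differ slightly between versions, so check the documentation for the exact syntax:

    (machine-0) sysbus.cpu LogFunctionNames true
    (machine-0) sysbus.cpu EnableRiscvOpcodesCounting
    (machine-0) sysbus.cpu GetAllOpcodesCounters
    (machine-0) sysbus.cpu CreateExecutionTracing "tracer" @trace.log PCAndOpcode

The first command logs function names as execution enters them, the next two count how often each opcode was executed, and the last one streams an execution trace to a file for offline analysis.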
These capabilities join the wide arsenal of hardware/software co-development solutions in Renode, such as RTL co-simulation, which Antmicro has been developing with Microchip, and support for verilated custom instructions, developed with another ML-focused Google team responsible for RISC-V Custom Function Units and also used in the EU-funded VEDLIoT project.
Future plans
This is just the beginning of a wider effort by the Google Research team Antmicro is working with to release software and hardware components, as well as tools supporting a collaborative co-design ecosystem for secure ML development. If you think Renode, RISC-V and co-development could help in building your next ML-focused product, go ahead and try the AmbiML flow yourself!
Visit the iree-rv32-springbok repository on GitHub, clone it locally and follow the instructions from README.md.
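In shell terms (with the GitHub organization left as a placeholder, since the URL is not spelled out above):

    git clone https://github.com/<organization>/iree-rv32-springbok.git
    cd iree-rv32-springbok
    # then follow the build and run steps in README.md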
You can also grab Renode from the official repository and start playing with the available demos, or head to the Renode documentation to read up on features helpful for ML acceleration development, such as Verilator co-simulation.
By Peter Zierhoffer – Antmicro