TensorFlow Lite for Microcontrollers in the Gecko SDK
TensorFlow Lite for Microcontrollers is a framework that provides a set of tools for running neural network inference on microcontrollers. The framework is limited to model inference and does not support training. It contains a wide selection of kernel operators, with good support for 8-bit quantized networks.
Silicon Labs integrates TensorFlow Lite for Microcontrollers with the Gecko SDK by providing it as a set of SDK components. This overview explains how to get started with TensorFlow Lite for Microcontrollers in the Gecko SDK.
SDK Component Overview
The components required to use TensorFlow Lite for Microcontrollers are found under Platform|Machine Learning|TensorFlow. The most important components are described below.
- TensorFlow Lite for Microcontrollers - This component contains the TensorFlow Lite for Micro framework and includes all code necessary to set up and perform model inference (a minimal setup sketch follows this list).
- TensorFlow Lite Micro Reference Kernels - This component includes the framework code required by all supported kernels and is automatically included with the TensorFlow Lite for Microcontrollers component. In addition, it provides unoptimized software implementations of all kernels. These default implementations are easy to understand and run on any platform, but they may run more slowly than optimized implementations.
- TensorFlow Lite Micro Optimized Kernels - Some kernels have implementations optimized for certain CPU architectures, and using them when available can improve inference performance significantly. Enabling this component adds the available optimized kernel implementations to the project, replacing the corresponding reference kernel implementations. The remaining kernels fall back to the reference implementations.
- TensorFlow Lite Micro Debug Logging IO Stream/None - Debug logging is used by TensorFlow to print debug and error information, and can also be used to display inference results. Debug logging is enabled by default with an implementation that uses I/O Stream to print over UART to VCOM. It can be disabled by ensuring that the "TensorFlow Lite Micro Debug Log - None" component is included in the project.
- TensorFlow Lite Micro Audio Frontend - This component provides an audio frontend library, developed by TensorFlow, which can be used to generate filter bank feature data from audio input. The frontend implementation uses the KISS FFT library to compute configurable filter banks from raw audio data.
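As a minimal sketch of the model interpretation flow provided by the TensorFlow Lite for Microcontrollers component, the code below loads a model, registers kernels, allocates tensors, and runs inference. The model array name (g_model) and the tensor arena size are placeholders that depend on your application; the MicroErrorReporter routes its output through the Debug Log component described above.

```cpp
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Placeholder: flatbuffer produced by the TensorFlow Lite converter,
// stored as a C array (e.g. generated with xxd -i model.tflite).
extern const unsigned char g_model[];

// Placeholder arena size; must be tuned to the model's tensor requirements.
constexpr int kTensorArenaSize = 4 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void run_inference_once() {
  static tflite::MicroErrorReporter error_reporter;  // logs via Debug Log

  const tflite::Model* model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(&error_reporter, "Model schema version mismatch.");
    return;
  }

  // AllOpsResolver registers every kernel; convenient, but costs flash.
  static tflite::AllOpsResolver resolver;

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize, &error_reporter);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(&error_reporter, "AllocateTensors() failed.");
    return;
  }

  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data with application data ...
  (void)input;

  if (interpreter.Invoke() == kTfLiteOk) {
    TfLiteTensor* output = interpreter.output(0);
    // ... read inference results from output->data ...
    (void)output;
  }
}
```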
TensorFlow Third Party Dependencies
A specific version of the CMSIS-NN library is used as part of TensorFlow Lite for Microcontrollers to optimize certain kernels, and this library is included in the project together with TensorFlow Lite for Microcontrollers. Because of a mismatch between this version and the CMSIS library included by the Gecko SDK, avoid using functions from the Gecko SDK version of CMSIS-DSP and CMSIS-NN elsewhere in the project.
Sample Applications
Two sample applications demonstrate the TensorFlow Lite for Microcontrollers framework with the Gecko SDK. Both were developed by TensorFlow and have been ported as-is to run on Silicon Labs hardware. The applications are currently alpha quality and are in no way optimized for inference accuracy, energy consumption, size, or speed.
TensorFlow Hello World
This application demonstrates a model trained to replicate a sine function and uses the inference results to fade an LED. The application was originally written by TensorFlow and has been ported to the Gecko SDK.
The model used is approximately 2.5KB. The entire application takes around 157KB of flash and 15KB of RAM. The application uses a large amount of flash because it does not manually specify which operations are used in the model and therefore compiles in all kernel implementations.
The application is a minimal inference example and serves as a good starting point for understanding the TensorFlow Lite for Microcontrollers model interpretation flow.
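For illustration, the core of this flow looks roughly like the sketch below, which assumes an int8-quantized model with a single value in and out and an interpreter set up as shown earlier; the function name and LED mapping are placeholders, not the application's exact code.

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"

// Feed an x value to the model and read back an approximation of sin(x),
// assuming int8-quantized input and output tensors.
float predict_sine(tflite::MicroInterpreter& interpreter, float x) {
  TfLiteTensor* input = interpreter.input(0);
  TfLiteTensor* output = interpreter.output(0);

  // Quantize the input using the scale/zero-point stored with the tensor.
  input->data.int8[0] = static_cast<int8_t>(
      x / input->params.scale + input->params.zero_point);

  if (interpreter.Invoke() != kTfLiteOk) {
    return 0.0f;  // Inference failed; a real application would report this.
  }

  // Dequantize the output back to a float.
  return (output->data.int8[0] - output->params.zero_point) *
         output->params.scale;
}

// The predicted value (roughly in [-1, 1]) can then be mapped to a PWM duty
// cycle to fade the LED, e.g. duty = (y + 1.0f) / 2.0f.
```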
TensorFlow Micro Speech
This application demonstrates a 20KB model trained to detect simple words in speech data recorded from a microphone. The application was originally written by TensorFlow and has been ported to the Gecko SDK.
This application uses around 100KB of flash and 37KB of RAM; around 10KB of the RAM is used by the FFT frontend and for storing enough historic audio data. With a clock speed of 38.4MHz and the optimized kernel implementations enabled, the inference time on ~1s of audio data is approximately 111ms.
This application demonstrates the process of generating features from audio data and running detections in real time. It also shows how to manually specify which operations are used in the network, which saves a significant amount of flash (see the operator registration sketch below).
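Manual operator registration uses tflite::MicroMutableOpResolver in place of tflite::AllOpsResolver, so that only the kernels the model actually needs are compiled in. The sketch below is illustrative: the four operators listed (depthwise convolution, fully connected, softmax, reshape) are typical of a small keyword-spotting model, and the list must match the operators used by your own model.

```cpp
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Register only the operators the model needs. The template parameter is the
// maximum number of operators that can be added to the resolver.
static tflite::MicroMutableOpResolver<4> micro_op_resolver;

void register_ops() {
  micro_op_resolver.AddDepthwiseConv2D();
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddReshape();
}
```

Passing this resolver to the MicroInterpreter instead of AllOpsResolver lets the linker discard all unused kernel implementations, which is largely why Micro Speech fits in roughly 100KB of flash while Hello World needs around 157KB.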
Getting Started with Machine Learning on Silicon Labs Devices
Getting Started with Machine Learning provides step-by-step instructions on how to build a machine learning application using TensorFlow Lite Micro on Silicon Labs devices.
Version
TensorFlow version 2.4.1 is used in the Gecko SDK.