TensorFlow Lite for Microcontrollers
TensorFlow Lite for Microcontrollers is a framework for running neural network inference on microcontrollers. It contains a wide selection of kernel operators with good support for 8-bit integer quantized networks. The framework is limited to model inference and does not support training. For information about how to train a neural network, see the Silicon Labs Machine Learning Toolkit (MLTK).
Silicon Labs provides an integration of TensorFlow Lite for Microcontrollers with the Gecko SDK. This page gives an overview of the integration. See the Getting Started Guides for step-by-step instructions on how to use machine learning in your project.
SDK Component Overview
The software components required to use TensorFlow Lite for Microcontrollers can be found under Machine Learning > TensorFlow in the software component browser of the Simplicity Studio project configurator.
TensorFlow Lite Micro
This component contains the full TensorFlow Lite for Microcontrollers framework and, by default, automatically pulls in the best available kernel implementations for the device selected for the project. To use TensorFlow Lite Micro, this component is the only one that needs to be installed explicitly. It is, however, possible to manually install different kernel implementations if so desired, for instance to compare inference performance or code size, and to manually install a different debug logging implementation.
By default, the TensorFlow Lite Micro component makes use of the Flatbuffer Converter Tool to convert a .tflite file into a C array and to initialize this neural network model automatically. See the section on automatic initialization for more details.
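For illustration, the tool-generated output is conceptually similar to the sketch below. The exact file and symbol names are produced by the tool, so the names shown here are assumptions.

    // Sketch of a generated model source file (symbol names are
    // illustrative assumptions). The .tflite flatbuffer bytes are
    // embedded as a constant array, linked into flash, and handed to
    // the interpreter at startup.
    #include <stdint.h>

    const uint8_t sl_tflite_model_array[] = {
      0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // root offset + "TFL3" identifier
      // ... remaining flatbuffer bytes ...
    };
    const uint32_t sl_tflite_model_len = sizeof(sl_tflite_model_array);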
Kernel Implementations
Reference Kernels
This component provides unoptimized software implementations of all kernels. These default implementations are designed to be easy to read and to run on any platform. As a result, they may run more slowly than an optimized implementation.
CMSIS-NN Optimized Kernels
Some kernels have implementations that have been optimized for certain CPU architectures using the CMSIS-NN library. Using these kernels when available can improve inference performance significantly. By enabling this component, the available optimized kernel implementations are added to the project, replacing the corresponding reference kernel implementations. The remaining kernels fall back to using the reference implementations by depending on the reference kernel component.
MVP Accelerated Kernels
Some kernels have implementations optimized for the MVP accelerator available on select Silicon Labs parts. Using these kernels will improve inference performance. By enabling this component, the available accelerated kernel implementations are added to the project, replacing the corresponding optimized or reference kernel implementations. The remaining kernels fall back to the optimized or reference implementations by depending on the corresponding components. See the accelerator documentation for details on which kernels are supported and what constraints apply.
Debug Logging using I/O Stream / Disabled
Debug logging is used in TensorFlow to display debug and error information. Additionally, it can be used to display inference results. Debug logging is enabled by default, with an implementation that uses I/O Stream to print over UART to the virtual COM port on development kits (VCOM). Logging can be disabled by ensuring that the component "Debug Logging Disabled" is included in the project.
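For example, a minimal sketch of application-level logging through this mechanism might look as follows, assuming the pinned TFLM revision provides MicroPrintf() in micro_log.h:

    // Sketch: printing inference results through TFLM debug logging.
    // With the default component setup the output is routed through
    // I/O Stream to the virtual COM port; with "Debug Logging Disabled"
    // installed, the output is dropped.
    #include "tensorflow/lite/micro/micro_log.h"

    static void report_score(int category, float score)
    {
      // Formats like printf and forwards to the installed debug
      // logging implementation.
      MicroPrintf("category %d scored %f", category, score);
    }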
TensorFlow Third Party Dependencies
A specific version of the CMSIS-NN library is used with TensorFlow Lite for Microcontrollers to optimize certain kernels. This library is included in the project together with TensorFlow Lite for Microcontrollers. TensorFlow depends on a bleeding-edge version of CMSIS-NN, while the rest of the Gecko SDK uses a stable CMSIS release. To avoid versioning conflicts between the two, it is strongly recommended to avoid calling functions from the Gecko SDK version of CMSIS-DSP and CMSIS-NN elsewhere in the project, and to instead use the version bundled with TensorFlow Lite for Microcontrollers.
Audio Feature Generator
The audio feature generator can be used to extract time-frequency features from an audio signal for use with machine learning (ML) audio classification applications. The generated feature array is a mel-scaled spectrogram representing the frequency content of a given length of audio.
When used together with the Flatbuffer Converter Tool, the audio feature generator by default consumes its configuration settings from the model parameters of the .tflite flatbuffer. Such metadata can be added to the flatbuffer by using the Silicon Labs Machine Learning Toolkit. This ensures that the settings used during inference on the embedded device match the settings used during training. If models without such metadata are used, the configuration option "Enable Manual Frontend Configurations" can be enabled and the configuration values set in the configuration header sl_ml_audio_feature_generation_config.h.
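As a rough sketch, an inference step built on this component could look like the following. The sl_ml_audio_feature_generation_* and sl_tflite_micro_* function names are assumptions based on the component naming scheme and should be verified against the headers installed in the project.

    // Sketch: generating a mel-spectrogram and feeding it to the model
    // (function names assumed; check the component headers).
    #include "sl_ml_audio_feature_generation.h"
    #include "sl_tflite_micro_init.h"

    void classify_audio(void)
    {
      // Consume newly captured audio samples and update the
      // mel-scaled spectrogram feature array.
      sl_ml_audio_feature_generation_update_features();

      // Copy the feature array into the model's input tensor,
      // quantizing it as required by the tensor type.
      sl_ml_audio_feature_generation_fill_tensor(
          sl_tflite_micro_get_input_tensor());

      // Run inference on the spectrogram.
      sl_tflite_micro_get_interpreter()->Invoke();
    }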
Automatic Initialization of Default Model
When the TensorFlow Lite Micro component is added to the project, it will by default attempt to automatically initialize a default model using the TFLite Micro Init API. The initialization creates an opcode resolver and an interpreter for the given flatbuffer, as well as the tensor arena buffer.
The model used by the automatic initialization code comes from the Flatbuffer Converter Tool. If the flatbuffer was produced using the MLTK, it may contain metadata about the required tensor arena size. When such information is present, the tensor arena is automatically allocated with the correct size. If a non-MLTK flatbuffer is used, the tensor arena size must be configured manually using the configuration file for the TensorFlow Lite Micro component.
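For non-MLTK models, the manual configuration amounts to setting a single macro in the component's configuration header. The macro name below is an assumption, and the value is only a placeholder that must be tuned to the model.

    // Sketch of a manual tensor arena configuration (macro name assumed).
    // The arena must be large enough for all input, output, and
    // intermediate tensors; an undersized arena makes tensor allocation
    // fail during initialization.
    #define SL_TFLITE_MICRO_ARENA_SIZE 8192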
If automatic initialization at startup is not desired, it can be turned off using the Automatically initialize model (SL_TFLITE_MICRO_INTERPRETER_INIT_ENABLE) configuration option.
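Once the automatic initialization has run, the application can retrieve the ready-to-use interpreter and tensors. A minimal sketch is shown below, assuming the init API exposes accessors such as sl_tflite_micro_get_interpreter(); verify the names against sl_tflite_micro_init.h in your SDK version.

    // Sketch: running inference with the automatically initialized model
    // (accessor names assumed from the TFLite Micro Init API).
    #include <stddef.h>
    #include "sl_tflite_micro_init.h"

    void run_inference(const int8_t *features, size_t count)
    {
      TfLiteTensor *input = sl_tflite_micro_get_input_tensor();

      // Copy quantized input features into the model's input tensor.
      for (size_t i = 0; i < count; i++) {
        input->data.int8[i] = features[i];
      }

      // The opcode resolver and tensor arena were already created by
      // the automatic initialization code at startup.
      if (sl_tflite_micro_get_interpreter()->Invoke() == kTfLiteOk) {
        TfLiteTensor *output = sl_tflite_micro_get_output_tensor();
        // output->data.int8 now holds the quantized results.
        (void)output;
      }
    }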
Version
The Gecko SDK incorporates TensorFlow Lite for Microcontrollers at commit 3e190e5389be49c94475e509452bdae245bd4fa6 in util/third_party/tflite-micro/. The core TensorFlow Lite for Microcontrollers offering is unpatched; all additional content for Silicon Labs devices is delivered in the util/third_party/tensorflow_extra/ directory.
Third-party Tools and Partners
Tools:
- Netron is a visualization tool for neural networks, compatible with .tflite model files. This is useful for viewing the operations used in a model, the sizes of tensors and kernels, etc.
AI/ML Partners:
Silicon Labs AI/ML partners provide expertise and platforms for data collection, model development, and training. See the technology partner pages to learn more.