Getting Started with Machine Learning
Silicon Labs integrates TensorFlow Lite for Microcontrollers as a component within the Gecko SDK and Project Configurator for EFx32 series microcontrollers, making it simple to add machine learning capability to any application. This guide covers how TensorFlow Lite for Microcontrollers is integrated with the Gecko SDK for use with Silicon Labs' EFx32 devices.
TensorFlow Lite for Microcontrollers
TensorFlow is a widely used deep learning framework for developing and executing neural networks across a variety of platforms. TensorFlow Lite provides an optimized set of tools tailored specifically to machine learning on mobile and embedded devices.
TensorFlow Lite for Microcontrollers (TFLM) specifically provides a C++ library for running machine learning models in embedded environments with tight memory constraints. Silicon Labs provides tools and support for loading and running pre-trained models that are compatible with this library.
Gecko SDK TensorFlow Integration
The Gecko SDK includes TensorFlow Lite for Microcontrollers (TFLM) as a third-party package, allowing for easy integration and testing with Silicon Labs' projects. Note that the included TFLM version may lag behind the latest upstream content: TFLM practices continuous delivery, while integration into the Gecko SDK happens only twice a year.
Additionally, TensorFlow Software Components in the Project Configurator simplify the process of including the necessary dependencies to use TFLM in a project.
Training and Quantizing a Model
To perform neural network inference on an EFx32 device, you first need a trained model in the TFLite Flatbuffer format. Developers experienced with TensorFlow have two approaches to consider:
- Using the Silicon Labs Machine Learning Toolkit, a Python reference package that combines and simplifies all the necessary TensorFlow training steps.
- Following published tutorials for training neural networks using TensorFlow, as outlined in the section Developing a Model.
Developing an Inference Application Using Simplicity Studio and the Gecko SDK
After you have a trained and quantized TFLite model, the next step is to set up the TFLM libraries to run inference on the EFx32 device.
Project Configurator Setup
The Project Configurator includes TFLM libraries as software components. These software components may be added to any existing project and are described in the SDK Component Overview. The core components needed for any machine learning project are as follows:
- TensorFlow Lite Micro. This is the core software component, which pulls in all the TFLM dependencies.
- A supported TFLM kernel implementation. A kernel is a hardware/platform-specific implementation of a low-level operation used by TensorFlow. Kernel selection can drastically change the performance and computation time of a neural network. By default, the best kernel implementation for the given device is selected automatically.
- A supported TFLM debug logger. The Project Configurator defaults to the I/O Stream implementation of the logger. To disable logging entirely, add the "Debug Logging Disabled" component.
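With those components in place, application code drives inference through the TFLM C++ API. The following is a minimal sketch of that flow, not a complete implementation: `g_model_data` stands in for your converted flatbuffer array, the arena size and registered operators are model-dependent placeholders, the constructor signature varies slightly across TFLM releases, and the code only builds inside a project that includes the TensorFlow Lite Micro component.

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Assumed: the .tflite flatbuffer exported as a C array.
extern const unsigned char g_model_data[];

// Arena size is model-dependent; increase it until AllocateTensors() succeeds.
constexpr int kTensorArenaSize = 16 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

void RunInferenceOnce() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators your model actually uses; the template
  // parameter is the operator count. These three are illustrative.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return;  // arena too small or an operator was not registered
  }

  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data.int8 with quantized sensor data ...
  (void)input;

  if (interpreter.Invoke() == kTfLiteOk) {
    TfLiteTensor* output = interpreter.output(0);
    // ... read predictions from output->data.int8 ...
    (void)output;
  }
}
```

The selected kernel component determines which implementations back the registered operators, so this application code stays the same whether the project uses the reference kernels or an accelerated variant.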
In addition to the required TFLM components, software components for obtaining and pre-processing sensor data can be added to the project. For audio applications, for example, Silicon Labs provides an audio feature generator component with powerful DSP features to filter and extract features from raw audio data, serving as a frontend for microphone-based applications. Silicon Labs-developed drivers for microphones, accelerometers, and other sensors provide a simple interface for obtaining sensor data to feed to a network.