Getting Started with Machine Learning#

Introduction#

Silicon Labs TensorFlow Lite for Microcontrollers Integration#

Silicon Labs provides robust support for TensorFlow Lite for Microcontrollers (TFLM) as an extension to the Simplicity Studio SDK (SiSDK), giving developers flexible options for deploying machine learning models on EFx32 and Si91x microcontrollers through the Project Configurator in Simplicity Studio. This guide describes how TensorFlow Lite for Microcontrollers is integrated with the SiSDK via the AIML extension for use with Silicon Labs' EFx32 and Si91x devices.

TensorFlow Lite for Microcontrollers#

TensorFlow is a widely used deep learning framework for developing and executing neural networks across a variety of platforms. TensorFlow Lite provides an optimized set of tools catered specifically to machine learning on mobile and embedded devices.

TensorFlow Lite for Microcontrollers (TFLM) specifically provides a C++ library for running machine learning models in embedded environments with tight memory constraints. Silicon Labs provides tools and support for loading and running pre-trained models that are compatible with this library.

AIML Extension#

Installing the AI/ML Extension for Silicon Labs Simplicity Studio#

For detailed instructions on installing the AI/ML extension, refer to the AI/ML Extension Installation Guide. This extension empowers developers to integrate machine learning capabilities into their Silicon Labs-based projects.

Training and Quantizing a Model#

To perform neural network inference on an EFx32 device, you first need a trained model in the TFLite flatbuffer format. Developers experienced with TensorFlow have two approaches to consider for producing one.
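Post-training quantization stores a model's weights and activations as int8 values related to the original floats by an affine mapping, which is what makes the resulting flatbuffer small enough for microcontroller deployment. A minimal, self-contained sketch of that arithmetic (the `scale` and `zero_point` values below are illustrative, not taken from any real model):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Affine (asymmetric) int8 quantization as used by TFLite:
//   q  = clamp(round(x / scale) + zero_point, -128, 127)
//   x' = (q - zero_point) * scale
int8_t quantize(float x, float scale, int zero_point) {
  int q = static_cast<int>(std::lround(x / scale)) + zero_point;
  return static_cast<int8_t>(std::clamp(q, -128, 127));
}

float dequantize(int8_t q, float scale, int zero_point) {
  return static_cast<float>(q - zero_point) * scale;
}
```

Each tensor in a quantized TFLite model carries its own `scale` and `zero_point`; values outside the representable range saturate at the int8 limits, which is one reason a representative dataset is used during quantization to pick good ranges.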

Developing an Inference Application Using Simplicity Studio, SiSDK and AIML extension#

After you have a trained and quantized TFLite model, the next step is to set up the TFLM libraries to run inference on the EFx32 device.

Project Configurator Setup#

The Project Configurator includes TFLM libraries as software components. These software components may be added to any existing project. They are described in the SDK Component Overview. The core components needed for any machine learning project are as follows:

  1. TensorFlow Lite Micro. This is the core software component that pulls in all the TFLM dependencies.

  2. A supported TFLM kernel implementation. A kernel is a specific hardware/platform implementation of a low-level operation used by TensorFlow. Kernel selection can drastically change the performance and computation time of a neural network. By default, the best kernel implementation for the given device is selected automatically.

  3. A supported TFLM debug logger. The Project Configurator defaults to using the I/O Stream implementation of the logger. To disable logging entirely, add the Debug Logging Disabled component.
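With these components added, a typical application wires them together roughly as follows. This is a hedged sketch based on the standard TFLM C++ API; `model_data` (the flatbuffer array generated from your `.tflite` file), the op set, and the tensor arena size are placeholders to adapt to your own model, and older TFLM releases use a slightly different `MicroInterpreter` constructor.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer array generated from your .tflite file; the name is a placeholder.
extern const unsigned char model_data[];

// Working memory for the interpreter's tensors; size must be tuned per model.
constexpr int kTensorArenaSize = 10 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void run_inference() {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the ops the model actually uses; the template
  // parameter is the number of registered ops.
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return;  // Arena too small, or an op is missing from the resolver.
  }

  // Fill the input tensor, run the network, then read the output.
  TfLiteTensor* input = interpreter.input(0);
  // ... copy (quantized) sensor data into input->data.int8 ...
  if (interpreter.Invoke() == kTfLiteOk) {
    TfLiteTensor* output = interpreter.output(0);
    // ... interpret output->data.int8 using the output's scale/zero_point ...
    (void)output;
  }
  (void)input;
}
```

The Project Configurator components supply the headers, kernel implementations, and debug logger this sketch depends on; on device, the selected kernel component determines which implementation backs calls such as `AddFullyConnected()`.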

In addition to the required TFLM components, software components for obtaining and pre-processing sensor data can be added to the project. For example, for audio applications Silicon Labs provides an audio feature generator component with DSP capabilities to filter and extract features from raw audio data, serving as a frontend for microphone-based applications. Silicon Labs-developed drivers for microphones, accelerometers, and other sensors provide a simple interface for obtaining sensor data to feed to a network.
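As a hedged sketch of how such a frontend might be wired to a model, the loop below uses the `sl_ml_audio_feature_generation_*` functions as I understand the audio feature generator component's API; verify the exact names and signatures against the component's own documentation before relying on them.

```cpp
#include "sl_ml_audio_feature_generation.h"
#include "tensorflow/lite/micro/micro_interpreter.h"

// Assumed API: init() starts the microphone and DSP pipeline,
// update_features() converts newly captured audio into features,
// and fill_tensor() copies the current feature window into a tensor.
void audio_frontend_loop(tflite::MicroInterpreter& interpreter) {
  sl_ml_audio_feature_generation_init();

  while (true) {
    // Process any newly captured audio into spectrogram features.
    sl_ml_audio_feature_generation_update_features();

    // Copy the latest feature window into the model's input tensor.
    sl_ml_audio_feature_generation_fill_tensor(interpreter.input(0));

    if (interpreter.Invoke() == kTfLiteOk) {
      // ... read interpreter.output(0) and act on the classification ...
    }
  }
}
```

The value of this arrangement is that the DSP frontend and the neural network see the same feature format used during training, so the audio pipeline configuration must match the parameters the model was trained with.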