Add Machine Learning to a New or Existing Project#
This guide describes how to add machine learning to a new or existing project using the wrapper APIs for TensorFlow Lite for Microcontrollers (TFLM) provided by Silicon Labs, which handle automatic initialization of the TFLM framework.
The guide assumes that a project already exists in the Simplicity Studio workspace. If you're starting from scratch, you may start with any sample application or the Empty C++ application. TFLM has a C++ API, so the application code interfacing with it will also need to be written in C++. If you're starting with an application that is predominantly C code, see the section on interfacing with C code for tips on how to structure your project by adding a separate C++ file for the TFLM interface.
Install the TensorFlow Lite Micro Component#
Browse the Software Components library in the project to find the TensorFlow Lite Micro component under Machine Learning > TensorFlow.
If your project does not already contain an I/O Stream implementation, you may see a dependency validation warning. This is not a problem; it simply means that an I/O Stream backend needs to be selected. The USART or EUSART backends are the most common, as they can communicate with a connected PC through the development kit virtual COM port (VCOM).
Accept the default suggestion of "vcom" as the instance name, which will automatically configure the pinout to connect to the development board's VCOM lines. If you're using your own hardware, you can set any instance name and configure the pinout manually.
Model Inclusion#
With the TensorFlow Lite Micro component added in the Project Configurator, the next step is to load the model file into the project. To do this, create a tflite directory inside the config directory of the project, and copy the .tflite model file into it. The Project Configurator provides a tool that automatically converts .tflite files into sl_tflite_micro_model source and header files. The full documentation for this tool is available at Flatbuffer Converter Tool.
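For example, if the model file is named my_model.tflite (the name here is illustrative), it would be copied to:
config/tflite/my_model.tflite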
Automatic Initialization#
The TensorFlow Lite Micro framework is automatically initialized using the system initialization framework described in the SDK Programming Model. This includes allocating a tensor arena, instantiating an interpreter, and loading the model.
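To illustrate where this fits in a bare-metal project, the generated main.c follows roughly the pattern sketched below. This is an illustrative sketch only; the actual main.c is generated for the project and should be left as-is. The point is that sl_system_init() runs the registered initialization hooks, including the TFLM setup, before app_init() is called, so no explicit TensorFlow initialization is needed in application code.
#include "sl_system_init.h"
#include "sl_system_process_action.h"
#include "app.h"

int main(void)
{
  // Runs registered init hooks, including the automatic TFLM initialization
  sl_system_init();

  // Application-specific initialization
  app_init();

  while (1) {
    // Service system/stack events
    sl_system_process_action();

    // Application tick; inference will run here (see below)
    app_process_action();
  }
}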
Configuration#
If the model was produced using the Silicon Labs Machine Learning Toolkit (MLTK), it already contains metadata indicating the required size of the tensor arena, the memory area used by TensorFlow for runtime storage of input, output, and intermediate arrays. The required arena size depends on the model used.
If you are not using the MLTK, the arena size needs to be configured. This can be done in two ways (a configuration sketch follows the list):
[Automatic] Set the arena size to -1, and the framework will attempt to infer the required size automatically at initialization.
[Manual] Start with a generously large size during development, then reduce the allocation until initialization fails and keep the smallest size that still works, as part of size optimization.
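As a sketch of the manual approach, the arena size is exposed as a configuration option of the TensorFlow Lite Micro component. The header path and macro name below are assumptions based on the component's usual layout; verify them against your project's config directory, or set the value through the component's configuration UI.
// In config/sl_tflite_micro_config.h (path and macro name assumed)
// Use -1 to let initialization attempt to infer the arena size,
// or an explicit byte count chosen by trial during development:
#define SL_TFLITE_MICRO_ARENA_SIZE  (8 * 1024)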
Run the Model#
Include the Silicon Labs TensorFlow Init API#
#include "sl_tflite_micro_init.h"
For default behavior in a bare-metal application, it is recommended to run the model from app_process_action() in app.cpp, so that periodic inferences occur as part of the standard event loop. Running the model involves three stages:
Provide Input to the Interpreter#
Sensor data is pre-processed (if necessary) and then provided as input to the interpreter.
TfLiteTensor* input = sl_tflite_micro_get_input_tensor();
// stores 0.0 to the input tensor of the model
input->data.f[0] = 0.;
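The snippet above assumes a model with floating-point input. If the model was fully integer-quantized, the input tensor is typically int8, and the raw value must first be quantized using the tensor's scale and zero point. A minimal sketch under that assumption:
// Only applies if input->type == kTfLiteInt8 (quantized model)
float sensor_value = 0.0f;  // Pre-processed sensor reading (placeholder)
input->data.int8[0] = (int8_t)(sensor_value / input->params.scale
                               + input->params.zero_point);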
Run Inference#
The interpreter is then invoked to run all layers of the model.
TfLiteStatus invoke_status = sl_tflite_micro_get_interpreter()->Invoke();
if (invoke_status != kTfLiteOk) {
  TF_LITE_REPORT_ERROR(sl_tflite_micro_get_error_reporter(),
                       "Bad input tensor parameters in model");
  return;
}
Read Output#
The output prediction is read from the interpreter.
TfLiteTensor* output = sl_tflite_micro_get_output_tensor();
// Obtain the output value from the tensor
float value = output->data.f[0];
At this point, application-dependent behavior based on the output prediction should be performed. The application will run inference on each iteration of app_process_action().
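As an illustrative sketch of that application-dependent step, the snippet below applies a simple threshold to the prediction. The 0.5 cutoff and the printf() reporting are assumptions, not part of the SDK; printing over VCOM also requires <stdio.h> and an IO Stream STDIO retarget component.
if (value > 0.5f) {
  // Assumed application logic: report the prediction over VCOM
  printf("Prediction above threshold: %f\r\n", value);
}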
Full Code Snippet#
After following the steps above, the resulting app.cpp now appears as follows:
#include "sl_tflite_micro_init.h"
/***************************************************************************//**
* Initialize application.
******************************************************************************/
void app_init(void)
{
// Init happens automatically
}
/***************************************************************************//**
* App ticking function.
******************************************************************************/
void app_process_action(void)
{
TfLiteTensor* input = sl_tflite_micro_get_input_tensor();
// stores 0.0 to the input tensor of the model
input->data.f[0] = 0.;
TfLiteStatus invoke_status = sl_tflite_micro_get_interpreter()->Invoke();
if (invoke_status != kTfLiteOk) {
TF_LITE_REPORT_ERROR(sl_tflite_micro_get_error_reporter(),
"Bad input tensor parameters in model");
return;
}
TfLiteTensor* output = sl_tflite_micro_get_output_tensor();
// Obtain the output value from the tensor
float value = output->data.f[0];
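  // Application-dependent behavior based on 'value' goes here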
}
Addendum: Interfacing with C code#
If your project is written in C rather than C++, place the code interfacing with TFLM into a separate file that exports a C API through an interface header. In this example, the file is named app_ml.cpp and implements the function ml_process_action() with the same content as in the example above.
app_ml.h#
#ifndef APP_ML_H
#define APP_ML_H

#ifdef __cplusplus
extern "C" {
#endif

void ml_process_action(void);

#ifdef __cplusplus
}
#endif

#endif // APP_ML_H
app_ml.cpp#
#include "app_ml.h"
#include "sl_tflite_micro_init.h"
extern "C" void ml_process_action(void)
{
// ...
}
app.c#
#include "app_ml.h
// ...
void app_process_action(void)
{
ml_process_action();
}