Developing a Machine Learning Model

Developing a Model using the MLTK

The Silicon Labs Machine Learning Toolkit (MLTK) is a Python package that implements a layer above TensorFlow to help the TensorFlow developer build models that can be successfully deployed on Silicon Labs chips. Its scripts are a reference implementation for the audio use case, including the use of the Audio Feature Generator on both the training and inference sides. The Audio Feature Generator is a modified version of the TensorFlow "microfrontend" audio front end. The MLTK is intended for an ML expert with deep knowledge of TensorFlow and Python, or for a developer willing to learn.

The MLTK is offered as a self-serve, self-supported, fully documented Python reference package published through GitHub. We are delivering this as an Experimental package, which means it is available "as-is", untested, and without support.

See the MLTK documentation for more information.
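As an illustration only, the sketch below shows what a typical MLTK workflow might look like from Python. The mltk.core entry points train_model and profile_model and the reference model name "audio_example1" are assumptions based on the MLTK documentation; confirm the exact API against the published docs before relying on it.

```python
# Hypothetical sketch of an MLTK workflow; the entry points and the model
# name "audio_example1" are assumptions -- check the MLTK documentation.
from mltk.core import train_model, profile_model

# Train one of the MLTK's reference audio models. The reference models use
# the Audio Feature Generator during training, mirroring what runs on the
# device at inference time.
train_results = train_model("audio_example1")
print(train_results)

# Profile the trained model to estimate on-device memory and latency.
profile_results = profile_model("audio_example1")
print(profile_results)
```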

Developing a Model Manually using TensorFlow and Keras

Block Diagram of TensorFlow Lite Micro Workflow

When developing and training neural networks for use in embedded systems, it is important to note the limitations that TFLM places on model architecture and training; for example, only a subset of TensorFlow operators is supported. Embedded platforms also have significant performance constraints that must be considered when designing and evaluating a model. The embedded TFLM documentation links describe these limitations and considerations in detail.

Additionally, the TensorFlow Software Components in Studio require a quantized *.tflite representation of the trained model. As a result, TensorFlow and Keras are the recommended platforms for model development and training, because both are supported by the TensorFlow Lite Converter, which generates *.tflite model representations.
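For example, a trained Keras model can be converted and serialized with the TensorFlow Lite Converter along the following lines. This is a minimal sketch; the tiny model built here is a placeholder for your trained network, and the output path is illustrative.

```python
import tensorflow as tf

# Placeholder model for illustration; substitute your trained tf.keras.Model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert the Keras model to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Serialize the *.tflite representation for import into Studio.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```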

Both TensorFlow and Keras provide guides on model development and training; a minimal Keras sketch of that workflow is shown below.
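The sketch uses a small dense network and random placeholder data; the architecture, shapes, and data are illustrative only and should be replaced with your own.

```python
import numpy as np
import tensorflow as tf

# Placeholder data; in practice this would be your training set.
x_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 2, size=(100,))

# A small network built from layers that TFLM supports well
# (fully connected layers with common activations).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=16)
```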

After a model has been created and trained in TensorFlow or Keras, it must be converted and serialized into a *.tflite file. During model conversion, it is important to reduce the model's memory footprint by quantizing it. Full integer quantization is strongly recommended for Silicon Labs devices.
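Extending the conversion sketch above, full integer quantization might look like the following. The representative dataset generator here yields random placeholder samples; in practice it should yield samples drawn from real training data so the converter can calibrate activation ranges.

```python
import numpy as np
import tensorflow as tf

# Placeholder model; substitute your trained tf.keras.Model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_dataset():
    # Yield calibration samples; use real training data in practice.
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization, including int8 inputs and outputs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quant_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_quant_model)
```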

A complete example demonstrating the training, conversion, and quantization of a simple TFLM-compatible neural network is available from TensorFlow: