MVP Accelerator
The MVP accelerator is a co-processor designed to perform matrix and vector operations. Using hardware-accelerated kernel implementations reduces neural network inference time and offloads the main processor, allowing it to perform other tasks or go to sleep.
Silicon Labs has implemented common neural network operators as programs to be executed on the MVP and integrated these with TensorFlow Lite for Microcontrollers. The MVP has 5 array controllers, each of which can support iterating in 3 independent dimensions. Each dimension is limited to 1024 elements, with a maximum stride of 2047 elements between consecutive elements. The limiting factor for most neural network operations is therefore the product of the width and depth (channel) dimensions, since this product becomes the stride in the height dimension.
All MVP-accelerated operations take signed 8-bit integers as input and produce signed 8-bit integers as output. If the inner dimension of the tensor has even size, each element can contain two int8 values, interpreted as a single complex int8 value by the accelerator. The accelerator can then effectively support up to 2048 int8 values in that dimension. If the inner dimension is odd, the accelerator must perform one computation at a time, which reduces performance and limits the dimension size to 1024 int8 values.
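As a sketch, the inner-dimension capacity rule above can be expressed as a small check. The helper name below is hypothetical and not part of the Silicon Labs SDK; it only encodes the even/odd rule described in this section:

```c
#include <stdbool.h>

/* Hypothetical helper (not a Silicon Labs SDK function) illustrating the
 * inner-dimension rule: an even-sized inner dimension lets the MVP pack
 * two int8 values into one complex element, doubling the effective
 * per-dimension capacity from 1024 to 2048 int8 values. */
static bool mvp_inner_dim_fits(int inner_dim)
{
    if (inner_dim % 2 == 0) {
        /* Two int8 values per complex element: 1024 * 2 = 2048 values. */
        return inner_dim <= 2048;
    }
    /* Odd inner dimension: one computation at a time, 1024-element limit. */
    return inner_dim <= 1024;
}
```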
The operators listed below are accelerated using the MVP if tensor sizes allow. If a specific tensor cannot be accelerated, the implementation automatically falls back to optimized (CMSIS-NN) or reference kernel implementations at runtime. To maximize the likelihood that an operator is supported by the accelerator, use an even number of channels when designing the model.
Internally, the MVP accelerator uses 16-bit floating-point math, even when taking 8-bit integers as input. This causes a slight reduction in computational accuracy, which may be especially noticeable in operations that accumulate many elements.
For more information about the MVP hardware accelerator, see the reference manual for EFR32xG24.
Accelerated TensorFlow operators
Add
TensorFlow operator name: ADD
Any tensor size is supported.
FullyConnected (Dense)
TensorFlow operator names: FULLY_CONNECTED, FULLY_CONNECTED_INT8
Supports tensors where all dimensions are within the 1024-element limit. Also supports larger tensors where the size of the last dimension is decomposable into two factors that are both within the 1024-element limit.
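The decomposition rule can be sketched as a check on the last dimension. The helper name is hypothetical, not a Silicon Labs SDK function; it only encodes the rule stated above:

```c
#include <stdbool.h>

/* Hypothetical helper (not a Silicon Labs SDK function) expressing the
 * FULLY_CONNECTED rule: a last dimension above 1024 is still acceptable
 * if it factors into two integers that each fit the 1024-element limit. */
static bool mvp_fc_last_dim_ok(int last_dim)
{
    if (last_dim <= 1024) {
        return true;
    }
    /* Search for a factorization last_dim = a * b with a, b <= 1024. */
    for (int a = 2; a <= 1024 && a * a <= last_dim; a++) {
        if (last_dim % a == 0 && last_dim / a <= 1024) {
            return true;
        }
    }
    return false;
}
```

For example, a last dimension of 2048 is acceptable (2 × 1024), while a prime size just above 1024 is not.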
AveragePool2D
TensorFlow operator name: AVERAGE_POOL_2D
Supports tensors where width*channels is within the 2047-element stride limit and all dimensions are within the 1024-element limit.
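As an illustration, the shape condition shared by the pooling and convolution operators in this list can be sketched as follows. The helper name is hypothetical, not a Silicon Labs SDK function:

```c
#include <stdbool.h>

/* Hypothetical helper (not a Silicon Labs SDK function) for the shared
 * pooling/convolution rule: width * channels must stay within the
 * 2047-element stride limit, and every individual dimension within the
 * 1024-element limit. */
static bool mvp_conv_shape_ok(int height, int width, int channels)
{
    if (height > 1024 || width > 1024 || channels > 1024) {
        return false;
    }
    return width * channels <= 2047;
}
```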
MaxPool2D
TensorFlow operator name: MAX_POOL_2D
Supports tensors where width*channels is within the 2047-element stride limit and all dimensions are within the 1024-element limit.
Conv2D
TensorFlow operator name: CONV_2D
Supports tensors where width*channels is within the 2047-element stride limit and all dimensions are within the 1024-element limit.
DepthwiseConv2D
TensorFlow operator name: DEPTHWISE_CONV_2D
Supports tensors where width*channels is within the 2047-element stride limit and all dimensions are within the 1024-element limit. Dilation is not supported.
TransposeConv2D
TensorFlow operator name: TRANSPOSE_CONV_2D
Supports tensors where width*channels is within the 2047-element stride limit and all dimensions are within the 1024-element limit. Dilation is not supported.
Suspending Execution While Waiting for Accelerator
The software API for the MVP accelerator is blocking: any call to the MVP driver waits for the operation to complete before returning. To save energy, the driver can optionally suspend execution of the main processor while waiting for the accelerator to complete an operation. By default, the main processor busy-waits for the accelerator.
No sleep (0)
When the "No sleep" option is used, the MCU core busy-waits for the MVP to finish. This option provides the fastest MVP execution time and can be used both in a bare-metal application and in an application using a real-time operating system (RTOS).
Enter EM1 (1)
When the "Enter EM1" option is used, the MCU is put into Energy Mode 1 (EM1) whenever the driver waits for an MVP program to complete. This option is not safe to use in an application using an RTOS because it prevents proper RTOS scheduling.
Yield RTOS thread (2)
When the "Yield RTOS thread" option is used, the task waiting for the MVP program to complete yields, allowing other tasks in the system to run or potentially letting the scheduler put the system into a sleep mode. This option requires that the application use an RTOS.
The power mode of the MVP driver can be configured by setting the SL_MVP_POWER_MODE configuration option in the sl_mvp_config.h configuration header.
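For example, selecting the "Enter EM1" option might look like the excerpt below. The SL_MVP_POWER_MODE macro name and the option values (0, 1, 2) come from this document; the exact surrounding contents of the header are illustrative:

```c
// sl_mvp_config.h (illustrative excerpt): select the MVP driver power mode.
//   0 = No sleep (busy-wait)
//   1 = Enter EM1 while waiting
//   2 = Yield RTOS thread while waiting
#define SL_MVP_POWER_MODE 1
```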