Memory Manager Appendix#

Glossary#

  • Block Type: Classification of memory blocks as long-term or short-term

  • Dynamic Allocation: Runtime memory allocation using malloc/free family functions

  • Heap Fragmentation: Condition where free memory is scattered in small blocks

  • Memory Pool: Pre-allocated collection of fixed-size memory blocks

  • Statistics API: Runtime interface for monitoring heap usage and performance

Acronyms#

  • API: Application Programming Interface

  • ISR: Interrupt Service Routine

  • LT: Long-Term (block type)

  • RTOS: Real-Time Operating System

  • ST: Short-Term (block type)

  • UC: Universal Configurator

Extended Examples#

Complex Memory Management#

This example demonstrates a memory pool wrapper that combines dynamic pool handle allocation with error handling and a matching cleanup routine. The caller passes the address of its pool handle pointer so the wrapper can hand back the allocated handle.

#include "sl_memory_manager.h"

sl_status_t init_memory_pool(sl_memory_pool_t **pool, uint32_t block_size, uint32_t count) {
  // Allocate the pool handle dynamically so the caller receives a valid handle
  sl_status_t status = sl_memory_pool_handle_alloc(pool);
  if (status != SL_STATUS_OK) {
    return status;
  }

  // Create the memory pool using the newly allocated handle
  status = sl_memory_create_pool(block_size, count, *pool);
  if (status != SL_STATUS_OK) {
    sl_memory_pool_handle_free(*pool);
    *pool = NULL;
    return status;
  }

  return SL_STATUS_OK;
}

void cleanup_memory_pool(sl_memory_pool_t **pool) {
  if (pool != NULL && *pool != NULL) {
    sl_memory_delete_pool(*pool);
    sl_memory_pool_handle_free(*pool);
    // Clear the caller's handle so it cannot be used after cleanup
    *pool = NULL;
  }
}
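
A possible usage sketch of these wrappers is shown below. The block size, block count, and app_pool name are illustrative, and it assumes the sl_memory_pool_alloc()/sl_memory_pool_free() block functions are available.

#include "sl_memory_manager.h"

#define APP_POOL_BLOCK_SIZE   32u   // Illustrative block size
#define APP_POOL_BLOCK_COUNT  8u    // Illustrative block count

static sl_memory_pool_t *app_pool = NULL;

void app_pool_example(void) {
  // Initialize the pool through the wrapper; app_pool receives the handle
  if (init_memory_pool(&app_pool, APP_POOL_BLOCK_SIZE, APP_POOL_BLOCK_COUNT) != SL_STATUS_OK) {
    return;  // Handle initialization failure
  }

  // Allocate one fixed-size block from the pool
  void *block = NULL;
  if (sl_memory_pool_alloc(app_pool, &block) == SL_STATUS_OK) {
    // ... use the block ...
    sl_memory_pool_free(app_pool, block);
  }

  // Delete the pool and release its handle when it is no longer needed
  cleanup_memory_pool(&app_pool);
}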

Multi-Threaded Memory Management#

This example shows how to serialize Memory Manager calls with a FreeRTOS mutex so that allocations and frees issued from multiple tasks do not interleave.

#include "sl_memory_manager.h"
#include "FreeRTOS.h"
#include "semphr.h"

// FreeRTOS mutex handle
static SemaphoreHandle_t memory_mutex = NULL;

// Initialize the memory mutex (call this once during system initialization)
void init_memory_mutex(void) {
  memory_mutex = xSemaphoreCreateMutex();
  if (memory_mutex == NULL) {
    // Handle mutex creation failure
  }
}

// Thread-safe memory allocation wrapper
void* thread_safe_malloc(size_t size) {
  if (xSemaphoreTake(memory_mutex, portMAX_DELAY) == pdTRUE) {
    void* ptr = sl_malloc(size);
    xSemaphoreGive(memory_mutex);
    return ptr;
  }
  return NULL;
}

void thread_safe_free(void* ptr) {
  if (xSemaphoreTake(memory_mutex, portMAX_DELAY) == pdTRUE) {
    sl_free(ptr);
    xSemaphoreGive(memory_mutex);
  }
}

FAQ#

Q: When should I use memory pools vs dynamic allocation?

A: Use memory pools for real-time applications requiring deterministic timing, guaranteed allocation, or when you need to avoid heap fragmentation. Use dynamic allocation for general-purpose memory management with flexible sizing.
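
For illustration, a minimal sketch contrasting the two approaches; the sizes and names are placeholders, and my_pool is assumed to have been created earlier with sl_memory_create_pool().

#include "sl_memory_manager.h"

static sl_memory_pool_t my_pool;  // Created earlier with sl_memory_create_pool()

void allocation_styles_example(void) {
  // Dynamic allocation: flexible size, but subject to fragmentation over time
  uint8_t *frame = sl_malloc(256);
  if (frame != NULL) {
    // ... use the buffer ...
    sl_free(frame);
  }

  // Memory pool: fixed-size blocks reserved up front for deterministic allocation
  void *block = NULL;
  if (sl_memory_pool_alloc(&my_pool, &block) == SL_STATUS_OK) {
    // ... use the block ...
    sl_memory_pool_free(&my_pool, block);
  }
}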

Q: How do I choose between long-term and short-term allocation?

A: Use long-term allocation for data structures that persist for the application lifetime or extended periods. Use short-term allocation for temporary buffers that are freed quickly to help reduce heap fragmentation.
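
A minimal sketch of the two lifetimes, assuming the BLOCK_TYPE_LONG_TERM and BLOCK_TYPE_SHORT_TERM enumerators and sl_memory_free():

// Long-term: a structure that lives for the application lifetime
void *config_storage = NULL;
sl_status_t status = sl_memory_alloc(128, BLOCK_TYPE_LONG_TERM, &config_storage);

// Short-term: a scratch buffer freed as soon as the work is done
void *scratch = NULL;
status = sl_memory_alloc(64, BLOCK_TYPE_SHORT_TERM, &scratch);
if (status == SL_STATUS_OK) {
  // ... process data ...
  sl_memory_free(scratch);
}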

Q: What's the difference between sl_malloc() and sl_memory_alloc()?

A: sl_malloc() mirrors the standard malloc() interface: it returns a pointer on success or NULL on failure. sl_memory_alloc() instead returns an sl_status_t error code, provides the pointer through an output parameter, and lets you specify the block type (long-term or short-term).
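
A short sketch of the same allocation in both styles; the BLOCK_TYPE_SHORT_TERM enumerator is assumed here.

// sl_malloc(): pointer-or-NULL, like standard malloc()
uint8_t *buffer = sl_malloc(64);
if (buffer == NULL) {
  // Allocation failed
}

// sl_memory_alloc(): status code plus output parameter, with an explicit block type
void *block = NULL;
sl_status_t status = sl_memory_alloc(64, BLOCK_TYPE_SHORT_TERM, &block);
if (status != SL_STATUS_OK) {
  // Allocation failed; status describes why
}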

Q: How do I handle memory allocation failures?

A: Always check return values and implement appropriate error handling. For critical allocations, consider using memory pools for guaranteed availability. Monitor heap usage with the statistics API to optimize allocation patterns.
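
A minimal monitoring sketch is shown below; the exact statistics functions depend on the Memory Manager version, and the sl_memory_get_free_heap_size(), sl_memory_get_used_heap_size(), and sl_memory_get_heap_high_watermark() names used here are assumptions.

#include <stdio.h>
#include "sl_memory_manager.h"

void log_heap_usage(void) {
  // Snapshot current heap usage to spot leaks or creeping fragmentation early
  size_t free_bytes = sl_memory_get_free_heap_size();
  size_t used_bytes = sl_memory_get_used_heap_size();
  size_t high_watermark = sl_memory_get_heap_high_watermark();

  printf("heap: used=%u free=%u high-watermark=%u\n",
         (unsigned)used_bytes, (unsigned)free_bytes, (unsigned)high_watermark);
}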

Q: Can I use Memory Manager in interrupt service routines?

A: Yes, but be aware that dynamic allocation in ISRs increases interrupt latency due to critical section protection. Consider using memory pools for deterministic timing in ISR contexts.
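
A minimal sketch of the pool-based pattern in an ISR, assuming a pool created during initialization and the sl_memory_pool_alloc() block API; MY_IRQHandler and rx_pool are illustrative names.

static sl_memory_pool_t *rx_pool;  // Created during initialization, outside the ISR

void MY_IRQHandler(void) {
  void *block = NULL;

  // Pool allocation keeps time spent in the ISR short and deterministic
  if (sl_memory_pool_alloc(rx_pool, &block) == SL_STATUS_OK) {
    // ... copy peripheral data into the block and hand it to a task ...
  }
  // The consuming task returns the block with sl_memory_pool_free() later
}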

Q: How do I optimize memory usage in my application?

A: Use the statistics API to monitor heap usage, implement appropriate long-term/short-term allocation strategies, consider memory pools for critical allocations, and avoid frequent allocation/deallocation patterns that cause fragmentation.