Edge AI

Your data. Your model. Your edge.

From saving bandwidth and energy to delivering more responsive real-time performance, implementing AI in your embedded applications offers massive benefits beyond the buzzwords. Nordic Semiconductor offers two unique technologies, Neuton models and the Axon NPU, exclusively to our customers, covering the industry's broadest range of devices, applications, and customer needs.

Neuton models

Custom Neuton models are ultra-tiny edge AI models built from your data using our patented network-growing algorithm, ideal for running edge AI on any Nordic SoC or SiP using its main application core (CPU).

10x smaller memory footprint than TensorFlow Lite models
10x faster and more energy efficient than running TensorFlow Lite models on the CPU
< 5 KB average memory footprint of custom Neuton models created with our framework

Axon NPU

The Axon NPU is our dedicated AI accelerator core, built into our most capable SoCs and designed to increase the speed and efficiency of TensorFlow Lite models.

15x faster and more energy efficient than running the same TensorFlow Lite model on the CPU
8x more energy efficient than the closest competing product
7x faster inference than the closest competing product

Axon NPU

Edge AI without compromise

For years, adding AI to wireless IoT devices meant trading battery life for performance. Running TensorFlow Lite on a CPU was often too slow and memory-intensive, while discrete NPUs added cost and complexity. Although Neuton models already enable efficient edge AI on the nRF54L Series CPU, demanding workloads like audio, imaging, and high-rate sensor data need dedicated acceleration.

Integrated AI acceleration

Axon is Nordic’s proprietary NPU, integrated into the high-memory nRF54LM20B SoC. It accelerates TensorFlow Lite models with up to 15x faster inference than the CPU, and delivers up to 7x higher performance and up to 8x better efficiency than the closest competing wireless NPU, bringing powerful edge AI to ultra-low-power devices.
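To make this concrete, below is a minimal sketch of TensorFlow Lite for Microcontrollers inference as an application would run it. The model data, operator set, and arena size are hypothetical placeholders for an application-specific quantized model, and it is an assumption here, not a documented integration detail, that on an Axon-equipped SoC the same application-level code applies, with acceleration handled by the SDK's optimized kernels rather than by changes to the model or the calling code.

// Minimal TensorFlow Lite for Microcontrollers inference sketch.
// g_model_data, the arena size, and the operator list are hypothetical
// placeholders for an application-specific quantized model.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];  // flatbuffer produced by the TFLite converter

namespace {
constexpr int kTensorArenaSize = 16 * 1024;    // sized to the model's working memory needs
alignas(16) uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

int run_inference(const int8_t* features, int num_features, int8_t* scores, int num_scores) {
  const tflite::Model* model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    return -1;  // model was converted with an incompatible schema version
  }

  // Register only the operators the model actually uses to keep flash usage small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return -2;  // arena too small or model/operator mismatch
  }

  // Copy quantized input features into the input tensor and run one inference.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < num_features; ++i) {
    input->data.int8[i] = features[i];
  }
  if (interpreter.Invoke() != kTfLiteOk) {
    return -3;
  }

  // Read back the class scores from the output tensor.
  TfLiteTensor* output = interpreter.output(0);
  for (int i = 0; i < num_scores; ++i) {
    scores[i] = output->data.int8[i];
  }
  return 0;
}

Registering only the operators the model actually uses, rather than the full operator set, keeps the flash footprint small, which matters on memory-constrained wireless SoCs.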

Simpler designs and broader possibilities

By integrating the NPU on-chip, Axon removes the need for discrete accelerators, reducing power, BoM cost, and development complexity. With fully accelerated edge AI, Axon enables industry-leading energy efficiency across use cases ranging from anomaly detection and biometrics to sound, keyword, and image recognition.

Axon NPU

Products containing our AI accelerator

nRF54LM20B SoC

Ultra-low-power wireless SoC with integrated Axon NPU, for hardware-accelerated edge AI applications, supporting Bluetooth LE, Channel Sounding, Bluetooth Mesh, Zigbee, Thread, Matter, Aliro, and 2.4 GHz proprietary protocols.

128 MHz Arm Cortex-M33
2 MB NVM, 512 KB RAM
128 MHz integrated Axon NPU
High-speed USB