Axelera AI edge computing accelerates innovative smart solutions.

Keywords: Axelera AI, Edge Computing, Metis AIPU, Voyager SDK

With the continuous advancement of artificial intelligence, edge computing has become a key engine for enterprise innovation and digital transformation. A primary challenge for businesses is achieving high-speed, high-accuracy AI inference tasks on limited computing resources. Axelera AI, with its exceptional AI acceleration solutions, is dedicated to helping enterprises rapidly deploy high-performance, low-power edge computing platforms. These platforms see wide application in fields like smart cities, intelligent transportation, and industrial inspection. 

 

Axelera AI's acceleration products offer the following features and technical advantages:

 

  1. Voyager SDK Usability and Integration: The SDK supports ONNX, an open file format for storing trained machine-learning models. ONNX lets different AI frameworks, such as PyTorch and MXNet, store and exchange model data in one common format, which simplifies integration and speeds deployment.
  2. High-Performance AI Model Inference Acceleration: Through proprietary hardware-software co-optimization, the solution significantly boosts inference efficiency while reducing power consumption. The Metis AIPU achieves up to 214 TOPS at INT8 precision, and its ResNet-50 inference speed can reach 3,200 FPS, far exceeding competing accelerators. According to benchmark results published by Axelera, the Metis platform combined with the Voyager SDK delivers average inference speeds roughly 10 times those of the Hailo-8 (26 TOPS) and 4 times those of Qualcomm's Snapdragon X (45 TOPS); in some cases its performance is comparable to the NVIDIA Jetson AGX Orin, at an impressive price-performance ratio. In the M.2 form factor, competitors are few, primarily Hailo, Google Coral, Kneron, DeepX, and SiMa.ai, and in raw performance the Metis M.2 is one of the fastest options currently available.
  3. Hardware-Software Co-acceleration: The Voyager SDK automatically handles quantization, model optimization, and parallel execution of pre-processing and post-processing. Inference runs on the AIPU, while other processes can run in parallel on the CPU/GPU. This further shortens the end-to-end latency from image input to result output.
  4. Multi-core, Multi-model Processing: The Metis AIPU’s four cores can be configured to run concurrently on a single large model or execute multiple lightweight models independently. It supports multi-stream, multi-task AI pipelines for complex video analysis, such as multi-camera or multi-stage processing.
  5. Exceptional Power Efficiency: Axelera AI's products strike an ideal balance between high performance and low power consumption in edge devices, delivering a highly competitive 15 TOPS/W. The Metis PCIe module's measured power consumption falls between 8 and 15 W, and the M.2 card draws even less, starting at 3.5 W and peaking below 9 W. This makes it especially suitable for power-sensitive, fanless embedded systems with strict power constraints.
  6. Dynamic Frequency Scaling (Downclocking): The Metis AIPU has a built-in downclocking function that adjusts its operating frequency to real-time workload demands, reducing power consumption while maintaining a high FPS/W ratio. Configured through the Voyager SDK, it lets each use case strike its own balance between power savings and performance.
  7. D-IMC Architecture for Optimized Energy Use: The use of Digital In-Memory Computing (D-IMC) reduces data movement, avoiding energy waste. This is a core element of its high-performance, low-power design.
  8. Complete Development Ecosystem: Through its open-source GitHub platform, Axelera provides detailed SDK installation guides, model conversion workflows, and a rich set of deployment examples, simplifying the development process.
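The headline throughput and efficiency figures in points 2 and 5 can be cross-checked with back-of-envelope arithmetic. A minimal sketch in plain Python; the ~8.2 GOPs-per-inference cost of ResNet-50 is an assumption drawn from commonly published model statistics, not from this article:

```python
# Back-of-envelope check of the headline figures quoted above.
# Assumption (not from this article): ResNet-50 costs ~8.2 GOPs per inference.

PEAK_TOPS = 214               # Metis AIPU peak INT8 throughput
RESNET50_FPS = 3200           # quoted ResNet-50 inference speed
GOPS_PER_INFERENCE = 8.2      # assumed ResNet-50 compute per image
EFFICIENCY_TOPS_PER_W = 15    # quoted efficiency figure

# Effective throughput actually consumed by ResNet-50 at 3,200 FPS:
effective_tops = RESNET50_FPS * GOPS_PER_INFERENCE / 1000

# Power implied at full throughput by the quoted 15 TOPS/W:
implied_peak_watts = PEAK_TOPS / EFFICIENCY_TOPS_PER_W

print(f"effective ResNet-50 throughput: {effective_tops:.1f} TOPS")
print(f"implied power at peak: {implied_peak_watts:.1f} W")
```

The implied peak power lands inside the 8-15 W envelope quoted for the PCIe module, so the TOPS, TOPS/W, and wattage figures are mutually consistent.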
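The hardware-software co-acceleration in point 3 boils down to overlapping pipeline stages: while one frame runs inference on the AIPU, the next frame is pre-processed on the CPU. A generic sketch using Python stand-in functions (this is not the Voyager SDK API, which handles this overlap automatically):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the three pipeline stages. In a real Voyager
# pipeline, inference runs on the AIPU while pre/post-processing runs on
# the CPU/GPU in parallel.
def preprocess(frame):          # e.g. resize + normalize
    return f"tensor({frame})"

def infer(tensor):              # stand-in for AIPU inference
    return f"detections({tensor})"

def postprocess(result):        # e.g. NMS + drawing boxes
    return f"output({result})"

def run_pipeline(frames):
    """Overlap pre-processing of frame N+1 with inference on frame N."""
    outputs = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = pool.submit(preprocess, frames[0])
        for nxt in frames[1:] + [None]:
            tensor = pending.result()
            # Kick off pre-processing of the next frame before inferring.
            if nxt is not None:
                pending = pool.submit(preprocess, nxt)
            outputs.append(postprocess(infer(tensor)))
    return outputs

print(run_pipeline(["f0", "f1", "f2"]))
```

Because the accelerator and the CPU work on different frames at the same time, end-to-end latency from image input to result output shrinks, which is the effect point 3 describes.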
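The downclocking behaviour in point 6 can be illustrated with a toy frequency governor: pick the lowest clock step that still meets the required frame rate. The clock steps and FPS-per-MHz factor below are made-up illustration values; the real mechanism is configured through the Voyager SDK:

```python
# Illustrative frequency-governor logic for the downclocking idea above.
# All numbers here are hypothetical, chosen only to show the selection logic.
FREQ_STEPS_MHZ = [200, 400, 600, 800]  # hypothetical clock steps

def pick_frequency(required_fps, fps_per_mhz=0.05):
    """Choose the lowest clock step that still meets the FPS target."""
    for freq in FREQ_STEPS_MHZ:
        if freq * fps_per_mhz >= required_fps:
            return freq
    return FREQ_STEPS_MHZ[-1]  # demand exceeds capacity: cap at top step

# A low-rate surveillance stream can run well below the top clock,
# saving power while still meeting its deadline:
print(pick_frequency(15))
print(pick_frequency(35))
```

Running at the lowest sufficient clock is what preserves a high FPS/W ratio: throughput drops only to what the workload needs, while power falls with frequency.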

 

Axelera GitHub: https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.3/README.md

 

Axelera AI has demonstrated significant performance improvements in real-world applications. Below are several case analyses:

  1. Intelligent Transportation - Real-time AI Inference Performance: Using the Metis AIPU with the Voyager SDK to process video streams, the system can achieve over 780 FPS. This highlights its ultra-high throughput capability, making it highly suitable for real-time applications like traffic monitoring and vehicle detection. For traffic flow analysis, for example, the system can process raw data or live camera feeds to calculate vehicle volume during specific periods, enabling intelligent traffic planning.

  2. Industrial Automation and Quality Inspection: In the manufacturing sector, Axelera emphasizes that the Metis AIPU delivers GPU-level machine vision performance at lower power without sacrificing accuracy. This demonstrates its ability to perform real-time defect detection and sorting on production lines with greater speed and precision than human inspection. In a fruit factory, for instance, it can perform real-time defect detection and sorting on a conveyor belt, saving the time and labor costs associated with manual screening.

Image source: Axelera

  3. Warehouse Automation - Multi-stream Real-time Recognition: One enterprise implemented the Metis M.2 card in its warehouse order-verification system, using four 1080p high-definition video streams for real-time hand and object detection. The card's high compatibility and low power consumption allowed it to successfully replace traditional industrial PCs.
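The traffic-flow analysis in case 1 reduces to counting vehicle sightings per time window once detection is done. A minimal sketch with hypothetical detection tuples; the detection itself would run on the Metis AIPU and is out of scope here:

```python
from collections import Counter

# Toy sketch of traffic-flow counting: given one (timestamp, vehicle_id)
# event per first sighting of a vehicle, tally vehicles per time window.
def count_per_window(detections, window_s=60):
    """Return {window_index: vehicle_count} for fixed-size time windows."""
    counts = Counter()
    for ts, _vehicle in detections:
        counts[int(ts // window_s)] += 1
    return dict(counts)

# Hypothetical first-sighting events (seconds, id) from a camera feed:
events = [(3, "car1"), (20, "car2"), (61, "truck1"), (119, "car3"), (130, "bus1")]
print(count_per_window(events))
```

Per-window counts like these are the "vehicle volume during specific periods" the case study mentions, and feed directly into traffic-planning dashboards.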
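Similarly, the sorting step in case 2 can be sketched as thresholding a per-item defect score produced by the vision model; the threshold and scores below are purely illustrative:

```python
# Toy sketch of the sorting step that follows defect detection on the line.
# In practice the defect score comes from a vision model on the AIPU; here
# hard-coded scores stand in for that model's output.
DEFECT_THRESHOLD = 0.5  # hypothetical confidence cut-off

def sort_items(scored_items):
    """scored_items: list of (item_id, defect_score). Returns (keep, reject)."""
    keep, reject = [], []
    for item_id, score in scored_items:
        (reject if score >= DEFECT_THRESHOLD else keep).append(item_id)
    return keep, reject

keep, reject = sort_items([("apple1", 0.1), ("apple2", 0.8), ("apple3", 0.4)])
print(keep, reject)
```

The decision itself is trivial; the value of running it on an accelerator is keeping the scoring model fast enough to match conveyor-belt speed.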

 

The Axelera AI edge computing acceleration solution, through its outstanding performance, excellent power efficiency, and streamlined deployment process, effectively helps enterprises improve operational efficiency and reduce overall costs. Businesses across various industries are encouraged to adopt this solution to enhance product and service quality, driving the development of enterprise-level intelligent innovation. 

►Application Scenario Diagram

►Product Photo

►Demo Board Photo

►Solution Block Diagram

►Core Technical Advantages

1. Metis AIPU Chip: INT8 quantization reaching up to 214 TOPS, delivering excellent inference performance with low power consumption.
2. Digital In-Memory Computing (D-IMC): Significantly reduces data movement requirements, greatly enhancing energy efficiency.
3. Voyager SDK Software-Hardware Co-Optimization Technology: Automates model quantization, compilation, and deployment to maximize performance and improve usability.
4. Multi-Core Flexible Utilization: Supports multi-stream inference and parallel processing of large-scale models, adapting flexibly to complex application scenarios.
5. Low-Power Design: PCIe module operates at 8–15W, and M.2 module at just 3.5–9W, specifically designed for edge deployments.
6. Downclock Technology (Dynamic Clock Adjustment): Dynamically balances power consumption and performance, allowing flexible adjustments to meet varying practical demands.

►Solution Specifications

1. Inference performance: Up to 214 TOPS (INT8 precision)
2. Model support: ONNX
3. Typical inference FPS: Over 3,200 FPS for ResNet-50 model
4. Power consumption for PCIe version: Typical 8–15W
5. Power consumption for M.2 version: Typical 3.5–9W
6. Hardware interface support: PCIe Gen3 x4, M.2 interface (2280 specification)
7. SDK: Voyager SDK (quantization, optimization, automatic deployment)
8. Supported operating system: Linux (Ubuntu)
9. Supported programming languages: Python, C++

Related videos

Axelera AI Edge Computing Accelerates Intelligent Innovation Solutions Video

Hello everyone, I’m SAC Apollo. Today, I’m here to introduce the Axelera Voyager SDK. As you can see on the screen, the Voyager SDK has already been deployed locally. It comes pre-configured with many models, such as MobileNet and YOLO, which are commonly used for image recognition and trained on datasets like COCO. I’ll select one of these models for inference. You can use pre-recorded videos or real-time camera input, such as a standard USB camera, or perform inference via RTSP network streams and IP cameras.

 

Next are the common object-detection outputs, such as drawing bounding boxes and automatically classifying detected objects according to the classes annotated in the dataset.

 

It can also perform target counting, such as calculating traffic flow in videos, which can be deployed in smart transportation applications.

 

Additionally, it can perform inference on multiple input sources, which are considered typical applications. That concludes my introduction. If you're interested, feel free to contact us anytime for discussion. Thank you!

 

Relevant solutions: Axelera AI Edge Computing Accelerates Intelligent Innovation Solutions