Edge AI and Vision Insights: August 6, 2025

NEW PROCESSORS ENABLE ULTRA-EFFICIENT EDGE INFERENCE

Key Requirements to Successfully Implement Generative AI in Edge Devices: Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP

In this 2025 Embedded Vision Summit talk, Gordon Cooper, Principal Product Manager at Synopsys, discusses emerging trends in generative AI for edge devices and the key role of transformer-based neural networks. He reviews the distinct attributes of transformers, their advantages over conventional convolutional neural networks and how they enable generative AI. Cooper then covers key requirements that must be met for neural processing units (NPUs) to support transformers and generative AI in edge device applications. He uses transformer-based generative AI examples to illustrate the efficient mapping of these workloads onto the enhanced Synopsys ARC NPX NPU IP family.

Running Accelerated CNNs on Low-power Microcontrollers Using Arm Ethos-U55, TensorFlow and NumPy

In this 2025 Embedded Vision Summit presentation, Kwabena Agyeman, President of OpenMV, introduces the OpenMV AE3 and OpenMV N6 low-power, high-performance embedded machine vision cameras, which offer a 200x improvement over the company’s previous-generation systems. He shows how you can run YOLO at 25 FPS on the OpenMV AE3 while drawing less than 0.25 W. He also explains how the OpenMV AE3 can go into deep sleep mode on demand to draw less than 0.25 mW, allowing you to create a smart machine vision camera that can run on batteries for years. Agyeman demonstrates how you can leverage TensorFlow to run accelerated CNNs on these cameras, and implement pre- and post-processing using MicroPython and NumPy. Finally, he shows how you can use OpenAMP with MicroPython running on the camera to leverage dual-core heterogeneous processing and enable always-on NPU-accelerated AI sensing.
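As an illustration of the kind of NumPy-based post-processing Agyeman describes, the sketch below turns raw classifier logits into a label index and confidence via softmax. It uses plain NumPy and is not OpenMV-specific; on the camera itself the same arithmetic would run under MicroPython’s NumPy-compatible module against the NPU’s output tensor.

```python
import numpy as np

def postprocess_logits(logits):
    """Convert raw model logits to (class_index, confidence) via softmax.

    Illustrative sketch only; the function name and interface are
    hypothetical, not part of the OpenMV API.
    """
    logits = np.asarray(logits, dtype=np.float32)
    shifted = logits - logits.max()                 # subtract max for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum() # normalized probabilities
    idx = int(probs.argmax())                       # most likely class
    return idx, float(probs[idx])

# Example: class 1 has the largest logit, so it wins with high confidence.
idx, conf = postprocess_logits([0.1, 2.5, -1.0])
print(idx, round(conf, 3))
```

Keeping this step in NumPy (rather than a Python loop) matters on a microcontroller, where vectorized operations run in compiled code rather than the interpreter.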

AI PROCESSING IP AND CHIPLETS

MPU+: A Transformative Solution for Next-Generation AI at the Edge

In this 2025 Embedded Vision Summit talk, Petronel Bigioi, CEO of FotoNation, introduces MPU+, a novel programmable, customizable low-power platform for real-time, localized intelligence at the edge. The platform includes an AI-augmented image signal processor that enables leading image and video quality. In addition, it integrates ultra-low-power object and motion detection capabilities to enable always-on computer vision. A programmable neural processor provides flexibility to efficiently implement new neural networks. And additional specialized engines facilitate image stabilization and audio enhancements.

NPU IP Hardware Shaped Through Software and Use-case Analysis

True innovation in tiny machine learning (tinyML) emerges from a synergy between software ingenuity, real-world application insights and leading-edge processor IP. In this 2025 Embedded Vision Summit presentation, Yair Siegel, Senior Director for Wireless and Emerging Markets at Ceva, explores the process of integrating these elements to shape the design of Ceva’s latest NPU IP—the Ceva-NeuPro-Nano. Through real-world use cases, he examines how software architecture and detailed analysis of use cases were pivotal in guiding the NPU architecture design process, yielding a versatile and efficient single-core solution capable of handling control, digital signal processing and neural network inference tasks. Software is crucial in unlocking the potential of hardware and adapting to diverse application demands, and Siegel shows how Ceva’s software innovations, harnessing the capabilities of leading neural network inferencing frameworks, ensure that Ceva-NeuPro-Nano is highly effective in practical scenarios. He concludes by reviewing the exciting hardware and software extensibility features of NeuPro-Nano, which push the boundaries of customizability.

UPCOMING INDUSTRY EVENTS

Infrared Imaging: Technologies, Trends, Opportunities and Forecasts – Yole Group Webinar: September 23, 2025, 9:00 am PT

More Events

FEATURED NEWS

Renesas Introduces the 64-bit RZ/G3E MPU for High-performance HMI Systems Requiring AI Acceleration and Edge Computing

Recent Announcements from SiMa.ai Include $85M in New Funding to Scale Physical AI and Partnerships with Fellow Alliance Member Companies Macnica and Synopsys

Vision Components and Phytec’s New MIPI Camera Embedded Vision Kits are Based on NXP Semiconductors’ i.MX 8M Plus and 8M Mini Application Processors

STMicroelectronics to Strengthen Its Position in Sensors with the Acquisition of NXP’s MEMS Sensors Business

Basler’s IP67 Camera and Components Present a Complete Solution

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

MemryX MX3 M.2 AI Accelerator Module (Best Edge AI Computer or Board)

MemryX’s MX3 M.2 AI Accelerator Module is the 2025 Edge AI and Vision Product of the Year Award Winner in the Edge AI Computers and Boards category. The MemryX MX3 M.2 AI Accelerator delivers AI model-based computer vision processing with ultra-low power consumption, averaging under 3 W for multiple-camera applications. The MX3 is based on an advanced on-chip memory architecture that reduces data movement, boosting efficiency while reducing power and cost. Its 16-bit inference processing delivers high accuracy without the need for retraining or hand-tuning. Model compilation for the MX3 is straightforward: the MemryX software stack eases deployment, eliminating the need for deep hardware expertise. Thousands of computer vision models have been directly compiled with no intervention, shortening development cycles and speeding up time-to-market.

Developers can import models directly from popular frameworks like TensorFlow or PyTorch, and the MemryX compiler automates optimizations such as quantization and layer fusion. These tools can even run on resource-constrained devices like the Raspberry Pi, enabling cost-effective development and testing. This streamlined workflow significantly reduces development time and complexity. The MX3 hardware also offers an innovative approach to scaling: MX3 devices can be daisy-chained to add capacity for large models, or deployed in smaller numbers to reduce cost and power when performance requirements are lower. The M.2 form factor enables quick integration into existing platforms with minimal thermal concerns.
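To give a sense of what a compiler automates when it quantizes a model, here is a minimal sketch of generic affine (asymmetric) quantization in plain NumPy. This is illustrative only: MemryX’s actual quantization scheme and toolchain API are not described in the source, and the function names here are hypothetical.

```python
import numpy as np

def quantize_affine(w, num_bits=8):
    """Map a float tensor to signed integer codes plus (scale, zero_point).

    Generic affine quantization, the kind of transform model compilers
    automate; NOT MemryX's actual algorithm. Assumes w is not constant.
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w = np.asarray(w, dtype=np.float32)
    scale = max((w.max() - w.min()) / (qmax - qmin), 1e-12)  # guard against zero range
    zero_point = int(round(qmin - w.min() / scale))          # integer offset for asymmetry
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float values."""
    return (q.astype(np.float32) - zero_point) * scale
```

Round-tripping a tensor through these two functions introduces an error of at most about one quantization step (the scale), which is why such automated flows can often preserve accuracy without retraining.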

Please see here for more information on MemryX’s MX3 M.2 AI Accelerator Module. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411