Edge AI and Vision Insights: October 15, 2025

LETTER FROM THE EDITOR

Dear Colleague,

On Tuesday, November 18 at 9 am PT, the Yole Group will present the free webinar “How AI-enabled Microcontrollers Are Expanding Edge AI Opportunities” in partnership with the Edge AI and Vision Alliance. Running AI inference at the edge, versus in the cloud, has many compelling benefits: greater privacy, lower latency and real-time responsiveness key among them. But implementing edge AI in highly cost-, power-, or size-constrained devices has historically been impractical due to the compute, memory and storage resources it requires.

Nowadays, however, the AI accelerators and related resources included in modern microcontrollers, combined with technology developments and toolset enhancements that shrink deep learning models, make it possible to run computer vision, speech interfaces and other AI capabilities at the edge.

In this webinar, Tom Hackenberg, Principal Analyst for Computing at the Yole Group, will explain that while scaling AI upward into massive data centers may dominate today’s headlines, scaling downward to edge applications may be even more transformative. Hackenberg will share market size and forecast data, along with supplier product and developer application case study examples, to support his contention that edge deployment is key to unlocking AI’s full potential across industries. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEVELOPING EFFICIENT EDGE AI/CV SOFTWARE

A Lightweight Camera Stack for Edge AI

Electronic products for virtual and augmented reality, home robots and cars deploy multiple cameras for computer vision and AI use cases such as detection, tracking, recognition, SLAM and biometric authentication, all enabled via traditional computer vision algorithms or neural networks. Camera frames produced for these algorithms do not need sophisticated image signal processing (such as that performed in ISP hardware) if the neural network models were trained on data collected without such processing. In addition, some camera functionality supported for photography applications (such as extensive metadata and the ability to change camera settings on a per-frame basis) may not be needed for AI. These differing requirements create an opportunity to optimize the camera stack for better performance and reduced CPU load. In this 2025 Embedded Vision Summit talk, Jui Garagate, Camera Software Engineer, and Karthick Kumaran, Staff Software Engineer, both of Meta, share optimization strategies for creating a lightweight AI camera stack that can run on low-power devices.
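
To make the idea concrete, here is a minimal sketch of a stripped-down capture path using the Linux V4L2 API: the format is fixed once at startup, raw monochrome buffers are memory-mapped, and frames are dequeued straight into inference with no per-frame control changes or photography-style metadata. This is an illustrative example only (the device node, resolution, pixel format and buffer count are assumptions), not the stack described in the talk, and error handling is omitted for brevity.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void) {
    int fd = open("/dev/video0", O_RDWR);              /* hypothetical camera node */
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_format fmt = {0};                      /* one fixed, known-good mode:  */
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;            /* no per-frame renegotiation   */
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;       /* raw mono frames, no ISP color pipeline */
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    struct v4l2_requestbuffers req = {0};
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);
    if (req.count > 4) req.count = 4;

    void *bufs[4]; size_t lens[4];
    for (unsigned i = 0; i < req.count; i++) {
        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        lens[i] = buf.length;
        bufs[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);                  /* queue the buffer for capture */
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    for (int frame = 0; frame < 100; frame++) {
        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_DQBUF, &buf);                 /* blocking wait for a filled frame */
        /* run_inference(bufs[buf.index], buf.bytesused);   hypothetical model call */
        ioctl(fd, VIDIOC_QBUF, &buf);                  /* recycle the buffer */
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    for (unsigned i = 0; i < req.count; i++) munmap(bufs[i], lens[i]);
    close(fd);
    return 0;
}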

Simplifying Portable Computer Vision with OpenVX 2.0

The Khronos OpenVX API offers a set of optimized primitives for low-level image processing, computer vision and neural network operators. It provides a simple method for writing optimized code that is portable across multiple hardware vendors and processors, including CPUs, GPUs and special-function hardware. In this 2025 Embedded Vision Summit presentation, Kiriti Nagesh Gowda, Staff Engineer at AMD, introduces OpenVX 2.0, the latest version of the standard, and explains its key improvements. Using real-world use cases from Bosch, Texas Instruments and others, he shows how OpenVX is used to build, verify and coordinate computer vision and neural network graph executions, enabling software developers to spend more time on algorithmic innovations without worrying about the performance and portability of their applications.
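
For a flavor of how this looks in code, the sketch below assembles a small OpenVX graph (a Gaussian blur feeding Sobel gradients and a magnitude node) using core OpenVX C API calls that have been part of the standard since its early versions; it is an illustrative example, not taken from the presentation, and the image dimensions are placeholders.

#include <VX/vx.h>
#include <stdio.h>

int main(void) {
    vx_context ctx = vxCreateContext();
    vx_graph graph = vxCreateGraph(ctx);

    vx_image input   = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
    /* Virtual images hold intermediates; the implementation decides where they live. */
    vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
    vx_image grad_x  = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
    vx_image grad_y  = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
    vx_image mag     = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_S16);

    /* Each node is one optimized primitive; the graph captures the dataflow between them. */
    vxGaussian3x3Node(graph, input, blurred);
    vxSobel3x3Node(graph, blurred, grad_x, grad_y);
    vxMagnitudeNode(graph, grad_x, grad_y, mag);

    /* Verification lets the implementation validate the graph and map its nodes onto
       CPUs, GPUs or dedicated hardware before execution. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);
    else
        printf("graph verification failed\n");

    vxReleaseImage(&mag);
    vxReleaseImage(&input);
    vxReleaseGraph(&graph);
    vxReleaseContext(&ctx);
    return 0;
}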

DEPTH SENSING EVOLUTIONS

A New Era of 3D Sensing: Transforming Industries and Creating Opportunities

The 3D sensing market is projected to more than double by 2030, surpassing $18B. Key drivers include automotive and industrial applications, with significant advancements in ADAS, infrastructure monitoring, security and robotics. While the mobile sector remains dominant, 3D sensing has become critical for perception and intelligence across diverse applications, including biometrics, navigation, environmental tracking, people monitoring and medical diagnostics. The market remains highly concentrated, with the top nine companies capturing over 85% of revenue in the consumer segment. However, a competing ecosystem is emerging, bringing new value to the sector. Technological innovations, particularly in hybrid stacking, SWIR sensing, LiDAR and optical metasurfaces, are driving the evolution of 3D sensing solutions and are expected to unlock new opportunities for further growth. What are the most promising real-world applications emerging in 3D sensing? Which technologies hold the most potential, and what impact might they have on both the current and future market? Florian Domengie, Principal Technology and Market Analyst for Imaging at the Yole Group, answers these and other questions in this 2025 Embedded Vision Summit talk.

Introduction to Radar and Its Use for Machine Perception

Radar is a proven technology with a long history in various market segments and continues to play an increasingly important role in robust perception systems. In this 2025 Embedded Vision Summit presentation, Amol Borkar, Product Marketing Director, and Vencatesh Subramanian, Design Engineering Architect, both of Cadence, delve into the fundamental principles of non-imaging and imaging radar. They also discuss the signal processing algorithms used for extracting range, velocity and angle of arrival parameters from radar signals. Finally, they explore recent advances and current trends in radar signal processing using machine learning and AI.
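
As a concrete example of the kind of parameter extraction the speakers describe, the short C sketch below converts a measured FMCW beat frequency and Doppler shift into range and radial velocity using the standard relationships R = c·f_b·T_c / (2B) and v = c·f_d / (2·f_c); the chirp parameters and measured frequencies are assumed values chosen purely for illustration.

#include <stdio.h>

int main(void) {
    const double c  = 3.0e8;        /* speed of light, m/s                       */
    const double B  = 1.0e9;        /* chirp bandwidth, 1 GHz (assumed)          */
    const double Tc = 50e-6;        /* chirp duration, 50 us (assumed)           */
    const double fc = 77e9;         /* carrier frequency, 77 GHz automotive band */

    double f_beat    = 2.0e6;       /* beat frequency at the range FFT peak (example value) */
    double f_doppler = 5.0e3;       /* Doppler shift measured across chirps (example value) */

    double range    = (c * f_beat * Tc) / (2.0 * B);   /* ~15 m for these numbers  */
    double velocity = (c * f_doppler) / (2.0 * fc);    /* ~9.7 m/s radial velocity */

    printf("range = %.2f m, radial velocity = %.2f m/s\n", range, velocity);
    return 0;
}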

UPCOMING INDUSTRY EVENTS

How AI-enabled Microcontrollers Are Expanding Edge AI Opportunities – Yole Group Webinar: November 18, 2025, 9:00 am PT

Embedded Vision Summit: May 11-13, 2026, Santa Clara, California

More Events

FEATURED NEWS

Learn How to Deploy PyTorch Models at the Edge via a Free Workshop from Qualcomm and Amazon in San Francisco on October 21

Vision Components Introduces the VC MIPI Multiview Cam, a MIPI CSI-2-based Camera for Light Field Measurement and Multispectral Imaging

Explore the Latest Innovations in Mobile Robotics via a Half-day Seminar from NXP and Avnet in Silicon Valley on October 22 

STMicroelectronics and Tobii Enter Mass Production of a Breakthrough Interior Sensing Technology

Andes Technology Expands Its Comprehensive AndeSentry Security Suite with Complete Trusted Execution Environment Support for Embedded Systems

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE



Airy3D DepthIQ (Best Camera or Sensor)

Airy3D’s DepthIQ is the 2025 Edge AI and Vision Product of the Year Award Winner in the Camera or Sensor category. DepthIQ offers a simple and versatile 3D imaging solution that generates near-field depth data using just a single camera. The technology can be applied to a broader range of applications than current 3D imaging solutions, at a fraction of the cost, resource requirements and power consumption. DepthIQ is built upon a transmissive diffraction mask (TDM) applied over a CMOS image sensor, utilizing standard semiconductor technology to facilitate streamlined mass production. The TDM leverages diffraction to encode depth information directly in the raw sensor data. Because DepthIQ employs a single sensor, it avoids stereoscopic occlusions, making it an excellent choice for short-range applications.

Airy3D’s DepthIQ technology integrates a TDM with a proprietary lightweight inline processing method. It produces near-field 3D data that is precisely aligned with the traditional 2D data captured by the sensor, all from a single device. The TDM acts as an optical filter applied on top of any CMOS sensor during the final stage of imager post-production, following the application of color filters and micro lenses. This TDM creates a unique raw data set that inherently combines 2D images with depth information into one device, enhancing reliability and stability over conventional 3D capture methods. Depth information is extracted from these unique raw data sets using proprietary imaging algorithms that demand very low computational power and do not require frame buffering. The TDM stack is placed directly on top of the micro lens layer and is fabricated using conventional semiconductor materials and processes. Since the stack is only a few microns thick, the resulting 3D sensor can easily integrate into the downstream manufacturing process, including module and end-product assembly. This patented solution transforms any CMOS imaging device into a 3D sensor for use in various applications, including Advanced Driver Assistance Systems (ADAS), security, robotics, augmented reality/virtual reality (AR/VR), and the Internet of Things (IoT).

Please see here for more information on Airy3D’s DepthIQ. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411