Edge AI and Vision Insights: November 25, 2025

 

LETTER FROM THE EDITOR

Dear Colleague,

Good morning. Before we get to the content today, I’d like to introduce myself, since you’ll be hearing from me regularly. I’m Erik Peters, the new editor of the Edge AI and Vision Insights newsletter. I’ve been helping the Alliance with market and industry ecosystem research (and the occasional ML project) for over seven years, and I’m pleased to have a new opportunity to bring some of that work directly to you. In each newsletter, we’ll showcase outstanding technical content and take a look at some real-world applications as well. Without further ado, let’s dive in!

On Thursday, December 11, Jeff Bier, Founder of the Edge AI and Vision Alliance, will present at two sessions as part of EE Times’ AI Everywhere 2025. In the first, “Edge AI Everywhere: What Will It Take To Get There?” Jeff will highlight some inspiring success stories and explore the key challenges that need to be overcome to enable more edge AI-based systems to reach massive scale. In the second, Jeff will be joined by speakers from Alliance Member companies Ambarella, STMicroelectronics, and Synopsys for a panel discussion, “Deploying and Scaling AI at the Edge – From Lab to Life Cycle.” More information on the event and how to register can be found here.

I’m also pleased to announce that registration for the 2026 Embedded Vision Summit is now open! The Summit will take place May 11-13 in Santa Clara, California, and we very much hope to see all of you there.

Our Call for Presentation Proposals for the 2026 Summit also remains open through December 5. We’re planning more than 100 expert sessions and would love to see your ideas—from physical AI case studies to efficient edge AI techniques to the latest advances in vision language models. Check out the 2026 topics list on the Call for Proposals page for inspiration and to submit your own proposal by December 5.

Erik Peters
Director of Ecosystem and Community Engagement, Edge AI and Vision Alliance

BUILDING AND DEPLOYING REAL-WORLD ROBOTS

Real-world Deployment of Mobile Material Handling Robotics in the Supply Chain

More and more of the supply chain needs to be, and can be, automated. Demographics, particularly in the developed world, are driving labor scarcity, and in manual material handling, turnover, injury rates and absenteeism are rampant. Fortunately, modern warehouse robotic systems are becoming able to see and manipulate cartons and bags at levels of speed and dependability that can deliver strongly positive ROI to supply chain operators. However, moving such systems into production in the complexity of real-world warehouses requires exceptional levels of product capability and rigor around testing and deployment practices. Many equipment vendors reach the pilot stage, then fail to break through to production due to the absence of this rigor. Peter Santos, Chief Operating Officer of Pickle Robot Company, focuses on three core principles of successful production deployments: complexity acceptance, test set design and early customer collaboration.

Integrating Cameras with the Robot Operating System (ROS)

In this presentation, Karthik Poduval, Principal Software Development Engineer at Amazon Lab126, explores the integration of cameras within the Robot Operating System (ROS) for robust embedded vision applications. He delves into ROS’s core functionalities for camera data handling, including the ROS messages (data structures) used for transmitting image data and calibration parameters. Poduval discusses essential camera calibration techniques, highlighting the importance of determining accurate intrinsic and extrinsic parameters. He also explains open-source ROS nodes, such as those within image_proc and stereo_image_proc, that facilitate crucial post-processing steps, including distortion correction and rectification. The presentation equips you with practical knowledge to leverage ROS’s capabilities for building advanced vision-enabled robotic systems.
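
To make the data path concrete, here is a minimal sketch (not taken from Poduval’s talk), assuming ROS 2 and the rclpy client library: a node that subscribes to the two message types mentioned above, sensor_msgs/Image for frames and sensor_msgs/CameraInfo for calibration parameters. The topic names are assumptions and depend on the camera driver in use.

# A minimal sketch (not from the talk), assuming ROS 2 and rclpy. The topic names
# "/camera/image_raw" and "/camera/camera_info" are assumptions; actual names
# depend on the camera driver.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo, Image


class CameraListener(Node):
    def __init__(self):
        super().__init__('camera_listener')
        # Image frames arrive as sensor_msgs/Image messages.
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)
        # Intrinsic calibration (camera matrix k, distortion d) arrives as sensor_msgs/CameraInfo.
        self.create_subscription(CameraInfo, '/camera/camera_info', self.on_info, 10)

    def on_image(self, msg: Image):
        self.get_logger().info(f'frame {msg.width}x{msg.height}, encoding={msg.encoding}')

    def on_info(self, msg: CameraInfo):
        self.get_logger().info(f'fx={msg.k[0]:.1f}, fy={msg.k[4]:.1f}, model={msg.distortion_model}')


def main():
    rclpy.init()
    rclpy.spin(CameraListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()

In a typical setup, nodes such as image_proc then consume the same Image and CameraInfo topics to perform the distortion correction and rectification steps the presentation describes.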

DESIGNING APPLICATION-SPECIFIC CAMERA SYSTEMS

Specifying and Designing Cameras for Computer Vision Applications

Designing a camera system requires a deep understanding of the fundamental principles of image formation and the physical characteristics of its components. Translating computer vision-based system requirements into camera system parameters is a crucial step. In this talk, Richard Crisp, Vice President and CTO at Etron Technology America, provides a comprehensive overview of the key concepts involved in designing or specifying a camera system. Crisp covers the basics of image formation and the associated physics. He discusses sensor and lens effects that impact image quality, such as diffraction, circle of confusion, and depth of field. He also discusses application-specific requirements, including lighting conditions, frame rate and motion blur. Finally, he presents a detailed example illustrating how to translate application requirements into camera parameters, highlighting cost-performance trade-offs. You will gain a thorough understanding of the key factors influencing camera system design and be able to make informed decisions when selecting or designing a camera system.
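
As a simple illustration of this translation process (not taken from Crisp’s talk), the sketch below estimates three first-order camera parameters (horizontal pixel count, focal length and motion blur) from hypothetical application requirements, using standard thin-lens and Nyquist-style approximations. All input values are assumptions chosen for the example.

# Illustrative only: first-order estimates, not a formula set from the talk.
def camera_parameters(fov_width_m, working_distance_m, smallest_feature_m,
                      sensor_width_mm, object_speed_mps, exposure_s,
                      pixels_per_feature=2.0):
    # Nyquist-style rule of thumb: about two pixels across the smallest feature to resolve.
    h_pixels = pixels_per_feature * fov_width_m / smallest_feature_m
    # Thin-lens approximation (working distance much larger than focal length):
    # the focal length that maps the required field of view onto the sensor width.
    focal_length_mm = sensor_width_mm * working_distance_m / fov_width_m
    # Motion blur: distance traveled during the exposure, expressed in pixels.
    blur_pixels = (object_speed_mps * exposure_s / fov_width_m) * h_pixels
    return h_pixels, focal_length_mm, blur_pixels


if __name__ == '__main__':
    px, f_mm, blur = camera_parameters(
        fov_width_m=0.6,           # 60 cm wide field of view (assumed)
        working_distance_m=1.0,    # camera 1 m from the scene (assumed)
        smallest_feature_m=0.001,  # must resolve 1 mm features (assumed)
        sensor_width_mm=7.0,       # roughly a 1/1.8-inch sensor (assumed)
        object_speed_mps=0.5,      # object or conveyor speed (assumed)
        exposure_s=0.002)          # 2 ms exposure (assumed)
    print(f'~{px:.0f} horizontal pixels, focal length ~{f_mm:.1f} mm, blur ~{blur:.1f} px')

Even this rough pass shows the cost-performance trade-offs Crisp highlights: resolving finer features pushes up pixel count and optics requirements, while shorter exposures reduce blur at the cost of more demanding lighting.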

Developing a GStreamer-based Custom Camera System for Long-range Biometric Data Collection

In this presentation, Gavin Jager, Researcher and Lab Space Manager at Oak Ridge National Laboratory, describes the laboratory’s work developing software for a custom camera system based on GStreamer. The BRIAR project requires high-quality video capture at distances over 400 meters for biometric recognition and identification, but commercial cameras struggle to capture high-quality video at such distances. To address this, the laboratory developed a custom camera system using GStreamer, enabling advanced imaging capabilities and long-range data capture. The work included designing a GStreamer pipeline capable of managing multiple sensor formats, integrating UDP server hooks to manage recording, using GstRTSPServer to build an RTSP server and creating an extensible hardware control interface. By integrating with network video recorders, the team simplified monitoring, data handling and curation, and successfully supported BRIAR’s complex data collection efforts. This presentation details the GStreamer-based implementation, highlighting technical challenges faced and how they were overcome.
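
As a rough illustration of the RTSP-serving piece described above, here is a minimal sketch (not ORNL’s implementation) that uses GStreamer’s GstRtspServer bindings from Python to publish a video stream at an assumed mount point. The launch string uses videotestsrc as a stand-in for the real sensor pipeline, and the encoder settings are illustrative assumptions.

# A rough sketch, not ORNL's implementation: serve a synthetic video stream over RTSP
# using GstRtspServer. The launch string and "/camera" mount point are assumptions;
# a real system would replace videotestsrc with its sensor-specific capture elements.
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import GLib, Gst, GstRtspServer

Gst.init(None)

server = GstRtspServer.RTSPServer()        # listens on rtsp://<host>:8554 by default
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch(
    '( videotestsrc is-live=true '
    '! video/x-raw,width=1920,height=1080,framerate=30/1 '
    '! x264enc tune=zerolatency '
    '! rtph264pay name=pay0 pt=96 )')
factory.set_shared(True)                   # let multiple clients share one pipeline
server.get_mount_points().add_factory('/camera', factory)
server.attach(None)

print('Serving assumed stream at rtsp://127.0.0.1:8554/camera')
GLib.MainLoop().run()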

UPCOMING INDUSTRY EVENTS

AI Everywhere 2025 – EE Times Virtual Event: December 10-11, 2025

Embedded Vision Summit: May 11-13, 2026, Santa Clara, California

FEATURED NEWS

Micron has shipped its automotive universal flash storage (UFS) 4.1 for enhanced ADAS and cabin experience

Au-Zone Technologies has expanded access to EdgeFirst Studio, an MLOps platform for spatial perception at the edge

Vision Components has enabled support for its MIPI Cameras on the SMARC IMX8M Plus Development Kit from ADLINK

Axelera has released its Metis PCIe with 4 Quad-Core AIPUs

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE



Quadric Chimera QC Series GPNPU Processors (Best Edge AI Processor IP)

Quadric’s Chimera QC Series GPNPU Processors are the 2025 Edge AI and Vision Product of the Year Award Winner in the Edge AI Processor IP category. The Chimera general-purpose neural processor (GPNPU) family, which scales up to 800 TOPS, is the only fully C++ programmable neural processor solution that can run complete AI and machine learning models on a single architecture, eliminating the need to partition graphs between traditional CPUs, DSPs and matrix accelerators. Chimera processors execute every known graph operator at high performance, without relying on slower DSPs or CPUs for less commonly used layers. This full programmability ensures that hardware built with Quadric Chimera GPNPUs can support all future vision AI models, not just a limited selection of existing networks.

Designed specifically to tackle the machine learning inference deployment challenges faced by system-on-chip (SoC) developers, the Chimera family features a simple yet powerful architecture that delivers improved matrix computation performance compared to traditional methods. Its key differentiator is the ability to execute diverse workloads with great flexibility within a single processor.

The Chimera GPNPU family offers a unified processor architecture capable of handling matrix and vector operations alongside scalar (control) code in one execution pipeline. In conventional SoC architectures, these tasks are typically managed separately by an NPU, DSP, and real-time CPU, necessitating the division of code and performance tuning across two or three heterogeneous cores. In contrast, the Chimera GPNPU operates as a single software-controlled core, enabling the straightforward expression of complex parallel workloads. Driven entirely by code, the Chimera GPNPU empowers developers to continuously optimize the performance of their models and algorithms throughout the device’s lifecycle. This makes it ideal for running classic backbone networks, today’s newest Vision Transformers and Large Language Models, as well as any future networks that may be developed.

Please see here for more information on Quadric’s Chimera QC Series GPNPU Processors. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411