Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos

Nextchip Demonstration of Various Computing Applications Using the Company’s ADAS SoC
Jonathan Lee, Manager of the Global Strategy Team at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates various computing applications using his company’s ADAS SoC. Lee showcases how Nextchip’s ADAS SoC can be used for applications such as: Sensor fusion of iToF

Microchip Technology Demonstration of Real-time Object and Facial Recognition with Edge AI Platforms
Swapna Guramani, Applications Engineer for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Guramani demonstrates her company’s latest AI/ML capabilities in action: real-time object recognition using the SAMA7G54 32-bit MPU running Edge Impulse’s FOMO model, and facial recognition powered by TensorFlow Lite’s Mobile
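For readers who want a feel for the kind of on-device inference pipeline described in this demo, here is a minimal sketch of a TensorFlow Lite interpreter loop in Python. It is only an illustration of the general approach; the model file name and the dummy input frame are hypothetical placeholders, not Microchip’s or Edge Impulse’s actual artifacts.

```python
# Minimal sketch of a TensorFlow Lite inference loop of the kind used for
# object or face recognition on an embedded Linux MPU. The model file and
# input frame below are hypothetical placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tensorflow.lite

interpreter = Interpreter(model_path="model.tflite")  # hypothetical model file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Stand-in for a camera frame, shaped and typed to match the model's input.
_, height, width, channels = input_details["shape"]
frame = np.zeros((1, height, width, channels), dtype=input_details["dtype"])

interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])
print("raw model output shape:", scores.shape)
```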

Microchip Technology Demonstration of AI-powered Face ID on the PolarFire SoC FPGA Using the VectorBlox SDK
Avery Williams, Channel Marketing Manager for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Williams demonstrates ultra-efficient AI-powered facial recognition on Microchip’s PolarFire SoC FPGA using the VectorBlox Accelerator SDK. Pre-trained neural networks are quantized to INT8 and compiled to run directly on
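As background for the INT8 quantization step mentioned above, the sketch below shows generic post-training INT8 quantization using the TensorFlow Lite converter. It illustrates the general technique only; the VectorBlox Accelerator SDK has its own model compilation tools, which are not shown here, and the tiny Keras model and calibration data are hypothetical.

```python
# Illustrative post-training INT8 quantization using the TensorFlow Lite
# converter as a generic stand-in. Model and calibration data are placeholders.
import numpy as np
import tensorflow as tf

# Tiny placeholder model; a real flow would start from a pre-trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Calibration samples drive the INT8 scale/zero-point selection.
    for _ in range(10):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```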

Chips&Media Demonstration of Its WAVE-N NPU in High-quality, High-resolution Imaging Applications
Andy Lee, Vice President of U.S. Marketing at Chips&Media, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates example high-quality, high-resolution imaging applications in edge devices implemented using WAVE-N, his company’s custom NPU. Key notes: extreme efficiency, with up to 90% MAC utilization, for

Chips&Media Introduction to Its WAVE-N Specialized Video Processing NPU IP
Andy Lee, Vice President of U.S. Marketing at Chips&Media, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee introduces WAVE-N, his company’s specialized video processing NPU IP. Key notes: extreme efficiency, with up to 90% MAC utilization, for modern CNN computation; highly optimized for real-time
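To put the MAC utilization figure in context, the back-of-envelope calculation below shows how utilization is typically computed for a single convolution layer. All numbers are assumptions chosen for illustration, not Chips&Media specifications.

```python
# Back-of-envelope illustration of what "90% MAC utilization" means for an
# NPU running one convolution layer. All figures here are hypothetical.
peak_macs_per_cycle = 4096                 # assumed MAC array size
layer_macs = 3 * 3 * 64 * 128 * 56 * 56    # 3x3 conv, 64->128 channels, 56x56 output
cycles_measured = 65_000                   # assumed measured cycle count

utilization = layer_macs / (peak_macs_per_cycle * cycles_measured)
print(f"MAC utilization: {utilization:.0%}")  # ~87% with these numbers
```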

3LC Demonstration of Catching Synthetic Slip-ups with 3LC
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates the investigation of a curious embryo classification study from Norway, where synthetic data was supposed to help train a model – but something didn’t quite hatch right. Using 3LC to

3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset – like unlabeled forks and phantom objects – using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on a

VeriSilicon Demonstration of the Open Se Cura Project
Chris Wang, VP of Multimedia Technologies and a member of the CTO Office at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Wang demonstrates examples from the Open Se Cura Project, a joint effort between VeriSilicon and Google. The project showcases a scalable, power-efficient, and

VeriSilicon Demonstration of Advanced AR/VR Glasses Solutions
Dr. Mahadev Kolluru, Senior VP of North America and India Business at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Dr. Kolluru demonstrates products developed using the company’s IP licensing and turnkey silicon design services. Dr. Kolluru highlights how VeriSilicon supports cutting-edge AI and

Synopsys and Visionary.ai Demonstration of a Low-light Real-time AI Video Denoiser Tailored for NPX6 NPU IP
Gordon Cooper, Principal Product Manager at Synopsys, and David Jarmon, Senior VP of Worldwide Sales at Visionary.ai, demonstrate the companies’ latest edge AI and vision technologies and products in Synopsys’ booth at the 2025 Embedded Vision Summit. Specifically, Cooper and Jarmon demonstrate the future of low-light imaging with Visionary.ai’s cutting-edge real-time AI video denoiser. This