Videos on Edge AI and Visual Intelligence
We hope that the compelling edge AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos

Chips&Media Demonstration of Its WAVE-N NPU in High-quality, High-resolution Imaging Applications
Andy Lee, Vice President of U.S. Marketing at Chips&Media, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates example high-quality, high-resolution imaging applications in edge devices implemented using WAVE-N, his company’s custom NPU. Key notes: extreme efficiency, with up to 90% MAC utilization for modern CNN computation…

Chips&Media Introduction to Its WAVE-N Specialized Video Processing NPU IP
Andy Lee, Vice President of U.S. Marketing at Chips&Media, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee introduces WAVE-N, his company’s specialized video processing NPU IP. Key notes: extreme efficiency, with up to 90% MAC utilization for modern CNN computation; highly optimized for real-time…

3LC Demonstration of Catching Synthetic Slip-ups with 3LC
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates the investigation of a curious embryo classification study from Norway, where synthetic data was supposed to help train a model, but something didn’t quite hatch right. Using 3LC to…

3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset, such as unlabeled forks and phantom objects, using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on…

VeriSilicon Demonstration of the Open Se Cura Project
Chris Wang, VP of Multimedia Technologies and a member of the CTO office at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Wang demonstrates examples from the Open Se Cura Project, a joint effort between VeriSilicon and Google. The project showcases a scalable, power-efficient…

VeriSilicon Demonstration of Advanced AR/VR Glasses Solutions
Dr. Mahadev Kolluru, Senior VP of North America and India Business at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Dr. Kolluru demonstrates products developed using the company’s IP licensing and turnkey silicon design services, highlighting how VeriSilicon supports cutting-edge AI…

Synopsys and Visionary.ai Demonstration of a Low-light Real-time AI Video Denoiser Tailored for NPX6 NPU IP
Gordon Cooper, Principal Product Manager at Synopsys, and David Jarmon, Senior VP of Worldwide Sales at Visionary.ai, demonstrate the companies’ latest edge AI and vision technologies and products in Synopsys’ booth at the 2025 Embedded Vision Summit. Specifically, Cooper and Jarmon demonstrate the future of low-light imaging with Visionary.ai’s cutting-edge real-time AI video denoiser…

Synopsys Demonstration of Smart Architectural Exploration for AI SoCs
Guy Ben Haim, Senior Product Manager, and Gururaj Rao, Field Applications Engineer, both of Synopsys, demonstrate the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Ben Haim and Rao demonstrate how to optimize neural network performance with the Synopsys ARC MetaWare MX Development Toolkit…

SqueezeBits Demonstration of On-device LLM Inference, Running a 2.4B Parameter Model on the iPhone 14 Pro
Taesu Kim, CTO of SqueezeBits, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kim demonstrates a 2.4-billion-parameter large language model (LLM) running entirely on an iPhone 14 Pro without server connectivity. The device operates in airplane mode, highlighting on-device inference using a hybrid approach that…

Sony Semiconductor Demonstration of AI Vision Devices and Tools for Industrial Use Cases
Zachary Li, Product and Business Development Manager at Sony America, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Li demonstrates his company’s AITRIOS products and ecosystem. Powered by the IMX500 intelligent vision sensor, Sony AITRIOS collaborates with Raspberry Pi for development kits and with leading…