Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos

“A New Era of 3D Sensing: Transforming Industries and Creating Opportunities,” a Presentation from the Yole Group
Florian Domengie, Principal Technology and Market Analyst for Imaging at the Yole Group, presents the “A New Era of 3D Sensing: Transforming Industries and Creating Opportunities” tutorial at the May 2025 Embedded Vision Summit. The 3D sensing market is projected to more than double by 2030, surpassing $18B. Key drivers include automotive and industrial applications,

“The New OpenCV 5.0: Added Features, Performance Improvements and Future Directions,” a Presentation from OpenCV.org
Satya Mallick, CEO of OpenCV.org, presents the “New OpenCV 5.0: Added Features, Performance Improvements and Future Directions” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Mallick delves into the latest version of OpenCV, the world’s most popular open-source computer vision library. He highlights the major innovations and improvements in OpenCV 5.0, including

“Introduction to Shrinking Models with Quantization-aware Training and Post-training Quantization,” a Presentation from NXP Semiconductors
Robert Cimpeanu, Machine Learning Software Engineer at NXP Semiconductors, presents the “Introduction to Shrinking Models with Quantization-aware Training and Post-training Quantization” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Cimpeanu introduces two neural network quantization techniques, quantization-aware training (QAT) and post-training quantization (PTQ), and explains when to use each. He discusses what
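To make the distinction concrete, here is a minimal sketch of the core arithmetic behind post-training quantization, assuming a simple asymmetric (affine) uint8 scheme; the function names and the sample weights are illustrative, not taken from NXP’s tooling.

```python
# Minimal PTQ sketch: map float values to 8-bit integers using a scale and
# zero-point derived from the observed value range (asymmetric/affine scheme).

def quantize_params(values, num_bits=8):
    """Derive scale and zero-point from the min/max of the observed values."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)  # range must include 0
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point):
    """Round each float to the nearest representable 8-bit code."""
    return [max(0, min(255, round(v / scale + zero_point))) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Recover approximate floats from the 8-bit codes."""
    return [(q - zero_point) * scale for q in qvalues]

weights = [-1.2, 0.0, 0.4, 2.5]        # illustrative layer weights
s, z = quantize_params(weights)
q = quantize(weights, s, z)
restored = dequantize(q, s, z)
# Per-weight error stays within half a quantization step (scale / 2).
```

PTQ applies this round-trip once, after training; QAT instead simulates the same quantize/dequantize error inside the training loop so the network learns to tolerate it, which is why QAT tends to recover more accuracy at very low bit widths.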

“Customizing Vision-language Models for Real-world Applications,” a Presentation from NVIDIA
Monika Jhuria, Technical Marketing Engineer at NVIDIA, presents the “Customizing Vision-language Models for Real-world Applications” tutorial at the May 2025 Embedded Vision Summit. Vision-language models (VLMs) have the potential to revolutionize various applications, and their performance can be improved through fine-tuning and customization. In this presentation, Jhuria explores the concept and shares insights on domain

“An Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances,” a Presentation from the MIPI Alliance
Haran Thanigasalam, Camera and Imaging Systems Consultant for the MIPI Alliance, presents the “Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Thanigasalam provides an overview of the MIPI CSI-2 image sensor interface standard, covering its fundamental features and capabilities, including

“Visual Search: Fine-grained Recognition with Embedding Models for the Edge,” a Presentation from Gimlet Labs
Omid Azizi, Co-Founder of Gimlet Labs, presents the “Visual Search: Fine-grained Recognition with Embedding Models for the Edge” tutorial at the May 2025 Embedded Vision Summit. In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of various types, from people to license plates. While these models
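The idea behind embedding-based visual search can be sketched in a few lines: an embedding model maps each image to a vector, and fine-grained recognition becomes a nearest-neighbor lookup by cosine similarity. The vectors and labels below are hypothetical stand-ins for real model outputs, not Gimlet Labs’ implementation.

```python
# Embedding-based visual search sketch: rank a gallery of (label, embedding)
# pairs by cosine similarity to a query embedding.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query, gallery):
    """Return gallery entries ranked most-similar-first to the query."""
    return sorted(gallery, key=lambda item: cosine(query, item[1]), reverse=True)

# Hypothetical embeddings; a real system would get these from a vision model.
gallery = [
    ("plate_ABC123", [0.90, 0.10, 0.00]),
    ("plate_XYZ789", [0.10, 0.90, 0.20]),
    ("plate_ABC124", [0.85, 0.20, 0.05]),
]
query = [0.88, 0.15, 0.02]
ranked = search(query, gallery)
best = ranked[0][0]   # closest match to the query embedding
```

At edge scale the brute-force `sorted` call would typically be replaced by an approximate nearest-neighbor index, but the similarity metric stays the same.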

“Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration,” a Presentation from eInfochips
Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration” tutorial at the May 2025 Embedded Vision Summit. Optimizing execution time of long-term and large-scale SLAM algorithms is essential for real-time deployments on edge compute platforms. Faster SLAM output means faster map refresh rates and quicker

“LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio
Lazar Trifunovic, Solutions Architect at Camio, presents the “LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the May 2025 Embedded Vision Summit. By using vision-language models (VLMs) or combining large language models (LLMs) with conventional computer vision models, we can create vision systems that are able to interpret policies and

“Simplifying Portable Computer Vision with OpenVX 2.0,” a Presentation from AMD
Kiriti Nagesh Gowda, Staff Engineer at AMD, presents the “Simplifying Portable Computer Vision with OpenVX 2.0” tutorial at the May 2025 Embedded Vision Summit. The Khronos OpenVX API offers a set of optimized primitives for low-level image processing, computer vision and neural network operators. It provides a simple method for writing optimized code that is

“Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review,” a Presentation from AMD
Dwith Chenna, MTS Product Engineer for AI Inference at AMD, presents the “Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review” tutorial at the May 2025 Embedded Vision Summit. The deployment of large language models (LLMs) in resource-constrained environments is challenging due to the significant computational and memory demands of these models.
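One widely used technique in this space is symmetric, per-channel int8 quantization of weight matrices; the sketch below shows the arithmetic on a tiny illustrative matrix (not a real model layer, and not specific to AMD’s tooling). Each row gets its own scale, which keeps large-magnitude channels from destroying the precision of small ones.

```python
# Per-channel symmetric int8 weight quantization sketch: each row of the
# weight matrix is scaled independently so that q = round(w / scale) fits
# in [-127, 127], cutting memory roughly 4x versus float32.

def quantize_per_channel(matrix, num_bits=8):
    """Quantize each row with its own scale derived from the row's max |w|."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scales, qmatrix = [], []
    for row in matrix:
        scale = max(abs(w) for w in row) / qmax or 1.0
        scales.append(scale)
        qmatrix.append([round(w / scale) for w in row])
    return qmatrix, scales

def dequantize_per_channel(qmatrix, scales):
    """Recover approximate float weights: w ≈ q * scale."""
    return [[q * s for q in row] for row, s in zip(qmatrix, scales)]

weights = [[0.02, -1.27, 0.64],   # small-magnitude channel
           [12.70, -3.10, 0.00]]  # large-magnitude channel
qw, scales = quantize_per_channel(weights)
approx = dequantize_per_channel(qw, scales)
```

With a single shared scale, the 12.7 entry would force a coarse step size onto the first row; per-channel scales avoid that, which is one reason the approach is standard for LLM weight compression.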
