Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos

“Customizing Vision-language Models for Real-world Applications,” a Presentation from NVIDIA
Monika Jhuria, Technical Marketing Engineer at NVIDIA, presents the “Customizing Vision-language Models for Real-world Applications” tutorial at the May 2025 Embedded Vision Summit. Vision-language models (VLMs) have the potential to revolutionize various applications, and their performance can be improved through fine-tuning and customization. In this presentation, Jhuria explores the concept…

“An Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances,” a Presentation from the MIPI Alliance
Haran Thanigasalam, Camera and Imaging Systems Consultant for the MIPI Alliance, presents the “An Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Thanigasalam provides an overview of the MIPI CSI-2 image sensor interface standard, covering its…

“Visual Search: Fine-grained Recognition with Embedding Models for the Edge,” a Presentation from Gimlet Labs
Omid Azizi, Co-Founder of Gimlet Labs, presents the “Visual Search: Fine-grained Recognition with Embedding Models for the Edge” tutorial at the May 2025 Embedded Vision Summit. In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of various types, from people to…
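Fine-grained visual search of this kind typically reduces to nearest-neighbor lookup over embedding vectors. As a rough illustration (not code from the talk), here is a minimal cosine-similarity search in Python, with random vectors standing in for the output of an embedding model:

```python
import numpy as np

def build_index(gallery: np.ndarray) -> np.ndarray:
    # L2-normalize gallery embeddings so a dot product equals cosine similarity
    return gallery / np.linalg.norm(gallery, axis=1, keepdims=True)

def search(index: np.ndarray, query: np.ndarray, k: int = 3) -> np.ndarray:
    q = query / np.linalg.norm(query)
    sims = index @ q                 # cosine similarity to every gallery item
    return np.argsort(-sims)[:k]     # indices of the top-k matches

# Toy 4-item "gallery" of 8-dim embeddings; a real system would use a
# CNN or ViT encoder to produce these vectors from images.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 8))
index = build_index(gallery)
query = gallery[2] + 0.01 * rng.normal(size=8)  # near-duplicate of item 2
top = search(index, query, k=1)                 # should recover item 2
```

On the edge, the same lookup is usually backed by an approximate-nearest-neighbor index rather than a brute-force matrix product, but the similarity computation is the same.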

“Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration,” a Presentation from eInfochips
Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration” tutorial at the May 2025 Embedded Vision Summit. Optimizing execution time of long-term and large-scale SLAM algorithms is essential for real-time deployments on edge compute platforms. Faster SLAM output means faster…

“LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio
Lazar Trifunovic, Solutions Architect at Camio, presents the “LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the May 2025 Embedded Vision Summit. By using vision-language models (VLMs) or combining large language models (LLMs) with conventional computer vision models, we can create vision systems that are…

“Simplifying Portable Computer Vision with OpenVX 2.0,” a Presentation from AMD
Kiriti Nagesh Gowda, Staff Engineer at AMD, presents the “Simplifying Portable Computer Vision with OpenVX 2.0” tutorial at the May 2025 Embedded Vision Summit. The Khronos OpenVX API offers a set of optimized primitives for low-level image processing, computer vision and neural network operators. It provides a simple method for…

“Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review,” a Presentation from AMD
Dwith Chenna, MTS Product Engineer for AI Inference at AMD, presents the “Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review” tutorial at the May 2025 Embedded Vision Summit. The deployment of large language models (LLMs) in resource-constrained environments is challenging due to the significant computational and…
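As a point of reference for the family of techniques the talk surveys, the simplest scheme — per-tensor symmetric int8 quantization — can be sketched in a few lines of Python (a toy illustration, not material from the presentation):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Per-tensor symmetric quantization: map [-max|w|, +max|w|] onto [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
w = rng.normal(scale=0.02, size=(64, 64)).astype(np.float32)  # stand-in for an LLM weight tile
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()  # worst-case rounding error is at most scale / 2
```

Production LLM quantizers (GPTQ, AWQ, per-channel and group-wise schemes) refine this basic idea to control exactly this rounding error on the weights that matter most.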

Texas Instruments Demonstration of Edge AI Inference and Video Streaming Over Wi-Fi
The demonstration shows how to use Texas Instruments’ AM6xA to capture live video, perform machine learning, and stream video over Wi-Fi. The video is encoded with H.264/H.265 and streamed via UDP over Wi-Fi using the CC33xx. At the receiver side, the video is decoded and displayed on a screen. The receiver side could be a…
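The transport step of such a pipeline — pushing encoded frames over UDP — can be illustrated with a loopback sketch in Python. The port number and payload here are placeholders, not details from the demo; a real deployment would packetize H.264/H.265 NAL units (typically via RTP) rather than send raw chunks:

```python
import socket

PORT = 50007  # hypothetical port, not from the demo

# Receiver: bind a UDP socket and wait for a datagram
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))

# Sender: transmit one "encoded frame" chunk as a single datagram
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame_chunk = b"\x00\x00\x00\x01" + b"encoded-frame-bytes"  # Annex-B style start code + payload
tx.sendto(frame_chunk, ("127.0.0.1", PORT))

data, addr = rx.recvfrom(2048)  # receiver gets the chunk, ready for the decoder
tx.close()
rx.close()
```

UDP is the natural fit here because a late video packet is better dropped than retransmitted; loss concealment is left to the decoder.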

“Introduction to Data Types for AI: Trade-offs and Trends,” a Presentation from Synopsys
Joep Boonstra, Synopsys Scientist at Synopsys, presents the “Introduction to Data Types for AI: Trade-offs and Trends” tutorial at the May 2025 Embedded Vision Summit. The increasing complexity of AI models has led to a growing need for efficient data storage and processing. One critical way to gain efficiency is…
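One trade-off in this area — precision versus storage — is easy to see with NumPy (a generic illustration, not material from the presentation): float16 halves storage relative to float32 but keeps only 10 mantissa bits, so small increments are silently lost.

```python
import numpy as np

# float32 has ~7 decimal digits of precision: the tiny increment survives
x32 = np.float32(1.0) + np.float32(1e-4)

# float16 spacing around 1.0 is 2**-10 (~9.8e-4), so 1.0 + 1e-4 rounds back to 1.0
x16 = np.float16(1.0) + np.float16(1e-4)

bytes_fp32 = np.dtype(np.float32).itemsize  # 4 bytes per element
bytes_fp16 = np.dtype(np.float16).itemsize  # 2 bytes per element
```

The same storage/precision tension drives the newer AI-oriented formats the talk's title alludes to, such as bfloat16 and 8-bit floating-point types.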

Machine Vision Defect Detection: Edge AI Processing with Texas Instruments AM6xA Arm-based Processors
Texas Instruments’ portfolio of AM6xA Arm-based processors is designed to advance intelligence at the edge with high-resolution camera support, an integrated image signal processor and a deep learning accelerator. This video demonstrates using the AM62A to run a vision-based artificial intelligence model for defect detection in manufacturing applications. Watch the model test the produced units as…