Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos

“Camera Interface Standards for Embedded Vision Applications,” an Interview with the MIPI Alliance
Haran Thanigasalam, Camera and Imaging Consultant for the MIPI Alliance, talks with Shung Chieh, Senior Vice President at Eikon Systems, for the “Exploring MIPI Camera Interface Standards for Embedded Vision Applications” interview at the May 2024 Embedded Vision Summit. This insightful interview delves into the relevance and impact of MIPI camera interface standards for embedded vision applications.

“Identifying and Mitigating Bias in AI,” a Presentation from Intel
Nikita Tiwari, AI Enabling Engineer for OEM PC Experiences in the Client Computing Group at Intel, presents the “Identifying and Mitigating Bias in AI” tutorial at the May 2024 Embedded Vision Summit. From autonomous driving to immersive shopping, and from enhanced video collaboration to graphic design, AI is placing a wealth of possibilities at our fingertips.

“The Fundamentals of Training AI Models for Computer Vision Applications,” a Presentation from GMAC Intelligence
Amit Mate, Founder and CEO of GMAC Intelligence, presents the “Fundamentals of Training AI Models for Computer Vision Applications” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Mate introduces the essential aspects of training convolutional neural networks (CNNs). He discusses the prerequisites for training, including models, data and training frameworks.

“An Introduction to Semantic Segmentation,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2024 Embedded Vision Summit. Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different type of visual understanding: semantic segmentation.

“Augmenting Visual AI through Radar and Camera Fusion,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Augmenting Visual AI through Radar and Camera Fusion” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Taylor discusses well-known limitations of camera-based AI and how radar can be leveraged to address these limitations. He covers common radar data representations.

“DNN Quantization: Theory to Practice,” a Presentation from AMD
Dwith Chenna, Member of the Technical Staff and Product Engineer for AI Inference at AMD, presents the “DNN Quantization: Theory to Practice” tutorial at the May 2024 Embedded Vision Summit. Deep neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run these models on resource-constrained devices.

“Leveraging Neural Architecture Search for Efficient Computer Vision on the Edge,” a Presentation from NXP Semiconductors
Hiram Rayo Torres Rodriguez, Senior AI Research Engineer at NXP Semiconductors, presents the “Leveraging Neural Architecture Search for Efficient Computer Vision on the Edge” tutorial at the May 2024 Embedded Vision Summit. In most AI research today, deep neural networks (DNNs) are designed solely to improve prediction accuracy, often ignoring real-world constraints such as compute.

“Introduction to Visual Simultaneous Localization and Mapping (VSLAM),” a Presentation from Cadence
Amol Borkar, Product Marketing Director, and Shrinivas Gadkari, Design Engineering Director, both of Cadence, co-present the “Introduction to Visual Simultaneous Localization and Mapping (VSLAM)” tutorial at the May 2024 Embedded Vision Summit. Simultaneous localization and mapping (SLAM) is widely used in industry and has numerous applications where camera or ego-motion needs to be accurately determined.

Untether AI Demonstration of Video Analysis Using the runAI Family of Inference Accelerators
Max Sbabo, Senior Application Engineer at Untether AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Sbabo demonstrates the company’s AI inference technology with AI accelerator cards that leverage the capabilities of the runAI family of ICs in a PCI-Express form factor.

EyePop.ai Demonstration of Effortless AI Integration
Andy Ballester, Co-Founder of EyePop.ai, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Ballester demonstrates the ease and accessibility of his company’s AI platform. Tailored for startups without machine learning teams, this demo showcases how to create a custom computer vision endpoint.