Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos
“Recent Trends in Industrial Machine Vision: Challenging Times,” a Presentation from the Yole Group
Axel Clouet, Technology and Market Analyst for Imaging at the Yole Group, presents the “Recent Trends in Industrial Machine Vision: Challenging Times” tutorial at the May 2024 Embedded Vision Summit. For decades, cameras have been increasingly used in industrial applications as key components for automation. After two years of rapid…
“Camera Interface Standards for Embedded Vision Applications,” an Interview with the MIPI Alliance
Haran Thanigasalam, Camera and Imaging Consultant for the MIPI Alliance, talks with Shung Chieh, Senior Vice President at Eikon Systems, for the “Exploring MIPI Camera Interface Standards for Embedded Vision Applications” interview at the May 2024 Embedded Vision Summit. This insightful interview delves into the relevance and impact of MIPI…
“Identifying and Mitigating Bias in AI,” a Presentation from Intel
Nikita Tiwari, AI Enabling Engineer for OEM PC Experiences in the Client Computing Group at Intel, presents the “Identifying and Mitigating Bias in AI” tutorial at the May 2024 Embedded Vision Summit. From autonomous driving to immersive shopping, and from enhanced video collaboration to graphic design, AI is placing a…
“The Fundamentals of Training AI Models for Computer Vision Applications,” a Presentation from GMAC Intelligence
Amit Mate, Founder and CEO of GMAC Intelligence, presents the “Fundamentals of Training AI Models for Computer Vision Applications” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Mate introduces the essential aspects of training convolutional neural networks (CNNs). He discusses the prerequisites for training, including models, data…
“An Introduction to Semantic Segmentation,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2024 Embedded Vision Summit. Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different…
“Augmenting Visual AI through Radar and Camera Fusion,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Augmenting Visual AI through Radar and Camera Fusion” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Taylor discusses well-known limitations of camera-based AI and how radar can be leveraged to address these limitations. He…
“DNN Quantization: Theory to Practice,” a Presentation from AMD
Dwith Chenna, Member of the Technical Staff and Product Engineer for AI Inference at AMD, presents the “DNN Quantization: Theory to Practice” tutorial at the May 2024 Embedded Vision Summit. Deep neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run…
“Leveraging Neural Architecture Search for Efficient Computer Vision on the Edge,” a Presentation from NXP Semiconductors
Hiram Rayo Torres Rodriguez, Senior AI Research Engineer at NXP Semiconductors, presents the “Leveraging Neural Architecture Search for Efficient Computer Vision on the Edge” tutorial at the May 2024 Embedded Vision Summit. In most AI research today, deep neural networks (DNNs) are designed solely to improve prediction accuracy, often ignoring…
“Introduction to Visual Simultaneous Localization and Mapping (VSLAM),” a Presentation from Cadence
Amol Borkar, Product Marketing Director, and Shrinivas Gadkari, Design Engineering Director, both of Cadence, co-present the “Introduction to Visual Simultaneous Localization and Mapping (VSLAM)” tutorial at the May 2024 Embedded Vision Summit. Simultaneous localization and mapping (SLAM) is widely used in industry and has numerous applications where camera or ego-motion…
Untether AI Demonstration of Video Analysis Using the runAI Family of Inference Accelerators
Max Sbabo, Senior Application Engineer at Untether AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Sbabo demonstrates the company’s AI inference technology with AI accelerator cards that leverage the capabilities of the runAI family of ICs in a PCI-Express form factor.