Videos

Videos on Edge AI and Visual Intelligence

We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.

Alliance Website Videos

“Scaling i.MX Applications Processors’ Native Edge AI with Discrete AI Accelerators,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of AI/ML Strategy and Technologies for Edge Processing at NXP Semiconductors, presents the “Scaling i.MX Applications Processors’ Native Edge AI with Discrete AI Accelerators” tutorial at the May 2025 Embedded Vision Summit. The integration of discrete AI accelerators with edge processors is poised to revolutionize…

Unlocking the Power of Edge AI With Microchip Technology

This blog post was originally published at Microchip Technology’s website. It is reprinted here with the permission of Microchip Technology. From the factory floor to the operating room, edge AI is changing everything. Here’s how Microchip is helping developers bring real-time intelligence to the world’s most power-constrained devices. Not long ago, Artificial Intelligence (AI) lived…

“A Re-imagination of Embedded Vision System Design,” a Presentation from Imagination Technologies

Dennis Laudick, Vice President of Product Management and Marketing at Imagination Technologies, presents the “A Re-imagination of Embedded Vision System Design” tutorial at the May 2025 Embedded Vision Summit. Embedded vision applications, with their demand for ever more processing power, have been driving up the size and complexity of edge…

“MPU+: A Transformative Solution for Next-Gen AI at the Edge,” a Presentation from FotoNation

Petronel Bigioi, CEO of FotoNation, presents the “MPU+: A Transformative Solution for Next-Gen AI at the Edge” tutorial at the May 2025 Embedded Vision Summit. In this talk, Bigioi introduces MPU+, a novel programmable, customizable low-power platform for real-time, localized intelligence at the edge. The platform includes an AI-augmented image…

“Evolving Inference Processor Software Stacks to Support LLMs,” a Presentation from Expedera

Ramteja Tadishetti, Principal Software Engineer at Expedera, presents the “Evolving Inference Processor Software Stacks to Support LLMs” tutorial at the May 2025 Embedded Vision Summit. As large language models (LLMs) and vision-language models (VLMs) have quickly become important for edge applications from smartphones to automobiles, chipmakers and IP providers have…

“Efficiently Registering Depth and RGB Images,” a Presentation from eInfochips

Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Efficiently Registering Depth and RGB Images” tutorial at the May 2025 Embedded Vision Summit. As depth sensing and computer vision technologies evolve, integrating RGB and depth cameras has become crucial for reliable and precise scene perception. In this session, Nakrani presents…

“How to Right-size and Future-proof a Container-first Edge AI Infrastructure,” a Presentation from Avassa and OnLogic

Carl Moberg, CTO of Avassa, and Zoie Rittling, Business Development Manager at OnLogic, co-present the “How to Right-size and Future-proof a Container-first Edge AI Infrastructure” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Moberg and Rittling provide practical guidance on overcoming key challenges in deploying AI at the…

“Image Tokenization for Distributed Neural Cascades,” a Presentation from Google and VeriSilicon

Derek Chow, Software Engineer at Google, and Shang-Hung Lin, Vice President of NPU Technology at VeriSilicon, co-present the “Image Tokenization for Distributed Neural Cascades” tutorial at the May 2025 Embedded Vision Summit. Multimodal LLMs promise to bring exciting new abilities to devices! As we see foundational models become more capable,…

“Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP,” a Presentation from Synopsys

Gordon Cooper, Principal Product Manager at Synopsys, presents the “Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP” tutorial at the May 2025 Embedded Vision Summit. In this talk, Cooper discusses emerging trends in generative AI for edge devices and…

“Bridging the Gap: Streamlining the Process of Deploying AI onto Processors,” a Presentation from SqueezeBits

Taesu Kim, Chief Technology Officer at SqueezeBits, presents the “Bridging the Gap: Streamlining the Process of Deploying AI onto Processors” tutorial at the May 2025 Embedded Vision Summit. Large language models (LLMs) often demand hand-coded conversion scripts for deployment on each distinct processor-specific software stack—a process that’s time-consuming and prone…

Eye-Catching Edge AI and Vision Industry Case Study Clips

Gemini Robotics Vision Language Model
Estes Express Lines and Samsara
Autonomous Crop Harvesting
Ray-Ban Meta Smart Glasses
Generative AI and Perceptual AI
Computer Vision in Agriculture
AI-powered Box Loading in Delivery Trucks
School Bus Safety
Autonomous Drones for Package Delivery
Whole-Body Health Tests via Retina Scans
Touchless Self-Checkout Retail System
Personalized-Info Airport Displays
Avoiding Autonomous Vacuuming Hazards
Vision-Enhanced Fitness
Coffee Pod Identification
Aerial Autonomy on Mars
Talking-Head Synthesis and Optimization
Tracking Down Litterers
Autonomous Infrastructure Inspection
AI-controlled Webcam
Underwater Image Enhancement
Colorizing B&W Images and Video
Hand Tracking on VR Headsets
Object ID for the Visually Impaired
Gesture-Based Mobile Device Control
Smart Vehicle Headlights
Insurance Valuation Via CV Image Analysis
Augmented Reality Shopping for Glasses
Blood Analysis for Malaria
Vision-Based Smart Oven
Facial Recognition for Flight Check-In
Diabetes Detection via AI Retina Scans
Vision Health Self-Analysis
Object Recognition for Children
Bossa Nova Robots in Retail
Autonomous Vehicle Parking

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411