Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/YcjTkVdIWT0-300x169.jpg)
DEEPX Demonstration of Its DX-M1 AIoT Booster
Taisik Won, President of DEEPX USA, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Won demonstrates the company’s AIoT Booster, the DX-M1. The DX-M1 is DEEPX’s flagship AI chip, meticulously engineered for seamless integration into any AIoT application. This cutting-edge chip can simultaneously process
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/SpZBjYyKSsQ-300x169.jpg)
DEEPX Demonstration of Its DX-V1 and DX-V3 AI Vision Processors
Aiden Song, PR Manager at DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Song demonstrates the company’s DX-V1 and DX-V3 AI vision processors. The DX-V1 and DX-V3 are AI enabler chips for vision systems. The DX-V1 is a standalone edge AI chip that can
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/pPfKBbb1pCE-300x169.jpg)
DEEPX Demonstration of Its DX-H1 Green AI Computing Card
Aiden Song, PR Manager at DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Song demonstrates the company’s latest innovation, the DX-H1 Green AI Computing Card. The DX-H1 is designed for eco-friendly data centers, delivering 10 times better power and cost efficiency than GPGPU solutions.
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/nsA6tUoxSnE-300x169.jpg)
Cadence Demonstration of a Large Vision Model for Generative AI on the Tensilica Vision P6 DSP
Amol Borkar, Director of Product Marketing for Cadence Tensilica DSPs and Automotive Segment Director, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Borkar demonstrates the use of a Tensilica Vision P6 DSP for the latest generative AI (GenAI) applications. The Vision P6 DSP is a
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/tZ_mIzlvXyA--300x169.jpg)
Cadence Demonstration of Time-of-Flight Decoding on the Tensilica Vision Q7 DSP
Amol Borkar, Director of Product Marketing for Cadence Tensilica DSPs and Automotive Segment Director, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Borkar demonstrates the use of a Tensilica Vision Q7 DSP for Time-of-Flight (ToF) decoding. In this demonstration, the Tensilica Vision Q7 DSP integrated
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/fsZ8UnIb2zg-300x169.jpg)
Cadence Demonstration of Imaging Radar Applications on the Tensilica ConnX B10 DSP
Amol Borkar, Director of Product Marketing for Cadence Tensilica DSPs and Automotive Segment Director, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Borkar demonstrates the use of a Tensilica ConnX B10 DSP for automotive imaging radar applications. The ConnX B10 DSP is a highly efficient,
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/lTKXjuS2FnM-300x169.jpg)
BrainChip Demonstration of the Power of Temporal Event-based Neural Networks (TENNs)
Todd Vierra, Vice President of Customer Engagement at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Vierra demonstrates the efficient processing of generative text using Temporal Event-based Neural Networks (TENNs) compared to ChatGPT. The TENN, an innovative, lightweight neural network architecture, combines convolution in
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/yY_raEGgka0-300x169.jpg)
BrainChip Demonstration of Analyzing Head Pose, Eye Gaze and Emotion with Human Behavior AI
Todd Vierra, Vice President of Customer Engagement at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Vierra demonstrates how BrainChip’s Akida AKD1000 neuromorphic processor detects human emotion. Partnered with BeEmotion.ai, the system monitors the state of the user through real-time observation and perception of
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/FhxNz402n2M-300x169.jpg)
BrainChip Demonstration of Neuromorphic AI in a Compact Form Factor
Todd Vierra, Vice President of Customer Engagement at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Vierra demonstrates inference on the edge using visual wake word and Yolo models using the Akida Edge AI Box to detect and identify people. The Akida Edge AI
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/mNkL_JADTm4-300x169.jpg)
Axelera AI Demonstration of the High Performance Achievable with the Metis AIPU
Bram Verhoef, Co-founder of Axelera AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Verhoef demonstrates the high computer vision performance that can be achieved with his company’s Metis AIPU. A single Metis AIPU runs at over 200 TOPS and can run computer vision inference