Videos

Nextchip Demonstration of ADAS Functions on Its Pre-processor and ISP

Mathias Sunghoon Chung, Global Business Development Manager at Nextchip, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Chung demonstrates ADAS (advanced driver assistance systems) functions running on the company's pre-processor and ISP (image signal processor).

Nextchip Demonstration of HDR and LFM Processing on Its ISP

Mathias Sunghoon Chung, Global Business Development Manager at Nextchip, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Chung demonstrates HDR (high dynamic range) and LFM (LED flicker mitigation) processing running on the company's ISP (image signal processor).
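HDR merging of this kind is done in dedicated hardware on Nextchip's ISP, but the underlying idea can be illustrated in software. The sketch below is a generic exposure-fusion routine, not Nextchip's algorithm: it combines frames captured at different exposure times into one radiance estimate, assuming a linear sensor response and weighting well-exposed (mid-range) pixels most heavily so that clipped highlights and noisy shadows contribute little.

```python
import numpy as np

def merge_exposures(frames, times):
    """Merge differently exposed frames (8-bit values) into a radiance
    estimate, assuming a linear sensor: radiance ~ pixel_value / time."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for f, t in zip(frames, times):
        # Hat-shaped weight: peaks at mid-gray, near zero at 0 and 255,
        # so saturated or underexposed pixels are mostly ignored.
        w = 1.0 - np.abs(f / 255.0 - 0.5) * 2.0
        w = np.maximum(w, 1e-4)  # avoid division by zero when all clip
        num += w * (f / t)
        den += w
    return num / den
```

For example, a pixel that clips at 255 in the long exposure is recovered from the short exposure, where it is still in range.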

“A Fast Object Detector for ADAS using Deep Learning,” a Presentation from Panasonic

Minyoung Kim, Senior Research Engineer at Panasonic Silicon Valley Laboratory, presents the "A Fast Object Detector for ADAS using Deep Learning" tutorial at the May 2017 Embedded Vision Summit. Object detection has been one of the most important research areas in computer vision for decades. Recently, deep neural networks (DNNs) have led to significant improvement…

“Unsupervised Everything,” a Presentation from Panasonic

Luca Rigazio, Director of Engineering for the Panasonic Silicon Valley Laboratory, presents the "Unsupervised Everything" tutorial at the May 2017 Embedded Vision Summit. The large amount of multi-sensory data available for autonomous intelligent systems is just astounding. The power of deep architectures to model these practically unlimited datasets is limited by only two factors: computational…

Luxoft Demonstration of Its Machine Learning Platform Toolkit

Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo demonstrates how a machine learning platform identifies multiple faces it has already "seen" before. This technology includes all components necessary for multiple face recognition as well as a data pipeline…
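The matching step in a system like this is typically a nearest-neighbor lookup over face embeddings: each enrolled face is stored as a feature vector, and a new face is identified by cosine similarity against that gallery, with a threshold to reject unknown faces. The sketch below illustrates that general technique, not Luxoft's actual toolkit; the names and threshold are illustrative assumptions.

```python
import numpy as np

def identify(embedding, gallery, names, threshold=0.6):
    """Match one face embedding against a gallery of enrolled embeddings
    by cosine similarity; return the best-matching name, or None if no
    enrolled face is similar enough (an unknown face)."""
    emb = embedding / np.linalg.norm(embedding)
    gal = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gal @ emb                 # cosine similarity to each enrolled face
    best = int(np.argmax(sims))
    return names[best] if sims[best] >= threshold else None
```

In practice the embeddings come from a face-recognition network; here any unit-comparable vectors stand in for them.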

Luxoft Demonstration of an Optimized Stereo Depth Map

Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo demonstrates how an embedded system platform extracts a depth map out of what’s being filmed. This complex process is done in real time, allowing devices to understand complex dynamic 3D…
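As a rough illustration of the underlying technique (not Luxoft's optimized implementation), stereo depth extraction can be sketched as block matching: for each pixel in the left image, search along the same row of the right image for the horizontal shift (disparity) whose small window best matches; disparity is inversely proportional to depth. A naive NumPy version, assuming rectified grayscale images:

```python
import numpy as np

def disparity_map(left, right, max_disp=8, block=3):
    """Naive block-matching stereo on rectified grayscale images:
    for each left-image pixel, pick the disparity d that minimizes the
    sum of absolute differences over a block x block window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp + 1):
                cand = right[y-half:y+half+1,
                             x-d-half:x-d+half+1].astype(np.int32)
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A real-time embedded pipeline replaces these Python loops with vectorized or hardware-accelerated matching, plus filtering of unreliable matches, but the cost-minimization idea is the same.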

“How 3D Maps Will Change the World,” a Presentation from Augmented Pixels

Vitaliy Goncharuk, CEO and Founder of Augmented Pixels, presents the "How 3D Maps Will Change the World" tutorial at the May 2017 Embedded Vision Summit. In the very near future, cars, robots, mobile phones and augmented reality glasses will incorporate inexpensive and efficient depth sensing. This will quickly bring us to a new world in…

“Designing a Vision-based, Solar-powered Rear Collision Warning System,” a Presentation from Pearl Automation

Aman Sikka, Vision System Architect at Pearl Automation, presents the "Designing a Vision-based, Solar-powered Rear Collision Warning System" tutorial at the May 2017 Embedded Vision Summit. Bringing vision algorithms into mass production requires carefully balancing trade-offs between accuracy, performance, usability, and system resources. In this talk, Sikka describes the vision algorithms along with the system…

Imagination Technologies Demonstration of PowerVR GPUs’ Versatility for Running CNNs

Chris Longstaff, Senior Director of Marketing at Imagination Technologies, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Longstaff demonstrates the versatility of the company's PowerVR GPUs for running CNNs…or as he likes to call it, "the banana detector, evolved." In the demo, he shows the PowerVR GPU successfully…

Imagination Technologies Demonstration of the OpenVX Neural Network Extension on a PowerVR GPU

Chris Longstaff, Senior Director of Marketing at Imagination Technologies, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Longstaff demonstrates an OpenVX neural network extension running on a PowerVR GPU. The network learns to count to 9; the demo shows how he can practice drawing the digits on…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411