Videos

NXP Demonstration of Traffic Sign Recognition with Neural Networks

Rafal Malewski, Head of the Graphics Technology Engineering Center at NXP Semiconductors, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Malewski demonstrates traffic sign recognition with neural networks. This embedded solution supports real-time vision and rendering. Its object recognition capabilities employ traditional segmentation along with a […]

NXP Demonstration of a Real-time CNN Image Classifier and Pedestrian Detection on the S32V234 Processor

Ali Ors, Director of R&D, ADAS at NXP Semiconductors, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Ors demonstrates a real-time CNN image classifier and pedestrian detection on NXP’s automotive-grade S32V234 ADAS vision processor. He demonstrates an optimized implementation of a convolutional neural network used to […]

“Another Set of Eyes: Machine Vision Automation Solutions for In Vitro Diagnostics,” a Presentation from Microscan Systems

Sadie Zeller, Manager of Global Product Management and the Clinical Vertical Market at Microscan Systems, presents the "Another Set of Eyes: Machine Vision Automation Solutions for In Vitro Diagnostics" tutorial at the May 2017 Embedded Vision Summit. In vitro diagnostics (IVD) are tests that can detect diseases, conditions, or infections. The use of automation, including […]

“Using Markerless Motion Capture to Win Baseball Games,” a Presentation from KinaTrax

Steven Cadavid, President of KinaTrax, presents the "Using Markerless Motion Capture to Win Baseball Games" tutorial at the May 2017 Embedded Vision Summit. KinaTrax develops a markerless motion capture system that computes the kinematic data of an in-game baseball pitch. The system is installed in several Major League Baseball ballparks including Wrigley Field, home of […]

Nextchip Demonstration of ADAS Functions on Its Pre-processor and ISP

Mathias Sunghoon Chung, Global Business Development Manager at Nextchip, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Chung demonstrates ADAS (advanced driver assistance systems) functions running on the company's pre-processor and ISP (image signal processor).

Nextchip Demonstration of HDR and LFM Processing on Its ISP

Mathias Sunghoon Chung, Global Business Development Manager at Nextchip, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Chung demonstrates HDR (high dynamic range) and LFM (LED flicker mitigation) processing running on the company's ISP (image signal processor).

“A Fast Object Detector for ADAS using Deep Learning,” a Presentation from Panasonic

Minyoung Kim, Senior Research Engineer at Panasonic Silicon Valley Laboratory, presents the "A Fast Object Detector for ADAS using Deep Learning" tutorial at the May 2017 Embedded Vision Summit. Object detection has been one of the most important research areas in computer vision for decades. Recently, deep neural networks (DNNs) have led to significant improvement […]

“Unsupervised Everything,” a Presentation from Panasonic

Luca Rigazio, Director of Engineering for the Panasonic Silicon Valley Laboratory, presents the "Unsupervised Everything" tutorial at the May 2017 Embedded Vision Summit. The large amount of multi-sensory data available for autonomous intelligent systems is just astounding. The power of deep architectures to model these practically unlimited datasets is limited by only two factors: computational […]

Luxoft Demonstration of Its Machine Learning Platform Toolkit

Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo demonstrates how a machine learning platform identifies multiple faces it has already "seen" before. This technology includes all components necessary for multiple face recognition as well as a data pipeline […]

Luxoft Demonstration of an Optimized Stereo Depth Map

Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo demonstrates how an embedded system platform extracts a depth map out of what’s being filmed. This complex process is done in real time, allowing devices to understand complex dynamic 3D […]

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411