Arm Demonstration of Image Classification Using Arm NN and the Compute Library

Gian Marco Iodice, Senior Software Engineer at Arm, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Iodice demonstrates image classification using Arm NN and the Compute Library, showing how they provide Arm-based platforms with the flexibility to switch between the CPU and GPU for easy and performant image classification.

Arm Demonstration of the Company’s Object Detection Processor

Alexey Lopich, Principal Hardware Engineer and Team Lead at Arm, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Lopich demonstrates Arm’s Object Detection processor, showing how it detects objects – from 50×60 pixels to full screen – in real time, at high speed (60 fps) and in high resolution (Full HD).

Horizon Robotics Demonstration of Its Autonomous Driving Platform Powered by Its Embedded AI Chip

Su Li, Senior Technical Account Manager at Horizon Robotics, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Li demonstrates the company’s autonomous driving computing platform, Matrix, based on Horizon Robotics’ self-developed embedded AI processor architecture, BPU2.0. Matrix offers powerful perceptual computing capability and can provide a high-performance sensing system for L4 autonomous driving.

Horizon Robotics Demonstration of Its Smart City Solution Powered by Its Embedded AI Chip

Su Li, Senior Technical Account Manager at Horizon Robotics, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Li demonstrates the company’s smart city solution, a surveillance video analytics system based on the Sunrise AI processor, an embedded AI computer vision chip developed by the company.

Horizon Robotics Demonstration of Its Smart Retail Solution Powered by Its Embedded AI Chip

Su Li, Senior Technical Account Manager at Horizon Robotics, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Li demonstrates the company’s smart retail solution using an AI-based face recognition camera, which can capture up to 200 faces per frame, detect at full frame rate in 1080p, and identify the age and gender of detected faces.

Boulder AI Demonstration of Its GPU-enabled DNNCam (Deep Neural Network Camera)

Darren Odom, CEO of Boulder AI, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Odom demonstrates DNNCam, an intelligent GPU-enabled deep neural network camera that is waterproof and dust-proof and runs AI algorithms at the image source. The Boulder AI edge camera executes AI/machine learning frameworks and computer vision algorithms directly on the device.

NXP Semiconductors Demonstration of Deep Learning-based Multi-object Detection Using S32V234 Vision Processor

Ali Osman Ors, Director of AI Strategy and Partnerships for Automotive at NXP Semiconductors, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Ors demonstrates deep learning-based multi-object detection, built on MobileNet and the Single Shot Detector (SSD), running in real time on the embedded automotive-grade S32V234 vision SoC.

NXP Semiconductors Demonstration of Optimized Performance and Memory Utilization for Object Detection on i.MX RT

Markus Levy, Director of Enabling Technologies at NXP Semiconductors, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Levy demonstrates the i.MX RT architecture, which represents the convergence of low-power application processors and high-performance microcontrollers. This particular demo shows the i.MX RT1050 MCU, which is based on a 600MHz Arm® Cortex®-M7 core.

AImotive Scales Toward Productization of Self-Driving Technology

Ferenc Pintér, Head of Software at AImotive, provides a company update at the May 2018 Embedded Vision Summit. Specifically, Pintér explains how AImotive’s technological approach and development pipeline support the creation of scalable autonomous driving solutions. After discussing the fundamental role of simulation in self-driving development, Pintér introduces aiSim and aiWare, two of AImotive’s technologies.

Intel Demonstration of Scalable Deep Learning-based Face Detection and Recognition with FPGAs

Richard Chuang, Global Platform Solutions Architect at Intel, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Chuang demonstrates an end-to-end face detection and recognition reference solution using the OpenVINO toolkit. Four primary algorithms run in this demo system on top of OpenVINO: face detection, landmark detection, feature extraction, and face recognition.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411