Edge AI and Vision Alliance


Embedded Vision Insights: September 6, 2018 Edition

LETTER FROM THE EDITOR Dear Colleague, The next session of the Embedded Vision Alliance's in-person, hands-on technical training class series, Deep Learning for Computer Vision with TensorFlow, takes place in less than a month in San Jose, California. These classes give you the critical knowledge you need to develop deep learning computer vision applications with […]


“Architecting a Smart Home Monitoring System with Millions of Cameras,” a Presentation from Comcast

Hongcheng Wang, Senior Manager of Technical R&D at Comcast, presents the “Architecting a Smart Home Monitoring System with Millions of Cameras” tutorial at the May 2018 Embedded Vision Summit. Video monitoring is a critical capability for the smart home. With millions of cameras streaming to the cloud, efficient and scalable video analytics becomes essential. To


“The Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the “Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge” tutorial at the May 2018 Embedded Vision Summit. Regardless of the processing topology—distributed, centralized or hybrid—sensor processing in automotive is an edge compute problem. However, with



Embedded Vision Insights: August 21, 2018 Edition

VISION PROCESSING IN THE CLOUD Introduction to Creating a Vision Solution in the Cloud A growing number of applications utilize cloud computing for execution of computer vision algorithms. In this presentation, Nishita Sant, Computer Vision Scientist at GumGum, introduces the basics of creating a cloud-based vision service, based on GumGum's experience developing and deploying a


“Computer Vision Hardware Acceleration for Driver Assistance,” a Presentation from Bosch

Markus Tremmel, Chief Expert for ADAS at Bosch, presents the “Computer Vision Hardware Acceleration for Driver Assistance” tutorial at the May 2018 Embedded Vision Summit. With highly automated and fully automated driver assistance systems just around the corner, next-generation ADAS sensors and central ECUs will have much higher safety and functional requirements to cope


Computer Vision for Augmented Reality in Embedded Designs

Augmented reality (AR) and related technologies and products are becoming increasingly popular and prevalent, led by their adoption in smartphones, tablets and other mobile computing and communications devices. While developers of more deeply embedded platforms are also motivated to incorporate AR capabilities in their products, the comparative scarcity of processing, memory, storage, and networking resources


“The Roomba 980: Computer Vision Meets Consumer Robotics,” a Presentation from iRobot

Mario Munich, Senior Vice President of Technology at iRobot, presents the “Roomba 980: Computer Vision Meets Consumer Robotics” tutorial at the May 2018 Embedded Vision Summit. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. The availability of affordable electro-mechanical components, powerful embedded microprocessors and


“Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors,” a Presentation from the University of North Carolina at Chapel Hill

Alexander C. Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, presents the “Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors” tutorial at the May 2018 Embedded Vision Summit. Berg’s group’s 2016 work on single-shot object detection (SSD) reduced the computation cost for accurate detection of object
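Single-shot detectors like SSD emit many overlapping candidate boxes per object, which a non-maximum suppression (NMS) pass then collapses. As a rough illustration of that post-processing step, here is a minimal sketch in plain Python; the box coordinates, scores, and the 0.5 overlap threshold are illustrative values, not taken from the SSD work itself.

```python
# Non-maximum suppression: greedily keep the highest-scoring box and
# drop any remaining box that overlaps a kept one too much.
# Boxes are (x1, y1, x2, y2, score) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, iou_thresh=0.5):
    """Keep boxes in descending score order, suppressing near-duplicates."""
    keep = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_thresh for k in keep):
            keep.append(box)
    return keep

detections = [
    (10, 10, 50, 50, 0.9),    # strong detection
    (12, 12, 52, 52, 0.6),    # near-duplicate of the first, suppressed
    (80, 80, 120, 120, 0.8),  # separate object, kept
]
print(nms(detections))  # two boxes survive
```

This greedy O(n²) form is the textbook variant; production detectors typically run a vectorized equivalent on the accelerator.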


“Building a Typical Visual SLAM Pipeline,” a Presentation from Virgin Hyperloop One

YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the “Building a Typical Visual SLAM Pipeline” tutorial at the May 2018 Embedded Vision Summit. Maps are important for both human and robot navigation. SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has
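To make the localization half of a SLAM pipeline concrete, here is a toy dead-reckoning sketch: composing relative 2D pose increments (dx, dy, dθ), as a tracking front end does between keyframes. The square-loop odometry values are hypothetical and chosen so the trajectory closes on itself; a real pipeline would also build a map and correct drift via loop closure.

```python
# Compose 2D robot-frame motion increments onto a global (x, y, theta) pose.
import math

def compose(pose, delta):
    """Apply a relative motion (expressed in the robot frame) to a global pose."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
# Drive 1 m forward, then turn 90 degrees; repeat four times (a square loop).
odometry = [(1.0, 0.0, math.pi / 2)] * 4
for delta in odometry:
    pose = compose(pose, delta)
print(pose)  # x and y return to (0, 0); heading has accumulated a full turn
```

With perfect odometry the loop closes exactly; in practice each increment carries noise, which is why SLAM back ends optimize the whole trajectory rather than trusting open-loop integration.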


“Visual-Inertial Tracking for AR and VR,” a Presentation from Meta

Timo Ahonen, Director of Engineering for Computer Vision at Meta, presents the “Visual-Inertial Tracking for AR and VR” tutorial at the May 2018 Embedded Vision Summit. This tutorial covers the main current approaches to solving the problem of tracking the motion of a display for AR and VR use cases. Ahonen covers methods for inside-out
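The inertial side of visual-inertial tracking can be reduced to a toy complementary filter: integrate the gyro for smooth short-term orientation, then pull the estimate toward the accelerometer's gravity-based tilt reading to cancel drift. The 0.98 gain, 100 Hz rate, and gyro bias below are illustrative assumptions, not details of Meta's system.

```python
# One-axis complementary filter: blend integrated gyro rate with the
# accelerometer's absolute (but noisy) tilt estimate.

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """High-pass the gyro integral, low-pass the accelerometer angle."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
dt = 0.01  # 100 Hz IMU samples
# A biased gyro reports 0.05 rad/s while the device is actually level
# (accelerometer tilt = 0). Open-loop integration would drift 0.5 rad
# over these 10 seconds; the filter holds the error near 0.025 rad.
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.05, accel_angle=0.0, dt=dt)
print(angle)
```

Full visual-inertial systems replace this scalar blend with a Kalman-style filter or sliding-window optimization over camera and IMU measurements, but the drift-correction intuition is the same.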


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411