Edge AI and Vision Alliance

“Approaches for Energy Efficient Implementation of Deep Neural Networks,” a Presentation from MIT

Vivienne Sze, Associate Professor at MIT, presents the “Approaches for Energy Efficient Implementation of Deep Neural Networks” tutorial at the May 2018 Embedded Vision Summit. Deep neural networks (DNNs) are proving very effective for a variety of challenging machine perception tasks. But these algorithms are very computationally demanding. To enable DNNs to be used in […]


“Understanding Automotive Radar: Present and Future,” a Presentation from NXP Semiconductors

Arunesh Roy, Radar Algorithms Architect at NXP Semiconductors, presents the “Understanding Automotive Radar: Present and Future” tutorial at the May 2018 Embedded Vision Summit. Thanks to its proven, all-weather range detection capability, radar is increasingly used for driver assistance functions such as automatic emergency braking and adaptive cruise control. Radar is considered a crucial sensing […]


“Building Efficient CNN Models for Mobile and Embedded Applications,” a Presentation from Facebook

Peter Vajda, Research Scientist at Facebook, presents the “Building Efficient CNN Models for Mobile and Embedded Applications” tutorial at the May 2018 Embedded Vision Summit. Recent advances in efficient deep learning models have led to many potential applications in mobile and embedded devices. In this talk, Vajda discusses state-of-the-art model architectures and introduces Facebook’s work […]



Embedded Vision Insights: September 6, 2018 Edition

LETTER FROM THE EDITOR Dear Colleague, The next session of the Embedded Vision Alliance's in-person, hands-on technical training class series, Deep Learning for Computer Vision with TensorFlow, takes place in less than a month in San Jose, California. These classes give you the critical knowledge you need to develop deep learning computer vision applications with […]


“Architecting a Smart Home Monitoring System with Millions of Cameras,” a Presentation from Comcast

Hongcheng Wang, Senior Manager of Technical R&D at Comcast, presents the “Architecting a Smart Home Monitoring System with Millions of Cameras” tutorial at the May 2018 Embedded Vision Summit. Video monitoring is a critical capability for the smart home. With millions of cameras streaming to the cloud, efficient and scalable video analytics becomes essential. To […]


“The Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the “Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge” tutorial at the May 2018 Embedded Vision Summit. Regardless of the processing topology—distributed, centralized or hybrid—sensor processing in automotive is an edge compute problem. However, with […]



Embedded Vision Insights: August 21, 2018 Edition

VISION PROCESSING IN THE CLOUD Introduction to Creating a Vision Solution in the Cloud A growing number of applications utilize cloud computing for execution of computer vision algorithms. In this presentation, Nishita Sant, Computer Vision Scientist at GumGum, introduces the basics of creating a cloud-based vision service, based on GumGum's experience developing and deploying a […]


“Computer Vision Hardware Acceleration for Driver Assistance,” a Presentation from Bosch

Markus Tremmel, Chief Expert for ADAS at Bosch, presents the “Computer Vision Hardware Acceleration for Driver Assistance” tutorial at the May 2018 Embedded Vision Summit. With highly automated and fully automated driver assistance systems just around the corner, next-generation ADAS sensors and central ECUs will have much higher safety and functional requirements to cope […]


Computer Vision for Augmented Reality in Embedded Designs

Augmented reality (AR) and related technologies and products are becoming increasingly popular and prevalent, led by their adoption in smartphones, tablets and other mobile computing and communications devices. While developers of more deeply embedded platforms are also motivated to incorporate AR capabilities in their products, the comparative scarcity of processing, memory, storage, and networking resources […]


“The Roomba 980: Computer Vision Meets Consumer Robotics,” a Presentation from iRobot

Mario Munich, Senior Vice President of Technology at iRobot, presents the “Roomba 980: Computer Vision Meets Consumer Robotics” tutorial at the May 2018 Embedded Vision Summit. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. The availability of affordable electro-mechanical components, powerful embedded microprocessors and […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone

+1 (925) 954-1411