Edge AI and Vision Alliance

“How Image Sensor and Video Compression Parameters Impact Vision Algorithms,” a Presentation from Amazon Lab126

Ilya Brailovskiy, Principal Engineer at Amazon Lab126, presents the "How Image Sensor and Video Compression Parameters Impact Vision Algorithms" tutorial at the May 2017 Embedded Vision Summit. Recent advances in deep learning algorithms have brought automated object detection and recognition to human accuracy levels on various test datasets. But algorithms that work well on an […]

Visual Intelligence Opportunities in Industry 4.0

In order for industrial automation systems to meaningfully interact with the objects they're identifying, inspecting and assembling, they must be able to see and understand their surroundings. Cost-effective and capable vision processors, fed by depth-discerning image sensors and running robust software algorithms, continue to transform longstanding industrial automation aspirations into reality. And, with the emergence […]

“Adventures in DIY Embedded Vision: The Can’t-miss Dartboard,” a Presentation from Mark Rober

Engineer, inventor and YouTube personality Mark Rober presents the "Adventures in DIY Embedded Vision: The Can’t-miss Dartboard" tutorial at the May 2017 Embedded Vision Summit. Can a mechanical engineer with no background in computer vision build a complex, robust, real-time computer vision system? Yes, with a little help from his friends. Rober fulfilled a three-year […]

“Performing Multiple Perceptual Tasks With a Single Deep Neural Network,” a Presentation from Magic Leap

Andrew Rabinovich, Director of Deep Learning at Magic Leap, presents the "Performing Multiple Perceptual Tasks With a Single Deep Neural Network" tutorial at the May 2017 Embedded Vision Summit. As more system developers consider incorporating visual perception into smart devices such as self-driving cars, drones and wearable computers, attention is shifting toward practical formulation and […]

Embedded Vision Insights: August 29, 2017 Edition

LETTER FROM THE EDITOR Dear Colleague, TensorFlow has become a popular framework for creating machine learning-based computer vision applications, especially for the development of deep neural networks (DNNs). If you’re planning to develop computer vision applications using deep learning and want to understand how to use TensorFlow to do it, then don’t miss an upcoming […]

“Using Satellites to Extract Insights on the Ground,” a Presentation from Orbital Insight

Boris Babenko, Senior Software Engineer at Orbital Insight, presents the "Using Satellites to Extract Insights on the Ground" tutorial at the May 2017 Embedded Vision Summit. Satellites are great for seeing the world at scale, but analyzing petabytes of images can be extremely time-consuming for humans alone. This is why machine vision is a perfect […]

“How to Choose a 3D Vision Technology,” a Presentation from Carnegie Robotics

Chris Osterwood, Chief Technical Officer at Carnegie Robotics, presents the "How to Choose a 3D Vision Technology" tutorial at the May 2017 Embedded Vision Summit. Designers of autonomous vehicles, robots, and many other systems face a critical challenge: which 3D perception technology to use? There is a wide variety of sensors on the […]

“Automakers at a Crossroads: How Embedded Vision and Autonomy Will Reshape the Industry,” a Presentation from Lux Research

Mark Bünger, VP of Research at Lux Research, presents the "Automakers at a Crossroads: How Embedded Vision and Autonomy Will Reshape the Industry" tutorial at the May 2017 Embedded Vision Summit. The auto and telecom industries have been dreaming of connected cars for twenty years, but their results have been mediocre and mixed. Now, just […]

Embedded Vision Insights: August 18, 2017 Edition

LETTER FROM THE EDITOR Dear Colleague, TensorFlow has become a popular framework for creating machine learning-based computer vision applications, especially for the development of deep neural networks. If you’re planning to develop computer vision applications using deep learning and want to understand how to use TensorFlow to do it, then don’t miss an upcoming full-day […]

“Introduction to Optics for Embedded Vision,” a Presentation from Edmund Optics

Jessica Gehlhar, Vision Solutions Engineer at Edmund Optics, presents the “Introduction to Optics for Embedded Vision” tutorial at the May 2017 Embedded Vision Summit. This talk provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics. She explains key parameters and concepts […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411