Edge AI and Vision Alliance

“Image Sensor Formats and Interfaces for IoT Applications,” a Presentation from Sony

Tatsuya Sugioka, Imaging System Architect at Sony Corporation, presents the "Image Sensor Formats and Interfaces for IoT Applications" tutorial at the May 2017 Embedded Vision Summit. Image sensors provide the essential input for embedded vision. Hence, the choice of image sensor format and interface is critical for embedded vision system developers. In this talk, Sugioka […]



Embedded Vision Insights: September 26, 2017 Edition

LETTER FROM THE EDITOR Dear Colleague, Deep neural networks (DNNs) are proving very effective for a variety of challenging machine perception tasks, but these algorithms are very computationally demanding. To enable DNNs to be used in practical applications, it’s critical to find efficient ways to implement them. The Embedded Vision Alliance will delve into these


“Collaboratively Benchmarking and Optimizing Deep Learning Implementations,” a Presentation from General Motors

Unmesh Bordoloi, Senior Researcher at General Motors, presents the "Collaboratively Benchmarking and Optimizing Deep Learning Implementations" tutorial at the May 2017 Embedded Vision Summit. For car manufacturers and other OEMs, selecting the right processors to run deep learning inference for embedded vision applications is a critical but daunting task. One challenge is the vast number


“New Dataflow Architecture for Machine Learning,” a Presentation from Wave Computing

Chris Nicol, CTO at Wave Computing, presents the "New Dataflow Architecture for Machine Learning" tutorial at the May 2017 Embedded Vision Summit. Data scientists have made tremendous advances in the use of deep neural networks (DNNs) to enhance business models and service offerings. But training DNNs can take a week or more using traditional hardware


Use a Camera Model to Accelerate Camera System Design

This blog post was originally published by Twisthink. It is reprinted here with the permission of Twisthink. The exciting world of embedded cameras is experiencing rapid growth. Digital-imaging technology is being integrated into a wide range of new products and systems. Embedded cameras are becoming widely adopted in the automotive market, security and surveillance markets,


“Enabling the Full Potential of Machine Learning,” a Presentation from Wave Computing

Derek Meyer, CEO of Wave Computing, presents the "Enabling the Full Potential of Machine Learning" tutorial at the May 2017 Embedded Vision Summit. With the growing recognition that “data is the new oil,” more companies are looking to machine learning to gain competitive advantages and create new business models. But the machine learning industry is



Embedded Vision Insights: September 12, 2017 Edition

LETTER FROM THE EDITOR Dear Colleague, Deep neural networks (DNNs) are proving very effective for a variety of challenging machine perception tasks, but these algorithms are very computationally demanding. To enable DNNs to be used in practical applications, it’s critical to find efficient ways to implement them. The Embedded Vision Alliance will delve into these


“How Image Sensor and Video Compression Parameters Impact Vision Algorithms,” a Presentation from Amazon Lab126

Ilya Brailovskiy, Principal Engineer at Amazon Lab126, presents the "How Image Sensor and Video Compression Parameters Impact Vision Algorithms" tutorial at the May 2017 Embedded Vision Summit. Recent advances in deep learning algorithms have brought automated object detection and recognition to human accuracy levels on various test datasets. But algorithms that work well on an


Visual Intelligence Opportunities in Industry 4.0

For industrial automation systems to meaningfully interact with the objects they're identifying, inspecting and assembling, they must be able to see and understand their surroundings. Cost-effective and capable vision processors, fed by depth-discerning image sensors and running robust software algorithms, continue to turn longstanding industrial automation aspirations into reality. And, with the emergence


“Adventures in DIY Embedded Vision: The Can’t-miss Dartboard,” a Presentation from Mark Rober

Engineer, inventor and YouTube personality Mark Rober presents the "Adventures in DIY Embedded Vision: The Can’t-miss Dartboard" tutorial at the May 2017 Embedded Vision Summit. Can a mechanical engineer with no background in computer vision build a complex, robust, real-time computer vision system? Yes, with a little help from his friends. Rober fulfilled a three-year


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411