Edge AI and Vision Alliance

“Using Vision to Transform Retail,” a Presentation from IBM

Sumit Gupta, Vice President of AI, Machine Learning and HPC at IBM, presents the “Using Vision to Transform Retail” tutorial at the May 2018 Embedded Vision Summit. This talk explores how recent advances in deep learning-based computer vision have fueled new opportunities in retail. Using case studies based on deployed systems, Gupta explores how deep […]


Embedded Vision Insights: July 10, 2018 Edition

EMBEDDED VISION PERSPECTIVES
The Four Key Trends Driving the Proliferation of Visual Perception
With so much happening in computer vision applications and technology, and happening so fast, it can be difficult to see the big picture. In this talk, Jeff Bier, Founder of the Embedded Vision Alliance and Co-founder and President of BDTI, examines the […]


“Optimize Performance: Start Your Algorithm Development With the Imaging Subsystem,” a Presentation from Twisthink

Ryan Johnson, lead engineer at Twisthink, presents the “Optimize Performance: Start Your Algorithm Development With the Imaging Subsystem” tutorial at the May 2018 Embedded Vision Summit. Image sensor and algorithm performance are rapidly increasing, and software and hardware development tools are making embedded vision systems easier to develop. Even with these advancements, optimizing vision-based detection


“Real-time Calibration for Stereo Cameras Using Machine Learning,” a Presentation from Lucid VR

Sheldon Fernandes, Senior Software and Algorithms Engineer at Lucid VR, presents the “Real-time Calibration for Stereo Cameras Using Machine Learning” tutorial at the May 2018 Embedded Vision Summit. Calibration involves capturing raw data and processing it to get useful information about a camera’s properties. Calibration is essential to ensure that a camera’s output is as

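For readers unfamiliar with what calibration actually recovers, below is a minimal sketch of conventional offline stereo calibration with OpenCV checkerboard images. It is not the real-time, machine-learning-based approach the presentation covers, and the board dimensions, square size, and file names are illustrative assumptions.

    # Conventional offline stereo calibration sketch (OpenCV); board size,
    # square size, and file names are assumed for illustration.
    import glob
    import cv2
    import numpy as np

    BOARD = (9, 6)        # inner-corner count of the checkerboard (assumed)
    SQUARE = 0.025        # square edge length in meters (assumed)

    # 3D corner coordinates in the board's own coordinate frame
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

    obj_pts, left_pts, right_pts = [], [], []
    for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
        gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
        ok_l, corners_l = cv2.findChessboardCorners(gl, BOARD)
        ok_r, corners_r = cv2.findChessboardCorners(gr, BOARD)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(corners_l)
            right_pts.append(corners_r)

    size = gl.shape[::-1]  # (width, height) of the calibration images

    # Per-camera intrinsics (K) and lens distortion coefficients (d)
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)

    # Rotation R and translation T from the left camera to the right camera
    _, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo baseline (m):", float(np.linalg.norm(T)))

The intrinsics, distortion coefficients, and the rotation/translation between the two cameras produced here are the kind of "useful information about a camera's properties" the excerpt refers to.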

“Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision,” a Keynote Presentation from Dr. Takeo Kanade

Dr. Takeo Kanade, U.A. and Helen Whitaker Professor at Carnegie Mellon University, presents the “Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision” tutorial at the May 2018 Embedded Vision Summit. In this keynote presentation, Dr. Kanade shares his experiences and lessons learned in developing a vast range of


Embedded Vision Insights: June 26, 2018 Edition

DEEP LEARNING FOR VISION PROCESSING
The Caffe2 Framework for Mobile and Embedded Deep Learning
Fei Sun, software engineer at Facebook, introduces Caffe2, a new open-source machine learning framework, in this presentation. Sun also explains how Facebook is using Caffe2 to enable computer vision in mobile and embedded devices.
Methods for Understanding How Deep Neural Networks […]

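As a concrete point of reference for the Caffe2 item above, here is a minimal sketch (not taken from the presentation) of running inference with an exported Caffe2 model pair through the Python workspace.Predictor interface; the file names and input shape are placeholders. Caffe2 has since been merged into the PyTorch codebase.

    # Minimal Caffe2 inference sketch; file names and input shape are placeholders.
    import numpy as np
    from caffe2.python import workspace

    # Serialized protobufs produced when a Caffe2 model is exported
    with open("init_net.pb", "rb") as f:      # weights
        init_net = f.read()
    with open("predict_net.pb", "rb") as f:   # network definition
        predict_net = f.read()

    predictor = workspace.Predictor(init_net, predict_net)

    # Dummy NCHW input; a real deployment would feed a preprocessed camera frame
    img = np.random.rand(1, 3, 224, 224).astype(np.float32)
    results = predictor.run([img])
    print(results[0].shape)  # e.g. a vector of class scores for a classification model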

“A Physics-based Approach to Removing Shadows and Shading in Real Time,” a Presentation from Tandent Vision Science

Bruce Maxwell, Director of Research at Tandent Vision Science, presents the “A Physics-based Approach to Removing Shadows and Shading in Real Time” tutorial at the May 2018 Embedded Vision Summit. Shadows cast on ground surfaces can create false features and modify the color and appearance of real features, masking important information used by autonomous vehicles,


“Generative Sensing: Reliable Recognition from Unreliable Sensor Data,” a Presentation from Arizona State University

Lina Karam, Professor and Computer Engineering Director at Arizona State University, presents the “Generative Sensing: Reliable Recognition from Unreliable Sensor Data” tutorial at the May 2018 Embedded Vision Summit. While deep neural networks (DNNs) perform on par with – or better than – humans on pristine high-resolution images, DNN performance is significantly worse than human

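To make the gap described above concrete, the following sketch (not from the presentation) compares a pretrained classifier's top prediction and confidence on a clean image against a blurred copy of the same image, using PyTorch and torchvision; the model, file name, and blur radius are arbitrary assumptions.

    # Compare a pretrained classifier on a clean vs. a degraded (blurred) image.
    # Model choice, image file, and blur radius are assumptions for illustration.
    import torch
    from torchvision import models, transforms
    from PIL import Image, ImageFilter

    model = models.resnet18(pretrained=True).eval()
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    clean = Image.open("sample.jpg").convert("RGB")
    degraded = clean.filter(ImageFilter.GaussianBlur(radius=4))  # simulated low-quality capture

    with torch.no_grad():
        for name, im in [("clean", clean), ("degraded", degraded)]:
            logits = model(preprocess(im).unsqueeze(0))
            conf, idx = torch.softmax(logits, dim=1).max(dim=1)
            print(name, "class index:", int(idx), "confidence:", round(float(conf), 3))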

“Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms,” a Presentation from Microsoft

Anirudh Koul, Senior Data Scientist, and Jin Yamamoto, Principal Program Manager, both from Microsoft, present the “Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms” tutorial at the May 2018 Embedded Vision Summit. Microsoft offers its state-of-the-art computer vision algorithms, used internally in several products, through the Cognitive Services cloud APIs.

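For orientation, here is a minimal sketch of calling a Cognitive Services Computer Vision REST endpoint (the "analyze" operation) with Python's requests library; the region, API version, subscription key, and image URL below are placeholders, not details taken from the presentation.

    # Minimal Cognitive Services Computer Vision "analyze" call; the region,
    # API version, key, and image URL are placeholders.
    import requests

    SUBSCRIPTION_KEY = "<your-key>"
    ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"

    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    params = {"visualFeatures": "Description,Tags"}
    body = {"url": "https://example.com/storefront.jpg"}

    resp = requests.post(ENDPOINT, headers=headers, params=params, json=body)
    resp.raise_for_status()
    analysis = resp.json()
    # Print the auto-generated caption and the detected tags
    print(analysis["description"]["captions"][0]["text"])
    print([tag["name"] for tag in analysis["tags"]])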

May 2018 Embedded Vision Summit Introductory Presentation (Day 1)

Jeff Bier, Founder of the Embedded Vision Alliance, welcomes attendees to the May 2018 Embedded Vision Summit on May 22, 2018 (Day 1). Bier provides an overview of the embedded vision market opportunity, challenges, solutions and trends. He also introduces the Embedded Vision Alliance and the resources it offers for both product creators and potential


