

Dear Colleague,

Developer Survey Webinar

Next Thursday, February 27, Jeff Bier, founder of the Edge AI and Vision Alliance, will present two sessions of a free one-hour webinar, “Algorithms, Processors and Tools for Visual AI: Analysis, Insights and Forecasts.” Every year since 2015, the Alliance has surveyed developers of computer vision-based systems and applications to understand which chips and tools they use to build their visual AI products. Our most recent survey, conducted in October 2019, drew responses from more than 700 computer vision developers across a wide range of industries, organizations, geographic locations and job types. In this webinar, Bier will share insights from these results into the most popular hardware and software platforms for vision-enabled end products. He will also compare this year’s results with past years’ survey data, identifying trends and extrapolating them into forecasts. The first session takes place at 9 am Pacific Time (noon Eastern Time), timed for attendees in Europe and the Americas; the second, at 6 pm Pacific Time (10 am China Standard Time on February 28), is intended for attendees in Asia. To register, please see the event page for the session you’re interested in.

Registration for the May 18-21, 2020 Embedded Vision Summit, the preeminent conference on practical visual AI and computer vision, is now open. Register today with promo code SUPEREARLYBIRD20 to receive your Super Early Bird Discount, which ends this Friday! The Alliance is also now accepting applications for the 2020 Vision Product of the Year Awards competition. Open to Alliance Member companies, the awards celebrate the innovation of the industry’s leading companies developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year Award recognizes your leadership in computer vision as evaluated by independent industry experts; winners will be announced at the 2020 Embedded Vision Summit. For more information, and to enter, please see the program page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Performance Analysis for Optimizing Embedded Deep Learning Inference Software
Arm
Deep learning on embedded devices is currently enjoying significant success in a number of vision applications—particularly smartphones, where increasingly prevalent AI cameras are able to enhance every captured moment. However, the considerable number of deep learning network architectures proposed every year has led to real challenges for software developers who need to implement these demanding algorithms very efficiently. In this presentation, Gian Marco Iodice, Staff Compute Performance Software Engineer at Arm, presents a structured approach for performance analysis of deep learning software implementations. He examines the fundamentals of performance analysis for deep learning, presenting metrics and methodologies. He then shows how a top-down approach can be used to detect and fix performance bottlenecks, creating efficient deep neural network software implementations. He also illustrates typical software optimizations that can be used to make the best use of available computational resources.
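The top-down methodology Iodice describes starts by measuring where time actually goes before optimizing anything. The sketch below is a generic, hypothetical illustration of that first step (a per-layer timing harness with stand-in workloads), not Arm's tooling or the talk's actual code.

```python
import time

# Hypothetical workloads standing in for real inference kernels.
def conv_layer():
    sum(i * i for i in range(200_000))

def fc_layer():
    sum(i for i in range(50_000))

LAYERS = [("conv1", conv_layer), ("fc1", fc_layer)]

def profile(layers, runs=5):
    """Time each layer over several runs and report average cost per run."""
    timings = {}
    for name, fn in layers:
        start = time.perf_counter()
        for _ in range(runs):
            fn()
        timings[name] = (time.perf_counter() - start) / runs
    # Top-down: sort by cost so optimization effort targets the biggest bottleneck first.
    return sorted(timings.items(), key=lambda kv: kv[1], reverse=True)

for name, seconds in profile(LAYERS):
    print(f"{name}: {seconds * 1000:.2f} ms/run")
```

Once the dominant layer is identified, the same harness can verify that an optimization (e.g., a better memory layout or vectorized kernel) actually moved the needle.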

How to Get the Best Deep Learning Performance with the OpenVINO Toolkit
Intel
Tremendous recent progress in deep learning and computer vision algorithms has made it possible to create innovative applications that were not previously feasible. However, moving from academic research to real-world algorithm deployment is still complicated due to the amount of native programming and low-level knowledge that is required to unleash the full performance of processing platforms. This talk from Yury Gorbachev, Principal Engineer at Intel, demonstrates how the Intel OpenVINO toolkit makes it easy to move deep learning algorithms from research to deployment. Gorbachev walks through the most important toolkit features that allow you to create lightweight applications and reach maximum performance on various processing platforms, including traditional CPUs as well as accelerators such as VPUs, GPUs and FPGAs.


Using Deep Learning for Video Event Detection on a Compute Budget
PathPartner Technology
Convolutional neural networks (CNNs) have made tremendous strides in object detection and recognition in recent years. However, extending the CNN approach to video or volumetric data poses tough challenges, including trade-offs between representation quality and computational complexity, a particular concern on embedded platforms with tight compute budgets. In this presentation, Praveen Nayak, Tech Lead at PathPartner Technology, explores the use of CNNs for video understanding. Nayak reviews the evolution of deep spatio-temporal representation learning methods, from C3D to Conv-LSTMs, for vision-based human activity detection. He then proposes a decoupled alternative to this fused approach, combining a low-complexity predictive temporal segment proposal model with a fine-grained (and potentially high-complexity) inference model. PathPartner finds that this hybrid approach reduces computational load with minimal loss of accuracy, enabling effective solutions to these demanding inference tasks.
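The decoupled design described above can be sketched generically: a cheap temporal proposal stage filters frames, and only the surviving segments reach an expensive model. Everything below (the motion-score threshold and both stand-in models) is a hypothetical illustration of the pattern, not PathPartner's implementation.

```python
def cheap_proposal(frame_activity, threshold=0.5):
    """Low-complexity stage: flag frame indices whose motion score passes a threshold."""
    return [i for i, score in enumerate(frame_activity) if score > threshold]

def expensive_classifier(frame_index):
    """Stand-in for a high-complexity inference model, run only on proposed frames."""
    return "event" if frame_index % 2 == 0 else "background"

def detect_events(frame_activity):
    # Only frames surviving the cheap stage incur the expensive model's cost.
    proposals = cheap_proposal(frame_activity)
    return {i: expensive_classifier(i) for i in proposals}

# Example: per-frame motion scores; most frames never reach the expensive stage.
scores = [0.1, 0.9, 0.2, 0.8, 0.95, 0.3]
print(detect_events(scores))
```

The savings come from the asymmetry: if the proposal stage rejects most frames, the heavy model runs on only a small fraction of the stream, trading a small risk of missed proposals for a large reduction in average compute.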

Object Detection for Embedded Markets
Imagination Technologies
While image classification was the breakthrough use case for deep learning-based computer vision, today it has a limited number of real-world applications. In contrast, object detection is finding numerous applications. In this talk, Paul Brasnett, then the PowerVR Business Development Director for Vision and AI at Imagination Technologies, reviews recent progress in the state-of-the-art in object detection and presents a working example of how to exploit these latest developments to enable efficient object detection on embedded devices.


Yole Développement Webinar – 3D Imaging and Sensing: From Enhanced Photography to an Enabling Technology for AR and VR: February 19, 2020, 8:00 am PT

Edge AI and Vision Alliance Webinar – Algorithms, Processors and Tools for Visual AI: Analysis, Insights and Forecasts: February 27, 2020, 9:00 am and 6:00 pm PT (two sessions)

Embedded Vision Summit: May 18-21, 2020, Santa Clara, California

More Events


New AI Technology from Arm Delivers On-device Intelligence for IoT

Basler Introduces an Embedded Vision System for Cloud-based Machine Learning Applications

Mobileye’s Global Ambitions Take Shape with New Deals in China and South Korea

Allied Vision’s New Alvium CSI-2 Camera Enables Full HD Resolution for Embedded Vision

A New OmniVision 48MP Image Sensor Provides High Dynamic Range and 4K Video Performance for Flagship Mobile Phones

More News


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411