Now available—the Embedded Vision Summit On-Demand Edition! Gain valuable computer vision and edge AI insights and know-how from the experts at the 2021 Summit.

Dear Colleague,

Check out these upcoming free webinars from the Alliance and its Member companies and partners to broaden and deepen your edge AI and vision expertise:

  • On Tuesday, February 9, 2021 at 9 am PT, the Edge AI and Vision Alliance will deliver the free webinar “Adding Visual AI or Computer Vision to Your Embedded System: Key Things Every Engineering Manager Should Know” in partnership with AspenCore. Visual AI and computer vision are bringing compelling capabilities to many types of systems—making them safer, easier to use, more efficient and more capable. But visual AI and computer vision are quite different from traditional embedded technologies, and for many product development groups, these new technologies bring unfamiliar challenges and unexpected risks. How can you gain confidence that your requirements can be met with today’s technology? What metrics should you use to assess accuracy? Will you need to collect and label your own training data? Will you need a much more powerful (and expensive and power-hungry) processor? How will you keep your algorithms up to date as their environment changes?

    In this presentation, Jeff Bier and Phil Lapsley from the Alliance will take you on a quick course in managing visual AI projects for embedded systems. They will cover how data is now your best friend, and maybe also your worst nightmare (and what you can do to stay on its good side); they’ll look at the iterative nature of AI/CV projects and how that differs from traditional development; they’ll talk about the importance of requirements and real-world versus laboratory conditions; they’ll touch on important issues of bias; they’ll provide an overview of how to think about accuracy; and they’ll give tips on how to talk to your management about a development process that is likely quite different than what they’re used to. For more information and to register, please see the event page.

  • On Thursday, February 25, 2021 at 9 am PT, Edge Impulse will deliver the free webinar “How to Rapidly Build Robust Data-driven Embedded Machine Learning Applications,” in partnership with the Alliance. In this in-depth tutorial, you’ll learn how to build commercial embedded machine learning (ML) applications, in robust implementations ranging from simple sensor configurations to powerful computer vision deployments. Topics covered in the webinar will include:
    • Assembling datasets
    • Designing and training highly accurate models
    • Implementation optimization, and
    • Integration with the remainder of your edge device

    The webinar will include live demonstrations of the concepts discussed, and is co-presented by Zach Shelby, CEO and co-founder, and Daniel Situnayake, founding ML engineer, both of Edge Impulse. For more information and to register, please see the event page.

  • And on Tuesday, March 23, 2021 at 9 am PT, Nota will deliver the free webinar “Selecting and Combining Deep Learning Model Optimization Techniques for Enhanced On-device AI,” in partnership with the Alliance. Model optimization techniques such as pruning, quantization, filter decomposition, and NAS (neural architecture search) are becoming increasingly important in efficiently implementing deep learning on the edge. Challenges such as determining the optimum technique (potentially a combination of multiple techniques) and defining hyperparameters for peak performance have historically demanded deep technical expertise and significant resources. In this presentation, Tae-Ho Kim, founder and CTO of Nota, will describe his company’s experiences with (and resultant perspectives on) various deep learning model optimization techniques, and demonstrate how multiple techniques can be combined to further improve performance. Kim will also discuss NetsPresso, Nota’s automatic model optimization platform. For more information and to register, please see the event page.
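
Of the techniques mentioned above, magnitude-based weight pruning is the simplest to illustrate. The following is a minimal, hypothetical NumPy sketch (the function name and sparsity target are illustrative assumptions, not part of Nota's NetsPresso platform):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights.

    `sparsity` is the fraction of weights to remove; magnitude-based
    pruning assumes small weights contribute least to the output.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

rng = np.random.default_rng(1)
w = rng.normal(size=(128, 128))                   # stand-in for a weight matrix
pruned = magnitude_prune(w, sparsity=0.5)
actual_sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
```

In practice, pruning is typically applied iteratively, with fine-tuning between rounds to recover accuracy, and is often combined with quantization, which is part of what makes selecting and sequencing these techniques nontrivial.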

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Recent Advances in Post-training Quantization (Intel)
The use of low-precision arithmetic (8-bit and smaller data types) is key for the deployment of deep neural network inference with high performance, low cost and low power consumption. Shifting to low-precision arithmetic requires a model quantization step that can be performed at model training time (quantization-aware training) or after training (post-training quantization). Post-training quantization is an easy way to quantize already trained models that provides good accuracy/performance trade-off. In this talk, Alexander Kozlov, Deep Learning R&D Engineer at Intel, reviews recent advances in post-training quantization methods and algorithms that help to reduce quantization error. He also shows the performance speed-up that can be achieved for various models when using 8-bit quantization.
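
As a rough illustration of the post-training step described above, the following NumPy sketch quantizes an already-trained tensor to 8-bit integers using an affine (scale and zero-point) mapping. The function names and the min/max calibration choice are illustrative assumptions, not Intel's implementation:

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Affine (asymmetric) post-training quantization of a float tensor.

    Maps [min(x), max(x)] onto the unsigned integer range
    [0, 2**num_bits - 1] via a scale and a zero point.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # guard against a constant tensor
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Reconstruct approximate float values from the quantized integers."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in for trained weights
q, scale, zp = quantize_affine(w)
w_hat = dequantize_affine(q, scale, zp)
# Reconstruction error is bounded by about half a quantization step.
max_err = float(np.abs(w - w_hat).max())
```

The quantization error here (at most half a step, scale / 2) is exactly what the more advanced post-training methods covered in the talk aim to reduce, for example by choosing better calibration ranges than the raw min/max used in this sketch.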

Practical DNN Quantization Techniques and Tools (Facebook)
Quantization is a key technique to enable the efficient deployment of deep neural networks. In this talk, Raghuraman Krishnamoorthi, Software Engineer at Facebook, presents an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. Krishnamoorthi explores simple and advanced quantization approaches and examines their effects on latency and accuracy on various target processors. He also presents best practices for quantization-aware training to obtain high accuracy with quantized weights and activations.


Challenges and Approaches for Cascaded DNNs: A Case Study of Face Detection for Face Verification (Imagination Technologies)
This talk from Ana Salazar, Senior Research Manager at Imagination Technologies, explores the challenges of deploying serial computer vision tasks implemented with DNNs. Neural network accelerators have demonstrated significant gains in performance for DNN inference, especially when the network has been quantized. Quantization often brings a loss in accuracy that may be acceptable in itself, but can cause problems when the output of one quantized DNN is used as input to a second DNN that has itself been quantized. Salazar presents her company’s research into this challenge in the context of a face verification CNN that consumes the output of a face detection CNN, discussing approaches for reducing the impact of quantization in such scenarios.
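
The compounding effect described above can be seen even in a toy setting. The NumPy sketch below is a hypothetical illustration (not Imagination's method): it feeds the quantized output of one stage into a second stage that is also quantized, so the first stage's rounding error propagates through the second stage on top of that stage's own quantization error:

```python
import numpy as np

def quantize_dequantize(x, num_bits=8):
    """Round-trip a tensor through a uniform quantizer (simulated int8)."""
    qmax = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax or 1.0
    return np.round((x - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
x = rng.normal(size=10_000).astype(np.float32)

# Two toy "stages" standing in for the detection and verification DNNs.
def stage1(v):
    return np.tanh(v)

def stage2(v):
    return v * 2.0 + 0.5

exact = stage2(stage1(x))                    # full-precision cascade
y1 = quantize_dequantize(stage1(x))          # stage 1 output quantized
cascaded = quantize_dequantize(stage2(y1))   # stage 2 also quantized
single = quantize_dequantize(exact)          # quantizing only the final output

err_cascaded = float(np.abs(cascaded - exact).max())
err_single = float(np.abs(single - exact).max())
# err_cascaded typically exceeds err_single: stage 1's rounding error is
# amplified by stage 2 and then re-quantized at the boundary.
```

Reducing `err_cascaded`, for example by calibrating the two stages jointly rather than in isolation, is the kind of approach the talk's case study examines.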

AI-powered People Detection Using Time of Flight Data (Arrow Electronics and Analog Devices)
Arrow Electronics and Analog Devices have teamed up to provide end users with a Time of Flight development kit that includes hardware, software and algorithms for developing applications that require 3D depth mapping. This tutorial from Andrei Cozma, Engineering Manager at Analog Devices, walks you through getting started with the kit and using AI frameworks and tools to develop algorithms for people detection and tracking using time of flight data. After watching this tutorial, you’ll be ready to start your own application development using the Arrow Time of Flight development kit.


Edge AI and Vision Alliance Webinar – Adding Visual AI or Computer Vision to Your Embedded System: Key Things Every Engineering Manager Should Know: February 9, 2021, 9:00 am PT

Vision Components Webinar – Adding Embedded Cameras to Your Next Industrial Product Design: February 16, 2021, 9:00 am PT

Edge Impulse Webinar – How to Rapidly Build Robust Data-driven Embedded Machine Learning Applications: February 25, 2021, 9:00 am PT

Nota Webinar – Selecting and Combining Deep Learning Model Optimization Techniques for Enhanced On-device AI: March 23, 2021, 9:00 am PT

More Events


MediaTek’s Dimensity 1200 5G SoC Delivers Enhanced AI and Multimedia Experiences

Chips&Media’s WAVE6 Video Codec IP Series Reduces Latency While Improving Efficiency

CEVA’s 2nd-generation SensPro Expands the Company’s Scalable Sensor Hub DSP Portfolio

The GeForce RTX 3060 Brings NVIDIA’s Latest-Generation GPU Architecture to the Mainstream

Ambarella’s CV5 AI Vision Processor Targets Single 8K and Multi-Imager AI Cameras

More News


NVIDIA Jetson Nano (Best AI Processor)
NVIDIA’s Jetson Nano is the 2020 Edge AI and Vision Product of the Year Award Winner in the AI Processors category. Jetson Nano delivers the power of modern AI in the smallest supercomputer for embedded and IoT applications. It is a small form factor, power-efficient, low-cost and production-ready System on Module (SOM) and Developer Kit that opens up AI to educators, makers and embedded developers who previously lacked access to it. Jetson Nano delivers up to 472 GFLOPS of accelerated computing, can run many modern neural networks in parallel, and provides the performance to process data from multiple high-resolution sensors (including cameras, LIDAR, IMU, ToF and more) to sense, process and act in an AI system, all while consuming as little as 5 W.

Please see here for more information on NVIDIA and its Jetson Nano. The Edge AI and Vision Product of the Year Awards (an expansion of previous years’ Vision Product of the Year Awards) celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes your leadership in edge AI and computer vision as evaluated by independent industry experts. The Edge AI and Vision Alliance is now accepting applications for the 2021 Awards competition; for more information and to enter, please see the program page.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411