2022 Embedded Vision Summit

Dear Colleague,

The next Embedded Vision Summit will take place as a live event May 17-19 in Santa Clara, California. The Embedded Vision Summit is the key event for system and application developers who are incorporating computer vision and visual AI into products. It attracts a unique audience of over 1,000 product creators, entrepreneurs and business decision-makers who are creating and using computer vision and visual AI technologies. It's an ideal venue for learning, sharing insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in computer vision and visual AI.

We're delighted to return to an in-person format and hope you'll join us. Once again we'll be offering a packed program with 100+ sessions, 50+ technology exhibits, and 100+ demos, all covering the technical and business aspects of practical computer vision, deep learning, visual AI and related technologies. New for 2022 is the Edge AI Deep Dive Day, a series of in-depth sessions focused on specific topics in visual AI at the edge. Registration is now open, and if you register by this Friday, March 11, you can save 25% by using the code SUMMIT22-NL. Register now, save the date, and tell a friend! You won't want to miss what is shaping up to be our best Summit yet.

Next Thursday, March 17 at 9 am PT, Network Optix will deliver the free webinar “A Platform Approach to Developing Networked Visual AI Systems” in partnership with the Edge AI and Vision Alliance. Internet-connected cameras are becoming ubiquitous. Coupled with computer vision and machine learning algorithms, these cameras form the foundation for a growing range of visual AI applications that monitor people, facilities, and other objects and environments. But creating a robust, scalable application using internet-connected cameras requires much more than cameras and algorithms.

For example, these applications typically need robust video storage management, including the ability to manage limited bandwidth, provisions for reliable recovery in the event of hardware failures and the ability to securely store video on a variety of device types. In addition, networked visual AI applications often must be able to discover and interact with a variety of camera and stream types on a network. They also typically require media servers and clients that can run on mobile, desktop, server and cloud. And they need extensibility, so that they can be integrated with a variety of existing software stacks, applications and ecosystems.

Network Optix's Nx Meta intelligent video platform enables solution developers to create cross-platform networked visual AI solutions that incorporate device and stream discovery and interoperability, robust storage management, and extensibility in a matter of weeks. In this webinar, Tony Luce, Vice President of Product Marketing at Network Optix, will introduce the Nx Meta platform and describe examples of currently deployed applications utilizing the platform. A question-and-answer session, also featuring Nathan Wheeler, the company's Chairman and CEO, will follow the presentation. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Efficient Video Perception Through AI – Qualcomm
Video data is abundant and being generated at ever-increasing rates. Analyzing video with AI can provide valuable insights and capabilities for many applications ranging from autonomous driving and smart cameras to smartphones and extended reality. However, as video resolution and frame rates increase while AI video perception models become more complex, running these workloads in real-time is becoming more challenging. This presentation from Fatih Porikli, Senior Director of Technology at Qualcomm, explores the latest research that is enabling efficient video perception while maintaining neural network model accuracy.

Introduction to DNN Model Compression Techniques – Xailient
Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory, and bandwidth requirements. System architects can mitigate these demands by applying model compression techniques that make deep neural networks more energy efficient and less demanding of processing resources. In this talk, Sabina Pokhrel, Customer Success AI Engineer at Xailient, provides an introduction to four established techniques for model compression. She discusses network pruning, quantization, knowledge distillation and low-rank factorization compression approaches.
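To give a flavor of one of these techniques, here is a minimal NumPy sketch of symmetric post-training quantization, shrinking float32 weights to int8 plus a single scale factor. This is an illustrative example only, not code from the talk; the function names are hypothetical.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of a float32 tensor to int8.

    Returns the quantized integers and the scale needed to dequantize.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
error = np.abs(w - w_hat).max()  # rounding error is bounded by scale / 2
```

The storage cost drops 4x (one byte per weight instead of four), and the per-tensor scale bounds the reconstruction error; production tools typically refine this with per-channel scales and calibration data.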


Robust Object Detection Under Dataset Shifts – Arm
In image classification tasks, evaluating models' robustness to dataset shifts within a probabilistic framework is well studied. However, object detection (OD) tasks pose additional challenges for uncertainty estimation and evaluation. For example, one needs to evaluate both the quality of the label uncertainty (i.e., what?) and spatial uncertainty (i.e., where?) for a given bounding box, but that evaluation cannot be performed with traditional metrics such as mean average precision (mAP). In this talk, Partha Maji, Principal Research Scientist at Arm's Machine Learning Research Lab, discusses how to adapt well-established object detection models to generate uncertainty estimations by introducing stochasticity in the form of Monte Carlo Dropout (MC-Drop). He also discusses how such techniques could be extended to a broad class of embedded vision tasks to improve robustness.
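The core idea behind MC Dropout can be sketched in a few lines: keep dropout active at inference time and treat the spread across repeated stochastic forward passes as an uncertainty estimate. The tiny network below is purely illustrative (random weights, NumPy only), not the detection models discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative classifier: one hidden layer with dropout
# deliberately kept ON at inference time (the key idea of MC Dropout).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def forward(x, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop   # fresh stochastic dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax class probabilities

x = rng.normal(size=(4,))
samples = np.stack([forward(x) for _ in range(100)])  # T = 100 MC passes
mean_probs = samples.mean(axis=0)   # predictive distribution ("what?")
uncertainty = samples.std(axis=0)   # per-class spread as an uncertainty proxy
```

In a detector, the same repeated-sampling idea yields distributions over both class scores and box coordinates, separating the "what" and "where" uncertainties the talk evaluates.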

High-fidelity Conversion of Floating-point Networks for Low-precision Inference Using Distillation with Limited Data – Imagination Technologies
When converting floating-point networks to low-precision equivalents for high-performance inference, the primary objective is to maximally compress the network while maintaining fidelity to the original, floating-point network. This is made particularly challenging when only a reduced or unlabeled dataset is available. Data may be limited for reasons of a commercial or legal nature: for example, companies may be unwilling to share valuable data and labels that represent a substantial investment of resources, or the collector of the original dataset may not be permitted to share it for data privacy reasons. James Imber, Senior Research Engineer at Imagination Technologies, presents a method based on distillation that allows high-fidelity, low-precision networks to be produced for a wide range of different network types, using the original trained network in place of a labeled dataset. The proposed approach is directly applicable across multiple domains (e.g., classification, segmentation and style transfer) and can be adapted to numerous network compression techniques.
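A standard building block for this kind of approach is a distillation loss that needs no labels: the floating-point teacher's outputs supply the training signal for the low-precision student. The NumPy sketch below shows a temperature-scaled KL-divergence distillation loss; it is a generic illustration under that assumption, not the specific method from the talk.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) at temperature T, averaged over a batch.

    No labels are required: the teacher's soft outputs on unlabeled
    inputs serve as the training target for the student.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean() * T * T)  # T^2 rescaling, per Hinton et al.

rng = np.random.default_rng(1)
t = rng.normal(size=(8, 10))          # teacher logits on 8 unlabeled inputs
loss_same = distillation_loss(t, t)   # identical outputs -> zero loss
loss_diff = distillation_loss(t, rng.normal(size=(8, 10)))
```

Minimizing this loss pulls the quantized student's output distribution toward the teacher's on whatever unlabeled data is available, which is exactly the setting the talk addresses.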


A Platform Approach to Developing Networked Visual AI Systems – Network Optix Webinar: March 17, 2022, 9:00 am PT

Embedded Vision Summit: May 17-19, 2022, Santa Clara, California

More Events


FRAMOS’ New Sensor Module Targets Demanding 4K/60 FPS Applications

Intel Advances AI Inferencing for Developers via a New Version of OpenVINO

Basler Expands Its 3D Portfolio with a Stereo Camera Series

SmartCow AI Technologies’ Apollo Development Kit Enables Conversational and Other Advanced Natural Language Processing Applications

Immervision Launches an SDK Supporting Universal Web Video Dewarping

More News


Software Engineer, Computer Vision and Deep Learning – Mashgin
Mashgin is building the world’s fastest self-checkout system using AI-powered computer vision. If you’re excited about the applications of computer vision in the real world, you’d be a great fit for our team of passionate engineers!


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411