Dear Colleague,

Perceive Webinar

On Thursday, November 10 at 9 am PT, Perceive will deliver the free webinar “Putting Activations on a Diet – Or Why Watching Your Weights Is Not Enough” in partnership with the Edge AI and Vision Alliance. To reduce the memory requirements of neural networks, researchers have proposed numerous heuristics for compressing weights. Lower precision, sparsity, weight sharing and various other schemes shrink the memory needed by the neural network’s weights or program. Unfortunately, during network execution, memory use is usually dominated by activations (the data flowing through the network) rather than weights.

Although lower precision can reduce activation memory somewhat, more aggressive measures are required to enable large networks to run efficiently with small memory footprints. Fortunately, the underlying information content of activations is often modest, so novel compression strategies can dramatically widen the range of networks executable on constrained hardware. Steve Teig, Founder and CEO of Perceive, will introduce new strategies for compressing activations, sharply reducing their memory footprint. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
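The claim that activations, not weights, dominate memory is easy to verify with a little arithmetic. The sketch below uses made-up layer dimensions (a single 3x3 convolution on a 224x224 feature map, 8-bit values; these figures are illustrative, not taken from the webinar):

```python
# Toy illustration (hypothetical layer sizes): compare the memory needed
# for one conv layer's weights vs. its output activations, assuming
# 8-bit (1-byte) values throughout.

def conv_weight_bytes(k, c_in, c_out, bytes_per_val=1):
    """Memory for a k x k convolution's weight tensor."""
    return k * k * c_in * c_out * bytes_per_val

def activation_bytes(h, w, channels, bytes_per_val=1):
    """Memory for one layer's output feature map."""
    return h * w * channels * bytes_per_val

# A 3x3, 64-in/64-out conv applied to a 224x224 feature map:
w_bytes = conv_weight_bytes(3, 64, 64)    # 36,864 bytes (~36 KB)
a_bytes = activation_bytes(224, 224, 64)  # 3,211,264 bytes (~3 MB)

print(f"weights:     {w_bytes:>9,} bytes")
print(f"activations: {a_bytes:>9,} bytes")
print(f"ratio:       {a_bytes / w_bytes:.0f}x")
```

Even in this small example the activations for a single layer are nearly two orders of magnitude larger than its weights, which is why weight compression alone does little for peak memory.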

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


How Do We Enable Edge ML Everywhere? Data, Reliability and Silicon Flexibility
Edge Impulse
In this 2022 Embedded Vision Summit General Session talk, Zach Shelby, Co-founder and CEO of Edge Impulse, reveals insights from the company’s recent global edge ML developer survey, which identified key barriers to machine learning adoption, and shares the company’s vision for how the industry can overcome these obstacles. Unsurprisingly, the first critical obstacle identified by the survey is data. But the issue isn’t simply a lack of massive datasets, as is often assumed. On the contrary, the biggest opportunities in ML will be enabled by highly custom, industry-specific and even user-specific data. We need to master data lifecycle and active learning techniques that enable developers to move quickly from “zero to dataset.”

The real and perceived inability of today’s ML algorithms to reach the ultra-high accuracy needed in industrial systems is another key barrier. New techniques for explainable ML, better testing, sensor fusion and model fusion will increasingly allow developers to achieve industrial-grade reliability. Finally, in order to accelerate ML adoption in embedded products, we must recognize that most developers can’t immediately upgrade their systems to use the latest chips — a problem that is compounded by today’s chip shortages. To enable ML everywhere, we have to find ways to deploy ML on today’s silicon, while ensuring a smooth transition to new devices with AI acceleration in the future.

Optimization Techniques with Intel’s OpenVINO to Enhance Performance on Your Existing Hardware
Intel
Whether you’re using TensorFlow, PyTorch or another framework, Intel’s Nico Galoppo, Principal Engineer (substituting for Ansley Dunn, Product Marketing Manager), and Ryan Loney, Technical Product Manager, will show you optimization techniques to enhance performance on your existing hardware in this 2022 Embedded Vision Summit presentation. With the OpenVINO Toolkit, built on the foundation of OneAPI, developers can utilize their own AI model or leverage one of the hundreds of pre-trained models available across vision and audio use cases. You’ll learn how the Neural Network Compression Framework provides optimal model training templates for performance boosts while preserving accuracy, and how the Model Optimizer reduces complexity and makes model conversion faster. Other areas explored by Galoppo and Loney include auto device discovery to enable automatic load balancing and how to optimize for latency or throughput based on your workload.


Powering the Connected Intelligent Edge and the Future of On-Device AI
Qualcomm
Qualcomm is leading the realization of the “connected intelligent edge,” where the convergence of wireless connectivity, efficient computing and distributed AI will power the devices and experiences that you deserve. In this 2022 Embedded Vision Summit General Session talk, Ziad Asghar, Vice President of Product Management at Qualcomm, explores some of the key challenges in deploying AI across diverse edge products in markets including mobile, automotive, XR, IoT, robotics and PCs — and some of the important differences in the AI requirements of these applications.

Asghar identifies unique AI features that will be needed as physical and digital spaces converge in what is now called the “metaverse.” He highlights key AI technologies offered within Qualcomm products, and how the company connects them to enable the connected intelligent edge. Finally, he shares his vision of the future of on-device AI — including on-device learning, efficient models, state-of-the-art quantization, and how Qualcomm plans to make this vision a reality.

High-Efficiency Edge Vision Processing Based on Dynamically Reconfigurable TPU Technology
Flex Logix
To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This 2022 Embedded Vision Summit presentation from Cheng Wang, Senior Vice President and Co-founder of Flex Logix, explains and demonstrates the novel dynamic TPU array architecture of Flex Logix’s InferX X1 accelerators and contrasts it with current GPU, TPU and other approaches to delivering the teraops of computing required by edge vision inferencing.


Putting Activations on a Diet – Or Why Watching Your Weights Is Not Enough – Perceive Webinar: November 10, 2022, 9:00 am PT

More Events


e-con Systems Launches a 4K HDR GMSL2 Multi-camera Configuration for the NVIDIA Jetson AGX Orin SoC

An Upcoming Webinar from Alliance Member Companies Plumerai and Texas Instruments Covers Rapid, Accurate People Detection

Vision Components’ New MIPI Camera Modules Include High-quality Global Shutter Image Sensors

Network Optix’ Nx Witness VMS v5.0 is Now Available

Basler’s Latest pylon v7 Software Includes Custom-fit vTools Image Processing Modules

More News


OrCam Technologies OrCam Read (Best Consumer Edge AI End Product)
OrCam
OrCam Technologies’ OrCam Read is the 2022 Edge AI and Vision Product of the Year Award winner in the Consumer Edge AI End Product category. OrCam Read is the first of a new class of easy-to-use handheld digital readers that helps people with mild to moderate vision loss, as well as those with reading challenges, access the texts they need and accomplish their daily tasks more effectively. Whether reading an article for school, perusing a news story on a smartphone, reviewing a phone bill or ordering from a menu, OrCam Read is the only personal AI reader that can instantly capture and read full pages of text and digital screens out loud. All of OrCam Read’s information processing – from its text-to-speech functionality implemented to operate on the edge, to its voice-controlled operation using the “Hey OrCam” voice assistant, to the Natural Language Processing (NLP)- and Natural Language Understanding (NLU)-driven Smart Reading feature – is performed locally, on the device, with no data connectivity required.

Please see here for more information on OrCam Technologies’ OrCam Read. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411