Dear Colleague,

2022 Edge AI and Vision Product of the Year Awards

Until this Friday, January 28, the Edge AI and Vision Alliance is accepting applications for the 2022 Edge AI and Vision Product of the Year Awards competition. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes your leadership in edge AI and computer vision as evaluated by independent industry experts. Winners will be publicly announced at the Embedded Vision Summit, the key event for system and application developers who are incorporating computer vision and visual AI into products. For more information on the Edge AI and Vision Product of the Year Awards and to enter, please see the program page. Again, the deadline for applications is January 28.

Registration for the Embedded Vision Summit, taking place May 17-19 in Santa Clara, California, is also now open, and if you register by March 11, you can save 25% by using the code SUMMIT22-NL.

On Thursday February 24 at 9 am PT, BrainChip will deliver the free webinar “Developing Optimized Systems with BrainChip’s Akida Neuromorphic Processor” in partnership with the Edge AI and Vision Alliance. BrainChip’s Akida processor today finds use in a diversity of applications, such as classifying images, identifying odors and tastes, recognizing breath data for disease classification, identifying air quality, interpreting LiDAR laser light data, recognizing keywords, and detecting cybersecurity attacks. Akida leverages advanced neuromorphic computing as its processing “engine”, delivering key features such as one-shot learning and on-device computing with no “cloud” dependencies. As such, it’s particularly valuable in evolving smart “edge” devices, where privacy, security, low power consumption, low latency and high performance, low cost, and small size are all important criteria. In this session, you’ll learn how to easily develop efficient AI in edge devices by implementing Akida IP either into your SoC or as standalone silicon. The presenters will provide detailed performance and other results, derived from real-life system implementations using production Akida silicon, as well as share a variety of design techniques and support resources. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Facing Up to Bias – Perceive
Today’s face recognition networks identify white men correctly more often than white women or non-white people. The use of these models can manifest racism, sexism, and other troubling forms of discrimination. There are also publications suggesting that compressed models have greater bias than uncompressed ones. Remarkably, poor statistical reasoning bears as much responsibility for the underlying biases as social pathology does. Further, compression per se is not the source of bias; it just magnifies bias already produced by mainstream training methodologies. By illuminating the sources of statistical bias, as Steve Teig, CEO of Perceive, details in this presentation, we can train models in a more principled way – not just throwing more data at them – to be more discriminating, rather than discriminatory.
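A first diagnostic for the kind of disparity described above is simply to compare a model's accuracy across demographic subgroups. The sketch below uses invented toy data (not drawn from the talk) to illustrate the idea:

```python
def group_accuracy(preds, labels, groups):
    """Compute per-group accuracy as a basic fairness diagnostic.

    preds, labels, groups are parallel sequences: the model's
    predictions, the ground-truth labels, and each sample's
    demographic group identifier.
    """
    totals, correct = {}, {}
    for p, y, g in zip(preds, labels, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (p == y)
    return {g: correct[g] / totals[g] for g in totals}


# Toy data, invented for illustration: group "a" is classified
# correctly 2 times out of 3, group "b" only 1 time out of 3 --
# a disparity that would warrant investigation in a real model.
acc = group_accuracy(
    preds=[1, 1, 1, 0, 0, 0],
    labels=[1, 1, 0, 1, 1, 0],
    groups=["a", "a", "a", "b", "b", "b"],
)
```

A gap between the per-group numbers is only a symptom; as the talk argues, the cure lies in more principled training, not merely more data.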

An Introduction to Data Augmentation Techniques in ML Frameworks – AMD
Data augmentation is a set of techniques that expand the diversity of data available for training machine learning models by generating new data from existing data. This talk from Rajy Rawther, principal member of the technical staff and software architect at AMD, introduces different types of data augmentation techniques as well as their uses in various training scenarios. Rawther explores some built-in augmentation methods in popular ML frameworks like PyTorch and TensorFlow. She also discusses tips and tricks for randomly selecting augmentation parameters to avoid having the model overfit to a particular dataset.
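The random-parameter idea is straightforward: each time a training sample is drawn, the augmentation pipeline picks fresh random parameters, so the model never sees exactly the same input twice. Frameworks provide ready-made versions (for example, torchvision's RandomHorizontalFlip and RandomCrop in PyTorch); the pure-Python sketch below is a minimal illustration of the concept, not framework code:

```python
import random


def random_horizontal_flip(img, p=0.5, rng=random):
    """Flip an image (a list of pixel rows) left-right with probability p."""
    if rng.random() < p:
        return [row[::-1] for row in img]
    return img


def random_crop(img, size, rng=random):
    """Cut a crop of the given (height, width) at a random position."""
    h, w = len(img), len(img[0])
    ch, cw = size
    top = rng.randint(0, h - ch)
    left = rng.randint(0, w - cw)
    return [row[left:left + cw] for row in img[top:top + ch]]


def augment(img, rng):
    """One pass through the pipeline; parameters are re-drawn every call."""
    img = random_horizontal_flip(img, p=0.5, rng=rng)
    img = random_crop(img, (2, 2), rng=rng)
    return img


rng = random.Random(0)  # seeded only so the example is repeatable
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
out = augment(img, rng)
```

Because the flip decision and crop position are re-sampled on every call, repeated epochs over the same dataset present the model with varied views of each image.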


Quickly Measure and Optimize Inference Performance Using Intel DevCloud for the Edge – Intel
When developing an edge AI solution, DNN inference performance is critical: if your network doesn’t meet your throughput and latency requirements, you’re in trouble. But accurately measuring inference performance on target hardware can be time-consuming—just getting your hands on the target hardware can take weeks or months. In this Over-the-Shoulder tutorial, Corey Heath, software engineer at Intel, shows step-by-step how you can quickly and easily benchmark inference performance on a variety of platforms without having to purchase hardware or install software tools. Intel DevCloud for the Edge and the Deep Learning Workbench give you instant access to a wide range of Intel hardware platforms and software tools from anywhere. And, beyond simply measuring performance, DevCloud for the Edge helps you quickly identify bottlenecks and optimize your code in the same cloud-based environment.
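The two metrics the tutorial centers on, latency and throughput, are easy to confuse. The hedged sketch below shows how they are typically measured for an arbitrary inference callable; it is a generic illustration of the concepts, not the Intel DevCloud or Deep Learning Workbench API (the `infer` callable here is a stand-in computation, not a real model):

```python
import statistics
import time


def benchmark(infer, inputs, warmup=5, runs=50):
    """Measure per-inference latency (ms) and overall throughput
    (inferences/second) for any callable `infer`."""
    # Warm-up runs are excluded from timing, since first calls often
    # pay one-time costs (caching, JIT compilation, lazy allocation).
    for x in inputs[:warmup]:
        infer(x)

    latencies = []
    start = time.perf_counter()
    for i in range(runs):
        x = inputs[i % len(inputs)]
        t0 = time.perf_counter()
        infer(x)
        latencies.append((time.perf_counter() - t0) * 1000.0)
    total = time.perf_counter() - start

    return {
        "mean_ms": statistics.mean(latencies),
        "p90_ms": sorted(latencies)[int(0.9 * len(latencies))],
        "throughput": runs / total,
    }


# Stand-in "model": a sum of squares over a list of numbers.
stats = benchmark(lambda x: sum(v * v for v in x), [list(range(1000))])
```

Note that mean latency and throughput answer different questions (how long one request waits vs. how many requests per second the device sustains), which is why both matter when checking a network against its requirements.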

Dynamic Neural Accelerator and MERA Compiler for Low-latency and Energy-efficient Inference at the Edge – EdgeCortix
Achieving high performance and power efficiency for machine learning inference at the edge requires maintaining high chip utilization, even with a batch size of one, while processing high-resolution image data. In this tutorial session, Sakyasingha Dasgupta, the founder and CEO of EdgeCortix, shows how EdgeCortix’s reconfigurable Dynamic Neural Accelerator (DNA) AI processor architecture, coupled with the company’s MERA compiler and software stack, enables developers to seamlessly execute deep neural networks written in PyTorch and TensorFlow Lite while maintaining high chip utilization, power efficiency and low latency regardless of the type of convolutional neural network. Dasgupta walks through examples of implementing deep neural networks for vision applications on DNA, starting from standard machine learning frameworks and then benchmarking performance using the built-in simulator as well as FPGA hardware.


Developing Optimized Systems with BrainChip’s Akida Neuromorphic Processor – BrainChip Webinar: February 24, 2022, 9:00 am PT

Embedded Vision Summit: May 17-19, 2022, Santa Clara, California

More Events


BrainChip Achieves Full Commercialization of Its AKD1000 AIoT Chip with Availability of Mini PCIe Boards in High Volume

Train Smart Retail AIs 50x Faster with Mindtech’s New Synthetic Data Application Pack

e-con Systems Launches 13 Mpixel Monochrome USB 3.1 Gen 1 Camera with High Sensitivity

Intel Empowers Developers with oneAPI 2022 Toolkits

Syntiant Announces Voice-Enabled Ultra-Low-Power Reference Design for True Wireless Stereo Earbud Applications

More News


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411