
Edge AI and Vision Insights: October 26, 2022 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Only two days left! The Edge AI and Vision Alliance’s 9th annual Computer Vision Developer Survey ends this Friday, October 28. For the past eight years, the Alliance has surveyed developers of vision-based products to gain insights into their choices of techniques, languages, algorithms, tools, processors and APIs, as well as understand product development challenges and trends. Please help us by taking a few minutes to share your thoughts. In return you’ll get access to detailed results and a $50 discount on the Embedded Vision Summit in May 2023. You’ll also be entered into a raffle to win one of 20 Amazon Gift Cards worth $50! Click here to take the survey now.

Also, we’re in the process of creating the program for the next Embedded Vision Summit, taking place May 22-25, 2023 in Santa Clara, California. We invite you to share your expertise and experience with peers at the premier event for computer vision and visual/perceptual AI! To learn more about the topics we’re focusing on this year and to submit your idea for a presentation, check out the Summit Call for Proposals or contact us at [email protected]. We will be accepting session proposals through December 7, 2022, but space is limited, so submit soon!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEEP LEARNING MODEL DEVELOPMENT AND OPTIMIZATION

Real-Time Object Detection on Microcontrollers (Edge Impulse)
Object detection models are vital for many computer vision applications. They can show where an object is in a video stream, or how many objects are present. But they’re also very resource-intensive—models like MobileNet SSD can analyze only a few frames per second on a Raspberry Pi 4, using a significant amount of RAM. This has put object detection out of reach for the most interesting devices: microcontrollers. Microcontrollers are cheap, small, ubiquitous and energy efficient—and are thus attractive for adding computer vision to everyday devices. But microcontrollers are also very resource-constrained, with clock speeds of up to 200 MHz and less than 256 Kbytes of RAM—far too little to run complex object detection models. But… that’s about to change! In this talk, Jan Jongboom, Co-founder and CTO of Edge Impulse, outlines his company’s work on FOMO (“faster objects, more objects”), a novel DNN architecture for object detection, designed from the ground up to run on microcontrollers.
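To give a flavor of the approach, the publicly described idea behind FOMO is to replace bounding-box regression with a coarse per-cell classification grid, then merge neighboring activated cells into object centroids. Below is a hypothetical, simplified sketch of that post-processing step in plain Python—not Edge Impulse’s actual code, and the grid values and threshold are made up for illustration.

```python
# Hypothetical sketch of FOMO-style post-processing (not Edge Impulse's code):
# a FOMO-like model outputs a coarse grid of per-cell object probabilities
# instead of bounding boxes; connected activated cells are merged and
# reported as object centroids.

def decode_centroids(grid, threshold=0.5):
    """Merge 4-connected cells above `threshold` into centroid detections."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    detections = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] < threshold or (r, c) in seen:
                continue
            # flood-fill the connected blob of activated cells
            stack, blob = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                blob.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and grid[ny][nx] >= threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            # report the blob's mean position as the object centroid
            cy = sum(y for y, _ in blob) / len(blob)
            cx = sum(x for _, x in blob) / len(blob)
            detections.append((cy, cx))
    return detections

# A 4x4 probability grid with two separate activated regions.
grid = [
    [0.9, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.6],
]
print(decode_centroids(grid))  # one centroid per blob: [(0.0, 0.5), (2.5, 3.0)]
```

Because the output is a small grid rather than thousands of anchor boxes, this style of head keeps both compute and RAM within microcontroller budgets.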

Accelerating the Creation of Custom, Production-Ready AI Models for Edge AI (NVIDIA)
AI applications are powered by models. However, creating customized, production-ready AI models requires an army of data scientists and mountains of data. For businesses looking to shorten time to market, acquiring training datasets and people with the right skills can be cost-prohibitive. The combination of NVIDIA’s state-of-the-art, pre-trained AI models and the NVIDIA TAO (Train, Adapt and Optimize) Toolkit makes it easy to overcome these barriers. By fine-tuning NVIDIA’s pre-trained vision AI models with your data, you can create custom, production-ready models in hours—rather than months—with no need for AI expertise or large training datasets. Your optimized models can be easily integrated into NVIDIA’s DeepStream accelerated AI framework for deployment at the edge. In this presentation from Akhil Docca, Senior Product Marketing Manager at NVIDIA, you’ll learn how the NVIDIA TAO Toolkit can help you overcome your data prep challenges, enable you to fine-tune your models without deep expertise in AI, and simplify the optimization of models for deployment at the edge.

EMBEDDED PROCESSORS FOR AI INFERENCE

The Future of AI is Here Today: An On-device Deep Dive (Qualcomm)
As a leader in on-device AI, Qualcomm is in a unique position to deliver optimized and now personalized AI experiences to consumers, made possible via innovation in hardware technology and investment across the entire software stack. This investment is now deeply rooted in all of the company’s product offerings, spread across multiple verticals from mobile to automotive. In this talk, Vinesh Sukumar, Senior Director and Head of AI/ML Product Management at Qualcomm, explores the high-performance, low-power Hexagon processor — the core of his company’s latest 7th Generation AI Engine — and shows how the company scales it across the range of products that Qualcomm offers. He also highlights Qualcomm’s investment in advanced techniques such as the latest quantization approaches and neural architecture search to accelerate AI deployment. Finally, he shares details on how his company incorporates these technologies into AI solutions that power Qualcomm’s vision of on-device AI — and shows how these solutions are employed in real-world use cases across many verticals.

Designing Your Next Ultra-Low-Power Always-On Solution (Cadence)
Increasingly, users expect their systems to be ready to respond at any time—for example, using a voice command to launch a music playlist. System designers have traditionally relied on classical signal processing techniques on simple microcontrollers to implement such features. Today, as more and more devices incorporate cameras and other types of sensors, there’s a growing desire to enable “always-on” functionality using more than just wake words—for example, so that a doorbell camera can wake up when a person approaches. These enhancements require clever AI algorithms that can reliably detect events of interest. And, to enable “always-on” capability, these sophisticated algorithms often must be implemented with ultra-low power consumption, especially for battery-powered devices. In this presentation, Amol Borkar, Director of Product Management and Marketing for Tensilica Vision and AI DSPs at Cadence, shares the trends his company has observed in always-on features and applications and highlights the latest additions to the Cadence Tensilica processor portfolio that address the needs of ultra-low-power, always-on applications.
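The always-on pattern described above is commonly structured as a cascade: a cheap trigger runs continuously, and only when it fires does the system wake the power-hungry neural network. The sketch below is a hypothetical, toy illustration of that structure—not Cadence’s implementation; the frame data, threshold, and stand-in detector logic are invented for the example.

```python
# Hypothetical sketch of a two-stage "always-on" cascade (not Cadence's code):
# a cheap per-frame check gates a more expensive detector, so the costly
# stage only runs when something is likely happening.

def cheap_motion_check(prev, frame, threshold=10):
    """Low-cost trigger: mean absolute pixel difference between frames."""
    diff = sum(abs(a - b) for a, b in zip(prev, frame)) / len(frame)
    return diff > threshold

def expensive_person_detector(frame):
    """Placeholder for the full NN that only runs after a wake trigger."""
    return max(frame) > 200  # stand-in logic, purely for illustration

def always_on_pipeline(frames):
    wakes, detections = 0, 0
    prev = frames[0]
    for frame in frames[1:]:
        if cheap_motion_check(prev, frame):       # stage 1: ultra-low power
            wakes += 1
            if expensive_person_detector(frame):  # stage 2: on demand
                detections += 1
        prev = frame
    return wakes, detections

# Three tiny "frames": static scene, then a bright moving object appears.
frames = [
    [10, 10, 10, 10],
    [10, 10, 10, 12],    # negligible change: stage 2 never runs
    [10, 240, 240, 10],  # large change + bright object: wake and detect
]
print(always_on_pipeline(frames))  # (1, 1)
```

In a battery-powered product, stage 1 would run on an ultra-low-power DSP or sensor hub while stage 2 sleeps, which is precisely the division of labor these always-on processor portfolios target.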

UPCOMING INDUSTRY EVENTS

Putting Activations on a Diet – Or Why Watching Your Weights Is Not Enough – Perceive Webinar: November 10, 2022, 9:00 am PT

How to Successfully Deploy Deep Learning Models on Edge Devices – Deci Webinar: December 13, 2022, 9:00 am PT

Embedded Vision Summit: May 22-25, 2023, Santa Clara, California

More Events

FEATURED NEWS

VeriSilicon’s AI-ISP Delivers Innovative Image Quality Enhancement that Expands Computer Vision Capabilities

AMD Launches Ryzen 7000 Series Desktop Processors with Zen 4 Architecture

Intel Ships 13th Generation Core Processor Family

Deci Introduces Advanced Semantic Segmentation Models

NVIDIA Delivers Performance Leap with GeForce RTX 40 GPU Series

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Edge Impulse EON Tuner (Best Edge AI Developer Tool)
Edge Impulse’s EON Tuner is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Developer Tools category. The EON Tuner helps you find and select the best edge machine learning model for your application within the constraints of your target device. While existing “AutoML” tools focus only on machine learning, the EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping developers find the ideal tradeoff between these two types of processing blocks to achieve optimal performance within the latency and memory constraints of their target edge device. The EON Tuner is designed to quickly assist developers in discovering preprocessing algorithms and neural network model architectures tailored to their use case and dataset. It eliminates the need for manual selection of processing blocks and parameters to obtain the best model accuracy, reducing the technical knowledge required of users and decreasing the total time from data collection to a model that runs optimally on an edge device in the field.
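Conceptually, this kind of end-to-end tuning is a constrained search: profile each (DSP block, model) combination for accuracy, latency, and memory, then pick the most accurate one that fits the target device’s budget. The following is a hypothetical, simplified illustration of that selection step—not the EON Tuner’s actual implementation, and all candidate names and numbers are invented.

```python
# Hypothetical illustration of the kind of search a tuner like this automates
# (not Edge Impulse's implementation): given candidate (DSP block, model)
# pairs with profiled accuracy, latency, and RAM, pick the most accurate
# combination that fits the target device's budget.

# Each candidate: (name, accuracy, latency_ms, ram_kb) -- made-up numbers.
CANDIDATES = [
    ("mfcc + cnn-small",  0.91, 180, 210),
    ("mfe + cnn-medium",  0.94, 320, 290),
    ("raw + cnn-tiny",    0.87,  90, 120),
    ("mfe + cnn-small",   0.92, 150, 200),
]

def best_under_budget(candidates, max_latency_ms, max_ram_kb):
    """Return the highest-accuracy candidate meeting both constraints."""
    feasible = [c for c in candidates
                if c[2] <= max_latency_ms and c[3] <= max_ram_kb]
    return max(feasible, key=lambda c: c[1]) if feasible else None

# e.g. a microcontroller target with a 200 ms latency budget and 256 KB RAM:
print(best_under_budget(CANDIDATES, max_latency_ms=200, max_ram_kb=256))
# -> ('mfe + cnn-small', 0.92, 150, 200)
```

The value of automating this is that the most accurate configuration overall (here, the 0.94 candidate) is often infeasible on the device, and the best feasible choice depends jointly on the DSP front end and the model.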

Please see here for more information on Edge Impulse’s EON Tuner. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

