
Edge AI and Vision Insights: May 25, 2022 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Last week’s Embedded Vision Summit was a resounding success, with more than 1,000 attendees learning from 100+ expert speakers and 80+ exhibitors, and interacting in person for the first time since 2019. Special congratulations go to the winners of the 2022 Edge AI and Vision Product of the Year Awards and to the winners of this year’s Vision Tank Start-up Competition.

2022 Embedded Vision Summit presentation slide decks in PDF format are now available for download from the Alliance website; publication of presentation videos will begin in the coming weeks. See you at the 2023 Summit!

Brian Dipert
Editor-in-Chief, Edge AI and Vision Alliance

ROBUST REAL-WORLD VISION IMPLEMENTATIONS

Optimizing ML Systems for Real-World Deployment
In the real world, machine learning models are components of a broader software application or system. In this talk from the 2021 Embedded Vision Summit, Danielle Dean, Technical Director of Machine Learning at iRobot, explores the importance of optimizing the system as a whole, not just optimizing individual ML models. Based on experience building and deploying deep-learning-based systems for one of the largest fleets of autonomous robots in the world (the Roomba!), Dean highlights critical areas requiring attention for system-level optimization, including data collection, data processing, model building, system application and testing. She also shares recommendations for ways to think about and achieve optimization of the whole system.
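
As a hedged illustration of the system-level view Dean advocates (not code from the talk), the Python sketch below profiles every stage of a toy vision pipeline rather than the model alone; the stage functions and their timings are placeholder stand-ins.

    import time

    def capture_frame():
        # Placeholder for camera capture; returns a dummy frame.
        return [[0] * 640 for _ in range(480)]

    def preprocess(frame):
        time.sleep(0.004)  # stand-in for resize/normalize cost
        return frame

    def run_model(tensor):
        time.sleep(0.012)  # stand-in for DNN inference cost
        return {"detections": []}

    def postprocess(output):
        time.sleep(0.003)  # stand-in for NMS/tracking cost
        return output

    def profile_pipeline(n_frames=100):
        totals = {"preprocess": 0.0, "inference": 0.0, "postprocess": 0.0}
        for _ in range(n_frames):
            frame = capture_frame()
            t0 = time.perf_counter(); x = preprocess(frame)
            t1 = time.perf_counter(); y = run_model(x)
            t2 = time.perf_counter(); postprocess(y)
            t3 = time.perf_counter()
            totals["preprocess"] += t1 - t0
            totals["inference"] += t2 - t1
            totals["postprocess"] += t3 - t2
        for stage, total in totals.items():
            print(f"{stage}: {1000 * total / n_frames:.1f} ms/frame")

    profile_pipeline()

On a real robot, capture, preprocessing and postprocessing can rival inference time, which is exactly why model-only optimization can miss the biggest wins.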

A Practical Guide to Implementing Machine Learning on Embedded Devices
Deploying machine learning onto edge devices requires many choices and trade-offs. Fortunately, processor designers are adding inference-enhancing instructions and architectures to even the lowest cost MCUs, tools developers are constantly discovering optimizations that extract a little more performance out of existing hardware, and ML researchers are refactoring the math to achieve better accuracy using faster operations and fewer parameters. In this presentation from the 2021 Embedded Vision Summit, Nathan Kopp, Principal Software Architect for Video Systems at the Chamberlain Group, takes a high-level look at what is involved in running a DNN model on existing edge devices, exploring some of the evolving tools and methods that are finally making this dream a reality. He also takes a quick look at a practical example of running a CNN object detector on low-compute hardware.
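
One concrete example of the kind of math refactoring Kopp mentions is post-training integer quantization. The sketch below is illustrative rather than taken from the talk; it uses TensorFlow Lite's converter API, and the SavedModel path, input resolution and random calibration data are placeholder assumptions.

    import numpy as np
    import tensorflow as tf

    # Load a trained detector; "detector_savedmodel" is a placeholder path.
    converter = tf.lite.TFLiteConverter.from_saved_model("detector_savedmodel")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        # Calibration samples; real code would yield preprocessed camera frames.
        for _ in range(100):
            yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

    converter.representative_dataset = representative_dataset
    # Force full-integer ops so the model can run on int8-only MCUs and NPUs.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open("detector_int8.tflite", "wb") as f:
        f.write(converter.convert())

A full-integer model of this kind can then run through TensorFlow Lite for Microcontrollers or an int8-capable accelerator; in practice, calibration frames drawn from the deployment environment preserve accuracy far better than random data.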

CAMERA DEVELOPMENT AND OPTIMIZATION

How to Optimize a Camera ISP with Atlas to Automatically Improve Computer Vision Accuracy
Computer vision (CV) works on images pre-processed by a camera’s image signal processor (ISP). For the ISP to provide subjectively “good” image quality (IQ), its parameters must be manually tuned by imaging experts over many months for each specific lens / sensor configuration. However, “good” visual IQ isn’t necessarily what’s best for specific CV algorithms. In this session from the 2021 Embedded Vision Summit, Marc Courtemanche, Atlas Product Architect at Algolux, shows how to use the Atlas workflow to automatically optimize an ISP to maximize computer vision accuracy. Easy to access and deploy, the workflow can improve CV results by up to 25 mAP points while reducing time and effort by more than 10x versus today’s subjective manual IQ tuning approaches.
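
Atlas itself is proprietary, but the core idea (treating ISP parameter tuning as a black-box optimization against a CV accuracy metric rather than subjective IQ) can be sketched generically. In the Python sketch below, the parameter names, ranges and mAP evaluator are all hypothetical stand-ins, not Algolux's API.

    import random

    # Hypothetical tunable ISP parameters and their ranges.
    PARAM_RANGES = {
        "denoise_strength": (0.0, 1.0),
        "sharpen_amount": (0.0, 2.0),
        "gamma": (1.0, 3.0),
    }

    def evaluate_map(params):
        # Stand-in objective. A real evaluator would render validation images
        # through the ISP with these parameters and score the detector's mAP.
        return (0.5
                - 0.1 * abs(params["gamma"] - 2.2)
                - 0.05 * abs(params["denoise_strength"] - 0.4))

    def random_search(n_trials=200, seed=0):
        rng = random.Random(seed)
        best_params, best_score = None, float("-inf")
        for _ in range(n_trials):
            candidate = {name: rng.uniform(lo, hi)
                         for name, (lo, hi) in PARAM_RANGES.items()}
            score = evaluate_map(candidate)
            if score > best_score:
                best_params, best_score = candidate, score
        return best_params, best_score

    best_params, best_map = random_search()
    print(f"best mAP {best_map:.3f} with {best_params}")

A production system would replace random search with a more sample-efficient optimizer (for example CMA-ES or Bayesian optimization) and evaluate each candidate by re-running the full ISP-plus-detector pipeline over a labeled validation set.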

10 Things You Must Know Before Designing Your Own Camera
Computer vision requires vision. This is why companies that use computer vision often decide they need to create a custom camera module (and perhaps other custom sensors) that meets the specific needs of their unique application. This 2021 Embedded Vision Summit presentation from Alex Fink, consultant at Panopteo, will help you understand how cameras are different from other types of electronic products; what mistakes companies often make when attempting to design their own cameras; and what you can do to end up with cameras that are built on spec, on schedule and on budget.

FEATURED NEWS

Intel’s oneAPI 2022.2 is Now Available

FRAMOS Makes Next-Generation GMSL3 Accessible for Any Embedded Vision Application

AMD’s Robotics Starter Kit Kick-starts the Intelligent Factory of the Future

iENSO Makes Ambarella CV22- and CV28-based Embedded Vision Camera Modules Commercially Available

Imagination Technologies and Visidon Partner for Deep-learning-based Super Resolution Technology

More News
