Edge AI and Vision Insights: December 7, 2022 Edition

LETTER FROM THE EDITOR
Dear Colleague,

2023 Embedded Vision Summit

Registration is now open for the 2023 Embedded Vision Summit, coming up May 22-25, 2023 in Santa Clara, California! The Summit is the premier conference and tradeshow for innovators incorporating computer vision and visual or perceptual AI in products. The program is designed to cover the most important technical and business aspects of practical computer vision, deep learning and perceptual AI. Register before December 31 and you can save 35%—the best price you’ll ever be able to get on the Summit!

We’re currently creating the presentation program for the Summit, and we invite you to share your expertise and experience with peers at this premier event! To learn more about the topics we’re focusing on this year and to submit your idea for a presentation, check out the Summit Call for Proposals or contact us at [email protected]. We’ve just extended the submission deadline and now will be accepting session proposals through December 23, but space is limited, so submit soon!


How to Successfully Deploy Deep Learning Models on Edge Devices – Deci Webinar

Next Tuesday, December 13 at 9 am PT, Deci will deliver the free webinar “How to Successfully Deploy Deep Learning Models on Edge Devices” in partnership with the Edge AI and Vision Alliance. The introduction of powerful processors, memories and other devices, along with robust connectivity to the cloud, has enabled a new era of advanced AI applications capable of running on the edge. But system resources remain finite; cost, size and weight, power consumption and heat, and other constraints make it challenging to deploy accurate, resource-efficient deep learning models at the edge. How can you build a deep learning model that is small and efficient enough to run on an edge device while still making the most of the available hardware?

This technical session is packed with practical tips and tricks on topics ranging from model selection and training tools to running successful inference at the edge. Yonatan Geifman, the company’s Co-founder and CEO, will demonstrate how to benchmark and compare different models, leverage model training best practices, and easily automate compilation and quantization processes, all while using the latest open-source libraries and tools. By the end of the webinar, you will have gained practical knowledge for taking the guesswork out of improving your edge devices’ performance, boosting runtime speed while preserving accuracy in AI-based applications. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
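To make the quantization step concrete, here is a minimal sketch of post-training dynamic quantization plus a crude latency benchmark, using PyTorch’s built-in tooling. This is a generic illustration, not Deci’s toolchain; the model and the set of quantized layers are placeholders.

```python
# Generic illustration of post-training dynamic quantization and a
# crude latency benchmark in PyTorch. Not Deci's tooling; the model
# and layer choices are placeholders.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).eval()

# Dynamic quantization rewrites the Linear layers to use int8 weights,
# shrinking the model and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def mean_latency(m, runs=100):
    x = torch.randn(1, 512)
    with torch.no_grad():
        for _ in range(10):               # warm-up iterations
            m(x)
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs

print(f"fp32: {mean_latency(model) * 1e6:.0f} us/inference")
print(f"int8: {mean_latency(quantized) * 1e6:.0f} us/inference")
```

The same measure-then-optimize loop applies to any compression technique: benchmark the baseline on the target hardware, apply the optimization, and confirm both the speedup and the accuracy impact before deploying.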


2023 Edge AI and Vision Product of the Year Awards

Are you looking to raise the visibility of a product you’ve launched in the past year? A great way to do so is to win an Edge AI and Vision Product of the Year Award—benefits include year-round promotion from the Alliance! Winners tell us the Awards help them showcase their product success in the market. By winning you can:

  • Showcase your newest products as the best in the market
  • Credibly raise your product above the competitive noise via independent validation
  • Increase awareness with potential buyers at the Embedded Vision Summit

An independent panel of experts will select one winning entry for each of these categories:

  • Edge AI Processors (including licensable IP)
  • Edge AI Software and Algorithms
  • Cameras and Sensors
  • Edge AI Developer Tools
  • Enterprise Edge AI End Product
  • Consumer Edge AI End Product

The deadline for entries is December 31. This deadline won’t be extended, so enter today and enjoy the holidays knowing you’ve set your company up for an incredible opportunity!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

EFFICIENT CODING FOR VISION PROCESSING

Programming Vision Pipelines on AI Engines (AMD)
AMD’s latest generation of Adaptive Compute Acceleration Platforms (ACAPs), the Versal AI Core and Versal AI Edge series, includes an array of powerful AI Engines alongside other computation components, such as programmable logic and Arm cores. This array of AI Engines provides high computational capability to address the workloads of diverse applications, including automotive solutions. This presentation introduces the properties and capabilities of these AI Engines for image, video and vision processing. Kristof Denolf, Principal Engineer, and Bader Alam, Director of Software Engineering, both of AMD, begin with a top-down look at how video data makes its way to the AI Engines. Then they delve into a detailed discussion of the compute properties of the VLIW vector architecture of the AI Engines and illustrate how it efficiently executes vision processing kernels. Next, they introduce the Vitis Vision Library and give an overview of its data movement and kernel processing capabilities. They conclude by showing how AMD’s Vitis tools support building a vision pipeline and analyzing its performance.
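The VLIW vector discussion boils down to a familiar point: vision kernels are dominated by regular multiply-accumulate loops that map naturally onto wide vector datapaths. As a rough illustration only (real AI Engine kernels are written in C++ against AMD’s toolchain; NumPy merely stands in for a vector unit here), a 3x3 convolution can be expressed as nine whole-array multiply-accumulates instead of per-pixel scalar loops:

```python
# Illustrative only: a 3x3 convolution written as nine whole-array
# multiply-accumulates. Each term is one vector-style operation over
# the full frame, which is the access pattern that wide SIMD/VLIW
# datapaths execute efficiently.
import numpy as np

def conv3x3(frame: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    h, w = frame.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            # Shifted window of the frame, scaled and accumulated in bulk.
            out += kernel[dy, dx] * frame[dy:dy + h - 2, dx:dx + w - 2]
    return out

frame = np.random.rand(480, 640).astype(np.float32)
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
print(conv3x3(frame, sharpen).shape)  # (478, 638)
```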

Vision AI At the Edge: From Zero to Deployment Using Low-Code Development (NVIDIA)
Development of vision AI applications for the edge presents complex challenges beyond model inference, often requiring end-to-end acceleration and optimization, spanning I/O to post-processing. Today, for organizations to overcome these challenges and launch successful vision AI products, they need expertise across many domains: machine learning, computer vision, video compression, embedded software and I/O interfaces. In this talk presented by Carlos Garcia-Sierra, Product Manager for DeepStream at NVIDIA, on behalf of Alvin Clark, Product Marketing Manager at NVIDIA, you’ll learn how the NVIDIA DeepStream SDK helps organizations overcome these challenges and accelerate their time to market by abstracting away the necessary low-level details, enabling them to focus on the development of the unique functionality of the end application. This tutorial session will teach you how to create, test and deploy a fully optimized and accelerated vision AI application on a Jetson device in minutes using Graph Composer, a low-code design tool for DeepStream.
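To give a sense of what sits beneath a low-code tool like Graph Composer, the sketch below assembles a DeepStream-style pipeline with GStreamer’s Python bindings. This is a minimal, assumption-laden sketch: the nv-prefixed elements come from the DeepStream SDK and require it to be installed on the target (for example, a Jetson), and the media file and inference configuration paths are placeholders.

```python
# Minimal DeepStream-style pipeline via GStreamer's Python bindings.
# Assumes the DeepStream SDK is installed (it provides nvstreammux,
# nvinfer, nvdsosd, etc.); file paths below are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder "
    "! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 "
    "! nvinfer config-file-path=detector_config.txt "
    "! nvvideoconvert ! nvdsosd ! nveglglessink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the stream ends or an error is reported.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```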

IMAGE SENSORS, MODULES AND CAMERAS

Enabling Spatial Understanding for Embedded and Edge Devices with DepthAI (Luxonis)
Many systems need to understand not only what objects are nearby, but also where those objects are in the physical world. This capability is known as “spatial AI.” In this presentation, Erik Kokalj, Director of Applications Engineering at Luxonis, shows how you can easily integrate spatial AI into your embedded devices using the Luxonis DepthAI platform and OAK cameras. DepthAI is Luxonis’ spatial AI software stack. DepthAI utilizes and supports Luxonis’ OAK cameras and OAK system-on-module, which provide 4 TOPS of processing power, support 4K video encoding, and run object tracking, AI models, CV functions and stereo depth perception—all in real time. Kokalj introduces the components of DepthAI, including firmware, APIs, example applications, the model zoo and the SDK. He also shows how DepthAI integrates with popular operating systems and engines such as Android, ROS and Unity. Most importantly, he shows how you can combine DepthAI with the OAK camera and system-on-module to quickly incorporate spatial AI into your product.
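For flavor, here is a minimal sketch using the public depthai Python package that streams RGB preview frames from an OAK camera to the host; node options and defaults vary by OAK model and library version.

```python
# Minimal DepthAI sketch: stream RGB preview frames from an OAK camera
# to the host. Based on the public depthai Python API; details may
# differ by OAK model and library version.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

# On-device color camera node producing a small preview stream.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 400)
cam.setInterleaved(False)

# XLinkOut ships frames from the device to the host over USB.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()  # BGR numpy array
        cv2.imshow("OAK preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```

The same pipeline-of-nodes pattern extends to depth and neural inference: you add a StereoDepth or neural network node on the device side and link its output to another XLinkOut, keeping the heavy processing on the camera itself.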

New Imager Modules and Tools Enable Bringing High-quality Vision Systems to Market Quickly (onsemi)
Traditionally, developers have struggled to select the best image sensors and lenses, integrate these with the rest of the system, and tune the sensor for best performance. This has resulted in long development cycles and low-quality image data. In this talk, Ganesh Narayanaswamy, Senior Business Marketing Manager in the Industrial and Commercial Solutions Division at onsemi, presents imager modules from onsemi and Arrow that enable developers to rapidly develop vision systems that capture high-quality images. These modules incorporate advanced image sensors coupled with matched lenses, delivered in the form of boards that can be snapped into the main board of an embedded system. The modules and drivers have been tested with many popular ISPs and SoCs. The sensors are pre-tuned to meet the needs of many applications, and the onsemi DevSuite enables developers to quickly adjust the tuning if needed. Narayanaswamy shows how these imager modules and associated software remove the pain and guesswork typically associated with imager selection and integration, freeing developers to focus on other aspects of their system.
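Tuning aside, the first bring-up step with any such module is confirming that frames actually arrive through the platform’s standard camera stack. A generic sketch (not onsemi-specific tooling), assuming the module’s driver exposes a V4L2 device at index 0:

```python
# Generic smoke test for a camera module exposed as a V4L2 device.
# Not onsemi-specific tooling; the device index is a placeholder.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)  # /dev/video0
if not cap.isOpened():
    raise SystemExit("sensor module not visible as a V4L2 device")

ok, frame = cap.read()
if ok:
    print(f"got a {frame.shape[1]}x{frame.shape[0]} frame")
    cv2.imwrite("first_frame.png", frame)  # eyeball exposure/focus
cap.release()
```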

UPCOMING INDUSTRY EVENTS

How to Successfully Deploy Deep Learning Models on Edge Devices – Deci Webinar: December 13, 2022, 9:00 am PT

Implementing Advanced AI-based Analytics for Video Management Systems – Hailo Webinar: February 7, 2023, 9:00 am PT

Embedded Vision Summit: May 22-25, 2023, Santa Clara, California

More Events

FEATURED NEWS

Teledyne DALSA Extends Its Falcon Area Scan Camera Series with New 37M and 67M Models

Imagination Technologies and Baidu PaddlePaddle Create Open-source Machine Learning Library for Model Zoo

NVIDIA Launches IGX Edge AI Computing Platform for Safe, Secure Autonomous Systems

Visionary.ai Launches Video Denoiser for Improved Night Vision

Algolux Extends Eos Perception Software to Address ADAS and Autonomous Vehicle Depth Limitations

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

OrCam Technologies OrCam Read (Best Consumer Edge AI End Product)
OrCam Technologies’ OrCam Read is the 2022 Edge AI and Vision Product of the Year Award winner in the Consumer Edge AI End Product category. OrCam Read is the first of a new class of easy-to-use handheld digital readers that helps people with mild to moderate vision loss, as well as those with reading challenges, access the texts they need and accomplish their daily tasks more effectively. Whether reading an article for school, perusing a news story on a smartphone, reviewing a phone bill or ordering from a menu, OrCam Read is the only personal AI reader that can instantly capture and read full pages of text and digital screens out loud. All of OrCam Read’s information processing – from its text-to-speech functionality implemented to operate on the edge, to its voice-controlled operation using the “Hey OrCam” voice assistant, to the Natural Language Processing (NLP)- and Natural Language Understanding (NLU)-driven Smart Reading feature – is performed locally, on the device, with no data connectivity required.

Please see here for more information on OrCam Technologies’ OrCam Read. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts. The Edge AI and Vision Alliance is now accepting applications for the 2023 Awards competition; for more information and to enter, please see the program page.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411