
Edge AI and Vision Insights: November 24, 2021 Edition

LETTER FROM THE EDITOR
Dear Colleague,

2022 Embedded Vision Summit

The next Embedded Vision Summit will take place as a live event May 17-19, 2022 in Santa Clara, California. The Summit is the key event for system and application developers who are incorporating computer vision and visual AI into products. It attracts an audience of over 1,400 product creators, entrepreneurs and business decision-makers who are creating and using computer vision and visual AI technologies, and it’s a unique venue for learning, sharing insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in the field.

We’re delighted to be returning to an in-person format, and we hope you’ll join us. Once again we’ll be offering a packed program with 100+ sessions, 50+ technology exhibits, 100+ demos and a new Edge AI Deep Dive Day, all covering the technical and business aspects of practical computer vision, deep learning, visual AI and related technologies. Registration is now open, and if you register by December 31, you can save 35% by using the code SUMMIT22NL-35. Register now, save the date and tell a friend! You won’t want to miss what is shaping up to be our best Summit yet!

Also, we are in the process of creating the program for the Summit. If you would like to submit a session proposal, use the form available here. If you would like to discuss potential topics in advance of submitting a proposal, please contact us at [email protected]. Space is limited, so visit our website today to learn more about the Summit as well as the proposal requirements, and to submit your proposal. We will be accepting proposals through December 6.

Every year, the Edge AI and Vision Alliance surveys developers to understand what chips and tools they use to build visual AI systems. This is our eighth year conducting the survey, and we would like to get your opinions. Many suppliers of computer vision building-block technologies use the results of our Computer Vision Developer Survey to guide their priorities. We also share the results from the Survey at Edge AI and Vision Alliance events and in white papers and presentations made available throughout the year on the Alliance website.

I’d really appreciate it if you’d take a few minutes to complete the first stage of this year’s survey. (It typically takes less than 10 minutes to complete.) Don’t miss your chance to have your voice heard. As a thank-you, we will send you a coupon for $50 off the price of a two-day Embedded Vision Summit ticket (to be sent when registration opens). In addition, we will enter your completed survey into a drawing for one of fifty Amazon gift cards worth $25! Thank you in advance for your perspective. Fill out the survey.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEPLOYMENT AT SCALE

The Fearless AI Challenge: Can You Quickly Deploy AI Inference to Billions of Devices? – Intel
Have you ever been inspired by a new AI research paper and tried to replicate it using open source code, only to end up with a “bricked” development system? This is a common experience. The AI ecosystem is fragmented; many solutions require specialized hardware, such as GPUs, or specific frameworks, libraries, APIs or tools that may conflict with your current development environment. Intel identified this gap and created the OpenVINO toolkit and OpenVINO Notebooks to address these challenges. In this talk, Raymond Lo, OpenVINO Edge AI Software Evangelist at Intel, shows how to get AI inference running in 10 minutes or less—avoiding common pitfalls by simplifying download, setup and runtime. And, using OpenVINO, you can deploy your solution to billions of existing computing devices—including embedded hardware like VPUs—without recompiling your code! After watching this video, you can replicate Lo’s work on your own machine.
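
To give a feel for the workflow Lo demonstrates, below is a minimal inference sketch in Python. It is our own illustration, not code from the talk: the model path is a placeholder, and the class names follow the openvino.runtime API (older toolkit releases used a different interface, so adjust for your version).

    # Minimal OpenVINO inference sketch. Assumes a model already converted to
    # OpenVINO IR format (model.xml/model.bin are placeholder paths) and the
    # openvino package installed via pip.
    import numpy as np
    from openvino.runtime import Core

    core = Core()                                # enumerates available devices
    model = core.read_model("model.xml")         # load the IR model
    compiled = core.compile_model(model, "CPU")  # swap "CPU" for "GPU" or a VPU target, no code changes

    # Build a dummy input matching the model's first input; replace with a real image tensor.
    input_shape = tuple(compiled.input(0).shape)
    x = np.zeros(input_shape, dtype=np.float32)

    result = compiled([x])[compiled.output(0)]   # run inference, fetch the first output
    print(result.shape)

The point of the device-name string is the one Lo emphasizes: the same compiled pipeline can be retargeted to other hardware by changing that single argument.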

Deploying Edge AI Solutions at Scale for the Internet of Things – Qualcomm
There are many large-scale opportunities for edge AI solutions across a broad range of Internet of Things applications. But deploying the kinds of complete solutions that customers desire—at scale—is challenging. Fragmentation in the IoT space means that application requirements vary enormously. Customers want offerings that enable the deployment of end-to-end services, and developers need platforms that streamline the deployment process and make full use of on-device edge AI complemented by cloud services. In this presentation, Megha Daga, Director of Product Management for AI Enablement at Qualcomm, examines the critical gaps in achieving seamless edge AI deployment for IoT. You will learn how Qualcomm Technologies is addressing diverse needs across IoT segments by applying a scalable IoT-as-a-service model that helps put intelligence into solutions for applications ranging from home robotics to retail, construction, smart factories and smart cities.

DATASET OPTIMIZATION

Tools and Strategies for Quickly Building Effective Image Datasets – BDTI
A common pain point when using machine learning for computer vision is the need to manually curate and label large quantities of training images. Depending on the application, thousands to millions of images are needed in order to capture the range of visual environments, angles, lighting conditions, and other variations that may be encountered in the field. To make a model accurate enough for real-world applications, significant effort must be invested in creating a training dataset. Fortunately, there are tools and strategies to help accelerate the task of creating an image dataset. In this presentation, Evan Juras, Computer Vision Engineer at BDTI and EJ Technology Consultants, discusses strategies for quickly building a new dataset for training an object detection model and reviews tools and methods for speeding up the process of curating and labeling images.
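
One common shortcut of the kind Juras discusses is bootstrapping a dataset by harvesting frames from video recorded in the field. The sketch below is our own illustration (not code from the talk), using OpenCV with a placeholder video path.

    # Sample roughly one frame per second from a 30 fps field recording,
    # writing JPEGs that can then be curated and labeled.
    import pathlib
    import cv2

    out_dir = pathlib.Path("dataset/raw")
    out_dir.mkdir(parents=True, exist_ok=True)

    cap = cv2.VideoCapture("field_recording.mp4")  # placeholder input video
    frame_idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % 30 == 0:  # keep every 30th frame (about 1 per second at 30 fps)
            cv2.imwrite(str(out_dir / f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        frame_idx += 1
    cap.release()
    print(f"Saved {saved} frames for labeling")

Sampling at a fixed stride keeps the dataset from being dominated by near-duplicate consecutive frames while still capturing the variation present in the recording.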

Object Detection and Dataset Labeling Using Colors of Manufactured Objects – BASF
This talk introduces a new method of object detection for consumer goods and other applications based on measuring an object’s illumination-invariant fluorescent chroma. Chroma is a measure of the colorfulness of an object relative to a similarly illuminated white object. Reflective chroma changes with the illumination, making it a poor choice for object detection across different environments. Fluorescent chroma, however, is nearly illumination invariant, making it an ideal input for object detection algorithms if the fluoresced light can be separated from the reflected light. In this talk, Ian Childers, Head of Technology for Functional Coatings—Object Recognition at BASF, describes simple designs of illuminators, cameras and filters that achieve this separation, and shows that the system classifies with 95-100% accuracy under different lighting conditions using the nearest neighbors algorithm with neighborhood component analysis. BASF has designed hundreds of unique colors and coatings for consumer goods and other items that can be used in this application. The system has many advantages over CNN-based object detection systems: it does not require extensive training sets or an unoccluded view of the object.
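
The classifier combination Childers cites (k-nearest neighbors with neighborhood component analysis) is available off the shelf in scikit-learn. The sketch below shows that pipeline on synthetic stand-in features; it is our own illustration, not BASF’s code or data.

    # k-NN classification with an NCA-learned metric, on synthetic features
    # standing in for per-object fluorescent-chroma measurements.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    y = rng.integers(0, 3, size=300)                       # three pretend coating classes
    X = rng.normal(scale=0.5, size=(300, 4)) + y[:, None]  # crude class-separated clusters

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = Pipeline([
        ("nca", NeighborhoodComponentsAnalysis(random_state=0)),  # learn a discriminative linear map
        ("knn", KNeighborsClassifier(n_neighbors=3)),             # classify in the learned space
    ])
    clf.fit(X_train, y_train)
    print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")

NCA learns a linear transformation that pulls same-class samples together before the nearest-neighbor vote, which is why the pairing can reach high accuracy without a large training set.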

UPCOMING INDUSTRY EVENTS

Developing Intelligent AI Everywhere with BrainChip’s Akida – BrainChip Webinar: December 9, 2021, 9:00 am PT

Embedded Vision Summit: May 17-19, 2022, Santa Clara, California

More Events

FEATURED NEWS

Mythic Introduces a Compact Quad-AMP PCIe Card for High-Performance Edge AI Applications

An Upcoming Webinar from Alliance Members Arm, NXP Semiconductors, Siemens Digital Industries Software and Synopsys, along with Arcturus Networks, Explores How to Build and Secure an Edge AI Solution

The Latest Version of Lattice Semiconductor’s sensAI Solution Stack Accelerates Next-Generation Client Devices

Announcements from NVIDIA’s Recent GPU Technology Conference include the Omniverse Replicator Synthetic-Data-Generation Engine for Training AIs and Jetson AGX Orin Robotics Computer, along with Nota’s NetsPresso AutoML Platform

Teledyne DALSA’s Falcon4-CLHS 11.2M Camera is Engineered for High-Performance Imaging Applications

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

MINIEYE In-cabin Sensing Solution (Best Automotive Solution) – MINIEYE
MINIEYE’s In-cabin Sensing (I-CS) Solution is the 2021 Edge AI and Vision Product of the Year Award winner in the Automotive Solutions category. I-CS provides comprehensive in-vehicle sensing for smart cockpits and autonomous vehicles by leveraging embedded computer vision and AI using IR cameras. I-CS tracks visual attributes such as head orientation, movement of facial features, gaze, gestures and body movements, and analyzes drivers’ and occupants’ identities, intentions and behaviors. It also detects objects inside the vehicle that are closely related to in-cabin activities. I-CS’s edge computing architecture allows its algorithms to run with high efficiency on automotive-grade chips, making it possible to offer a larger combination of visual sensing features in a single solution. The solution also supports a wide variety of computing platforms, including Arm CPUs, FPGAs and specialized neural network chips.

Please see here for more information on MINIEYE’s In-cabin Sensing Solution. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

