Edge AI and Vision Insights: April 17, 2024

LETTER FROM THE EDITOR
Dear Colleague,

Women in Vision Reception

Back by popular demand, we’re excited to again host the annual Women in Vision networking reception at the 2024 Embedded Vision Summit. We invite women working in computer vision and edge AI to join us for this special in-person gathering to meet, network and share ideas. This year’s Women in Vision Reception will be held on Wednesday, May 22 in Santa Clara, California. Appetizers and refreshing beverages will be served. We look forward to seeing you there! Register here, and feel free to invite your colleagues.

Registration for the full Embedded Vision Summit program, taking place May 21-23, is not required to attend the Women in Vision Reception, but is highly recommended! Register now for the Summit using code SUMMIT24-NL for a 15% discount on your conference pass.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

OPTIMIZING MACHINE LEARNING DEVELOPMENT

Updating the Edge ML Development Process (Samsara)
Samsara is focused on digitizing the world of operations. The company helps customers across many industries—including food and beverage, utilities and energy, field services and government—get information about their physical operations into the cloud, so they can operate more safely, efficiently and sustainably. Samsara’s sensors collect billions of data points per day, and on-device processing is instrumental to its success. The company is constantly developing, improving and deploying ML models at the edge. Samsara has found that the traditional development process—where ML scientists create models and hand them off to firmware engineers for embedded implementation—is slow and often produces difficult-to-resolve differences between the original model and the embedded implementation. In this 2023 Embedded Vision Summit talk, Jim Steele, Vice President of Embedded Software at Samsara, presents an alternative development process that his company has adopted with good results: firmware engineers develop a general framework that ML scientists then use to design, develop and deploy their models, enabling quick iterations and fewer confounding bugs.
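
The talk abstract stops short of code, but the shape of such a framework is easy to sketch: the firmware team owns a fixed model interface, and ML scientists ship models that implement it. The sketch below is a hypothetical illustration of that division of labor, not Samsara's actual API; all names in it are made up.

```python
# Hypothetical sketch of a firmware-owned model contract; not Samsara's API.
from abc import ABC, abstractmethod

import numpy as np


class EdgeModel(ABC):
    """Interface the firmware team owns; ML scientists subclass it.

    The firmware runtime only ever calls these three hooks, so a new
    model can be deployed without touching the embedded code."""

    @abstractmethod
    def preprocess(self, raw: bytes) -> np.ndarray:
        """Convert a raw sensor frame into the model's input tensor."""

    @abstractmethod
    def infer(self, x: np.ndarray) -> np.ndarray:
        """Run the model and return its raw outputs."""

    @abstractmethod
    def postprocess(self, y: np.ndarray) -> dict:
        """Map raw outputs to the event schema the device reports."""


def run_pipeline(model: EdgeModel, frame: bytes) -> dict:
    # The only code path the firmware maintains, regardless of model.
    return model.postprocess(model.infer(model.preprocess(frame)))
```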

Selecting Tools for Developing, Monitoring and Maintaining ML Models (Yummly)
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers. Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to adopt modern tools without making big investments while leaving the door open to evolve their tool selection over time. In this 2023 Embedded Vision Summit presentation, Parshad Patel, Data Scientist at Yummly, presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
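
As a concrete taste of the open-source route, the snippet below logs an experiment with MLflow, a widely used open-source tracking tool. The talk doesn't specify which tools Yummly chose, so treat this purely as an illustration; the run name, parameters and metric values are made up.

```python
# Illustrative experiment tracking with open-source MLflow;
# the talk does not name Yummly's exact tool choices.
import mlflow

with mlflow.start_run(run_name="baseline-classifier"):
    # Record the configuration that produced this model...
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("epochs", 20)
    # ...and the metrics used to compare candidate models.
    mlflow.log_metric("val_accuracy", 0.91)
    mlflow.log_metric("val_loss", 0.27)
```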

OBJECT DETECTION AND CLASSIFICATION

Understanding, Selecting and Optimizing Object Detectors for Edge Applications (Walmart Global Tech)
Object detectors find and count the objects in a scene, determining their precise locations and labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances, and in many of these applications it’s necessary or desirable to implement it at the edge. In this 2023 Embedded Vision Summit presentation, Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, examines their strengths and weaknesses, and provides guidance on how to evaluate and select an object detector for an edge application.
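
Whatever the architecture, evaluating and comparing detectors rests on one geometric primitive: intersection over union (IoU) between predicted and ground-truth boxes, which underlies metrics such as mAP. A minimal version:

```python
# Intersection over union (IoU), the core primitive behind detector
# metrics such as mAP. Boxes are (x1, y1, x2, y2) with x1 < x2, y1 < y2.
def iou(a: tuple, b: tuple) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


# A prediction is commonly counted as correct when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```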

Detecting Data Drift in Image Classification Neural Networks (Southern Illinois University Carbondale)
An unforeseen change in the input data, called “drift,” can degrade the accuracy of machine learning models. In this 2023 Embedded Vision Summit talk, Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents a novel scheme for diagnosing data drift in the input streams of image classification neural networks. His proposed method for drift detection and quantification uses a threshold dictionary for the prediction probabilities of each class in the neural network model, and it is applicable to a variety of drift types in images, such as noise and weather effects. Tragoudas shares experimental results on various data sets, drift types and neural network models showing that his proposed method estimates the drift magnitude with high accuracy, especially when the level of drift significantly impacts the model’s performance.
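
The abstract doesn't spell out the formulation, but the threshold-dictionary idea can be read roughly as: calibrate a per-class confidence threshold on clean validation data, then treat the rate of below-threshold predictions on incoming data as an estimate of drift. The sketch below is an illustrative reading under that assumption, not the paper's exact method.

```python
# Rough sketch of a per-class threshold-dictionary drift check;
# an illustrative reading of the abstract, not the published method.
import numpy as np


def calibrate_thresholds(val_probs: np.ndarray, val_preds: np.ndarray,
                         quantile: float = 0.05) -> dict:
    """For each class, record a low quantile of the predicted-class
    probability observed on clean validation data."""
    return {c: float(np.quantile(val_probs[val_preds == c], quantile))
            for c in np.unique(val_preds)}


def drift_rate(probs: np.ndarray, preds: np.ndarray,
               thresholds: dict) -> float:
    """Fraction of incoming samples whose predicted-class probability
    falls below the calibrated threshold for that class; a crude
    proxy for drift magnitude."""
    flags = [p < thresholds.get(c, 1.0) for p, c in zip(probs, preds)]
    return float(np.mean(flags))
```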

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

Qualcomm Introduces New AI-ready IoT and Industrial Platforms and Advanced Wi-Fi Technology

Arm Accelerates Edge AI with Its Latest Generation Ethos-U NPU and New IoT Reference Design Platform

AMD Extends Its Adaptive SoC Portfolio with New Versal Series Gen 2 Devices Delivering End-to-end Acceleration for AI-driven Embedded Systems

Intel and Altera Announce Edge and FPGA Offerings for AI

Synaptics’ Astra AI-native IoT Platform Launches with SL-series Embedded Processors and the Machina Foundation Series Development Kit

More News

EMBEDDED VISION SUMMIT
SPONSOR SHOWCASE

Attend the Embedded Vision Summit to meet this and other leading computer vision and edge AI technology suppliers!

DEEPX
DEEPX is a global leader in on-device AI, specializing in the development of NPU and AI computing solutions. At the 2024 Embedded Vision Summit, we’ll be showcasing our comprehensive All-in-4 AI Total Solution, featuring the DX-V1, DX-V3, DX-M1 and DX-H1. Notably, DEEPX’s flagship AIoT chip, the DX-M1, enables real-time AI processing for 16 channels, while the Green AI inference card, DX-H1, can operate over 61 channels in real time. Visit our booth to witness a historic moment in on-device AI!

EMBEDDED VISION SUMMIT PARTNER SHOWCASE

PRO Robots
The PRO Robots channels boast over 1.5 million subscribers globally and operate in four languages: English, Spanish, Chinese and French. We produce captivating video content about robotics and AI while also covering the most cutting-edge tech events. Feel free to contact us for top-notch promotion in the robotics and high-tech industry!

AspenCore
The Edge AI and Vision Alliance is delighted to partner on the Embedded Vision Summit with AspenCore and its industry flagship publications: EE Times, embedded, and EDN. If you’re working in embedded vision, you owe it to yourself to subscribe to these great resources. And like the best things in life, they’re free! Subscribe here.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411