Object Identification Functions

Inuitive Demonstration of On-camera SLAM, Depth and AI Using a NU4X00-based Sensor Module
Shay Harel, Field Application Engineer at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Harel demonstrates one of several examples his company presented at the Summit, highlighting the capabilities of its latest vision-on-chip technology. In this demo, the NU4X00 processor performs depth sensing, object

STMicroelectronics Demonstration of Real-time Object Detection and Tracking
Therese Mbock, Product Marketing Engineer at STMicroelectronics, and Sylvain Bernard, Founder and Solutions Architect at Siana Systems, demonstrate the companies’ latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Mbock and Bernard demonstrate using STMicroelectronics’ VD66GY and STM32N6 for real-time object tracking, ideal for surveillance and automation.

STMicroelectronics Demonstration of Real-time Multi-pose Detection
Therese Mbock, Product Marketing Engineer at STMicroelectronics, and Sylvain Bernard, Founder and Solutions Architect at Siana Systems, demonstrate the companies’ latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Mbock and Bernard demonstrate using STMicroelectronics’ VD55G1 and STM32N6 to detect real-time human poses, ideal for fitness, gestures, and gaming.

Robot-based Shelf Monitoring Cameras for Retail Operation Efficiency
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Robot-based shelf monitoring systems integrate cameras with autonomous robotics, helping ensure seamless inventory tracking, planogram compliance, and shelf organization. Discover how these camera-based systems work and their must-have imaging features. Retail management has

Nextchip Demonstration of Various Computing Applications Using the Company’s ADAS SoC
Jonathan Lee, Manager of the Global Strategy Team at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates various computing applications using his company’s ADAS SoC. Lee showcases how Nextchip’s ADAS SoC can be used for applications such as: Sensor fusion of iToF

Microchip Technology Demonstration of Real-time Object and Facial Recognition with Edge AI Platforms
Swapna Guramani, Applications Engineer for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Guramani demonstrates her company’s latest AI/ML capabilities in action: real-time object recognition using the SAMA7G54 32-bit MPU running Edge Impulse’s FOMO model, and facial recognition powered by TensorFlow Lite’s Mobile

3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset – like unlabeled forks and phantom objects – using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on a

VeriSilicon Demonstration of a Partner Application in the iEVCam
Halim Theny, VP of Product Engineering at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Theny demonstrates a customer’s SoC, featuring a collaboration with a premier camera vendor using an event-based sensor to detect motion, and processed by VeriSilicon’s NPU and vision DSP. This

Considerate Cars: Making Calls for Coffee and Keeping Drivers Alert
Coffee ready to collect as the car pulls over to charge could become the norm of the future, as software-defined vehicle technology and the presence of AI within vehicles advance. IDTechEx’s portfolios of Robotics & Autonomy and Semiconductors, Computing & AI research reports cover passenger safety and increased comfort, while the research on Electric

How Do Surround-view Cameras Improve Driving and Parking Safety?
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. As vehicles have become more complex, their need for accurate imaging has increased. This has driven the adoption of surround-view cameras. They give drivers a complete, real-time, 360-degree view of the vehicle, thereby improving situational awareness

Synopsys Demonstration of SiEngine’s AD1000 ADAS Chip, Powered by Synopsys NPX6 NPU IP
Gordon Cooper, Principal Product Manager at Synopsys, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Cooper demonstrates the powerful SiEngine AD1000 NPU, which features Synopsys NPX6 NPU IP, along with its robust toolchain including a debugger, profiler, and simulator. Learn how the platform supports TensorFlow, ONNX, and

Sony Semiconductor Demonstration of On-sensor YOLO Inference with the Sony IMX500 and Raspberry Pi
Amir Servi, Edge AI Product Manager at Sony Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Servi demonstrates the IMX500, the first vision sensor with integrated edge AI processing capabilities. Using the Raspberry Pi AI Camera and Ultralytics YOLOv11n models, Servi showcases real-time
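On-sensor inference like the IMX500 demo still leaves the host with standard YOLO-style post-processing: filtering low-confidence detections and suppressing duplicate boxes. The sketch below is a minimal, self-contained illustration of that step (confidence threshold plus IoU-based non-maximum suppression); the box data, thresholds, and class IDs are hypothetical and not taken from Sony’s pipeline.

```python
# Minimal post-processing for YOLO-style detections: confidence
# filtering plus IoU-based non-maximum suppression (NMS).
# Boxes are (x1, y1, x2, y2) in pixels; all data here is hypothetical.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(dets, conf_thresh=0.5, iou_thresh=0.45):
    """dets: list of (box, score, class_id). Returns kept detections,
    highest-scoring first, with same-class overlaps suppressed."""
    dets = [d for d in dets if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d[0], k[0]) <= iou_thresh for k in kept if k[2] == d[2]):
            kept.append(d)
    return kept

detections = [
    ((10, 10, 50, 50), 0.90, 0),     # strong detection, kept
    ((12, 12, 52, 52), 0.75, 0),     # overlapping duplicate, suppressed
    ((100, 100, 140, 140), 0.60, 1), # different class, kept
    ((0, 0, 5, 5), 0.30, 1),         # below confidence threshold
]
print(nms(detections))
```

In production this logic usually runs vectorized (or on-device, as in the IMX500 case), but the greedy score-sorted loop above is the same algorithm.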

Namuga Vision Connectivity Demonstration of Compact Solid-state LiDAR for Automotive and Robotics Applications
Min Lee, Business Development Team Leader at Namuga Vision Connectivity, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates a compact solid-state LiDAR solution tailored for automotive and robotics industries. This solid-state LiDAR features high precision, fast response time, and no moving parts—ideal for

How Driver Monitoring Cameras Improve Driving Safety and Their Key Features
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Driver monitoring cameras have become widely accepted as a force for improving road safety. They go a long way toward addressing the risks associated with driver inattention and fatigue by continuously observing driver behavior.

Namuga Vision Connectivity Demonstration of an AI-powered Total Camera System for an Automotive Bus Solution
Min Lee, Business Development Team Leader at Namuga Vision Connectivity, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates his company’s AI-powered total camera system. The system is designed for integration into public transportation, especially buses, enhancing safety and automation. It includes front-view, side-view,

Improving Synthetic Data Augmentation and Human Action Recognition with SynthDa
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Human action recognition is a key capability in AI systems designed for safety-critical applications, such as surveillance, eldercare, and industrial monitoring. However, many real-world datasets are limited by data imbalance, privacy constraints, or insufficient coverage of rare but

Teledyne FLIR Demonstration of an Advanced Thermal Imaging Camera Enabling Automotive Safety Improvements
Ethan Franz, Senior Software Engineer at Teledyne FLIR, demonstrates the company’s latest edge AI and vision technologies and products in Lattice Semiconductor’s booth at the 2025 Embedded Vision Summit. Specifically, Franz demonstrates a state-of-the-art thermal imaging camera for automotive safety applications, designed using Lattice FPGAs. This next-generation camera, also incorporating Teledyne FLIR’s advanced sensing technology,

Lattice Semiconductor Demonstration of Random Bin Picking Based on Structured-Light 3D Scanning
Mark Hoopes, Senior Director of Industrial and Automotive at Lattice Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Hoopes demonstrates how Lattice FPGAs increase performance, reduce latency and jitter, and reduce overall power for a random bin picking robot. The Lattice FPGA offloads the
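Structured-light 3D scanning of the kind used in bin-picking systems ultimately recovers depth by triangulation: a projected pattern is observed by a camera offset from the projector, and the per-pixel shift (disparity) maps to distance via Z = f · B / d. The sketch below illustrates only that core relationship; the focal length, baseline, and disparity values are hypothetical and do not describe Lattice’s implementation.

```python
# Triangulation depth from disparity, the core relation behind
# structured-light and stereo 3D scanning:
#   Z = f * B / d
# where f is focal length (pixels), B the camera-projector baseline
# (meters), and d the measured disparity (pixels).
# All numbers below are hypothetical, for illustration only.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return depth in meters for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

focal_px = 800.0   # assumed camera focal length in pixels
baseline_m = 0.1   # assumed 10 cm camera-projector baseline
for d in (160.0, 80.0, 40.0):
    z = depth_from_disparity(d, focal_px, baseline_m)
    print(f"disparity {d:>5.1f} px -> depth {z:.2f} m")
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why depth resolution degrades with distance and why bin-picking scanners favor short working ranges and wide baselines.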