Software for Embedded Vision

Low-power Computer Vision Challenge: Empowering AI Development on Edge Devices
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The Low-Power Computer Vision Challenge (LPCVC) is an annual competition organized by the Institute of Electrical and Electronics Engineers (IEEE) to improve the energy efficiency of computer vision technologies for systems with constrained resources. Established in 2015

“Beyond the Demo: Turning Computer Vision Prototypes into Scalable, Cost-effective Solutions,” a Presentation from Plainsight Technologies
Kit Merker, CEO of Plainsight Technologies, presents the “Beyond the Demo: Turning Computer Vision Prototypes into Scalable, Cost-effective Solutions” tutorial at the May 2025 Embedded Vision Summit. Many computer vision projects reach proof of concept but stall before production due to high costs, deployment challenges and infrastructure complexity. This presentation…

“Running Accelerated CNNs on Low-power Microcontrollers Using Arm Ethos-U55, TensorFlow and Numpy,” a Presentation from OpenMV
Kwabena Agyeman, President of OpenMV, presents the “Running Accelerated CNNs on Low-power Microcontrollers Using Arm Ethos-U55, TensorFlow and Numpy” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Agyeman introduces the OpenMV AE3 and OpenMV N6 low-power, high-performance embedded machine vision cameras, which are 200x better than the…
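The talk itself is best watched in full, but as a rough sketch of the workflow it covers, the example below runs a quantized CNN through the TFLite interpreter using NumPy tensors. The model path, input, and shapes are placeholder assumptions, not artifacts from the talk; on OpenMV-class hardware the equivalent inference would be dispatched to the Ethos-U55 NPU rather than the desktop interpreter used here.

```python
# Minimal sketch (not from the talk): running a quantized CNN with the
# TFLite interpreter and NumPy. "model.tflite" is a placeholder.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Quantized models expect int8/uint8 input; scale a float image accordingly.
scale, zero_point = inp["quantization"]
image = np.random.rand(*inp["shape"][1:]).astype(np.float32)  # stand-in frame
q_image = (image / scale + zero_point).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], q_image[np.newaxis, ...])
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("top class:", int(np.argmax(scores)))
```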

R²D²: Building AI-based 3D Robot Perception and Mapping with NVIDIA Research
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Robots must perceive and interpret their 3D environments to act safely and effectively. This is especially critical for tasks such as autonomous navigation, object manipulation, and teleoperation in unstructured or unfamiliar spaces. Advances in robotic perception increasingly

Unlocking the Power of Edge AI With Microchip Technology
This blog post was originally published at Microchip Technology’s website. It is reprinted here with the permission of Microchip Technology. From the factory floor to the operating room, edge AI is changing everything. Here’s how Microchip is helping developers bring real-time intelligence to the world’s most power-constrained devices. Not long ago, Artificial Intelligence (AI) lived

“A Re-imagination of Embedded Vision System Design,” a Presentation from Imagination Technologies
Dennis Laudick, Vice President of Product Management and Marketing at Imagination Technologies, presents the “A Re-imagination of Embedded Vision System Design” tutorial at the May 2025 Embedded Vision Summit. Embedded vision applications, with their demand for ever more processing power, have been driving up the size and complexity of edge…

Cluster Self-refinement for Enhanced Online Multi-camera People Tracking
This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI.

- Online multi-camera system for efficient individual tracking
- Accurate ID management with Cluster Self-Refinement (CSR)
- Improved performance with enhanced pose estimation

In this paper, we introduce our online MCPT methodology, which achieved third place in Track1
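CSR itself is Nota AI’s contribution and is detailed in the paper; purely as a generic illustration of the cross-camera ID-management problem it addresses, the sketch below greedily merges appearance embeddings into global identities by cosine similarity. The threshold and toy features are assumptions for the example, not values from the paper.

```python
# Generic sketch of cross-camera ID grouping (NOT Nota AI's CSR algorithm):
# detections from different cameras are merged into one global identity
# when their appearance embeddings are sufficiently similar.
import numpy as np

def assign_global_ids(embeddings: np.ndarray, threshold: float = 0.7) -> list[int]:
    """Greedily cluster L2-normalized embeddings by cosine similarity."""
    centroids: list[np.ndarray] = []   # running mean embedding per identity
    ids: list[int] = []
    for e in embeddings:
        e = e / np.linalg.norm(e)
        sims = [float(c @ e) for c in centroids]
        if sims and max(sims) >= threshold:
            k = int(np.argmax(sims))
            centroids[k] = centroids[k] + e   # drift toward new observation
            centroids[k] /= np.linalg.norm(centroids[k])
        else:
            k = len(centroids)
            centroids.append(e)
        ids.append(k)
    return ids

# Example: four detections across two cameras -> two global identities.
feats = np.array([[1.0, 0.0], [0.98, 0.1], [0.0, 1.0], [0.05, 0.99]])
print(assign_global_ids(feats))  # [0, 0, 1, 1]
```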

A World’s First On-glass GenAI Demonstration: Qualcomm’s Vision for the Future of Smart Glasses
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our live demo of a generative AI assistant running completely on smart glasses — without the aid of a phone or the cloud — and the reveal of the new Snapdragon AR1+ platform spark new possibilities for

“Evolving Inference Processor Software Stacks to Support LLMs,” a Presentation from Expedera
Ramteja Tadishetti, Principal Software Engineer at Expedera, presents the “Evolving Inference Processor Software Stacks to Support LLMs” tutorial at the May 2025 Embedded Vision Summit. As large language models (LLMs) and vision-language models (VLMs) have quickly become important for edge applications from smartphones to automobiles, chipmakers and IP providers have…

NVIDIA Holoscan Sensor Bridge Empowers Developers with Real-time Data Processing
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. In the rapidly evolving robotics and edge AI landscape, the ability to efficiently process and transfer sensor data is crucial. Many edge applications are moving away from single-sensor, fixed-function solutions in favor of diverse sensor arrays.

“How to Right-size and Future-proof a Container-first Edge AI Infrastructure,” a Presentation from Avassa and OnLogic
Carl Moberg, CTO of Avassa, and Zoie Rittling, Business Development Manager at OnLogic, co-present the “How to Right-size and Future-proof a Container-first Edge AI Infrastructure” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Moberg and Rittling provide practical guidance on overcoming key challenges in deploying AI at the…

AI On Board: Near Real-time Insights for Sustainable Fishing
This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Marine ecosystems are under pressure from unsustainable fishing, with some populations declining faster than they can recover. Illegal, unreported, and unregulated (IUU) fishing further contributes to the problem, threatening biodiversity, economies, and global seafood supply chains. While many

“Image Tokenization for Distributed Neural Cascades,” a Presentation from Google and VeriSilicon
Derek Chow, Software Engineer at Google, and Shang-Hung Lin, Vice President of NPU Technology at VeriSilicon, co-present the “Image Tokenization for Distributed Neural Cascades” tutorial at the May 2025 Embedded Vision Summit. Multimodal LLMs promise to bring exciting new abilities to devices! As we see foundational models become more capable,…
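The presenters’ specific cascade design is not reproduced here, but the core notion of image tokenization can be sketched generically: an image is split into fixed-size patches, each flattened into a token vector that a vision encoder or multimodal LLM can consume. The patch size and shapes below are illustrative assumptions, not values from the presentation.

```python
# Sketch of ViT-style image tokenization (a generic illustration, not the
# presenters' pipeline): split an (H, W, C) image into flattened patch tokens.
import numpy as np

def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Return (num_tokens, patch*patch*C) tokens from an (H, W, C) image."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "pad image to a patch multiple"
    tokens = (
        image.reshape(h // patch, patch, w // patch, patch, c)
             .transpose(0, 2, 1, 3, 4)          # group pixels by patch
             .reshape(-1, patch * patch * c)    # flatten each patch
    )
    return tokens

frame = np.zeros((224, 224, 3), dtype=np.float32)
print(patchify(frame).shape)  # (196, 768): 14*14 tokens of 16*16*3 values
```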

“Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP,” a Presentation from Synopsys
Gordon Cooper, Principal Product Manager at Synopsys, presents the “Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP” tutorial at the May 2025 Embedded Vision Summit. In this talk, Cooper discusses emerging trends in generative AI for edge devices and…

Upcoming Webinar Explores SLAM Optimization for Autonomous Robots
On July 10, 2025 at 8:00 am PT (11:00 am ET), Alliance Member company eInfochips will deliver the free webinar “GPU-Accelerated Real-Time SLAM Optimization for Autonomous Robots.” From the event page: Optimizing execution time for long-term and large-scale SLAM algorithms is essential for real-time deployments on edge compute platforms. Higher throughput of SLAM output provides

AI and Computer Vision Insights at CVPR 2025
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our papers, demos, workshops and tutorial continue our leadership in generative AI and learning systems. At Qualcomm AI Research, we are advancing AI to make its core capabilities — perception, reasoning and action — ubiquitous across devices.

“Bridging the Gap: Streamlining the Process of Deploying AI onto Processors,” a Presentation from SqueezeBits
Taesu Kim, Chief Technology Officer at SqueezeBits, presents the “Bridging the Gap: Streamlining the Process of Deploying AI onto Processors” tutorial at the May 2025 Embedded Vision Summit. Large language models (LLMs) often demand hand-coded conversion scripts for deployment on each distinct processor-specific software stack—a process that’s time-consuming and prone…
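For readers unfamiliar with what such a conversion script involves, here is a minimal, generic sketch (not SqueezeBits’ tooling) of a common first step: exporting a PyTorch module to ONNX, after which each processor-specific stack applies its own quantization and compilation passes. The toy model and file names are assumptions for illustration.

```python
# Generic first step of a per-target conversion script: PyTorch -> ONNX.
# Vendor stacks then quantize and compile the exported graph separately.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
dummy = torch.randn(1, 128)  # example input fixes the traced shapes

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # keep batch size flexible
)
```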

“From Enterprise to Makers: Driving Vision AI Innovation at the Extreme Edge,” a Presentation from Sony Semiconductor Solutions
Amir Servi, Edge Deep Learning Product Manager at Sony Semiconductor Solutions, presents the “From Enterprise to Makers: Driving Vision AI Innovation at the Extreme Edge” tutorial at the May 2025 Embedded Vision Summit. Sony’s unique integrated sensor-processor technology is enabling ultra-efficient intelligence directly at the image source, transforming vision AI…