Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources
Accelerating Product Development in the Era of Physical AI
This video was originally published at Peridio’s website. It is reprinted here with the permission of Peridio. The embedded world is undergoing its biggest transformation in a generation. AI workloads are now moving into the…

Airy3D and Lattice to Showcase Compact, Integrated Humanoid and Robotic 3D Vision Demo at Embedded World 2026
Montreal, Canada — March 4, 2026 — Airy3D today announced a joint demonstration with Lattice Semiconductor highlighting a compact and compute-efficient 3D vision solution for humanoids and advanced robotics, which will be on display at…

Multi-Sensor IoT architecture: inside the stack and how to scale it
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. What Is a Multi-Sensor Stack, Really? At its core, a multi-sensor stack is a layered system where…

Which Service Robots Will Dominate the Market in the Next 10 Years?
Logistics robots and cleaning robots both benefit from high market demand and relatively low technical barriers, compared to kitchen and restaurant robots or underwater robots. Source: Service Robots 2026-2036: Technologies, Players and Markets. This blog post…

Why On-device AI Matters
This blog post was originally published at ENERZAi’s website. It is reprinted here with the permission of ENERZAi. Hello! I’m Minwoo Son from ENERZAi’s Business Development team. Through several posts so far, we’ve shared ENERZAi’s…

Upcoming Webinar on LLM-driven Driver Development
On March 19, 2026, at 1:00 pm EDT (10:00 am PDT), Boston.AI will deliver a webinar, “Intelligent Driver Development with LLM Context Engineering.” From the event page: Developing even simple sensor drivers can consume…

From ADAS to Robotaxi: How to Overcome the Major Vision Challenges
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key Takeaways: Why does robotaxi vision need more than task-driven ADAS sensing? Impact of long-duty…

10xEngineers and Andes Enable High-Performance AI Compilation for RISC-V AX46MPV Cores
Hsinchu, Taiwan – February 26, 2026 – The collaboration between 10xEngineers, a services company specializing in AI compilers, and Andes Technology Corporation, a leading provider of high-performance, low-power 32- and 64-bit RISC-V processor IP and a Founding…

Edge AI and Vision on Renesas RA8P1 MCU
Take a look at the Renesas flagship MCU, the RA8P1, featuring a 1 GHz Arm Cortex-M85 and an Ethos-U55 NPU. Here we showcase a facial detection demo using YOLO and a Wheat Disease Detection demo, in partnership…

Upcoming Webinar on Agentic Memory Systems
On April 16, 2026, at 1:00 pm EDT (10:00 am PDT), Boston.AI will deliver a webinar, “Remembering to Forget: Agentic Memory Systems and Context Constraints.” From the event page: As AI agents evolve from stateless responders into persistent, goal-directed systems, memory has become a central design challenge. The question is no longer just what agents…

From hardware to intelligence: The Qualcomm AI camera platform for scalable security solutions
At ISC West 2026, Qualcomm Technologies showcases its vision for the future of smart cameras and security. Key Takeaways: End-to-end AI development tools and the Qualcomm Insight Platform enable customers to develop AI features once and scale across their entire product portfolio. Qualcomm Technologies’ camera solutions span video security, law enforcement, and enterprise body…

Lightweight Keyword Spotting Solution from Microchip
Microchip presents a customizable, target-agnostic solution to program wake words and voice commands. The ML model, generated and tested using a custom application, has low latency and a minimal memory footprint, making it ideal for resource-constrained embedded systems. The ML model can be integrated into voice-based applications running on any 32-bit microcontroller or microprocessor running…

2026: The Year Intelligence Gets Physical
This article was originally published at Analog Devices’ website. It is reprinted here with the permission of Analog Devices. Artificial intelligence is entering a new phase where models interpret contextual data whilst interacting with the physical world in real time. At Analog Devices, Inc. (ADI), we call this Physical Intelligence: intelligent systems that can perceive, reason…

From Warehouse to Wallet: New State of AI in Retail and CPG Survey Uncovers How AI Is Rewiring Supply Chains and Customer Experiences
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The third annual NVIDIA State of AI in Retail and CPG survey shows why nine in 10 retailers will increase AI budgets in 2026, focusing on open-source models and software, as well as agentic and physical AI. Highlights…

STMicroelectronics and Leopard Imaging Accelerate Robotics Vision with NVIDIA Jetson-ready Multi-sensor Module
Key Takeaways: a multimodal module combining 2D imaging, 3D depth sensing, and human-like motion perception; NVIDIA Holoscan Sensor Bridge ensuring multi-gigabit plug-and-play connectivity with Jetson platforms; full support by the NVIDIA Isaac open robot development platform. STMicroelectronics and Leopard Imaging® have introduced an all-in-one multimodal vision module for humanoid and other advanced robotics systems. Combining…

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the…

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual…

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications…
