Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

Edge AI and Vision on Renesas RA8P1 MCU
Take a look at the Renesas flagship MCU, the RA8P1, featuring a 1 GHz Arm Cortex-M85 and an Ethos-U55 NPU. Here we showcase a facial detection demo using YOLO and a wheat disease detection demo, in partnership

Chips&Media Accelerates WAVE-N Ecosystem: Redefining the Future of Next-Generation Customized NPUs
February 24, 2026, Seoul — As a premier multimedia IP provider, Chips&Media is proud to announce the strategic expansion of the WAVE-N ecosystem, our next-generation customized NPU architecture. Key objectives include strategic partnerships: cultivating alliances with leading AI-based imaging network

CES 2026: Physical AI moves from concept to system architecture
This market analysis was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. The world’s largest consumer electronics conference demonstrated the technical synergies between automotive and

The Forest Listener: Where edge AI meets the wild
This blog post was originally published at Micron’s website. It is reprinted here with the permission of Micron. Let’s first discuss the power of enabling. Enabling a wide electronic ecosystem is essential for fostering innovation, scalability

How Lenovo is scaling Level 4 autonomous robotaxis on Arm
This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. As L4 robotaxis shift from pilot to production, Arm offers the compute foundation needed to deliver end-to-end physical AI

What Does a GPU Have to Do With Automotive Security?
This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies. The automotive industry is undergoing the most significant transformation since the advent of electronics in

Ambarella to Showcase “The Ambarella Edge: From Agentic to Physical AI” at Embedded World 2026
Enabling developers to build, integrate, and deploy edge AI solutions at scale. SANTA CLARA, Calif. — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced that it will exhibit at Embedded World 2026,

Vision Components unveils all-in-one VC EvoCam with MediaTek processor
Ettlingen, February 18, 2026 — Vision Components is presenting the VCSBC EvoCam, a new generation of all-in-one intelligent board-level cameras featuring the MediaTek Genio 510 processor, for the first time at embedded world. Measuring tiny

Pushing the Limits of HDR with Ubicept
This blog post was originally published at Ubicept’s website. It is reprinted here with the permission of Ubicept. Executive summary Ubicept’s SPAD-based system offers consistent HDR performance in nighttime driving conditions, preserving shadow and highlight
Technologies

From ADAS to Robotaxi: How Vision Systems Must Level Up to Meet New Mobility Use Cases (Part 2)
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key takeaways: how urban lighting and motion define robotaxi imaging needs; which camera features support reliable perception during day and night operation; why unified AI vision boxes reduce latency and coordination gaps; how integrated vision platforms

Accelerating Product Development in the Era of Physical AI
This video was originally published at Peridio’s website. It is reprinted here with the permission of Peridio. The embedded world is undergoing its biggest transformation in a generation. AI workloads are now moving into the physical world — into cameras, robots, tractors, and drones — and edge devices are evolving into intelligent agents. Yet the

Airy3D and Lattice to Showcase Compact, Integrated Humanoid and Robotic 3D Vision Demo at Embedded World 2026
Montreal, Canada — March 4, 2026 — Airy3D today announced a joint demonstration with Lattice Semiconductor highlighting a compact and compute-efficient 3D vision solution for humanoids and advanced robotics, which will be on display at Embedded World 2026. The demo combines Airy3D’s DepthIQ™ technology with a compact, low-power Lattice CrossLink™-NX FPGA to enable high-quality depth
Applications

From ADAS to Robotaxi: How Vision Systems Must Level Up to Meet New Mobility Use Cases (Part 2)
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key takeaways: how urban lighting and motion define robotaxi imaging needs; which camera features support reliable perception during day and night operation; why unified AI vision boxes reduce latency and coordination gaps; how integrated vision platforms

Which Service Robots Will Dominate the Market in the Next 10 Years?
Logistics robots and cleaning robots both benefit from high market demand and relatively low technical barriers, compared to kitchen and restaurant robots or underwater robots. Source: Service Robots 2026-2036: Technologies, Players and Markets. This blog post was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx. The service robotics industry has grown

From ADAS to Robotaxi: How to Overcome the Major Vision Challenges
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key takeaways: why robotaxi vision needs more than task-driven ADAS sensing; the impact of long-duty operation and changing lighting on perception reliability; challenges faced across vehicles, cities, and operating conditions; how visual data continuity affects
Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
