Resources
In-depth information about edge AI and vision applications, technologies, products, markets and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

Upcoming Webinar on LLM-driven Driver Development
On March 19, 2026, at 1:00 pm EDT (10:00 am PDT), Boston.AI will deliver a webinar, “Intelligent Driver Development with LLM Context Engineering.” From the event page: Developing even simple sensor drivers can consume
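
As a rough illustration of the context-engineering idea behind the webinar (not Boston.AI's tooling), the sketch below packs a datasheet excerpt and register map into an LLM prompt and asks for a driver skeleton; the model name, prompt wording, and register values are assumptions.

```python
# Illustrative only: a minimal sketch of LLM "context engineering" for driver
# scaffolding, NOT Boston.AI's tooling. The model name, register values, and
# prompt structure are assumptions for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_driver(datasheet_excerpt: str, register_map: str, bus: str = "I2C") -> str:
    """Ask an LLM for a driver skeleton, grounding it in datasheet context."""
    context = (
        f"Target bus: {bus}\n"
        f"Register map:\n{register_map}\n"
        f"Datasheet excerpt:\n{datasheet_excerpt}\n"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-completion model works here
        messages=[
            {"role": "system",
             "content": "You write minimal, well-commented C sensor drivers. "
                        "Use only registers present in the provided context."},
            {"role": "user",
             "content": context + "\nGenerate init() and read_sample() functions."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_driver("The WHO_AM_I register (0x0F) returns 0x6A after power-up.",
                       "0x0F WHO_AM_I (RO)\n0x10 CTRL1 (RW)\n0x28 OUT_L (RO)"))
```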

From ADAS to Robotaxi: How to Overcome the Major Vision Challenges
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key Takeaways: Why does robotaxi vision need more than task-driven ADAS sensing? Impact of long-duty

10xEngineers and Andes Enable High-Performance AI Compilation for RISC-V AX46MPV Cores
Hsinchu, Taiwan – February 26, 2026 – The collaboration between 10xEngineers, a services company specializing in AI compilers, and Andes Technology Corporation, a leading provider of high-performance, low-power 32- and 64-bit RISC-V processor IP and a Founding

Edge AI and Vision on Renesas RA8P1 MCU
Take a look at the Renesas flagship MCU, the RA8P1, featuring a 1 GHz Arm Cortex-M85 and an Ethos-U55 NPU. Here we showcase a facial detection demo using YOLO and a Wheat Disease Detection demo, in partnership
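
For readers who want to prototype the detection side before targeting the NPU, here is a minimal, hedged sketch using tflite-runtime on a desktop; the model file name is an assumption, and on the RA8P1 the model would be compiled for the Ethos-U55 (for example with Arm's Vela tool) and executed by the device-side runtime.

```python
# Illustrative only: host-side validation of a quantized face-detection model with
# tflite-runtime. The model path and tensor layout are assumptions; deployment to
# the Ethos-U55 goes through an offline compile step (e.g. Arm's Vela tool) and an
# embedded runtime rather than this desktop interpreter.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="yolo_face_int8.tflite")  # assumed file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy frame sized to the model input; replace with a real, preprocessed image.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
detections = interpreter.get_tensor(out["index"])
print("raw detection tensor shape:", detections.shape)
```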

Chips&Media Accelerates WAVE-N Ecosystem: Redefining the Future of Next-Generation Customized NPUs
February 24, 2026, Seoul — As a premier multimedia IP provider, Chips&Media is proud to announce the strategic expansion of the WAVE-N ecosystem – our next-generation customized NPU architecture. Key Objectives: Strategic Partnerships: Cultivating alliances with leading AI-based imaging network

CES 2026: Physical AI moves from concept to system architecture
This market analysis was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. The world’s largest consumer electronics conference demonstrated the technical synergies between automotive and

The Forest Listener: Where edge AI meets the wild
This blog post was originally published at Micron’s website. It is reprinted here with the permission of Micron. Let’s first discuss the power of enabling. Enabling a wide electronic ecosystem is essential for fostering innovation, scalability

HCLTech unveils VisionX 2.0, a next-gen multi-modal AI Edge Platform with NVIDIA
Noida, India, February 20, 2026 — HCLTech, a leading global technology company, today unveiled VisionX 2.0, an upgraded version of its multi-modal AI edge platform. This platform delivers real-time intelligence, enhanced safety and operational efficiency at

How Lenovo is scaling Level 4 autonomous robotaxis on Arm
This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. As L4 robotaxis shift from pilot to production, Arm offers the compute foundation needed to deliver end-to-end physical AI
Technologies

TI Accelerates the Next Generation of Physical AI with NVIDIA
News highlights: TI and NVIDIA are collaborating to accelerate the path from simulation to the safe deployment of humanoid robots in the real world. As part of this collaboration, TI integrated its mmWave radar technology with NVIDIA Jetson Thor and NVIDIA Holoscan to enable low-latency 3D perception and safety awareness for physical AI applications. TI
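
To make the pipeline idea concrete, below is a minimal sketch of a Holoscan-style application wiring a radar point-cloud source to a safety check. It assumes the Holoscan SDK Python bindings and is not TI's or NVIDIA's reference integration; the operator names, distance threshold, and synthetic data are assumptions.

```python
# Illustrative only: the shape of a Holoscan pipeline for radar-derived point clouds,
# not TI's or NVIDIA's reference integration. Operator names, the 1 m threshold,
# and the synthetic data are assumptions for the example.
import numpy as np
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec

class RadarSourceOp(Operator):
    """Emits a synthetic (N, 3) point cloud; a real app would read the mmWave sensor."""
    def setup(self, spec: OperatorSpec):
        spec.output("points")
    def compute(self, op_input, op_output, context):
        op_output.emit(np.random.uniform(-5.0, 5.0, size=(128, 3)), "points")

class SafetyZoneOp(Operator):
    """Flags any return closer than 1 m to the sensor origin."""
    def setup(self, spec: OperatorSpec):
        spec.input("points")
    def compute(self, op_input, op_output, context):
        pts = op_input.receive("points")
        too_close = np.linalg.norm(pts, axis=1) < 1.0
        if too_close.any():
            print(f"safety stop: {int(too_close.sum())} returns inside 1 m")

class RadarSafetyApp(Application):
    def compose(self):
        source = RadarSourceOp(self, CountCondition(self, 10), name="radar_source")
        safety = SafetyZoneOp(self, name="safety_zone")
        self.add_flow(source, safety, {("points", "points")})

if __name__ == "__main__":
    RadarSafetyApp().run()
```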

ModelCat AI Announces AI Model Portability Across Silicon Devices
An industry first, ModelCat’s Agentic AI generates models for new chips using a user’s current production models, dramatically accelerating the migration of inferencing to the edge. SUNNYVALE, Calif., March 5, 2026 /PRNewswire/ — ModelCat, the creator of the world’s first fully autonomous AI model builder, today announced its latest innovative platform capability: Model Retargeting (Patent Pending). Using Model Retargeting, ModelCat customers gain model

STM32U3B5/U3C5: Bringing High-Performance DSP & Edge AI to Ultralow Power Designs
Built on the Arm® Cortex®‑M33 core, the STM32U3B5/U3C5 MCUs combine up to 2 Mbytes of dual‑bank flash memory with 640 Kbytes of RAM and are available in packages from 48 to 144 pins (UFQFPN, WLCSP, LQFP, and UFBGA). The lines introduce a hardware signal processor (HSP) to the STM32U3 portfolio, offloading complex DSP and edge‑AI workloads and
Applications

From ADAS to Robotaxi: How Vision Systems Must Level Up to Meet New Mobility Use Cases (Part 2)
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key Takeaways: How urban lighting and motion define robotaxi imaging needs; which camera features support reliable perception during day and night operation; why unified AI vision boxes reduce latency and coordination gaps; how integrated vision platforms

Which Service Robots Will Dominate the Market in the Next 10 Years?
Logistics robots and cleaning robots both benefit from high market demand and relatively low technical barriers, compared to kitchen and restaurant robots or underwater robots. Source: Service Robots 2026-2036: Technologies, Players and Markets. This blog post was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx. The service robotics industry has grown
Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the
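
As a concrete example of the integration pattern described (not NVIDIA's video analytics stack), the hedged sketch below sends one extracted video frame to a vision language model through an OpenAI-compatible chat endpoint and asks a question about it; the model name and frame path are assumptions.

```python
# Illustrative only: querying a vision language model about a single video frame via
# an OpenAI-compatible chat endpoint. A generic sketch of the pattern the post
# describes, not NVIDIA's stack; the model name and frame path are assumptions.
import base64
from openai import OpenAI

client = OpenAI()

def describe_frame(jpeg_path: str, question: str) -> str:
    with open(jpeg_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any VLM behind a chat API works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(describe_frame("frame_0001.jpg", "Is anyone inside the marked safety zone?"))
```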

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual
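
SAM3's public API is not reproduced here. As a stand-in for the text-prompting step it unifies with segmentation and tracking, the hedged sketch below runs open-vocabulary, text-prompted detection with OWL-ViT through the Hugging Face pipeline; the image path and candidate labels are assumptions.

```python
# Illustrative only: text-prompted, open-vocabulary detection with OWL-ViT via the
# Hugging Face pipeline, as a stand-in for the text-prompting step SAM3 unifies with
# segmentation and tracking. SAM3's own API is not shown; the image path and labels
# are assumptions.
from transformers import pipeline
from PIL import Image

detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

image = Image.open("street_scene.jpg")
results = detector(image, candidate_labels=["traffic cone", "delivery robot", "stroller"])

for det in results:
    print(f'{det["label"]}: score={det["score"]:.2f}, box={det["box"]}')
```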

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
