Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

Faster Sensor Simulation for Robotics Training with Machine Learning Surrogates
This article was originally published at Analog Devices’ website. It is reprinted here with the permission of Analog Devices. Training robots in the physical world is slow, expensive, and difficult to scale. Roboticists developing AI policies
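To make the surrogate idea concrete, here is a minimal sketch of the general technique (an illustration under stated assumptions, not Analog Devices' implementation): a small regression model is fitted offline to samples from a slow sensor simulator and then substituted for it during policy training. The expensive_sensor_sim function, sample counts, and network size are all hypothetical.

```python
# Minimal sketch of a machine learning surrogate for a slow sensor simulation.
# Everything here (the stand-in simulator, sample counts, network size) is a
# hypothetical illustration of the general technique, not ADI's implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_sensor_sim(states: np.ndarray) -> np.ndarray:
    """Stand-in for a slow, physics-based sensor model (hypothetical)."""
    return np.sin(states) + 0.1 * states ** 2

# Offline: sample the slow simulator to build a training set.
rng = np.random.default_rng(0)
states = rng.uniform(-3.0, 3.0, size=(5000, 4))
readings = expensive_sensor_sim(states)

# Fit a small neural network as the fast surrogate.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(states, readings)

# Online: during robot policy training, query the surrogate instead of the simulator.
fast_readings = surrogate.predict(states[:10])
```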

Voyager SDK v1.5.3 is Live, and That Means Ultralytics YOLO26 Support
Voyager v1.5.3 dropped, and Ultralytics YOLO26 support is the big headline here. If you’ve been following Ultralytics’ releases, you’ll know Ultralytics YOLO26 is specifically engineered for edge devices like Axelera’s Metis hardware. Why Ultralytics YOLO26 matters for your
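As a quick orientation for readers new to the Ultralytics workflow, here is a hedged sketch of running a YOLO model through the standard Ultralytics Python API. The weight-file name "yolo26n.pt" is assumed from Ultralytics' usual naming convention, and compiling and deploying the network to Axelera Metis hardware with the Voyager SDK is a separate step not shown here.

```python
# Hedged sketch: load and run an Ultralytics YOLO model via the standard Python API.
# The weight name "yolo26n.pt" is assumed from Ultralytics' naming convention;
# deployment to Axelera Metis via the Voyager SDK is a separate, unshown step.
from ultralytics import YOLO

model = YOLO("yolo26n.pt")            # download/load pretrained weights (name assumed)
results = model("bus.jpg")            # run inference on a local sample image
for r in results:
    print(r.boxes.xyxy, r.boxes.cls)  # detected boxes (x1, y1, x2, y2) and class ids
```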

Upcoming Webinar on Challenges of Depth of Field (DoF) in Macro Imaging
On January 29, 2026, at 9:00 am PST (12:00 pm EST), Alliance Member company e-con Systems will deliver the webinar “Challenges of Depth of Field (DoF) in Macro Imaging.” From the event page: We’re excited

How Edge Computing In Retail Is Transforming the Shopping Experience
Forward-looking retailers are increasingly relying on an in-store combination of data collection through IoT devices with various types of sensors, AI for decisions and transactions on live data, and digital signage to communicate results and

Free Webinar Highlights Compelling Advantages of FPGAs
On March 17, 2026, at 9 am PT (noon ET), Efinix’s Mark Oliver, VP of Marketing and Business Development, will present the free one-hour webinar “Why your Next AI Accelerator Should Be an FPGA,” organized

Meet MIPS S8200: Real-Time, On-Device AI for the Physical World
This blog post was originally published at MIPS’s website. It is reprinted here with the permission of MIPS. Physical AI is the ability for machines to sense their environment, think locally, act safely, and communicate

The Next Platform Shift: Physical and Edge AI, Powered by Arm
This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. The Arm ecosystem is taking AI beyond the cloud and into the real world. As CES 2026 opens, a common thread quickly

Why DRAM Prices Keep Rising in the Age of AI
As hyperscale data centers rewrite the rules of the memory market, shortages could persist until 2027. Strong server DRAM demand for AI data centers is driving memory prices higher throughout the market, as customers scramble

STM32MP21x: It’s Never Been More Cost-effective or More Straightforward to Create Industrial Applications with Cameras
This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. ST is launching today the STM32MP21x product line, the most affordable STM32MP2, comprising a single-core Cortex-A35 running at 1.5
Technologies

Upcoming Webinar on CSI-2 over D-PHY & C-PHY
On February 24, 2026, at 9:00 am PST (12:00 pm EST), MIPI Alliance will deliver the webinar “MIPI CSI-2 over D-PHY & C-PHY: Advancing Imaging Conduit Solutions.” From the event page: MIPI CSI-2®, together with the MIPI D-PHY™ and C-PHY™ physical layers, forms the foundation of image sensor solutions across a wide range of markets, including

What’s New in MIPI Security: MIPI CCISE and Security for Debug
This blog post was originally published at MIPI Alliance’s website. It is reprinted here with the permission of MIPI Alliance. As the need for security becomes increasingly critical, MIPI Alliance has continued to broaden its portfolio of standardized solutions, adding two more specifications in late 2025 and continuing work on significant updates to the MIPI Camera

Production-Ready, Full-Stack Edge AI Solutions Turn Microchip’s MCUs and MPUs Into Catalysts for Intelligent Real-Time Decision-Making
Chandler, Ariz., February 10, 2026 — A major next step for artificial intelligence (AI) and machine learning (ML) innovation is moving ML models from the cloud to the edge for real-time inferencing and decision-making applications in today’s industrial, automotive, data center and consumer Internet of Things (IoT) networks. Microchip Technology (Nasdaq: MCHP) has extended its edge AI offering
Applications

What Happens When the Inspection AI Fails: Learning from Production Line Mistakes
This blog post was originally published at Lincode’s website. It is reprinted here with the permission of Lincode. Studies show that about 34% of manufacturing defects are missed because inspection systems make mistakes.[1] These numbers show a big problem—when the inspection AI misses something, even a tiny defect can spread across hundreds or thousands of products. One

Accelerating next-generation automotive designs with the TDA5 Virtualizer™ Development Kit
This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments. Introduction Continuous innovation in high-performance, power-efficient systems-on-a-chip (SoCs) is enabling safer, smarter and more autonomous driving experiences in even more vehicles. As another big step forward, Texas Instruments and Synopsys developed a Virtualizer Development Kit™ (VDK) for the

Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse. New NVIDIA safety
Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the
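As a rough illustration of the integration pattern described above (not NVIDIA's pipeline), the sketch below samples frames from a video and captions each one with a small open image-captioning model standing in for a vision language model; the resulting timestamped text can then be indexed for AI-powered search. The video path and sampling interval are hypothetical.

```python
# Hedged sketch: turning video into searchable text with a vision-language model.
# A small open image-captioning model (BLIP, via Hugging Face transformers) stands
# in for the VLM; the video path and sampling interval are illustrative assumptions.
import cv2
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def caption_video(path: str, every_n_frames: int = 30) -> list[tuple[float, str]]:
    """Sample frames from a video and caption each, returning (timestamp_s, caption)."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    captions, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            text = captioner(image)[0]["generated_text"]
            captions.append((frame_idx / fps, text))
        frame_idx += 1
    cap.release()
    return captions

# The (timestamp, caption) pairs can be indexed for AI-powered search, e.g. matching
# a user query against captions to jump to the relevant moment in the footage.
index = caption_video("warehouse_clip.mp4")  # path is a hypothetical example
```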

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
