Resources
In-depth information about edge AI and vision applications, technologies, products, markets and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

When DRAM Becomes the Bottleneck (Again): What the 2026 Memory Squeeze Means for Edge AI
A funny thing is happening in the edge AI world: some of the most important product decisions you’ll make this year won’t be about TOPS, sensor resolution, or which transformer variant to deploy. They’ll be

Upcoming Webinar on Advances in Automatic Test Pattern Generation
On January 14, 2026, at 7:00 am PST (10:00 am EST), Alliance Member company Synopsys will deliver a webinar, “Advances in ATPG: From Power and Timing Awareness to Intelligent Pattern Search with AI.” From the

Empowering Professionals and Aspiring Creators, Snapdragon X2 Plus Delivers Multi-day Battery Life, Fast Performance and Advanced AI
Key Takeaways: Snapdragon® X2 Plus transforms every click and every moment for modern professionals, aspiring creators and everyday users, delivering speed, multi-day battery life and built-in AI features. Representing a bold leap forward, the newest entrant in the Snapdragon X Series platform broadens access to the advanced performance and premium experiences

What is a Red Light Camera? A Quick Guide to Vision-Based Traffic Violation Detection
This article was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Intersections remain among the most accident-prone areas in traffic networks, with violations like red-light running leading to

MemryX Unveils MX4 Roadmap: Enabling Distributed, Asynchronous Dataflow for Highly Efficient Data Center AI
ANN ARBOR, Mich., Dec. 26, 2025 (PRNewswire) — MemryX Inc., a company delivering production AI inference acceleration, today announced its strategic roadmap for the MX4. The next-generation accelerator is engineered to scale the company’s “at-memory” dataflow architecture from

Cadence Launches Partner Ecosystem to Accelerate Chiplet Time to Market
Strategic collaborations with Samsung Foundry, Arm and others enable Cadence to deliver pre-validated chiplet solutions based on the Cadence Physical AI chiplet platform SAN JOSE, Calif., January 6, 2026 — Cadence (Nasdaq: CDNS) today announced

Commonlands Demonstration of Field of View & Distortion Visualization for M12 Lenses & S-Mount Lenses
Max Henkart, Founder and Optical Engineer at Commonlands, demonstrates the company’s latest products at the December 2025 Edge AI and Vision Alliance Forum. Specifically, Henkart demonstrates Commonlands’ new real-time tool for visualizing field of view

Intel Core Ultra Series 3 Debuts as First Built on Intel 18A
Intel ushers in the next generation of AI PCs with exceptional performance, graphics and battery life; available this month Key Takeaways: First platform built on Intel 18A: At CES 2026, Intel launched the Intel® Core™ Ultra

AMD Introduces Ryzen AI Embedded Processor Portfolio, Powering AI-Driven Immersive Experiences in Automotive, Industrial and Physical AI
Key Takeaways: New AMD Ryzen™ AI Embedded P100 and X100 Series processors combine high-performance “Zen 5” CPU cores, an AMD RDNA™ 3.5 GPU and an AMD XDNA™ 2 NPU for low-power AI acceleration Delivers energy-efficient,
Technologies

Why DRAM prices keep rising in the age of AI
As hyperscale data centers rewrite the rules of the memory market, shortages could persist until 2027. Strong server DRAM demand for AI data centers is driving memory prices higher throughout the market, as customers scramble to secure supply for their production needs amid fears of future shortages. The DRAM market is in an AI-driven upcycle,

STM32MP21x: It’s never been more cost-effective or more straightforward to create industrial applications with cameras
This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. ST is launching today the STM32MP21x product line, the most affordable STM32MP2, comprising a single-core Cortex-A35 running at 1.5 GHz and a Cortex-M33 at 300 MHz. It thus completes the STM32MP2 series announced in 2023, which became our first 64-bit MPUs. After the

Upcoming Webinar on Last Mile Logistics
On January 28, 2026, at 11:00 am PST (2:00 pm EST), Alliance Member company STMicroelectronics will deliver a webinar, “Transforming last mile logistics with STMicroelectronics and Point One.” From the event page: Precision navigation is rapidly becoming the standard for last mile delivery vehicles of all types. But what does it truly take to keep
Applications

NAMUGA Successfully Concludes CES Participation, Official Launch of Next-Generation 3D LiDAR Sensor ‘Stella-2’
Las Vegas, NV, Jan 15 — NAMUGA announced that it successfully concluded the unveiling of its new product, Stella-2, at CES 2026, the world’s largest IT and consumer electronics exhibition, held in Las Vegas, USA, from January 6 to 9. The newly unveiled product, Stella-2, is a solid-state LiDAR jointly developed by NAMUGA and Lumotive. In
Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the abilities to explain the

SAM3: A New Era for Open-Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
