Resources
In-depth information about edge AI and vision applications, technologies, products, markets and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

Bosch and Qualcomm Expand Collaboration to Strategic ADAS Solutions
Highlights: High performance solutions: Bosch and Qualcomm aim to make ADAS solutions for enhanced safety and comfort available to everyone. Continued Business Momentum: Collaboration has secured significant new business wins for both next-generation ADAS and cockpit solutions. Proven Global Success: Bosch delivers over 10 million cockpit computers powered by Snapdragon® Cockpit Platforms. …

Key Trends Shaping the Semiconductor Industry in 2026
This blog post was originally published at HTEC’s website. It is reprinted here with the permission of HTEC. The hardware boom is slowing down. What comes next is a software, power, and inference problem …

Texas Instruments, D3 Embedded, Lattice and NVIDIA Show a Practical Radar-Camera Fusion Stack for Robotics
TI’s new application brief and companion demo outline how mmWave radar, camera input, FPGA-based sensor bridging and NVIDIA Holoscan can be combined into a low-latency perception pipeline for humanoids and other autonomous machines. …

Upcoming Webinar on Akida Radar Reference Platform
On April 20, 2026, at 8:00 pm PDT (11:00 pm EDT), BrainChip will deliver the webinar “Akida Radar Reference Platform: See the Evolution of Radar Intelligence with AI-Powered Object Classification”. From the event page: Join …

From Connected to Aware: How PSOC™ Edge Enables the Next Wave of Smart Devices
This blog post was originally published at Infineon’s website. It is reprinted here with the permission of Infineon. Across home, retail, and industry, devices that once followed simple rules are now expected to understand people …

Building Robotics Applications with Ryzen AI and ROS 2
This blog post was originally published at AMD’s website. It is reprinted here with the permission of AMD. This blog showcases how to deploy power-efficient Ryzen AI perception models with ROS 2 – the Robot Operating System …

BrainChip Unveils Radar Reference Platform to Bridge the ‘Identification Gap’ in Edge AI
LAGUNA HILLS, Calif. — April 6, 2026 — BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, BCHPY), the world’s first commercial producer of ultra-low-power, neuromorphic AI technology, today announced the launch of its Radar Reference Platform. This fully validated hardware and AI …

e-con Systems Launches STURDeCAM57: A 5MP Global Shutter RGB-IR Camera for In-cabin Monitoring Systems
California & Chennai (March 31, 2026): e-con Systems®, a global leader in embedded vision solutions, launches STURDeCAM57, a 5MP global shutter RGB-IR GMSL2 camera designed to deliver reliable, context-rich vision from day to night …
Technologies

Cadence and NVIDIA Expand Partnership to Reinvent Engineering for the Age of AI and Accelerated Computing
15 Apr 2026 — Expanded collaboration combines agentic AI, physics-based simulation, and digital twins to accelerate engineering and unlock new levels of productivity across semiconductors, physical AI systems and AI factories. SAN JOSE, Calif. — At CadenceLIVE Silicon Valley 2026, Cadence (Nasdaq: CDNS) announced an expanded partnership with NVIDIA to deliver accelerated solutions across agentic AI, physics-based simulation …

Intel Launches Intel Core Series 3 Processors: Changing the Game for Everyday Computing
Intel® Core™ Series 3 brings advanced features and Intel’s latest architectures to value buyers, commercial and essential edge devices. What’s New: Intel® today unveiled its new Intel Core™ Series 3 mobile processors, bringing advanced performance, exceptional battery life, and AI-ready capabilities to value buyers, commercial and essential edge devices. Purpose-engineered for value, Intel Core Series 3 …

Upcoming Webinar on Sony’s IMX925/935 Sensor Series and High Performance SLVS-EC Interface
On May 12, 2026, at 10:00 am CEST, RESTAR FRAMOS will deliver the webinar “Reaching High-Speed and High-Resolution Architecture with IMX925/935 and SLVS-EC”. From the event page: From sensor architecture to real-world integration — join the engineers behind the technology. High-speed and high-resolution machine vision systems are pushing the limits of data throughput, latency, and …
Applications

MPEG-5 LCEVC: A practical shift for industrial AI video pipelines
This blog post was originally published at V-Nova’s website. It is reprinted here with the permission of V-Nova. In industrial and defense environments, I hear the same story. More cameras. Higher resolutions. Stricter latency targets. Infrastructure that cannot be replaced easily. And increasing pressure around storage, bandwidth, compute, and privacy. This is why MPEG-5 LCEVC is becoming even more relevant. It improves compression …

“Non-Contact Vital Sign Monitoring Using Low-Cost WiFi Devices,” a Presentation from the University of California, Santa Cruz
Katia Obraczka, Professor of Computer Science and Engineering, University of California, Santa Cruz, Pranay Kocheta, Lexington High School, and Nayan Bhatia, University of California, Santa Cruz present the “Non-Contact Vital Sign Monitoring Using Low-Cost WiFi Devices” tutorial at the December 2025 Edge AI and Vision Innovation Forum.

Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain …

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual prompts …

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications …
