Resources
In-depth information about edge AI and vision applications, technologies, products, markets and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

Intel Launches Core Series 2 Processor with Real-Time Performance and Expands Edge AI Portfolio
New industrial-ready platform delivers breakthrough deterministic performance; sixth Edge AI suite targets healthcare applications NUREMBERG, Germany — March 9, 2026 — At Embedded World 2026, Intel launched the Intel® Core™ processor Series 2 with P-cores, an industrial-ready platform engineered

Synaptics Introduces SYN765x, an Industry-Leading AI-Native Wi-Fi® 7 Solution for Integrated IoT Edge Applications
SAN JOSE, Calif., Mar 10, 2026 — Synaptics Incorporated (Nasdaq: SYNA) today announced the SYN765x, an AI-native wireless solution that redefines Edge intelligence. As an industry-leading single-chip device combining AI-optimized compute with integrated Wi-Fi® 7, the SYN765x

RTOS vs. Bare-Metal: Decision Matrix Tool for Projects Based on High-End Microcontrollers
This blog post was originally published at eInfochips’ website. It is reprinted here with the permission of eInfochips. Introduction When building a system with a powerful microcontroller (MCU) or microprocessor, such as an ARM Cortex-M4,
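The decision-matrix approach the post describes can be illustrated with a simple weighted-score comparison. The criteria, weights, and scores below are illustrative assumptions only, not eInfochips' actual tool or recommendations:

```python
# Hedged sketch: weighted decision matrix comparing bare-metal vs. RTOS.
# Criteria, weights, and 1-5 scores are illustrative assumptions only.

CRITERIA = {  # name: (weight, (bare_metal_score, rtos_score))
    "task_concurrency":   (0.30, (2, 5)),
    "timing_determinism": (0.25, (5, 4)),
    "memory_footprint":   (0.20, (5, 3)),
    "code_complexity":    (0.15, (4, 3)),
    "team_experience":    (0.10, (4, 4)),
}

def weighted_total(option_index: int) -> float:
    """Sum weight * score for one option (0 = bare-metal, 1 = RTOS)."""
    return sum(w * scores[option_index] for w, scores in CRITERIA.values())

bare_metal, rtos = weighted_total(0), weighted_total(1)
print(f"bare-metal: {bare_metal:.2f}, RTOS: {rtos:.2f}")
print("recommendation:", "RTOS" if rtos > bare_metal else "bare-metal")
```

In practice the value of such a matrix is less the final number than the discipline of scoring each criterion explicitly for the project at hand.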

NXP’s New i.MX 93W Fuses Edge Compute and Secure Wireless Connectivity to Accelerate Physical AI
Key Takeaways: First applications processor to combine an AI NPU with secure, tri-radio connectivity, replacing up to 60 discrete components with a single package Accelerates coordinated AI agent deployment with integrated edge compute and secure

Conversations at the Edge with NXP
This blog post was originally published at Au-Zone’s website. It is reprinted here with the permission of Au-Zone. Are Single-Sensor Robots Obsolete? We think so, and we’re here to show you why. Au-Zone is proud

TI Accelerates the Next Generation of Physical AI with NVIDIA
News highlights: TI and NVIDIA are collaborating to accelerate the path from simulation to the safe deployment of humanoid robots in the real world. As part of this collaboration, TI integrated its mmWave radar technology

ModelCat AI Announces AI Model Portability Across Silicon Devices
An industry first, ModelCat’s Agentic AI generates models for new chips using a user’s current production models, dramatically accelerating inferencing to the edge. SUNNYVALE, Calif., March 5, 2026 /PRNewswire/ — ModelCat, the creator of the world’s first fully autonomous

STM32U3B5/U3C5: Bringing High-Performance DSP & Edge AI to Ultralow Power Designs
Built on the Arm® Cortex®‑M33 core, the STM32U3B5/U3C5 MCUs combine up to 2 Mbytes of dual‑bank flash memory with 640 Kbytes of RAM and are available in packages from 48 to 144 pins (UFQFPN, WLCSP,

From ADAS to Robotaxi: How Vision Systems Must Level Up to Meet New Mobility Use Cases (Part 2)
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key Takeaways How urban lighting and motion define robotaxi imaging needs Which camera features support reliable

Upcoming Webinar on Agentic Memory Systems
On April 16, 2026, at 1:00 pm EDT (10:00 am PDT) Boston.AI will deliver a webinar “Remembering to Forget: Agentic Memory Systems and Context Constraints” From the event page: As AI agents evolve from stateless responders into persistent, goal-directed systems, memory has become a central design challenge. The question is no longer just what agents

From hardware to intelligence: The Qualcomm AI camera platform for scalable security solutions
At ISC West 2026, Qualcomm Technologies showcases its vision for the future of smart cameras and security. Key Takeaways: End-to-end AI development tools and the Qualcomm Insight Platform enable customers to develop AI features once and scale across their entire product portfolio. Qualcomm Technologies’ camera solutions span video security, law enforcement and enterprise body

Lightweight Keyword Spotting Solution from Microchip
Microchip presents a customizable, target-agnostic solution to program wake words and voice commands. The ML model, generated and tested using a custom application, has low latency and a minimal memory footprint, making it ideal for resource-constrained embedded systems. The ML model can be integrated into voice-based applications running on any 32-bit microcontroller or microprocessor running
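The detection pattern such keyword-spotting systems use can be sketched as a sliding-window loop over an audio stream, firing when the model's score crosses a threshold. This is a generic illustration, not Microchip's implementation; `score_window` stands in for the deployed ML model:

```python
# Hedged sketch of a keyword-spotting inference loop: slide a fixed-size
# window over an audio stream and report where the wake-word score crosses
# a threshold. `score_window` is a stand-in for the actual ML model.

from typing import Callable, List

def detect_keyword(samples: List[float],
                   score_window: Callable[[List[float]], float],
                   window: int = 4, hop: int = 2,
                   threshold: float = 0.8) -> List[int]:
    """Return start indices of windows where the wake-word score >= threshold."""
    hits = []
    for start in range(0, len(samples) - window + 1, hop):
        if score_window(samples[start:start + window]) >= threshold:
            hits.append(start)
    return hits

# Usage with a stub scorer that reacts to high-energy windows:
stub = lambda w: sum(abs(x) for x in w) / len(w)
print(detect_keyword([0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0], stub))  # → [2]
```

On a 32-bit MCU the same loop would typically run over fixed-point feature frames rather than raw floats, which is where the small memory footprint comes from.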

2026: The Year Intelligence Gets Physical
This article was originally published at Analog Devices’ website. It is reprinted here with the permission of Analog Devices. Artificial intelligence is entering a new phase where models interpret contextual data whilst interacting with the physical world in real time. At Analog Devices, Inc. (ADI), we call this Physical Intelligence: intelligent systems that can perceive, reason

From Warehouse to Wallet: New State of AI in Retail and CPG Survey Uncovers How AI Is Rewiring Supply Chains and Customer Experiences
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The third annual NVIDIA State of AI in Retail and CPG survey shows why nine in 10 retailers will increase AI budgets in 2026, focusing on open-source models and software, as well as agentic and physical AI. Highlights

STMicroelectronics and Leopard Imaging Accelerate Robotics Vision with NVIDIA Jetson-ready Multi-sensor Module
Key Takeaways Multimodal module combining 2D imaging, 3D depth sensing, and human-like motion perception NVIDIA Holoscan Sensor Bridge ensuring multi-gigabit plug and play connectivity with Jetson platforms Fully supported by NVIDIA Isaac open robot development platform STMicroelectronics and Leopard Imaging® have introduced an all-in-one multimodal vision module for humanoid and other advanced robotics systems. Combining

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the
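The AI-powered search use case mentioned above can be sketched as a two-step pipeline: caption sampled frames with a vision language model, then match queries against the captions. The `caption_frame` callable below is a placeholder for whatever VLM endpoint is used; no specific NVIDIA API is assumed:

```python
# Hedged sketch of VLM-powered video search: caption sampled frames with a
# vision language model, then match a text query against the captions.
# `caption_frame` is a placeholder for a real VLM call, not an actual API.

from typing import Callable, List, Tuple

def index_video(frames: List[bytes],
                caption_frame: Callable[[bytes], str],
                stride: int = 30) -> List[Tuple[int, str]]:
    """Caption every `stride`-th frame, returning (frame_index, caption) pairs."""
    return [(i, caption_frame(f)) for i, f in enumerate(frames) if i % stride == 0]

def search(index: List[Tuple[int, str]], query: str) -> List[int]:
    """Naive keyword match; a production system would embed and rank instead."""
    q = query.lower()
    return [i for i, caption in index if q in caption.lower()]

# Usage with a stubbed VLM:
fake_vlm = lambda frame: "a forklift moving pallets" if frame == b"f1" else "empty aisle"
idx = index_video([b"f1", b"f0", b"f1"], fake_vlm, stride=1)
print(search(idx, "forklift"))  # → [0, 2]
```

Replacing the keyword match with embedding similarity over the captions is the usual next step toward the "AI-powered search" the article describes.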

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual
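The open-vocabulary workflow described here (segmenting whatever a text prompt names) can be illustrated with a minimal pipeline. The `segment` callable below is hypothetical and stands in for a model like SAM3; it is not Meta's actual API:

```python
# Hedged sketch of a text-prompted ("open-vocabulary") segmentation pipeline.
# The `segment` callable is hypothetical: it stands in for a model like SAM3
# and is NOT Meta's released API.

from typing import Callable, Dict, List

Mask = List[List[int]]  # binary mask as nested lists, 1 = object pixel

def label_image(image, prompts: List[str],
                segment: Callable[[object, str], List[Mask]]) -> Dict[str, int]:
    """Run one text prompt at a time and count the instance masks returned."""
    return {p: len(segment(image, p)) for p in prompts}

# Usage with a stub model that "finds" two cats and one dog:
stub = lambda img, prompt: [[[1]]] * {"cat": 2, "dog": 1}.get(prompt, 0)
print(label_image(None, ["cat", "dog", "bicycle"], stub))
# → {'cat': 2, 'dog': 1, 'bicycle': 0}
```

For training-data generation, the per-prompt masks would be written out as segmentation labels rather than merely counted.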

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
