Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources
The Future of Security Is Already Running. Here Is What It Looks Like.
This blog post was originally published at Axelera AI’s website. It is reprinted here with the permission of Axelera AI. A camera sees everything and understands nothing. For decades, that has been the fundamental limitation of

Bringing AI Closer to the Edge and On-Device with Gemma 4
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The Gemmaverse expands with the launch of the latest Gemma 4 multimodal and multilingual models, designed to scale across the

Humanoid Robots 2026-2036: Technologies, Markets, and Opportunities
Maturity of commercialization of humanoid robotics by application. For full data, refer to IDTechEx’s research on “Humanoid Robots 2026-2036: Technology, Market, and Opportunities” This blog post was originally published at IDTechEx’s website. It is reprinted here

Google Pushes Multimodal AI Further Onto Edge Devices with Gemma 4
MOUNTAIN VIEW, Calif., April 2, 2026 — Google has introduced Gemma 4, a new family of open-weight models clearly aimed at bringing more capable AI onto local hardware. Released under

Gemma 4 Models Optimized for Intel Hardware: Enabling Instant Deployment from Day Zero
We’re excited to announce Intel’s strategic partnership with Google to deliver optimized Gemma 4 models on Intel hardware from day one. This collaboration enables developers to leverage the power of Google’s latest AI models on

See How ams OSRAM Revolutionizes Optical Solutions with the Help of Cadence Tools
OSRAM utilizes the Quantus SNA workflow for high-precision silicon. In modern semiconductor design, heterogeneous integration is the new frontier. When light detectors for medical imaging or depth sensors for autonomous systems are packed onto a

The On-Device LLM Revolution: Why 3B-30B Models Are Moving to the Edge
This blog post was originally published at Quadric’s website. It is reprinted here with the permission of Quadric. After years of cloud-centric inference, AI is moving to the edge. The “Goldilocks zone” of 3B to 30B

Evaluating Waveguide Technologies for AR Smart Glasses
The difference in optical efficiency at wider FOV for the same technology type for several commercial waveguides. Data was normalized to the narrow FOV product, with arrows representing the reduction in efficiency for wide FOV

Inside the Intelligent Mobile Camera Powered by Exynos 2600 VPS
This blog post was originally published at Samsung Semiconductor’s website. It is reprinted here with the permission of Samsung Semiconductor. Until recently, the evolution of mobile cameras has been centered on the image sensor and the
Technologies

Cadence and NVIDIA Expand Partnership to Reinvent Engineering for the Age of AI and Accelerated Computing
15 Apr 2026 — Expanded collaboration combines agentic AI, physics-based simulation, and digital twins to accelerate engineering and unlock new levels of productivity across semiconductors, physical AI systems, and AI factories. SAN JOSE, Calif. — At CadenceLIVE Silicon Valley 2026, Cadence (Nasdaq: CDNS) announced an expanded partnership with NVIDIA to deliver accelerated solutions across agentic AI, physics-based simulation

Intel Launches Intel Core Series 3 Processors: Changing the Game for Everyday Computing
Intel® Core™ Series 3 brings advanced features and Intel’s latest architectures to value buyers, commercial devices, and essential edge devices. What’s New: Intel® today unveiled its new Intel Core™ Series 3 mobile processors, bringing advanced performance, exceptional battery life, and AI-ready capabilities to value buyers, commercial devices, and essential edge devices. Purpose-engineered for value, Intel Core Series 3

Upcoming Webinar on Sony’s IMX925/935 Sensor Series and High Performance SLVS-EC Interface
On May 12, 2026, at 10:00 am CEST, RESTAR FRAMOS will deliver a webinar, “Reaching High-Speed and High-Resolution Architecture with IMX925/935 and SLVS-EC.” From the event page: From sensor architecture to real-world integration, join the engineers behind the technology. High-speed and high-resolution machine vision systems are pushing the limits of data throughput, latency, and
Applications

MPEG-5 LCEVC: A practical shift for industrial AI video pipelines
This blog post was originally published at V-Nova’s website. It is reprinted here with the permission of V-Nova. In industrial and defense environments, I hear the same story: more cameras, higher resolutions, stricter latency targets, infrastructure that cannot be replaced easily, and increasing pressure around storage, bandwidth, compute, and privacy. This is why MPEG-5 LCEVC is becoming even more relevant. It improves compression

“Non-Contact Vital Sign Monitoring Using Low-Cost WiFi Devices,” a Presentation from the University of California, Santa Cruz
Katia Obraczka, Professor of Computer Science and Engineering, University of California, Santa Cruz, Pranay Kocheta, Lexington High School, and Nayan Bhatia, University of California, Santa Cruz present the “Non-Contact Vital Sign Monitoring Using Low-Cost WiFi Devices” tutorial at the December 2025 Edge AI and Vision Innovation Forum.

Bosch and Qualcomm Expand Collaboration to Strategic ADAS Solutions
Highlights: High-performance solutions: Bosch and Qualcomm aim to make ADAS solutions for enhanced safety and comfort available to everyone. Continued Business Momentum: The collaboration has secured significant new business wins for both next-generation ADAS and cockpit solutions. Proven Global Success: Bosch has delivered over 10 million cockpit computers powered by Snapdragon® Cockpit Platforms. Global Market Penetration:
Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
