Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

Low-Light Image Enhancement: YUV vs RAW – What’s the Difference?
This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. In the world of embedded vision—whether for mobile phones, surveillance systems, or smart edge devices—image quality in low-light
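As a minimal, generic illustration of why the processing domain matters for low-light enhancement (this sketch is not taken from the Visidon post, and the function names are illustrative): brightening in the YUV domain lets you scale only the luma (Y) plane, leaving chroma untouched, whereas a naive RGB gain amplifies all three channels and their color noise equally. The conversion below uses the standard BT.601 full-range coefficients.

```python
def rgb_to_yuv(r, g, b):
    """BT.601 full-range RGB -> YUV conversion for a single pixel."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse BT.601 conversion back to RGB."""
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b

def brighten_luma_only(r, g, b, gain=2.0):
    """Low-light boost in the YUV domain: scale luma, keep chroma as-is."""
    y, u, v = rgb_to_yuv(r, g, b)
    return yuv_to_rgb(min(y * gain, 255.0), u, v)
```

For a dark neutral-gray pixel such as (40, 40, 40), `brighten_luma_only` with `gain=2.0` returns roughly (80, 80, 80): the luminance doubles while the (near-zero) chroma components stay put, so the pixel brightens without a color cast.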

NVIDIA-Accelerated Mistral 3 Open Models Deliver Efficiency, Accuracy at Any Scale
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The new Mistral 3 open model family delivers industry-leading accuracy, efficiency, and customization capabilities for developers and enterprises.

Ambarella’s CV3-AD655 Surround View with IMG BXM GPU: A Case Study
[Figure: the CV3-AD family block diagram.] This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies. Ambarella’s CV3-AD655 autonomous driving AI domain controller pairs energy-efficient compute

e-con Systems to Launch Darsi Pro, an NVIDIA Jetson-Powered AI Compute Box for Advanced Vision Applications
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. This blog offers expert insights into Darsi Pro, how it delivers a unified vision solution,

Overcoming the Skies: Navigating the Challenges of Drone Autonomy
This blog post was originally published at Inuitive’s website. It is reprinted here with the permission of Inuitive. From early military prototypes to today’s complex commercial operations, drones have evolved from experimental aircraft into essential

NVIDIA Advances Open Model Development for Digital and Physical AI
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA releases new AI tools for speech, safety and autonomous driving — including NVIDIA DRIVE Alpamayo-R1, the world’s

OpenVINO 2025.4 Release Broadens Model Support
OpenVINO 2025.4 is very much an edge-first release: it tightens the loop between perception, language, and action across AI PCs, embedded devices, and near-edge servers. On the model side, Intel is clearly optimizing for “local

AMD Spartan UltraScale+ FPGA Kit Adds Proven Infineon HyperRAM Support for Edge AI Designs
Somewhat eclipsed by last week’s announcement that the AMD Spartan™ UltraScale+™ FPGA SCU35 Evaluation Kit is now available, AMD and Infineon have disclosed successful validation of Infineon’s 64-Mb HYPERRAM™ memory and HYPERRAM controller IP on

Breaking the Human Accuracy Barrier in Computer Vision Labeling
This article was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. Introduction There’s been a lot of excitement lately around how foundation models (such as CLIP, SAM, Grounding DINO, etc.)
Technologies
Samsung to launch in-house mobile GPU by 2027
December 25, 2025, Suwon, South Korea — Samsung Electronics is accelerating plans to bring mobile graphics processing fully in-house, with multiple reports pointing to a proprietary GPU architecture arriving in an Exynos application processor as early as 2027. A report citing Cailian Press says Samsung’s System LSI Division is pushing toward a “100% proprietary technology”

How Embedded Vision Is Helping Modernize and Future-Proof Retail Operations
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Physical stores are becoming intelligent environments. Embedded vision turns every critical touchpoint into a source of real-time insight, from shelves and kiosks to checkout zones and digital signage. With cameras analyzing activity as it happens, retailers

Groq and Nvidia Enter Non-Exclusive Inference Technology Licensing Agreement to Accelerate AI Inference at Global Scale
Mountain View, CA, December 24 — Today, Groq announced that it has entered into a non-exclusive licensing agreement with Nvidia for Groq’s inference technology. The agreement reflects a shared focus on expanding access to high-performance, low-cost inference. As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of
Applications

The Coming Robotics Revolution: How AI and Macnica’s Capture, Process, Communicate Philosophy Will Define the Next Industrial Era
This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica. Just as networking and fiber-optic infrastructure quietly laid the groundwork for the internet economy, fueling the rise of Amazon, Facebook, and the digital platforms that redefined commerce and communication, today’s breakthroughs in artificial intelligence are setting the stage
Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual
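Since the value of a model like SAM3 for training-data pipelines hinges on mask quality, it may help to recall how segmentation output is typically scored. The snippet below is a generic sketch (not Meta’s API): intersection-over-union (IoU) between a predicted binary mask and the ground-truth mask.

```python
def mask_iou(pred, gt):
    """IoU of two same-sized binary masks, given as nested lists of 0/1."""
    inter = sum(p & g for row_p, row_g in zip(pred, gt)
                for p, g in zip(row_p, row_g))
    union = sum(p | g for row_p, row_g in zip(pred, gt)
                for p, g in zip(row_p, row_g))
    # Two empty masks are conventionally treated as a perfect match.
    return inter / union if union else 1.0
```

For example, a 2x2 prediction covering two pixels against a ground truth covering one of them scores an IoU of 0.5 (one pixel in the intersection, two in the union).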

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
