Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

China’s Autonomous Trucks Now Log Over One Million Kilometers Daily
This blog post was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx. To gain deeper insights into the rapidly accelerating market for autonomous trucks in China, IDTechEx recently visited Inceptio Technology, a

Public road demonstration tests of the latest autonomous driving EV bus, including nighttime operation, have begun at a hot spring resort
Following on from last year, Macnica, Inc., Fukuyama Consultant Co., Ltd., and KCS Corporation will collaborate with Ureshino City, Saga Prefecture, to conduct a public road demonstration experiment of an autonomous driving vehicle (autonomous driving level

How semiconductor equipment makers will drive the next $1 trillion wave
This blog post was originally published at HCLTech’s website. It is reprinted here with the permission of HCLTech. Key takeaways: AI, mobility and cloud are the growth engines: They’re pushing chips toward a $1 trillion market by

VC MIPI Cameras & ADLINK i.MX 8M Plus: Full driver support
Ettlingen, November 18, 2025 — For a medical imaging project, ADLINK wanted to integrate VC MIPI Cameras with its I-Pi SMARC IMX8M Plus Development Kit. Vision Components adapted the standard driver for the NXP i.MX

The Art of Robotics and The Growing Intellect of Autonomy
This blog post was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx. ‘Robotics’ takes on many different forms today, from cars pre-empting a driver’s needs and making coffee-stop decisions

Micron Ships Automotive UFS 4.1, Designed to Unlock Intelligent Mobility With Speed, Safety and Reliability
Architected to power AI workloads, Micron’s latest automotive solution, built with G9 NAND, equips the industry to create safer, smarter, more connected driver experiences. MUNICH, Nov. 13, 2025 (GLOBE NEWSWIRE) — Automotive Computing Conference — Micron

STMicroelectronics introduces new battery-saving wireless microcontrollers optimized for remote controls
Nov 18, 2025, Geneva, Switzerland – STM32WL3R MCUs add flexible features to low-power radio for superior user experiences in consumer electronics and home automation; integrated in new RF remotes from building-automation leader Somfy. STMicroelectronics has introduced the STM32WL3R,

The Image Sensor Size and Pixel Size of a Camera Are Critical to Image Quality
This blog post was originally published at Commonlands’ website. It is reprinted here with the permission of Commonlands. The sensor format size and pixel size of a digital camera impact nearly every performance attribute of a
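As a rough, back-of-the-envelope illustration of the relationship the article examines, the short sketch below derives pixel pitch from sensor width and horizontal resolution; the sensor formats and resolutions are arbitrary example values, not figures taken from the Commonlands post.

```python
# Illustrative only: relate sensor format, resolution, and pixel pitch.
# The example numbers below are arbitrary assumptions, not values from the article.

def pixel_pitch_um(sensor_width_mm: float, active_pixels_h: int) -> float:
    """Approximate pixel pitch (micrometers) from sensor width and horizontal resolution."""
    return sensor_width_mm * 1000.0 / active_pixels_h

# A hypothetical 1/2.3" sensor (~6.17 mm wide) at 12 MP (4000 x 3000)
# versus a hypothetical 1" sensor (~13.2 mm wide) at the same resolution.
print(pixel_pitch_um(6.17, 4000))   # ~1.54 um per pixel
print(pixel_pitch_um(13.2, 4000))   # ~3.30 um per pixel -> more light per pixel, better SNR
```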

Au-Zone Technologies Expands EdgeFirst Studio Access
Proven MLOps Platform for Spatial Perception at the Edge Now Available CALGARY, AB – November 19, 2025 – Au-Zone Technologies today expands general access to EdgeFirst Studio™, the enterprise MLOps platform purpose-built for Spatial

Technologies

What is a dust denoising filter in a ToF camera, and how does it remove noise artifacts in vision systems?
This article was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Time-of-Flight (ToF) cameras with IR sensors are susceptible to performance variations caused by environmental dust. This dust can create ‘dust noise’ in the output depth map, directly impacting camera accuracy and, consequently, the reliability of critical
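For readers who want a concrete picture, here is a generic depth-map speckle-suppression sketch in the spirit of the problem described; it is an assumed, simplified approach (median filter plus outlier replacement) and not e-con Systems’ actual dust denoising filter, whose implementation is not described in this excerpt.

```python
# Generic illustration of depth-map speckle suppression, NOT e-con Systems' dust
# denoising filter; that implementation is not described in the excerpt above.
import cv2
import numpy as np

def suppress_depth_speckle(depth_mm: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Median-filter a 16-bit ToF depth map and replace pixels that deviate
    strongly from their local neighborhood (a common symptom of floating dust)."""
    assert depth_mm.dtype == np.uint16
    smoothed = cv2.medianBlur(depth_mm, ksize)   # small-kernel median preserves depth edges
    deviation = cv2.absdiff(depth_mm, smoothed)
    mask = deviation > 100                       # >100 mm jump treated as speckle (arbitrary threshold)
    cleaned = depth_mm.copy()
    cleaned[mask] = smoothed[mask]               # replace only the suspect pixels
    return cleaned

# Example with synthetic data: a flat scene at 1.5 m with one simulated dust speckle
depth = np.full((480, 640), 1500, np.uint16)
depth[100, 200] = 300
print(suppress_depth_speckle(depth)[100, 200])   # restored to the surrounding depth (1500)
```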

Low-Light Image Enhancement: YUV vs RAW – What’s the Difference?
This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. In the world of embedded vision—whether for mobile phones, surveillance systems, or smart edge devices—image quality in low-light conditions can make or break user experience. That’s where advanced AI-based denoising algorithms come into play. At our company, we
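As a point of reference for the YUV side of the comparison, the sketch below applies a classical luma-only denoiser in the YUV domain; this is a simplified, assumed example using OpenCV’s non-local means, not Visidon’s AI-based algorithm, and RAW-domain processing would instead operate on the Bayer mosaic before demosaicing.

```python
# A minimal sketch of classical YUV-domain denoising (luma channel only), shown
# only for contrast with RAW-domain processing; it is not Visidon's algorithm.
import cv2
import numpy as np

def denoise_luma(bgr: np.ndarray, strength: float = 10.0) -> np.ndarray:
    """Denoise only the Y (luma) plane of a BGR frame, leaving chroma untouched.
    RAW-domain denoising would instead act on the Bayer mosaic before demosaicing."""
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    y = cv2.fastNlMeansDenoising(y, None, h=strength)   # non-local means on luma only
    return cv2.cvtColor(cv2.merge((y, u, v)), cv2.COLOR_YUV2BGR)

# Usage with a synthetic noisy frame
noisy = np.clip(np.random.normal(128, 25, (240, 320, 3)), 0, 255).astype(np.uint8)
clean = denoise_luma(noisy)
```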

NVIDIA-Accelerated Mistral 3 Open Models Deliver Efficiency, Accuracy at Any Scale
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The new Mistral 3 open model family delivers industry-leading accuracy, efficiency, and customization capabilities for developers and enterprises. Optimized from NVIDIA GB200 NVL72 to edge platforms, Mistral 3 includes: One large state-of-the-art sparse multimodal and multilingual mixture of

Applications

Ambarella’s CV3-AD655 Surround View with IMG BXM GPU: A Case Study
This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies. Ambarella’s CV3-AD655 autonomous driving AI domain controller pairs energy-efficient compute with Imagination’s IMG BXM GPU to enable real-time surround-view visualisation for L2++/L3 vehicles. This case study outlines the industry shift

Overcoming the Skies: Navigating the Challenges of Drone Autonomy
This blog post was originally published at Inuitive’s website. It is reprinted here with the permission of Inuitive. From early military prototypes to today’s complex commercial operations, drones have evolved from experimental aircraft into essential tools across industries. Since the FAA issued its first commercial permit in 2006, applications have rapidly expanded—from disaster relief and

NVIDIA Advances Open Model Development for Digital and Physical AI
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA releases new AI tools for speech, safety and autonomous driving — including NVIDIA DRIVE Alpamayo-R1, the world’s first open industry-scale reasoning vision language action model for mobility — and a new independent benchmark recognizes the openness and

Functions

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications

“Object Detection Models: Balancing Speed, Accuracy and Efficiency,” a Presentation from Union.ai
Sage Elliott, AI Engineer at Union.ai, presents the “Object Detection Models: Balancing Speed, Accuracy and Efficiency” tutorial at the May 2025 Embedded Vision Summit. Deep learning has transformed many aspects of computer vision, including object detection, enabling accurate and efficient identification of objects in images and videos. However, choosing the…
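To make the speed side of the trade-off tangible, the sketch below times two torchvision detectors of very different sizes on a CPU; the model choices, input size, and use of untrained weights are arbitrary assumptions for illustration (requiring a recent torchvision), not details from the presentation.

```python
# Generic timing harness for comparing detector latency; models and sizes are
# arbitrary choices, not taken from the Union.ai talk.
import time
import torch
from torchvision.models import detection

def mean_latency_ms(model: torch.nn.Module, size: int, runs: int = 10) -> float:
    """Average single-image CPU inference time for a torchvision detection model."""
    model.eval()
    dummy = [torch.rand(3, size, size)]
    with torch.no_grad():
        model(dummy)                              # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(dummy)
    return (time.perf_counter() - start) * 1000.0 / runs

# Heavier two-stage detector vs. a lighter single-shot one (random weights; timing only)
for name, ctor in [("fasterrcnn_resnet50_fpn", detection.fasterrcnn_resnet50_fpn),
                   ("ssdlite320_mobilenet_v3_large", detection.ssdlite320_mobilenet_v3_large)]:
    model = ctor(weights=None, weights_backbone=None)
    print(name, round(mean_latency_ms(model, 320), 1), "ms")
```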
