Edge AI and Vision Insights: March 18, 2026

LETTER FROM THE EDITOR

Dear Colleague,

This issue highlights two important threads shaping the future of edge AI. First, we explore vision for autonomous intelligence, with presentations on geometric depth estimation, vision-based aircraft functions and vision LLMs in multi-agent systems, which together show how visual perception is becoming a foundation for autonomy, reasoning and coordinated action. We then turn to scaling AI in practice, where talks from The Nature Conservancy and SKAIVISION examine what it takes to make AI work in the real world, from data quality and operational rigor to deployment at organizational scale. Last up, we’ve got highlights of the products announced at embedded world 2026. But first…

Registration for the Qualcomm AI Hub Deep Dive workshop at the Embedded Vision Summit is now open! Learn how to compile, optimize and test models for a specific device and runtime while preserving accuracy and staying within memory constraints. Designed for ML and software engineers who want to ship AI models on Qualcomm-powered edge devices, this hands-on technical session will sharpen how your team selects and deploys optimized models on Qualcomm NPUs. Participants will leave with a working application, device-level performance data and APIs for obtaining optimized assets and integrating them directly into existing development workflows. Qualcomm will also address other common challenges developers face in bringing AI to the edge, helping you expand your skill set.
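
For a taste of the workflow the workshop covers, here is a minimal sketch built on Qualcomm AI Hub’s public Python client (qai_hub). The model, device name and input shape are placeholder assumptions, and the session itself goes well beyond this:

```python
# Minimal sketch of the AI Hub flow: compile a model for a target device,
# then profile it on a hosted device to get real performance numbers.
# Assumes `qai-hub configure --api_token ...` has been run; the model,
# device and input shape below are illustrative, not from the workshop.
import qai_hub as hub
import torch
import torchvision

# Any traceable PyTorch model works; MobileNetV2 is just a stand-in.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))

device = hub.Device("Samsung Galaxy S24 (Family)")

# Compile for the chosen device and runtime.
compile_job = hub.submit_compile_job(
    model=traced,
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),
)
target_model = compile_job.get_target_model()

# Profile on-device to collect latency and memory data.
profile_job = hub.submit_profile_job(model=target_model, device=device)
print(profile_job.download_profile())
```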

If you’re looking to get started with AI at the edge, improve your results with it or extend it into new use cases, you’ll definitely want to attend this year’s workshop (even if you’ve attended past Qualcomm Deep Dive workshops). And don’t forget to bring your laptop so you can dive fully into the practical hands-on training exercises.

The details:
Registration fee: $25
Where: SEMI Innovation Center, Milpitas, California
When: Wednesday, May 13, from 9:00 am to 12:00 pm (the day after the Embedded Vision Summit main program)

Register for the Qualcomm AI Hub Deep Dive today!

Without further ado, let’s get to the content.

Erik Peters
Director of Ecosystem and Community Engagement, Edge AI and Vision Alliance

VISION FOR AUTONOMOUS INTELLIGENCE

Depth Estimation from Monocular Images Using Geometric Foundation Models

In this presentation, recorded at the 2025 Embedded Vision Summit, Rareș Ambruș, Senior Manager for Large Behavior Models at Toyota Research Institute, looks at recent advances in depth estimation from images. He first focuses on the ability to estimate metric depth from monocular camera images across different domains and camera parameters. Next, Ambruș looks at extensions to the multi-view setting and covers an efficient diffusion-based architecture capable of encoding hundreds of images and rendering depth and RGB images from novel viewpoints. Throughout the presentation, he focuses on the interplay among architectural inductive bias, training data and optimization objectives, and their combined effect on building geometric foundation models that estimate 3D structure from images.
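
The talk covers TRI’s own models, but as a rough illustration of the basic monocular depth inference pattern, here is a sketch using the openly available MiDaS model from torch.hub. Note the caveats: MiDaS predicts relative inverse depth rather than the metric depth Ambruș discusses, and the model choice here is ours, not his:

```python
# Single-image depth inference with an off-the-shelf model (MiDaS).
# Requires torch, timm and opencv-python. MiDaS outputs *relative*
# inverse depth; metric-depth models add camera awareness on top of
# this same basic pattern.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img))  # (1, H', W') inverse depth
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],  # resize back to the input resolution
        mode="bicubic",
        align_corners=False,
    ).squeeze()
print(depth.shape)  # per-pixel relative depth map
```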

Vision-based Aircraft Functions for Autonomous Flight Systems

At Acubed (“A cubed”), an Airbus innovation center, the mission is to accelerate AI and autonomy in aerospace. In this talk from the 2025 Embedded Vision Summit, Arne Stoschek, Vice President of AI and Autonomy, gives an overview of vision systems for autonomous flight, starting at the core: sensors, perception algorithms and AI decision-making. He connects the technology to the business goals, such as reducing pilot fatigue, improving safety, enhancing efficiency and reducing human error. Stoschek then shares lessons learned from Acubed’s autonomous flight system development, including challenges they’ve faced along the way. He concludes by discussing how Acubed’s experience can apply to other industries and talks about what’s next for Acubed and Airbus in autonomous flight.

Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration

In this 2025 Embedded Vision Summit talk, Niyati Prajapati, ML and Generative AI Lead at Google, explores how vision LLMs can be used in multi-agent collaborative systems to enable new levels of capability and autonomy. She explains the architecture of these systems, including how vision LLMs and agents can be integrated, and she shows how such systems are constructed through case studies in automated quality control and warehouse robotics.
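
To make the integration pattern concrete, here is a deliberately simplified sketch of one common architecture: a vision-LLM agent converts pixels into structured observations on a shared blackboard, and a downstream agent acts on them. Every name here (the vision LLM client interface, agents, message format) is hypothetical, invented for illustration rather than taken from the talk:

```python
# Hypothetical multi-agent pattern: a vision-LLM "inspector" posts structured
# observations to shared memory; other agents read them and act.
from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    content: dict


@dataclass
class Blackboard:
    """Shared memory that the agents coordinate through."""
    messages: list = field(default_factory=list)

    def post(self, msg: Message) -> None:
        self.messages.append(msg)


class InspectionAgent:
    """Wraps a vision LLM client exposing a describe(image, prompt) call (assumed)."""

    def __init__(self, vision_llm):
        self.vision_llm = vision_llm

    def observe(self, image, board: Blackboard) -> None:
        report = self.vision_llm.describe(
            image,
            prompt="Report visible defects as JSON with part_id and defect_type.",
        )
        board.post(Message(sender="inspector", content=report))


class RoutingAgent:
    """Reads inspection reports and decides where each part goes."""

    def act(self, board: Blackboard) -> None:
        for msg in board.messages:
            if msg.sender == "inspector" and msg.content.get("defect_type"):
                print("Route to rework:", msg.content["part_id"])
```

The split is the point: prompting the vision LLM for structured output lets non-LLM agents consume its observations without parsing free text.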

SCALING AI IN PRACTICE

Scaling Artificial Intelligence and Computer Vision for Conservation

In this presentation from the 2025 Embedded Vision Summit, Matt Merrifield, Chief Technology Officer at The Nature Conservancy, explains how the world’s largest environmental nonprofit is spearheading projects that scale the use of edge AI and vision solutions for critical conservation challenges. He discusses how his team of data scientists, designers and software developers supports projects that combine geospatial mapping, artificial intelligence and field sensors to advance a diverse set of conservation strategies in California and beyond.

SKAIVISION: Transforming Automotive Dealerships with Computer Vision

Imagine a busy car dealership: customers arrive and wait to talk to sales or service people; vehicles enter and depart; safety checks need to be made; electric vehicles need to be charged. How long has that customer been waiting? Did we check the tires on the car in Bay 3? How long has that EV been on Charger #5? Managing all of these moving parts in the physical world is a real headache, and a critical one for a successful dealership. SKAIVISION tackles it with computer vision. In this 2025 Embedded Vision Summit presentation, Jason Fayling, Co-founder and Chief Technology Officer of SKAIVISION, shows how to use AI-powered cameras to optimize service workflows, improve customer satisfaction and increase revenue at dealerships. He discusses overcoming technical challenges, the benefits of real-time monitoring and the future of computer vision in this space. He also shares lessons learned from building a start-up, highlighting successes and pain points.
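
To make the “how long has that EV been on Charger #5” bookkeeping concrete, here is a hypothetical sketch of the dwell-time logic that would sit downstream of the cameras. The per-frame detector that assigns tracked objects to zones is assumed, not shown, and none of this is SKAIVISION’s actual code:

```python
# Hypothetical dwell-time tracker: given per-frame (object id -> zone)
# assignments from an upstream detector/tracker, report anything that has
# stayed in one zone longer than a threshold.
import time


class DwellTracker:
    def __init__(self, alert_after_s: float = 900.0):
        self.alert_after_s = alert_after_s
        self.entered_at: dict[tuple[str, str], float] = {}  # (obj, zone) -> t0

    def update(self, observations: dict[str, str], now: float | None = None):
        """observations maps object id -> zone, e.g. {"EV-17": "charger_5"}."""
        now = time.time() if now is None else now
        current = set(observations.items())
        # Start the clock for newly seen (object, zone) pairs...
        for key in current - self.entered_at.keys():
            self.entered_at[key] = now
        # ...and forget pairs that are no longer observed.
        for key in self.entered_at.keys() - current:
            del self.entered_at[key]
        # Return everything that has dwelled past the threshold.
        return [
            (obj, zone, now - t0)
            for (obj, zone), t0 in self.entered_at.items()
            if now - t0 > self.alert_after_s
        ]
```

Calling update() once per frame (or once per second) suffices; an alert like ("EV-17", "charger_5", 1820.0) then answers the charger question directly.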

UPCOMING INDUSTRY EVENTS

Intelligent Driver Development with LLM Context Engineering – Boston.AI Webinar: March 19, 10:00 am PT

Embedded Vision Summit: May 11-13, Santa Clara, California

Newsletter subscribers may use the code 26EVSUM-NL for 15% off the price of registration until April 10.

FEATURED NEWS

TI has introduced two MCU families featuring the fast, efficient TinyEngine NPU 

STMicroelectronics has incorporated a “hardware signal processor” into the STM32U3B5/U3C5 MCUs to improve performance, efficiency and security

NXP has released the i.MX 93W, fusing edge compute and secure wireless connectivity to accelerate physical AI

HCLTech has unveiled VisionX 2.0, a next-gen multi-modal AI edge platform with NVIDIA

BrainChip has launched the AkidaTag reference platform for battery-powered smart sensing

More News


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411