LETTER FROM THE EDITOR

Dear Colleague,

On Tuesday, March 3, the Edge AI and Vision Alliance is pleased to present a webinar in collaboration with The Ocean Cleanup. The Ocean Cleanup is on a mission to rid the world’s oceans of plastic. To do that, the team needs to know where plastic accumulates, how it moves, and how their cleanup systems behave in tough, remote marine environments. Robin de Vries, Lead for the Autonomous Debris Imaging System (ADIS), will walk attendees through the system’s development, from the first generation of GoPros and removable hard drives to the current setup: a customized smart camera platform that runs computer vision models on the device. Robin will discuss system design for marine environments, hardware choices, power and thermal limits, model deployment and remote management, as well as tradeoffs and lessons learned. More info here.

In this issue, we conclude our two-part feature on foundational vision/AI techniques, and we touch on one of the applications that always receives a lot of attention at CES: autonomous driving. Frank Moesle from Valeo provides both business insights on software-defined vehicles (SDVs), sensor fusion, and software reliability, and technical insights into ADAS for SDVs. If you enjoy Frank’s perspectives, he’s confirmed to return to this year’s Embedded Vision Summit, May 11-13 in Santa Clara, California.

Without further ado, let’s get to the content.

Erik Peters
COMPUTER VISION MODEL FUNDAMENTALS

Transformer Networks: How They Work and Why They Matter
Transformer neural networks have revolutionized artificial intelligence by introducing an architecture built around self-attention mechanisms. This has enabled unprecedented advances in understanding sequential data, such as human languages, while also dramatically improving accuracy on nonsequential tasks like object detection. In this talk, Rakshit Agrawal, formerly Principal AI Scientist at Synthpop AI, explains the technical underpinnings of transformer architectures, from input data tokenization and positional encoding to the self-attention mechanism, which is the core component of these networks. He also explores how transformers have influenced the direction of AI research and industry innovation. Finally, he touches on trends that will likely influence how transformers evolve in the near future.

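For readers who want to see the core mechanism concretely before the talk, here is a minimal NumPy sketch of single-head scaled dot-product self-attention. The dimensions, random weights, and function name are illustrative assumptions, not drawn from the talk or any real model.

```python
# Minimal sketch of scaled dot-product self-attention (single head).
# All sizes and weights here are made up for illustration.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings (positional encoding already added).
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices."""
    q = x @ w_q                                  # queries
    k = x @ w_k                                  # keys
    v = x @ w_v                                  # values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # pairwise token similarity, scaled
    # softmax over the key axis: each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                           # each output mixes all value vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x,
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)))
print(out.shape)  # (4, 8)
```

The key point the sketch makes: every output row depends on every input token, weighted by learned similarity, which is what lets transformers model long-range context that convolutions and recurrent networks handle less directly.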
Understanding Human Activity from Visual Data
Activity detection and recognition are crucial tasks in various industries, including surveillance and sports analytics. In this talk, Mehrsan Javan, Chief Technology Officer at Sportlogiq, provides an in-depth exploration of human activity understanding, covering the fundamentals of activity detection and recognition, and the challenges of individual and group activity analysis. He uses examples from the sports domain, which provides a unique test bed requiring analysis of activities involving multiple people, including complex interactions among them. Javan traces the evolution of technologies from early deep learning models to large-scale architectures, with a focus on recent technologies such as graph neural networks, transformer-based models, spatial and temporal attention and vision-language approaches, including their strengths and shortcomings. Additionally, he examines the computational and deployment challenges associated with dataset scale, annotation complexity, generalization and real-time implementation constraints. He concludes by outlining potential challenges and future research directions in activity detection and recognition.
AUTONOMOUS DRIVING & ADAS

Three Big Topics in Autonomous Driving and ADAS
In this on-stage interview, Frank Moesle, Software Department Manager at Valeo, and independent journalist Junko Yoshida focus on trends and challenges in automotive technology, autonomous driving and ADAS. First up: Sensor fusion is often touted as the perception solution for autonomy. But what exactly is it? What’s involved, and what are the challenges? Next, Moesle and Yoshida discuss the trend toward “software-defined everything” in automotive. Is it just a buzzword, or are there places where it brings real value? And finally, they touch on software reliability: if cars are becoming increasingly autonomous and dependent on software, how do we build automotive systems that are safe and reliable?

Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles
ADAS (advanced driver assistance systems) software has historically been tightly bound to the underlying system-on-chip (SoC). This software, especially for visual perception, has been extensively optimized for specific SoCs and their dedicated accelerators. In this talk, Frank Moesle, Software Department Manager at Valeo, explains the historic reasons for this approach and shows its advantages. Recent developments, however, such as the emergence of middleware solutions, allow the decoupling of embedded software from the hardware and its specific accelerators, enabling the creation of true software-defined vehicles. Moesle explains how such an approach can achieve efficient implementations, including the use of emulation and cloud processing, and how this benefits not only Tier 1 automotive subsystem suppliers, but also SoC vendors and auto manufacturers.
UPCOMING INDUSTRY EVENTS

Cleaning the Oceans with Edge AI: The Ocean Cleanup’s Smart Camera Transformation – The Ocean Cleanup Webinar: March 3, 2026, 9:00 am PT

Embedded Vision Summit: May 11-13, 2026, Santa Clara, California. Newsletter subscribers may use the code 26EVSUM-NL for 25% off the price of registration.
FEATURED NEWS

Qualcomm has expanded its IoT edge AI offerings for developers, enterprises and OEMs

Ambarella has launched a powerful 8K vision AI SoC with multi-sensor perception performance

NVIDIA has released the Jetson T4000 and NVIDIA JetPack 7.1 for edge inference

NXP has introduced its eIQ agentic AI framework for autonomous intelligence at the edge

ModelCat AI is delivering rapid ML model onboarding in partnership with Alif Semiconductor

Chips&Media and Visionary.ai have unveiled the world’s first AI-based full image signal processor






