LETTER FROM THE EDITOR
Dear Colleague,
In this edition, we’ll cover an edge AI application domain that affects all of us: healthcare. Specifically, we’ll see how computer vision and agentic AI are using real-time monitoring to transform our physical and mental health, and that of our elders, for example by detecting cognitive decline. We will also explore two takes on the near future of edge AI. But first…

We’re excited to announce our 2026 Embedded Vision Summit keynote speakers: Eric Xing, President of the Mohamed bin Zayed University of Artificial Intelligence, and Vikas Chandra, Senior Director at Meta Reality Labs.

Professor Xing will present recent breakthroughs in world models, fully open foundation models and parameter-efficient reasoning models. In addition to his position at the Mohamed bin Zayed University of Artificial Intelligence, he is a Professor of Computer Science at Carnegie Mellon University. His main research interests are in the development of machine learning and statistical methodology, as well as large-scale distributed computational systems and architectures, for solving problems involving automated learning, reasoning and decision-making in artificial, biological and social systems. In recent years, he has focused on building large language models, world models, agent models and foundation models for biology.

Vikas Chandra’s keynote, “Scaling Down Is the New Scaling Up,” will argue that the next decade will be about scaling down: AI that runs on your device, reasons across what you see and hear, and understands by utilizing context that never leaves your pocket. At Meta, Dr. Chandra leads an AI research team building efficient on-device AI for glasses and other mixed-reality products. These devices perceive the world as the wearer does, using context to anticipate needs and take action, laying the foundation for the next generation of human-device interaction. Prior to joining Meta in 2018, Dr. Chandra was Director of Applied Machine Learning at Arm Research, where his team helped pioneer techniques that enable AI to run on small, resource-constrained devices.

Without further ado, let’s get to the content.

Erik Peters
AI AND VISION ADVANCES IN HEALTHCARE
In this wide-ranging interview, Walter Greenleaf, Neuroscientist at Stanford University’s Virtual Human Interaction Lab, explains how advances in virtual and augmented reality, machine learning, agentic AI, biosensing and embedded vision are converging to transform not only healthcare but human interaction as well. He details how this convergence will impact clinical care, disability solutions and personal health and wellness. Through real-time monitoring of physiological measurements, eye movements, voice tone, facial expressions and behavioral patterns, these integrated technologies enable sophisticated systems that sense, analyze and adapt to our arousal levels, cognitive status and emotional state, adjusting to individual preferences and interaction styles. Greenleaf examines how this technological revolution will transform physical and mental health, as well as how humans interact with each other and with the world around us. You’ll learn how agentic AI and immersive visualization will unleash truly personalized experiences that reflect and enhance an individual’s physical and mental health.
Using Computer Vision for Early Detection of Cognitive Decline via Sleep-wake Data
AITCare-Vision predicts cognitive decline by analyzing sleep-wake disorder data in older adults. Using computer vision and motion sensors coupled with AI algorithms, AITCare-Vision continuously monitors sleep patterns, including disturbances such as frequent nighttime awakenings or irregular sleep cycles. AITCare-Vision utilizes this data to identify patterns that may signal cognitive decline, such as changes in sleep consistency or increased time spent awake at night. These insights are compared with baseline data to detect subtle shifts in cognitive health over time. In this presentation, Ravi Kota, CEO of AI Tensors, discusses the development of AITCare-Vision. He focuses on some of the key challenges his company addressed in the development process, including devising techniques to obtain accurate sleep-wake data without the use of wearables, designing the system to preserve privacy and implementing techniques to enable running AI models at the edge with low power consumption.
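To make the baseline-comparison idea concrete, here is a minimal sketch of flagging drift in a sleep metric relative to a person’s own history. This is not AI Tensors’ algorithm; the metric, window sizes and threshold are illustrative assumptions only.

# Minimal illustration of baseline-relative drift detection on a sleep metric.
# NOT AITCare-Vision's method; names, windows and the z-score threshold are
# illustrative assumptions.
from statistics import mean, stdev

def awakening_drift(baseline_nights: list[int], recent_nights: list[int],
                    z_threshold: float = 2.0) -> bool:
    """Flag a possible shift if the recent average number of nighttime
    awakenings sits more than z_threshold standard deviations above the
    person's own baseline."""
    mu, sigma = mean(baseline_nights), stdev(baseline_nights)
    sigma = sigma if sigma > 0 else 1e-6  # guard for perfectly regular sleepers
    z = (mean(recent_nights) - mu) / sigma
    return z > z_threshold

# Example: a 30-night baseline vs. the most recent week (hypothetical counts)
baseline = [1, 2, 1, 0, 2, 1, 1, 2, 0, 1] * 3
recent = [3, 4, 2, 5, 3, 4, 3]
print(awakening_drift(baseline, recent))  # True -> worth a closer look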
WHAT’S NEXT IN EDGE AI |
On-Device LLMs in 2026: What Changed, What Matters, What’s Next
In this article, Vikas Chandra (a 2026 Embedded Vision Summit keynote speaker) and Raghuraman Krishnamoorthi explain why on-device LLMs on phones have shifted from “toy demos” to practical engineering—driven less by faster chips than by new approaches to model design, training, compression and deployment. They frame the motivation as four concrete benefits—lower latency, stronger privacy, lower serving cost and offline availability—while noting that frontier reasoning and very long conversations still tend to favor the cloud. They argue the binding constraint on phones is memory bandwidth (not TOPS), so 4-bit quantization and careful memory management (including KV-cache techniques) disproportionately improve real token throughput and usability under tight RAM and power limits. The authors then survey the “practical toolkit” (quantization, KV-cache strategies, speculative decoding, pruning) and increasingly mature deployment stacks (e.g., ExecuTorch, llama.cpp, MLX), and close by flagging what’s next: mixture-of-experts remains memory-movement-limited on edge, while test-time compute and on-device personalization look like major levers.
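The bandwidth argument is easy to check with simple arithmetic. The sketch below uses our own back-of-envelope numbers, not figures from the article: decode throughput is roughly memory bandwidth divided by the bytes streamed per generated token, which is why moving from 16-bit to 4-bit weights can roughly quadruple tokens per second on the same phone.

# Back-of-envelope estimate of LLM decode throughput on a phone, illustrating
# why memory bandwidth (not TOPS) is typically the bottleneck. All numbers are
# illustrative assumptions, not measurements from the article.

def decode_tokens_per_sec(params_billion: float, bits_per_weight: float,
                          kv_cache_gb: float, bandwidth_gb_s: float) -> float:
    """Each generated token requires streaming roughly all weights plus the
    KV cache from memory, so throughput ~ bandwidth / bytes moved per token."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return bandwidth_gb_s / (weight_gb + kv_cache_gb)

PHONE_BW_GB_S = 50.0   # assumed LPDDR5 bandwidth for a recent phone
MODEL_B = 3.0          # assumed 3B-parameter model
KV_CACHE_GB = 0.3      # assumed KV-cache working set after a long-ish context

for bits in (16, 8, 4):
    tps = decode_tokens_per_sec(MODEL_B, bits, KV_CACHE_GB, PHONE_BW_GB_S)
    print(f"{bits}-bit weights: ~{tps:.0f} tokens/s")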
Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?
Edge AI and vision are no longer science projects—some applications, such as automotive safety systems, have already achieved massive scale. But for every success story, there are many more edge AI and computer vision products that have struggled to move beyond pilot deployments. So what’s holding them back? Scaling edge AI involves far more than just getting a model to run on a device. Challenges range from physical installation and fleet management to model updates, data drift, hardware changes and supply chain disruptions. And as systems grow, so do the variations in environments, sensor quality and real-world conditions. What does “scale” really mean in this space—and what does it take to get there? Exploring these questions is a panel of experts with firsthand experience deploying edge AI at scale, for a candid and practical discussion of what’s real, what’s next and what’s still missing. Sally Ward-Foxton, Senior Reporter at EE Times, moderates our panel, featuring: Chen Wu, Director and Head of Perception at Waymo; Vikas Bhardwaj, Director of AI in Reality Labs at Meta; Vaibhav Ghadiok, Chief Technology Officer of Hayden AI; and Gérard Medioni, Vice President and Distinguished Scientist at Amazon Prime Video and MGM Studios.
UPCOMING INDUSTRY EVENTS |
Enabling Reliable Industrial 3D Vision with iToF Technology – e-con Systems Webinar: February 19, 2026, 11:00 am CET
– February 25, Pittsburgh, Pennsylvania, 8:15 am – 5:30 pm ET.
Cleaning the Oceans with Edge AI: The Ocean Cleanup’s Smart Camera Transformation – The Ocean Cleanup Webinar: March 3, 2026, 9:00 am PT
Why Your Next AI Accelerator Should Be an FPGA – Efinix Webinar: March 17, 2026, 9:00 am PT
Embedded Vision Summit: May 11-13, 2026, Santa Clara, California
Newsletter subscribers may use the code 26EVSUM-NL for 25% off the price of registration.
FEATURED NEWS |
Texas Instruments TDA5 Virtualizer Development Kit is accelerating next-generation automotive designs
Qualcomm, D3 Embedded and others will host Robotics Builders Forum, offering hardware, know-how and networking
Microchip has extended its edge AI offering with full-stack solutions that streamline development






