“Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?” Expert Panel at the May 2025 Embedded Vision Summit. Other panelists include Chen Wu, Director and Head of Perception at Waymo, Vikas Bhardwaj, Director of AI in the Reality Labs at Meta, Vaibhav Ghadiok, Chief Technology Officer of Hayden AI, and Gérard Medioni, Vice President and Distinguished Scientist at Amazon Prime Video and MGM Studios.

Edge AI and vision are no longer science projects—some applications, such as automotive safety systems, have already achieved massive scale. But for every success story, there are many more edge AI and computer vision products that have struggled to move beyond pilot deployments. So what’s holding them back?

Scaling edge AI involves far more than just getting a model to run on a device. Challenges range from physical installation and fleet management to model updates, data drift, hardware changes and supply chain disruptions. And as systems grow, so do the variations in environments, sensor quality and real-world conditions. What does “scale” really mean in this space—and what does it take to get there? To explore these questions, this panel brings together experts with firsthand experience deploying edge AI at scale for a candid, practical discussion of what’s real, what’s next and what’s still missing.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
