Summit 2025

“Three Big Topics in Autonomous Driving and ADAS,” an Interview with Valeo

Frank Moesle, Software Department Manager at Valeo, talks with Independent Journalist Junko Yoshida for the “Three Big Topics in Autonomous Driving and ADAS” interview at the May 2025 Embedded Vision Summit. In this on-stage interview, Moesle and Yoshida focus on trends and challenges in automotive technology, autonomous driving and ADAS.…

“Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles,” a Presentation from Valeo

Frank Moesle, Software Department Manager at Valeo, presents the “Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles” tutorial at the May 2025 Embedded Vision Summit. ADAS (advanced driver assistance systems) software has historically been tightly bound to the underlying system-on-chip (SoC). This software, especially for visual perception, has been extensively optimized for…
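
The decoupling Moesle describes can be pictured with a small sketch (not Valeo’s implementation): perception code talks only to an abstract inference interface, and each SoC supplies its own backend behind that interface. The `InferenceBackend`, `CpuReferenceBackend` and `detect_objects` names below are hypothetical and used purely for illustration.

```python
# Minimal sketch of a hardware-agnostic inference layer (illustrative only).
# Application code depends on the abstract interface, never on a specific SoC SDK.
from abc import ABC, abstractmethod
import numpy as np


class InferenceBackend(ABC):
    """Hypothetical interface a vendor-specific (NPU/GPU/DSP) backend would implement."""

    @abstractmethod
    def run(self, image: np.ndarray) -> np.ndarray:
        """Run the perception network and return its raw output."""


class CpuReferenceBackend(InferenceBackend):
    """Portable fallback backend; stands in for an SoC-optimized implementation."""

    def run(self, image: np.ndarray) -> np.ndarray:
        # Placeholder compute; a real backend would execute the compiled model.
        return image.mean(axis=(0, 1), keepdims=True)


def detect_objects(backend: InferenceBackend, frame: np.ndarray) -> np.ndarray:
    # Perception logic is written once, against the interface.
    return backend.run(frame)


if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.float32)
    print(detect_objects(CpuReferenceBackend(), frame).shape)
```

Under this pattern, moving to a new SoC means supplying another `InferenceBackend` implementation rather than re-porting the perception stack.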

“Object Detection Models: Balancing Speed, Accuracy and Efficiency,” a Presentation from Union.ai

Sage Elliott, AI Engineer at Union.ai, presents the “Object Detection Models: Balancing Speed, Accuracy and Efficiency” tutorial at the May 2025 Embedded Vision Summit. Deep learning has transformed many aspects of computer vision, including object detection, enabling accurate and efficient identification of objects in images and videos. However, choosing the…
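
As a companion to the speed/accuracy/efficiency trade-off the abstract raises, here is a minimal latency-measurement sketch, assuming a recent torchvision with randomly initialized weights so it runs without downloads; accuracy would be measured separately (e.g., COCO mAP on a labeled validation set). This is a generic illustration, not the benchmarking setup used in the talk.

```python
# Rough latency/FPS measurement for an object detection model (illustrative).
import time
import torch
import torchvision

# weights=None / weights_backbone=None -> random weights, no downloads needed;
# use pretrained weights when accuracy numbers are also being measured.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None).eval()

frames = [torch.rand(3, 480, 640) for _ in range(10)]  # dummy camera frames

with torch.no_grad():
    model(frames[:1])                       # warm-up pass
    start = time.perf_counter()
    for f in frames:
        model([f])                          # one frame per inference call
    elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / len(frames):.1f} ms "
      f"({len(frames) / elapsed:.1f} FPS)")
```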

“Depth Estimation from Monocular Images Using Geometric Foundation Models,” a Presentation from Toyota Research Institute

Rareș Ambruș, Senior Manager for Large Behavior Models at Toyota Research Institute, presents the “Depth Estimation from Monocular Images Using Geometric Foundation Models” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Ambruș looks at recent advances in depth estimation from images. He first focuses on the ability…
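
The geometric side of monocular depth can be illustrated independently of any particular foundation model: once a network predicts a per-pixel depth map, pixels are back-projected into 3D with the camera intrinsics, X = Z · K⁻¹ [u, v, 1]ᵀ. The sketch below uses a random depth map and made-up intrinsics purely for illustration; it is not code from the talk.

```python
# Back-project a (predicted) depth map into a 3D point cloud (illustrative).
import numpy as np

H, W = 192, 320
depth = np.random.uniform(2.0, 50.0, size=(H, W))    # stand-in for a network's output, in meters

# Hypothetical pinhole intrinsics: focal lengths fx, fy and principal point (cx, cy).
fx, fy, cx, cy = 250.0, 250.0, W / 2.0, H / 2.0

u, v = np.meshgrid(np.arange(W), np.arange(H))        # pixel coordinates
x = (u - cx) / fx * depth                             # X = Z * (u - cx) / fx
y = (v - cy) / fy * depth                             # Y = Z * (v - cy) / fy
points = np.stack([x, y, depth], axis=-1)             # (H, W, 3) points in the camera frame

print(points.reshape(-1, 3).shape)                    # (H*W, 3) point cloud
```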

“Introduction to DNN Training: Fundamentals, Process and Best Practices,” a Presentation from Think Circuits

Kevin Weekly, CEO of Think Circuits, presents the “Introduction to DNN Training: Fundamentals, Process and Best Practices” tutorial at the May 2025 Embedded Vision Summit. Training a model is a crucial step in machine learning, but it can be overwhelming for beginners. In this talk, Weekly provides a comprehensive introduction…
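
For readers new to the training process Weekly covers, the fragment below shows the bare loop common to most DNN training: forward pass, loss, backpropagation, optimizer step, and a held-out validation check. It uses synthetic data and an arbitrary small network; it is a generic sketch, not material from the talk.

```python
# Minimal supervised training loop with a validation check (illustrative).
import torch
from torch import nn

torch.manual_seed(0)
X, y = torch.randn(1000, 16), torch.randint(0, 3, (1000,))    # synthetic dataset
X_train, y_train, X_val, y_val = X[:800], y[:800], X[800:], y[800:]

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)   # forward pass + loss
    loss.backward()                           # backpropagation
    optimizer.step()                          # weight update

    model.eval()
    with torch.no_grad():                     # validation: monitor generalization
        val_acc = (model(X_val).argmax(dim=1) == y_val).float().mean().item()
    print(f"epoch {epoch:02d}  train loss {loss.item():.3f}  val acc {val_acc:.2f}")
```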

“Introduction to Depth Sensing: Technologies, Trade-offs and Applications,” a Presentation from Think Circuits

Chris Sarantos, Independent Consultant with Think Circuits, presents the “Introduction to Depth Sensing: Technologies, Trade-offs and Applications” tutorial at the May 2025 Embedded Vision Summit. Depth sensing is a crucial technology for many applications, including robotics, automotive safety and biometrics. In this talk, Sarantos provides an overview of depth sensing…
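
One trade-off in passive depth sensing can be made concrete with the stereo triangulation relation Z = f·B/d (focal length f in pixels, baseline B, disparity d), whose error grows roughly quadratically with range: ΔZ ≈ Z²·Δd/(f·B). The camera parameters below are hypothetical, chosen only to illustrate the relationship; they are not taken from the talk.

```python
# Stereo depth from disparity, and how depth error grows with range (illustrative).
f_px = 800.0        # focal length in pixels (hypothetical camera)
baseline_m = 0.12   # distance between the two cameras, in meters
disp_err_px = 0.25  # assumed disparity estimation error, in pixels

for disparity_px in (64.0, 16.0, 4.0):
    z = f_px * baseline_m / disparity_px                # Z = f * B / d
    dz = (z ** 2) * disp_err_px / (f_px * baseline_m)   # first-order depth error
    print(f"disparity {disparity_px:5.1f} px -> depth {z:5.2f} m (+/- {dz:.2f} m)")
```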

“Lessons Learned Building and Deploying a Weed-killing Robot,” a Presentation from Tensorfield Agriculture

Xiong Chang, CEO and Co-founder of Tensorfield Agriculture, presents the “Lessons Learned Building and Deploying a Weed-Killing Robot” tutorial at the May 2025 Embedded Vision Summit. Agriculture today faces chronic labor shortages and growing challenges around herbicide resistance, as well as consumer backlash to chemical inputs. Smarter, more sustainable approaches…

“Transformer Networks: How They Work and Why They Matter,” a Presentation from Synthpop AI

Rakshit Agrawal, Principal AI Scientist at Synthpop AI, presents the “Transformer Networks: How They Work and Why They Matter” tutorial at the May 2025 Embedded Vision Summit. Transformer neural networks have revolutionized artificial intelligence by introducing an architecture built around self-attention mechanisms. This has enabled unprecedented advances in understanding sequential…
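
The self-attention mechanism the abstract refers to reduces to a few lines: each token’s query is compared against all keys, the scores are scaled by √d and normalized with a softmax, and the output is the weighted sum of the values, Attention(Q, K, V) = softmax(QKᵀ/√d)V. The NumPy sketch below is a generic single-head illustration, not code from the presentation.

```python
# Single-head scaled dot-product self-attention (illustrative).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)          # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # similarity of every token to every other
    weights = softmax(scores, axis=-1)               # attention weights; rows sum to 1
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # 5 tokens, embedding dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8)
```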

“Virtual Reality, Machine Learning and Biosensing Advances Converging to Transform Healthcare and Beyond,” an Interview with Stanford University

Walter Greenleaf, Neuroscientist at Stanford University’s Virtual Human Interaction Lab, talks with Tom Vogelsong, Start-Up Scout at K2X Technology and Life Science, for the “Virtual Reality, Machine Learning and Biosensing Advances Converging to Transform Healthcare and Beyond” interview at the May 2025 Embedded Vision Summit. In this wide-ranging interview, Greenleaf…

“Understanding Human Activity from Visual Data,” a Presentation from Sportlogiq

Mehrsan Javan, Chief Technology Officer at Sportlogiq, presents the “Understanding Human Activity from Visual Data” tutorial at the May 2025 Embedded Vision Summit. Activity detection and recognition are crucial tasks in various industries, including surveillance and sports analytics. In this talk, Javan provides an in-depth exploration of human activity understanding,…
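
A common pattern in activity recognition, offered here only as a generic sketch (not Sportlogiq’s pipeline), is to aggregate per-frame features over a short temporal window and classify each window. The feature array, window size and `classify_window` function below are placeholders.

```python
# Sliding-window activity classification over per-frame features (illustrative).
import numpy as np

rng = np.random.default_rng(0)
T, D = 300, 64                             # 300 frames, 64-dim feature per frame (e.g., a pose embedding)
frame_features = rng.normal(size=(T, D))   # placeholder for real per-frame features

WINDOW, STRIDE = 30, 15                    # ~1 s windows at 30 fps, 50% overlap (hypothetical)
ACTIVITIES = ["walk", "run", "jump"]

def classify_window(window_feats: np.ndarray) -> str:
    # Placeholder classifier: a trained temporal model (e.g., an RNN or a
    # transformer over the window) would go here.
    score = window_feats.mean()
    return ACTIVITIES[int(abs(score) * 100) % len(ACTIVITIES)]

for start in range(0, T - WINDOW + 1, STRIDE):
    label = classify_window(frame_features[start:start + WINDOW])
    print(f"frames {start:3d}-{start + WINDOW - 1:3d}: {label}")
```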
