Summit 2025

“Technology and Market Trends in CMOS Image Sensors,” an Interview with the Yole Group

Florian Domengie, Principal Technology and Market Analyst for Imaging at the Yole Group, talks with Shung Chieh, Senior Vice President at Eikon Systems (an Eikon Therapeutics business unit), for the “Technology and Market Trends in CMOS Image Sensors” interview at the May 2025 Embedded Vision Summit. Shung Chieh, who has…

“A New Era of 3D Sensing: Transforming Industries and Creating Opportunities,” a Presentation from the Yole Group

Florian Domengie, Principal Technology and Market Analyst for Imaging at the Yole Group, presents the “A New Era of 3D Sensing: Transforming Industries and Creating Opportunities” tutorial at the May 2025 Embedded Vision Summit. The 3D sensing market is projected to more than double by 2030, surpassing $18B. Key drivers…

“The New OpenCV 5.0: Added Features, Performance Improvements and Future Directions,” a Presentation from OpenCV.org

Satya Mallick, CEO of OpenCV.org, presents the “New OpenCV 5.0: Added Features, Performance Improvements and Future Directions” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Mallick delves into the latest version of OpenCV, the world’s most popular open-source computer vision library. He highlights the major innovations and…

“Introduction to Shrinking Models with Quantization-aware Training and Post-training Quantization,” a Presentation from NXP Semiconductors

Robert Cimpeanu, Machine Learning Software Engineer at NXP Semiconductors, presents the “Introduction to Shrinking Models with Quantization-aware Training and Post-training Quantization” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Cimpeanu explains two neural network quantization techniques, quantization-aware training (QAT) and post-training quantization (PTQ), and explains when to…
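
The excerpt cuts off before the two techniques are compared, so here is a minimal, illustrative PyTorch sketch of the approaches named above. It is not the toolchain used in the presentation, and the model, calibration data and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub,
    get_default_qconfig, get_default_qat_qconfig,
    prepare, prepare_qat, convert,
)

class SmallNet(nn.Module):
    """Toy float model; the stubs mark where tensors enter/leave the quantized domain."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.pool(self.relu(self.conv(x)))
        x = self.fc(x.flatten(1))
        return self.dequant(x)

# Post-training quantization (PTQ): calibrate an already-trained float model, no retraining.
ptq_model = SmallNet().eval()
ptq_model.qconfig = get_default_qconfig("fbgemm")
prepared = prepare(ptq_model)
for _ in range(8):                           # calibration with representative inputs
    prepared(torch.randn(4, 3, 32, 32))
ptq_int8 = convert(prepared)

# Quantization-aware training (QAT): simulate quantization during a short fine-tuning run.
qat_model = SmallNet().train()
qat_model.qconfig = get_default_qat_qconfig("fbgemm")
prepared = prepare_qat(qat_model)
optimizer = torch.optim.SGD(prepared.parameters(), lr=1e-3)
for _ in range(10):                          # stand-in fine-tuning loop
    x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
    loss = nn.functional.cross_entropy(prepared(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
qat_int8 = convert(prepared.eval())
```

PTQ is the cheaper path (no labels, no training loop); QAT costs a fine-tuning run but typically recovers more accuracy for aggressively quantized models.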

“Customizing Vision-language Models for Real-world Applications,” a Presentation from NVIDIA

Monika Jhuria, Technical Marketing Engineer at NVIDIA, presents the “Customizing Vision-language Models for Real-world Applications” tutorial at the May 2025 Embedded Vision Summit. Vision-language models (VLMs) have the potential to revolutionize various applications, and their performance can be improved through fine-tuning and customization. In this presentation, Jhuria explores the concept…
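
As a rough illustration of what “customization” can mean in practice, the toy PyTorch sketch below freezes pretrained backbones and trains only a small projection layer that maps image features into the language model’s token space. The modules, shapes and data are placeholders, and this is not the workflow shown in the presentation.

```python
import torch
import torch.nn as nn

# Toy stand-ins: in practice these would be a pretrained vision encoder (e.g. a ViT)
# and a pretrained language model.
vision_encoder = nn.Sequential(nn.Conv2d(3, 16, 8, stride=8), nn.Flatten(2))  # -> (B, 16, tokens)
language_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
lm_head = nn.Linear(64, 1000)  # toy vocabulary of 1000 tokens

# Customization step: freeze the large pretrained parts and train only a small projector.
for module in (vision_encoder, language_model, lm_head):
    for p in module.parameters():
        p.requires_grad = False

projector = nn.Linear(16, 64)  # the only trainable module in this sketch
optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-4)

images = torch.randn(2, 3, 224, 224)
targets = torch.randint(0, 1000, (2, 28 * 28))   # toy per-token targets

feats = vision_encoder(images).transpose(1, 2)   # (B, tokens, 16)
tokens = projector(feats)                        # (B, tokens, 64)
logits = lm_head(language_model(tokens))         # (B, tokens, vocab)
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), targets.reshape(-1))
loss.backward()
optimizer.step()
```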

“An Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances,” a Presentation from the MIPI Alliance

Haran Thanigasalam, Camera and Imaging Systems Consultant for the MIPI Alliance, presents the “Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Thanigasalam provides an overview of the MIPI CSI-2 image sensor interface standard, covering its…

“Visual Search: Fine-grained Recognition with Embedding Models for the Edge,” a Presentation from Gimlet Labs

Omid Azizi, Co-Founder of Gimlet Labs, presents the “Visual Search: Fine-grained Recognition with Embedding Models for the Edge” tutorial at the May 2025 Embedded Vision Summit. In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of various types, from people to…
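
The excerpt stops before describing the embedding approach itself; the short sketch below illustrates the general idea of fine-grained recognition as nearest-neighbour search in an embedding space. Random vectors stand in for the output of a real embedding model, and the labels and dimensions are invented for illustration.

```python
import numpy as np

# Placeholder embeddings: in a real system these come from an embedding model
# (e.g. a CNN or ViT trained with a metric-learning objective).
rng = np.random.default_rng(0)
gallery = {                       # label -> reference embedding (D = 128)
    "sparrow":   rng.standard_normal(128),
    "finch":     rng.standard_normal(128),
    "chickadee": rng.standard_normal(128),
}
query = gallery["finch"] + 0.1 * rng.standard_normal(128)  # noisy query embedding

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Fine-grained recognition as nearest-neighbour search in embedding space:
# new classes are added by inserting reference embeddings, with no retraining.
scores = {label: cosine(query, ref) for label, ref in gallery.items()}
print(max(scores, key=scores.get))  # -> "finch"
```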

“Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration,” a Presentation from eInfochips

Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration” tutorial at the May 2025 Embedded Vision Summit. Optimizing execution time of long-term and large-scale SLAM algorithms is essential for real-time deployments on edge compute platforms. Faster SLAM output means faster…
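
The specific optimizations covered in the talk are not given in this excerpt; as a generic illustration of why GPU acceleration helps SLAM, the PyTorch sketch below moves one common hot spot, correspondence search during scan matching, onto the GPU as a single batched operation. Point counts and data are toy values.

```python
import torch

# Correspondence search between the current scan and the map is a typical SLAM
# hot spot; brute-force nearest-neighbour search is embarrassingly parallel.
device = "cuda" if torch.cuda.is_available() else "cpu"

map_points = torch.rand(50_000, 3, device=device)   # accumulated map (toy data)
scan_points = torch.rand(1_024, 3, device=device)   # current lidar/depth scan (toy data)

# All pairwise distances in one batched kernel instead of a Python loop.
# For larger maps this would be chunked or replaced by a spatial index.
dists = torch.cdist(scan_points, map_points)         # (1024, 50000)
nn_dist, nn_idx = dists.min(dim=1)                    # nearest map point per scan point

correspondences = map_points[nn_idx]                  # (1024, 3), input to the pose solver
print(device, float(nn_dist.mean()))
```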

“LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio

Lazar Trifunovic, Solutions Architect at Camio, presents the “LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the May 2025 Embedded Vision Summit. By using vision-language models (VLMs) or combining large language models (LLMs) with conventional computer vision models, we can create vision systems that are…
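
To make the “LLM plus conventional CV” pattern concrete, the sketch below serializes toy detector output into a prompt that an LLM can check against a written safety rule. The detections, the rule and the ask_llm helper are all hypothetical placeholders, not Camio’s system.

```python
import json

# Conventional CV stage: detections from an off-the-shelf object detector
# (shown here as static toy data; in practice e.g. a YOLO-family model).
detections = [
    {"label": "person", "bbox": [120, 40, 260, 380], "confidence": 0.97},
    {"label": "hard_hat", "bbox": [150, 20, 210, 70], "confidence": 0.91},
    {"label": "person", "bbox": [400, 60, 520, 390], "confidence": 0.95},
]

# Reasoning stage: structured detections become a prompt, and the LLM applies
# the written rule instead of a hand-coded heuristic.
rule = "Every person on the loading dock must wear a hard hat."
prompt = (
    f"Safety rule: {rule}\n"
    f"Detections (JSON): {json.dumps(detections)}\n"
    "Question: Is the rule violated? Answer yes/no and name the violating objects."
)

def ask_llm(text: str) -> str:
    # Hypothetical placeholder; replace with a real LLM client call.
    return "yes - the second person has no hard_hat detection overlapping them"

print(ask_llm(prompt))
```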

“Simplifying Portable Computer Vision with OpenVX 2.0,” a Presentation from AMD

Kiriti Nagesh Gowda, Staff Engineer at AMD, presents the “Simplifying Portable Computer Vision with OpenVX 2.0” tutorial at the May 2025 Embedded Vision Summit. The Khronos OpenVX API offers a set of optimized primitives for low-level image processing, computer vision and neural network operators. It provides a simple method for…
