Technical Insights

“Integrating Cameras with the Robot Operating System (ROS),” a Presentation from Amazon Lab126

Karthik Poduval, Principal Software Development Engineer at Amazon Lab126, presents the “Integrating Cameras with the Robot Operating System (ROS)” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Poduval explores the integration of cameras within the Robot Operating System (ROS) for robust embedded vision applications. He delves into…
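As context for the topic above, here is a minimal sketch of how a raw camera frame maps onto the fields of a ROS `sensor_msgs/msg/Image`-style message. The `Image` dataclass below is an illustrative stand-in, not the real generated ROS message class; in an actual ROS 2 node you would import `sensor_msgs.msg.Image` and publish it with `rclpy` (or convert from OpenCV via `cv_bridge`).

```python
from dataclasses import dataclass

# Illustrative stand-in for ROS 2's sensor_msgs/msg/Image.
# A real node would use the generated message class instead.
@dataclass
class Image:
    height: int
    width: int
    encoding: str   # pixel format, e.g. "rgb8" or "mono8"
    step: int       # bytes per row = width * bytes-per-pixel
    data: bytes     # raw interleaved pixel buffer

def frame_to_image(frame: bytes, width: int, height: int,
                   encoding: str = "rgb8") -> Image:
    """Wrap a raw interleaved frame buffer in an Image-style message."""
    bytes_per_pixel = {"rgb8": 3, "bgr8": 3, "mono8": 1}[encoding]
    step = width * bytes_per_pixel
    if len(frame) != step * height:
        raise ValueError("frame size does not match width/height/encoding")
    return Image(height, width, encoding, step, frame)

# A 4x2 grayscale test frame (8 pixels, 1 byte each)
msg = frame_to_image(bytes(range(8)), width=4, height=2, encoding="mono8")
```

The `step` field matters in practice: drivers may pad rows for alignment, so consumers must index rows by `step`, not by `width * bytes_per_pixel`.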

“The New OpenCV 5.0: Added Features, Performance Improvements and Future Directions,” a Presentation from OpenCV.org

Satya Mallick, CEO of OpenCV.org, presents the “New OpenCV 5.0: Added Features, Performance Improvements and Future Directions” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Mallick delves into the latest version of OpenCV, the world’s most popular open-source computer vision library. He highlights the major innovations and improvements in OpenCV 5.0, including

“Introduction to Shrinking Models with Quantization-aware Training and Post-training Quantization,” a Presentation from NXP Semiconductors

Robert Cimpeanu, Machine Learning Software Engineer at NXP Semiconductors, presents the “Introduction to Shrinking Models with Quantization-aware Training and Post-training Quantization” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Cimpeanu introduces two neural network quantization techniques, quantization-aware training (QAT) and post-training quantization (PTQ), and explains when to use each. He discusses what
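To make the PTQ side of this topic concrete, here is a minimal sketch of affine (asymmetric) int8 quantization, the arithmetic that post-training quantization calibration typically produces: a float range observed on calibration data is mapped to a scale and zero-point, and values are then quantized and dequantized with those parameters. This is a generic illustration, not NXP-specific tooling.

```python
def quantize_params(xmin, xmax, qmin=-128, qmax=127):
    """Derive affine int8 parameters from an observed float range."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include 0.0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))            # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# Calibration observed activations in [-1.0, 3.0]
scale, zp = quantize_params(-1.0, 3.0)
q = quantize(0.5, scale, zp)
x_hat = dequantize(q, scale, zp)   # recovers 0.5 up to quantization error
```

The round trip loses at most about half a scale step per value; QAT differs in that this rounding is simulated during training so the model learns to tolerate it.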

“Customizing Vision-language Models for Real-world Applications,” a Presentation from NVIDIA

Monika Jhuria, Technical Marketing Engineer at NVIDIA, presents the “Customizing Vision-language Models for Real-world Applications” tutorial at the May 2025 Embedded Vision Summit. Vision-language models (VLMs) have the potential to revolutionize various applications, and their performance can be improved through fine-tuning and customization. In this presentation, Jhuria explores the concept and shares insights on domain

“An Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances,” a Presentation from the MIPI Alliance

Haran Thanigasalam, Camera and Imaging Systems Consultant for the MIPI Alliance, presents the “Introduction to the MIPI CSI-2 Image Sensor Standard and Its Latest Advances” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Thanigasalam provides an overview of the MIPI CSI-2 image sensor interface standard, covering its fundamental features and capabilities, including
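As a taste of the interface fundamentals covered here, the sketch below parses the 4-byte CSI-2 long-packet header used over D-PHY: a data identifier byte (2-bit virtual channel plus 6-bit data type), a little-endian 16-bit word count, and an ECC byte. ECC checking is omitted, and the 2-bit virtual channel shown is the legacy field (newer CSI-2 versions extend VC addressing).

```python
def parse_csi2_header(header: bytes):
    """Parse a 4-byte MIPI CSI-2 (D-PHY) long-packet header.
    byte 0: data identifier (VC in bits 7:6, data type in bits 5:0)
    bytes 1-2: word count, little-endian (payload length in bytes)
    byte 3: ECC (not verified in this sketch)"""
    if len(header) != 4:
        raise ValueError("CSI-2 packet header is 4 bytes")
    di = header[0]
    vc = (di >> 6) & 0x3   # legacy 2-bit virtual channel
    dt = di & 0x3F         # data type, e.g. 0x2A = RAW8
    word_count = header[1] | (header[2] << 8)
    ecc = header[3]
    return vc, dt, word_count, ecc

# Example: virtual channel 0, RAW8 (0x2A), 1920-byte payload line
vc, dt, wc, _ = parse_csi2_header(bytes([0x2A, 0x80, 0x07, 0x00]))
```

Real receivers also handle short packets (frame/line start and end), lane distribution, and ECC correction, which hardware normally does below the driver.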

“Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration,” a Presentation from eInfochips

Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration” tutorial at the May 2025 Embedded Vision Summit. Optimizing execution time of long-term and large-scale SLAM algorithms is essential for real-time deployments on edge compute platforms. Faster SLAM output means faster map refresh rates and quicker

“Simplifying Portable Computer Vision with OpenVX 2.0,” a Presentation from AMD

Kiriti Nagesh Gowda, Staff Engineer at AMD, presents the “Simplifying Portable Computer Vision with OpenVX 2.0” tutorial at the May 2025 Embedded Vision Summit. The Khronos OpenVX API offers a set of optimized primitives for low-level image processing, computer vision and neural network operators. It provides a simple method for writing optimized code that is

“Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review,” a Presentation from AMD

Dwith Chenna, MTS Product Engineer for AI Inference at AMD, presents the “Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review” tutorial at the May 2025 Embedded Vision Summit. The deployment of large language models (LLMs) in resource-constrained environments is challenging due to the significant computational and memory demands of these models.

“Introduction to Data Types for AI: Trade-offs and Trends,” a Presentation from Synopsys

Joep Boonstra, Synopsys Scientist at Synopsys, presents the “Introduction to Data Types for AI: Trade-offs and Trends” tutorial at the May 2025 Embedded Vision Summit. The increasing complexity of AI models has led to a growing need for efficient data storage and processing. One critical way to gain efficiency is using smaller and simpler data
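As a small illustration of the data-type trade-off described here, the sketch below converts a Python float to bfloat16 precision by truncating the low 16 bits of its float32 representation: bfloat16 keeps float32's 8-bit exponent (same dynamic range) but only 7 mantissa bits (coarser precision). Truncation is used for clarity; hardware typically rounds to nearest.

```python
import struct

def float32_bits(x: float) -> int:
    """IEEE-754 binary32 bit pattern of x."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def to_bfloat16(x: float) -> float:
    """Reduce x to bfloat16 precision by zeroing the low 16 mantissa
    bits of its float32 encoding (sign and exponent are preserved)."""
    bits = float32_bits(x) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_bfloat16(3.14159))  # prints 3.140625: same range, less precision
```

The range-versus-precision split is the core trade-off: float16 keeps more mantissa bits than bfloat16 but overflows sooner, which is why training often prefers bfloat16 while some inference pipelines use float16 or integers.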

“Improved Data Sampling Techniques for Training Neural Networks,” a Presentation from Karthik Rao Aroor

Independent AI Engineer Karthik Rao Aroor presents the “Improved Data Sampling Techniques for Training Neural Networks” tutorial at the May 2024 Embedded Vision Summit. For classification problems in which there are equal numbers of samples in each class, Aroor proposes and presents a novel mini-batch sampling approach to train neural networks using gradient descent. His
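For background on the kind of sampling being improved upon, here is a minimal class-balanced mini-batch sampler: each batch draws an equal number of examples from every class. This is a generic stratified baseline, not the novel scheme Aroor proposes in the talk.

```python
import random

def balanced_minibatches(samples_by_class, batch_size, seed=0):
    """Yield mini-batches containing an equal number of samples from
    each class, without replacement within an epoch."""
    rng = random.Random(seed)
    classes = list(samples_by_class)
    per_class = batch_size // len(classes)
    # Shuffle each class's samples once per epoch
    shuffled = {c: rng.sample(s, len(s)) for c, s in samples_by_class.items()}
    n_batches = min(len(s) for s in shuffled.values()) // per_class
    for b in range(n_batches):
        batch = []
        for c in classes:
            batch.extend((c, x) for x in
                         shuffled[c][b * per_class:(b + 1) * per_class])
        rng.shuffle(batch)   # mix classes within the batch
        yield batch

data = {0: list(range(8)), 1: list(range(8))}
batches = list(balanced_minibatches(data, batch_size=4))
```

With 8 samples per class and 2 per class per batch, this yields 4 batches of 4, each exactly class-balanced.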

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411