Multimodal

“Customizing Vision-language Models for Real-world Applications,” a Presentation from NVIDIA

Monika Jhuria, Technical Marketing Engineer at NVIDIA, presents the “Customizing Vision-language Models for Real-world Applications” tutorial at the May 2025 Embedded Vision Summit. Vision-language models (VLMs) have the potential to revolutionize various applications, and their performance can be improved through fine-tuning and customization. In this presentation, Jhuria explores the concept…


XR Tech Market Report

Woodside Capital Partners (WCP) is pleased to share its XR Tech Market Report, authored by senior bankers Alain Bismuth and Rudy Burger, and by analyst Alex Bonilla. Why we are interested in the XR Ecosystem: Investors have been pouring billions of dollars into developing enabling technologies for augmented reality (AR) glasses aimed at the consumer market…


R²D²: Boost Robot Training with World Foundation Models and Workflows from NVIDIA Research

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. As physical AI systems advance, the demand for richly labeled datasets is accelerating beyond what we can manually capture in the real world. World foundation models (WFMs), which are generative AI models trained to simulate, predict, and…


“LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio

Lazar Trifunovic, Solutions Architect at Camio, presents the “LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the May 2025 Embedded Vision Summit. By using vision-language models (VLMs) or combining large language models (LLMs) with conventional computer vision models, we can create vision systems that are…


Collaborating With Robots: How AI Is Enabling the Next Generation of Cobots

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Collaborative robots, or cobots, are reshaping how we interact with machines. Designed to operate safely in shared environments, AI-enabled cobots are now embedded across manufacturing, logistics, healthcare, and even the home. But their role goes beyond automation—they…


R²D²: Building AI-based 3D Robot Perception and Mapping with NVIDIA Research

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Robots must perceive and interpret their 3D environments to act safely and effectively. This is especially critical for tasks such as autonomous navigation, object manipulation, and teleoperation in unstructured or unfamiliar spaces. Advances in robotic perception increasingly…


A World’s First On-glass GenAI Demonstration: Qualcomm’s Vision for the Future of Smart Glasses

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our live demo of a generative AI assistant running completely on smart glasses — without the aid of a phone or the cloud — and the reveal of the new Snapdragon AR1+ platform spark new possibilities for…


We Built a Personalized, Multimodal AI Smart Glass Experience — Watch It Here

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our demo shows the power of on-device AI and why smart glasses make the ideal AI user interface. Gabby walks into a gym while carrying a smartphone and wearing a pair of smart glasses. Unsure of where…


AI Blueprint for Video Search and Summarization Now Available to Deploy Video Analytics AI Agents Across Industries

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The age of video analytics AI agents is here. Video is one of the defining features of the modern digital landscape, accounting for over 50% of all global data traffic. Dominant in media and increasingly important for…


