Multimodal

Collaborating With Robots: How AI Is Enabling the Next Generation of Cobots

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Collaborative robots, or cobots, are reshaping how we interact with machines. Designed to operate safely in shared environments, AI-enabled cobots are now embedded across manufacturing, logistics, healthcare, and even the home. But their role goes beyond automation—they […]

R²D²: Building AI-based 3D Robot Perception and Mapping with NVIDIA Research

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Robots must perceive and interpret their 3D environments to act safely and effectively. This is especially critical for tasks such as autonomous navigation, object manipulation, and teleoperation in unstructured or unfamiliar spaces. Advances in robotic perception increasingly

A World’s First On-glass GenAI Demonstration: Qualcomm’s Vision for the Future of Smart Glasses

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our live demo of a generative AI assistant running completely on smart glasses — without the aid of a phone or the cloud — and the reveal of the new Snapdragon AR1+ platform spark new possibilities for

We Built a Personalized, Multimodal AI Smart Glass Experience — Watch It Here

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our demo shows the power of on-device AI and why smart glasses make the ideal AI user interface. Gabby walks into a gym while carrying a smartphone and wearing a pair of smart glasses. Unsure of where

AI Blueprint for Video Search and Summarization Now Available to Deploy Video Analytics AI Agents Across Industries

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The age of video analytics AI agents is here. Video is one of the defining features of the modern digital landscape, accounting for over 50% of all global data traffic. Dominant in media and increasingly important for

R²D²: Unlocking Robotic Assembly and Contact Rich Manipulation with NVIDIA Research

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. This edition of NVIDIA Robotics Research and Development Digest (R2D2) explores several contact-rich manipulation workflows for robotic assembly tasks from NVIDIA Research and how they can address key challenges with fixed automation, such as robustness, adaptability, and

NVIDIA Powers Humanoid Robot Industry With Cloud-to-robot Computing Platforms for Physical AI

New NVIDIA Isaac GR00T Humanoid Open Models Soon Available for Download on Hugging Face
GR00T-Dreams Blueprint Generates Data to Train Humanoid Robot Reasoning and Behavior
NVIDIA RTX PRO 6000 Blackwell Workstations and RTX PRO Servers Accelerate Robot Simulation and Training
Agility Robotics, Boston Dynamics, Foxconn, Lightwheel, NEURA Robotics and XPENG Robotics Among Many Robot Makers Adopting NVIDIA Isaac

COMPUTEX—NVIDIA today announced

Efficient LLaMA-3.2-Vision by Trimming Cross-attended Visual Features

This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI. Our method, Trimmed-Llama, reduces the key-value cache (KV cache) and latency of cross-attention-based Large Vision Language Models (LVLMs) without sacrificing performance. We identify sparsity in LVLM cross-attention maps, showing a consistent layer-wise pattern where most
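The trimming idea in this teaser, keeping only the visual tokens that attract meaningful cross-attention and pruning the rest from the KV cache, can be illustrated in a few lines of pure Python. This is a minimal sketch under stated assumptions: it presumes the per-token attention mass has already been aggregated across layers and heads, and the function name, argument shapes, and threshold scheme are hypothetical, not Nota AI's actual Trimmed-Llama implementation.

```python
# Illustrative sketch of cross-attention-based visual-token trimming.
# Assumption: attn_scores[i] is the aggregated attention mass that
# visual token i received; keys/values are its KV cache entries.

def trim_visual_kv(attn_scores, keys, values, keep_ratio=0.25):
    """Keep only the top-scoring visual tokens and prune the rest
    from the key/value cache, preserving original token order."""
    n = len(attn_scores)
    k = max(1, int(n * keep_ratio))
    # Indices of the top-k tokens by attention mass, restored to
    # their original order so positional structure is preserved.
    top = sorted(sorted(range(n), key=lambda i: attn_scores[i])[-k:])
    return [keys[i] for i in top], [values[i] for i in top], top

# Toy usage: 4 visual tokens, keep the top half.
attn = [0.10, 0.90, 0.20, 0.80]
kv_keys, kv_vals, kept = trim_visual_kv(
    attn, ["k0", "k1", "k2", "k3"], ["v0", "v1", "v2", "v3"],
    keep_ratio=0.5)
print(kept)  # [1, 3]
```

Because the surviving indices are re-sorted, downstream attention still sees the kept tokens in their original sequence order; only the cache footprint (and thus latency) shrinks.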

Deploying an Efficient Vision-Language Model on Mobile Devices

This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI. Recent large language models (LLMs) have demonstrated unprecedented performance in a variety of natural language processing (NLP) tasks. Thanks to their versatile language processing capabilities, it has become possible to develop various NLP applications that

LM Studio Accelerates LLM Performance With NVIDIA GeForce RTX GPUs and CUDA 12.8

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Latest release of the desktop application brings enhanced dev tools and model controls, as well as better performance for RTX GPUs. As AI use cases continue to expand — from document summarization to custom software agents —

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411