Software

XR Tech Market Report

Woodside Capital Partners (WCP) is pleased to share its XR Tech Market Report, authored by senior bankers Alain Bismuth and Rudy Burger, and by analyst Alex Bonilla. Why we are interested in the XR Ecosystem: Investors have been pouring billions of dollars into developing enabling technologies for augmented reality (AR) glasses aimed at the consumer market, […]

Boosting Data Quality: Simulation-based vs. Generative AI Synthetic Data Generation

This blog post was originally published at Geisel Software’s Symage website. It is reprinted here with the permission of Geisel Software. Imagine you’re tasked with boosting data quality for your AI model. You’re at a crossroads, faced with two distinct paths for generating synthetic image data. On one side, there’s Generative AI—fast, adaptable, and capable
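The excerpt above stops before the comparison itself; as a rough illustration of the generative-AI path it mentions, the sketch below produces a small batch of synthetic images with the Hugging Face diffusers library. The Stable Diffusion checkpoint and the warehouse prompt are assumptions for illustration, not the Symage workflow described in the post.

```python
# Rough sketch of the generative-AI route to synthetic image data.
# Assumptions: the "stabilityai/stable-diffusion-2-1" checkpoint and a
# warehouse-scene prompt; the post's Symage tooling is simulation-based
# and works differently.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "warehouse aisle with cardboard boxes on pallets, overhead lighting, photorealistic"
for i in range(4):  # a tiny batch; a real training set needs far more variation
    image = pipe(prompt).images[0]
    image.save(f"synthetic_{i:03d}.png")
```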

OpenAI’s gpt-oss-20b: Its First Open-source Reasoning Model to Run on Devices with Snapdragon

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. At Qualcomm Technologies, we’ve long believed that AI assistants will be ubiquitous, personal and on-device. Today, we’re excited to share a major milestone in that journey: OpenAI has open-sourced its first reasoning model, gpt-oss-20b, a chain-of-thought reasoning
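The excerpt is an announcement rather than a how-to, but for readers who want to try the open weights, a minimal sketch using the Hugging Face transformers library is below. The "openai/gpt-oss-20b" model id and the transformers runtime are assumptions here; the post itself is about running the model on-device with Snapdragon, which uses a different software path.

```python
# Minimal sketch: load the open-weights model with transformers and generate text.
# Assumptions: the weights are published as "openai/gpt-oss-20b" on Hugging Face
# and a workstation GPU is available; this is not the on-device Snapdragon path
# described in the post.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let transformers choose a suitable precision
    device_map="auto",    # requires the accelerate package
)

prompt = "Question: Why is on-device inference attractive for AI assistants?\nAnswer:"
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```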

“Visual Search: Fine-grained Recognition with Embedding Models for the Edge,” a Presentation from Gimlet Labs

Omid Azizi, Co-Founder of Gimlet Labs, presents the “Visual Search: Fine-grained Recognition with Embedding Models for the Edge” tutorial at the May 2025 Embedded Vision Summit. In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of various types, from people to…
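The excerpt introduces embedding models for visual search; the sketch below shows the basic pattern of embedding a gallery of images and ranking it against a query by cosine similarity. A stock torchvision ResNet-50 stands in for whatever embedding model the presentation actually uses, and the file names are placeholders.

```python
# Minimal embedding-based visual search sketch.
# Assumption: a generic torchvision ResNet-50 backbone as the embedding model;
# the talk's actual models and index structure may differ.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
preprocess = weights.transforms()
# Drop the classification head so the network outputs a 2048-d feature vector.
backbone = nn.Sequential(*list(resnet50(weights=weights).children())[:-1]).eval()

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = backbone(img).flatten(1)
    return nn.functional.normalize(vec, dim=1)  # unit length, so dot product = cosine

# Index a tiny gallery (placeholder file names), then rank it against a query image.
gallery_paths = ["img_a.jpg", "img_b.jpg", "img_c.jpg"]
gallery = torch.cat([embed(p) for p in gallery_paths])
query = embed("query.jpg")
scores = gallery @ query.squeeze(0)          # cosine similarity per gallery image
ranking = scores.argsort(descending=True)
print([(gallery_paths[i], float(scores[i])) for i in ranking])
```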

“Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration,” a Presentation from eInfochips

Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration” tutorial at the May 2025 Embedded Vision Summit. Optimizing execution time of long-term and large-scale SLAM algorithms is essential for real-time deployments on edge compute platforms. Faster SLAM output means faster…
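The excerpt is about shortening SLAM execution time on edge GPUs; as a rough illustration of why GPU acceleration helps, the sketch below batches one common front-end step, transforming and projecting thousands of map points into the camera image, as a single tensor operation instead of a per-point loop. PyTorch and the hard-coded pose and intrinsics are stand-ins for whatever GPU framework and data the presentation actually uses.

```python
# Rough sketch: batch a SLAM front-end step (project map points into the camera)
# on the GPU. Assumptions: PyTorch as the GPU framework and made-up pose/intrinsics;
# the presentation's actual SLAM pipeline is not shown in the excerpt.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

points_w = torch.rand(100_000, 3, device=device)          # hypothetical map points
R = torch.eye(3, device=device)                            # camera rotation
t = torch.tensor([0.0, 0.0, 2.0], device=device)           # camera translation
K = torch.tensor([[500.0, 0.0, 320.0],                     # pinhole intrinsics
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]], device=device)

points_c = points_w @ R.T + t        # world -> camera frame, all points at once
uvw = points_c @ K.T                 # camera frame -> image plane (homogeneous)
pixels = uvw[:, :2] / uvw[:, 2:3]    # perspective divide
valid = points_c[:, 2] > 0           # keep points in front of the camera
print(pixels[valid].shape)
```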

SiMa.ai Next-Gen Platform for Physical AI in Production

Modalix in Production, Now Shipping SoM Pin-Compatible with leading GPU SoM, Dev Kits, and LLiMa for Seamless LLM-to-Modalix Integration

SAN JOSE, Calif., August 12, 2025 — SiMa.ai, a pioneer in Physical AI solutions, today is making three significant product announcements to accelerate the scaling of Physical AI. Production and immediate availability of its next-generation Physical

R²D²: Boost Robot Training with World Foundation Models and Workflows from NVIDIA Research

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. As physical AI systems advance, the demand for richly labeled datasets is accelerating beyond what we can manually capture in the real world. World foundation models (WFMs), which are generative AI models trained to simulate, predict, and

“LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio

Lazar Trifunovic, Solutions Architect at Camio, presents the “LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the May 2025 Embedded Vision Summit. By using vision-language models (VLMs) or combining large language models (LLMs) with conventional computer vision models, we can create vision systems that are…
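The excerpt describes combining VLMs (or LLMs plus conventional detectors) into vision systems for compliance and safety checks; the sketch below shows the basic VLM pattern of sending a camera frame and a plain-language rule to a model and reading back a yes/no answer. The OpenAI-compatible endpoint, the "gpt-4o-mini" model name, and the safety rule are assumptions for illustration; Camio's actual stack is not shown in the excerpt.

```python
# Rough sketch of the VLM pattern: frame + plain-language rule -> compliance answer.
# Assumptions: the OpenAI Python SDK, the "gpt-4o-mini" model name, and a sample
# safety rule; none of these come from the presentation itself.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("loading_dock.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Safety rule: anyone inside the marked forklift zone must wear "
                     "a high-visibility vest. Answer YES or NO: is this frame compliant?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```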

NVIDIA Opens Portals to World of Robotics With New Omniverse Libraries, Cosmos Physical AI Models and AI Computing Infrastructure

New NVIDIA Omniverse NuRec 3D Gaussian Splatting Libraries Enable Large-Scale World Reconstruction
New NVIDIA Cosmos Models Enable World Generation and Spatial Reasoning
New NVIDIA RTX PRO Blackwell Servers and NVIDIA DGX Cloud Let Developers Run the Most Demanding Simulations Anywhere
Physical AI Leaders Amazon Devices & Services, Boston Dynamics, Figure AI and Hexagon Embrace Simulation and Synthetic Data Generation

August 11, 2025—SIGGRAPH—NVIDIA

“Simplifying Portable Computer Vision with OpenVX 2.0,” a Presentation from AMD

Kiriti Nagesh Gowda, Staff Engineer at AMD, presents the “Simplifying Portable Computer Vision with OpenVX 2.0” tutorial at the May 2025 Embedded Vision Summit. The Khronos OpenVX API offers a set of optimized primitives for low-level image processing, computer vision and neural network operators. It provides a simple method for…
