Multimodal

Snapdragon Stories: Four Ways AI Has Improved My Life

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. I’ve used AI chatbots here and there, mostly for relatively simple and very specific tasks. But I was underutilizing — and underestimating — how AI can quietly yet significantly reshape everyday moments. I don’t want to […]


Synaptics Launches the Next Generation of Astra Multimodal GenAI Processors to Power the Future of the Intelligent IoT Edge

San Jose, CA, October 15, 2025 – Synaptics® Incorporated (Nasdaq: SYNA) announces the new Astra™ SL2600 Series of multimodal Edge AI processors designed to deliver exceptional power and performance. The Astra SL2600 series enables a new generation of cost-effective intelligent devices that make the cognitive Internet of Things (IoT) possible. The SL2600 Series will launch


Open-source Physics Engine and OpenUSD Advance Robot Learning

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The Newton physics engine and enhanced NVIDIA Isaac GR00T models enable developers to accelerate robot learning through unified OpenUSD simulation workflows. Editor’s note: This blog is a part of Into the Omniverse, a series focused on how


“Multimodal Enterprise-scale Applications in the Generative AI Era,” a Presentation from Skyworks Solutions

Mumtaz Vauhkonen, Senior Director of AI at Skyworks Solutions, presents the “Multimodal Enterprise-scale Applications in the Generative AI Era” tutorial at the May 2025 Embedded Vision Summit. As artificial intelligence makes rapid strides in the use of large language models, the need for multimodality arises in multiple application scenarios. Similar…


How to Integrate Computer Vision Pipelines with Generative AI and Reasoning

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Generative AI is opening new possibilities for analyzing existing video streams. Video analytics are evolving from counting objects to turning raw video footage into real-time understanding. This enables more actionable insights. The NVIDIA AI Blueprint for


“Unlocking Visual Intelligence: Advanced Prompt Engineering for Vision-language Models,” a Presentation from LinkedIn Learning

Alina Li Zhang, Senior Data Scientist and Tech Writer at LinkedIn Learning, presents the “Unlocking Visual Intelligence: Advanced Prompt Engineering for Vision-language Models” tutorial at the May 2025 Embedded Vision Summit. Imagine a world where AI systems automatically detect thefts in grocery stores, ensure construction site safety and identify patient…


“Vision-language Models on the Edge,” a Presentation from Hugging Face

Cyril Zakka, Health Lead at Hugging Face, presents the “Vision-language Models on the Edge” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Zakka provides an overview of vision-language models (VLMs) and their deployment on edge devices using Hugging Face’s recently released SmolVLM as an example. He examines…


“Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration,” a Presentation from Google

Niyati Prajapati, ML and Generative AI Lead at Google, presents the “Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration” tutorial at the May 2025 Embedded Vision Summit. In this talk, Prajapati explores how vision LLMs can be used in multi-agent collaborative systems to enable new levels of capability and…


“Building Agentic Applications for the Edge,” a Presentation from GMAC Intelligence

Amit Mate, Founder and CEO of GMAC Intelligence, presents the “Building Agentic Applications for the Edge” tutorial at the May 2025 Embedded Vision Summit. Along with AI agents, the new generation of large language models, vision-language models and other large multimodal models are enabling powerful new capabilities that promise to…


Build High-performance Vision AI Pipelines with NVIDIA CUDA-accelerated VC-6

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The constantly increasing compute throughput of NVIDIA GPUs presents a new opportunity for optimizing vision AI workloads: keeping the hardware fed with data. As GPU performance continues to scale, traditional data pipeline stages, such as I/O from


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411