Software for Embedded Vision

Implementing Multimodal GenAI Models on Modalix
This blog post was originally published at SiMa.ai’s website. It is reprinted here with the permission of SiMa.ai. It has been our goal since starting SiMa.ai to create one software and hardware platform for the embedded edge that empowers companies to make their AI/ML innovations come to life. With the rise of Generative AI already

“Customizing Vision-language Models for Real-world Applications,” a Presentation from NVIDIA
Monika Jhuria, Technical Marketing Engineer at NVIDIA, presents the “Customizing Vision-language Models for Real-world Applications” tutorial at the May 2025 Embedded Vision Summit. Vision-language models (VLMs) have the potential to revolutionize various applications, and their performance can be improved through fine-tuning and customization. In this presentation, Jhuria explores the concept…

XR Tech Market Report
Woodside Capital Partners (WCP) is pleased to share its XR Tech Market Report, authored by senior bankers Alain Bismuth and Rudy Burger, and by analyst Alex Bonilla. Why we are interested in the XR Ecosystem Investors have been pouring billions of dollars into developing enabling technologies for augmented reality (AR) glasses aimed at the consumer market,

Boosting Data Quality: Simulation-based vs. Generative AI Synthetic Data Generation
This blog post was originally published at Geisel Software’s Symage website. It is reprinted here with the permission of Geisel Software. Imagine you’re tasked with boosting data quality for your AI model. You’re at a crossroads, faced with two distinct paths for generating synthetic image data. On one side, there’s Generative AI—fast, adaptable, and capable

OpenAI’s gpt-oss-20b: Its First Open-source Reasoning Model to Run on Devices with Snapdragon
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. At Qualcomm Technologies, we’ve long believed that AI assistants will be ubiquitous, personal and on-device. Today, we’re excited to share a major milestone in that journey: OpenAI has open-sourced its first reasoning model, gpt-oss-20b, a chain-of-thought reasoning

“Visual Search: Fine-grained Recognition with Embedding Models for the Edge,” a Presentation from Gimlet Labs
Omid Azizi, Co-Founder of Gimlet Labs, presents the “Visual Search: Fine-grained Recognition with Embedding Models for the Edge” tutorial at the May 2025 Embedded Vision Summit. In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of various types, from people to…

“Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration,” a Presentation from eInfochips
Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Optimizing Real-time SLAM Performance for Autonomous Robots with GPU Acceleration” tutorial at the May 2025 Embedded Vision Summit. Optimizing execution time of long-term and large-scale SLAM algorithms is essential for real-time deployments on edge compute platforms. Faster SLAM output means faster…

SiMa.ai Next-Gen Platform for Physical AI in Production
Modalix in Production, Now Shipping SoM Pin-Compatible with Leading GPU SoM, Dev Kits, and LLiMa for Seamless LLM-to-Modalix Integration SAN JOSE, Calif., August 12, 2025 — SiMa.ai, a pioneer in Physical AI solutions, today is making three significant product announcements to accelerate the scaling of Physical AI. Production and immediate availability of its next-generation Physical

R²D²: Boost Robot Training with World Foundation Models and Workflows from NVIDIA Research
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. As physical AI systems advance, the demand for richly labeled datasets is accelerating beyond what we can manually capture in the real world. World foundation models (WFMs), which are generative AI models trained to simulate, predict, and

“LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio
Lazar Trifunovic, Solutions Architect at Camio, presents the “LLMs and VLMs for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the May 2025 Embedded Vision Summit. By using vision-language models (VLMs) or combining large language models (LLMs) with conventional computer vision models, we can create vision systems that are…

NVIDIA Opens Portals to World of Robotics With New Omniverse Libraries, Cosmos Physical AI Models and AI Computing Infrastructure
New NVIDIA Omniverse NuRec 3D Gaussian Splatting Libraries Enable Large-Scale World Reconstruction. New NVIDIA Cosmos Models Enable World Generation and Spatial Reasoning. New NVIDIA RTX PRO Blackwell Servers and NVIDIA DGX Cloud Let Developers Run the Most Demanding Simulations Anywhere. Physical AI Leaders Amazon Devices & Services, Boston Dynamics, Figure AI and Hexagon Embrace Simulation and Synthetic Data Generation. August 11, 2025—SIGGRAPH—NVIDIA

“Simplifying Portable Computer Vision with OpenVX 2.0,” a Presentation from AMD
Kiriti Nagesh Gowda, Staff Engineer at AMD, presents the “Simplifying Portable Computer Vision with OpenVX 2.0” tutorial at the May 2025 Embedded Vision Summit. The Khronos OpenVX API offers a set of optimized primitives for low-level image processing, computer vision and neural network operators. It provides a simple method for…

“Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review,” a Presentation from AMD
Dwith Chenna, MTS Product Engineer for AI Inference at AMD, presents the “Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review” tutorial at the May 2025 Embedded Vision Summit. The deployment of large language models (LLMs) in resource-constrained environments is challenging due to the significant computational and…
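To give a flavor of the quantization techniques the talk surveys, here is a minimal sketch of symmetric per-tensor int8 post-training quantization — the general idea only, not AMD's specific implementation. The function names and example weights are illustrative.

```python
# Minimal sketch: symmetric int8 post-training quantization.
# Weights are mapped to int8 with a single per-tensor scale,
# then dequantized to measure the round-trip error.

def quantize_int8(weights):
    """Map float weights to int8 using a symmetric per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.4, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
error = max(abs(w - r) for w, r in zip(weights, restored))
print(q, scale, error)
```

Real LLM deployments layer further refinements on this basic scheme (per-channel scales, asymmetric zero points, weight-only vs. activation quantization), which is exactly the trade-off space such a review covers.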

Learn to Optimize Stable Diffusion on Qualcomm Cloud AI 100
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Dive in to learn how we achieve a 1.4x latency decrease on Qualcomm Cloud AI 100 Ultra accelerators by applying an innovative DeepCache technique to text-to-image generation. What’s more, the throughput can be further improved by 3x
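The core idea behind DeepCache is that deep U-Net features change slowly between adjacent denoising timesteps, so they can be cached and reused for several steps while only the shallow layers are recomputed. The toy sketch below illustrates that caching pattern with stand-in functions; it is not Qualcomm's implementation, and the `shallow_layers`/`deep_layers` split is purely illustrative.

```python
# Toy sketch of the DeepCache pattern: refresh expensive deep
# features only every `cache_interval` steps, reuse them otherwise.

deep_calls = 0

def shallow_layers(x, t):
    return x + t          # cheap per-step computation (stand-in)

def deep_layers(x):
    global deep_calls
    deep_calls += 1       # expensive computation we want to skip
    return x * 2

def denoise(steps, cache_interval):
    x, cached = 1.0, None
    for t in range(steps):
        h = shallow_layers(x, t)
        if cached is None or t % cache_interval == 0:
            cached = deep_layers(h)   # full pass: refresh the cache
        x = cached + h                # otherwise reuse deep features
    return x

result = denoise(steps=10, cache_interval=3)
print(deep_calls)  # deep layers ran on 4 of 10 steps (t = 0, 3, 6, 9)
```

With 10 steps and a cache interval of 3, the expensive path runs 4 times instead of 10 — the same kind of compute saving that yields the latency decrease described in the post.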

Texas Instruments Demonstration of Edge AI Inference and Video Streaming Over Wi-Fi
The demonstration shows how to use Texas Instruments’ AM6xA to capture live video, perform machine learning, and stream video over Wi-Fi. The video is encoded with H.264/H.265, and streamed via UDP over Wi-Fi using the CC33xx. At the receiver side, the video is decoded and displayed on a screen. The receiver side could be a
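The receive side of such a UDP link can be sketched in a few lines. The snippet below uses loopback and dummy byte payloads in place of H.264/H.265 frames from the CC33xx — it shows the packet flow only; a real pipeline would feed the received bytes to a hardware or software decoder.

```python
# Minimal sketch of a UDP "video" link over loopback, with dummy
# payloads standing in for encoded H.264/H.265 frames.

import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))               # receiver on any free port
recv.settimeout(2.0)
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frames = [b"frame-%d" % i for i in range(3)]  # stand-in encoded frames
for f in frames:
    send.sendto(f, ("127.0.0.1", port))   # UDP: no delivery guarantee

received = [recv.recvfrom(2048)[0] for _ in frames]
send.close(); recv.close()
print(received)
```

Because UDP offers no delivery or ordering guarantees, real video streaming typically wraps the encoded frames in RTP (or similar) so the receiver can detect loss and reorder packets.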

“Introduction to Data Types for AI: Trade-offs and Trends,” a Presentation from Synopsys
Joep Boonstra, Synopsys Scientist at Synopsys, presents the “Introduction to Data Types for AI: Trade-offs and Trends” tutorial at the May 2025 Embedded Vision Summit. The increasing complexity of AI models has led to a growing need for efficient data storage and processing. One critical way to gain efficiency is…

Machine Vision Defect Detection: Edge AI Processing with Texas Instruments AM6xA Arm-based Processors
Texas Instruments’ portfolio of AM6xA Arm-based processors is designed to advance intelligence at the edge using high resolution camera support, an integrated image sensor processor and a deep learning accelerator. This video demonstrates using the AM62A to run a vision-based artificial intelligence model for defect detection in manufacturing applications. Watch the model test the produced units as

“Introduction to Radar and Its Use for Machine Perception,” a Presentation from Cadence
Amol Borkar, Product Marketing Director, and Vencatesh Subramanian, Design Engineering Architect, both of Cadence, co-present the “Introduction to Radar and Its Use for Machine Perception” tutorial at the May 2025 Embedded Vision Summit. Radar is a proven technology with a long history in various market segments and continues to play an increasingly important role in