Robotics Applications for Embedded Vision

Open-source Physics Engine and OpenUSD Advance Robot Learning
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The Newton physics engine and enhanced NVIDIA Isaac GR00T models enable developers to accelerate robot learning through unified OpenUSD simulation workflows. Editor’s note: This blog is a part of Into the Omniverse, a series focused on how
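As a companion to that post, here is a minimal, hypothetical sketch of the kind of OpenUSD workflow a unified simulation pipeline builds on: opening a USD stage and walking its prim hierarchy with the usd-core Python API. The file name is a placeholder, and this is illustrative only, not code from the Newton or Isaac GR00T stack.

```python
# Minimal sketch (assumes usd-core is installed; "robot_scene.usda" is a placeholder file).
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("robot_scene.usda")  # hypothetical robot scene description

# Walk the prim hierarchy and list transformable prims (links, meshes, etc.).
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Xformable):
        print(prim.GetPath(), prim.GetTypeName())
```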

Upcoming Seminar Explores the Latest Innovations in Mobile Robotics
On October 22, 2022 at 9:00 am PT, Alliance Member company NXP Semiconductors, along with Avnet, will deliver a free (advance registration required) half-day in-person robotics seminar at NXP’s office in San Jose, California. From the event page: Join us for a free in-depth seminar exploring the latest innovations in mobile robotics with a focus

“Lessons Learned Building and Deploying a Weed-killing Robot,” a Presentation from Tensorfield Agriculture
Xiong Chang, CEO and Co-founder of Tensorfield Agriculture, presents the “Lessons Learned Building and Deploying a Weed-Killing Robot” tutorial at the May 2025 Embedded Vision Summit. Agriculture today faces chronic labor shortages and growing challenges around herbicide resistance, as well as consumer backlash to chemical inputs. Smarter, more sustainable approaches…

“Real-world Deployment of Mobile Material Handling Robotics in the Supply Chain,” a Presentation from Pickle Robot Company
Peter Santos, Chief Operating Officer of Pickle Robot Company, presents the “Real-World Deployment of Mobile Material Handling Robotics in the Supply Chain” tutorial at the May 2025 Embedded Vision Summit. More and more of the supply chain needs to be, and can be, automated. Demographics, particularly in the developed world…

“Sensors and Compute Needs and Challenges for Humanoid Robots,” a Presentation from Agility Robotics
Vlad Branzoi, Perception Sensors Team Lead at Agility Robotics, presents the “Sensors and Compute Needs and Challenges for Humanoid Robots” tutorial at the September 2025 Edge AI and Vision Innovation Forum.

How to Integrate Computer Vision Pipelines with Generative AI and Reasoning
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Generative AI is opening new possibilities for analyzing existing video streams. Video analytics are evolving from counting objects to turning raw video footage into real-time understanding, enabling more actionable insights. The NVIDIA AI Blueprint for
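To make the idea concrete, below is a rough, hypothetical sketch of moving from counting objects to describing video: frames are sampled with OpenCV and captioned by a generic Hugging Face image-to-text model standing in for a production VLM. The video path, sampling rate, and model choice are assumptions, not components of the NVIDIA AI Blueprint.

```python
# Sketch: sample ~1 frame/second from a video and generate text descriptions
# that a downstream reasoning model could summarize or query.
import cv2
from PIL import Image
from transformers import pipeline

# Generic captioning model as a stand-in for a production VLM (assumption).
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

cap = cv2.VideoCapture("warehouse_feed.mp4")   # hypothetical video source
fps = cap.get(cv2.CAP_PROP_FPS) or 30
frame_idx, captions = 0, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:              # roughly one frame per second
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        result = captioner(Image.fromarray(rgb))
        captions.append((frame_idx / fps, result[0]["generated_text"]))
    frame_idx += 1

cap.release()
print(captions[:5])  # timestamped descriptions for later summarization
```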

How Do You Teach an AI Model to Reason? With Humans
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA’s data factory team creates the foundation for AI models like Cosmos Reason, which today topped the physical reasoning leaderboard on Hugging Face. AI models are advancing at a rapid rate and scale. But what might they

“Taking Computer Vision Products from Prototype to Robust Product,” an Interview with Blue River Technology
Chris Padwick, Machine Learning Engineer at Blue River Technology, talks with Mark Jamtgaard, Director of Technology at RetailNext, for the “Taking Computer Vision Products from Prototype to Robust Product” interview at the May 2025 Embedded Vision Summit. When developing computer vision-based products, getting from a proof of concept to a…

Humanoids, Soft Grippers, and Delivery Robots
Robotics is a multifaceted technology sector with capabilities that extend into a wide variety of applications, from automotive, warehousing, and logistics to domestic functions. IDTechEx’s portfolio of Robotics & Autonomy Research Reports covers the extensive range of robotics, including humanoids, collaborative robots, and mobile robotics. Humanlike robotics for automotive and warehousing: humanoid robots possess

Upcoming Presentation and Demonstrations Showcase Autonomous Mobile Robots and Machine Vision
On Wednesday, October 15 from 11:45 AM – 12:15 PM PT, Alliance Member company eInfochips will deliver the presentation “Real-time Vision AI System on Edge AI Platforms” at the RoboBusiness and DeviceTalks West 2025 Conference in Santa Clara, California. From the event page: This session presents a real-time, edge-deployed Vision AI system for automated quality

“Integrating Cameras with the Robot Operating System (ROS),” a Presentation from Amazon Lab126
Karthik Poduval, Principal Software Development Engineer at Amazon Lab126, presents the “Integrating Cameras with the Robot Operating System (ROS)” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Poduval explores the integration of cameras within the Robot Operating System (ROS) for robust embedded vision applications. He delves into…
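For readers who want the basic pattern the talk covers, here is a minimal ROS 2 subscriber sketch using rclpy and cv_bridge. The topic name, QoS depth, and image encoding are assumptions rather than details from the presentation.

```python
# Sketch: subscribe to a camera topic and convert frames to OpenCV images.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class CameraListener(Node):
    def __init__(self):
        super().__init__("camera_listener")
        self.bridge = CvBridge()
        # Assumed topic published by the camera driver.
        self.sub = self.create_subscription(
            Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image):
        # Convert the ROS image message to a BGR array for vision processing.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        self.get_logger().info(f"Got {frame.shape[1]}x{frame.shape[0]} frame")

def main():
    rclpy.init()
    node = CameraListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```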

RealSense and NVIDIA Collaborate to Usher in the Age of Physical AI
Integration of RealSense AI depth cameras with NVIDIA Jetson Thor and simulation platforms sets a new industry standard, driving breakthroughs in humanoids, AMRs and the future of intelligent machines SAN FRANCISCO — Aug. 25, 2025 — RealSense, Inc., the global leader in robotic perception, today announced a strategic collaboration with NVIDIA to accelerate the adoption
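As a rough illustration of the perception side of this collaboration, the sketch below streams depth frames from a RealSense camera with the pyrealsense2 SDK. The resolution, frame rate, and queried pixel are assumptions; the same API applies on a Jetson device, but this is not reference code from RealSense or NVIDIA.

```python
# Sketch: stream depth frames and read a center-pixel distance, e.g. for AMR obstacle checks.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # assumed settings
pipeline.start(config)

try:
    for _ in range(100):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Distance in meters at the center pixel of a 640x480 depth image.
        print("center distance:", depth.get_distance(320, 240))
finally:
    pipeline.stop()
```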

NVIDIA Blackwell-powered Jetson Thor Now Available, Accelerating the Age of General Robotics
News Summary: NVIDIA Jetson AGX Thor developer kit and production modules, robotics computers designed for physical AI and robotics, are now generally available. Over 2 million developers are using NVIDIA’s robotics stack, with Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic and Meta among early Jetson Thor adopters. Jetson Thor, powered by NVIDIA

AI at the Edge: The Next Gold Rush
This blog post was originally published at SiMa.ai’s website. It is reprinted here with the permission of SiMa.ai. Generative AI has ushered in a new era of technological progress, reminiscent of the rise of the internet in the 1990s. Beyond the impressive chatbots we’re now used to, the constant flow of innovation has introduced new

Maximize Robotics Performance by Post-training NVIDIA Cosmos Reason
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. First unveiled at NVIDIA GTC 2025, NVIDIA Cosmos Reason is an open and fully customizable reasoning vision language model (VLM) for physical AI and robotics. The VLM enables robots and vision AI agents to reason using prior
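Post-training here means adapting the pretrained VLM to a specific robot or domain. As a loose conceptual sketch only, the snippet below shows one common parameter-efficient approach, attaching LoRA adapters with Hugging Face transformers and peft. The checkpoint ID, target modules, and hyperparameters are placeholders and do not represent NVIDIA’s Cosmos Reason post-training recipe.

```python
# Conceptual sketch: parameter-efficient post-training with LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "your-org/your-vlm-checkpoint"   # placeholder model ID (assumption)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Train small adapters on the attention projections instead of all weights,
# which keeps domain adaptation feasible on modest hardware.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```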

R²D²: Boost Robot Training with World Foundation Models and Workflows from NVIDIA Research
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. As physical AI systems advance, the demand for richly labeled datasets is accelerating beyond what we can manually capture in the real world. World foundation models (WFMs), which are generative AI models trained to simulate, predict, and

NVIDIA Opens Portals to World of Robotics With New Omniverse Libraries, Cosmos Physical AI Models and AI Computing Infrastructure
New NVIDIA Omniverse NuRec 3D Gaussian Splatting Libraries Enable Large-Scale World Reconstruction
New NVIDIA Cosmos Models Enable World Generation and Spatial Reasoning
New NVIDIA RTX PRO Blackwell Servers and NVIDIA DGX Cloud Let Developers Run the Most Demanding Simulations Anywhere
Physical AI Leaders Amazon Devices & Services, Boston Dynamics, Figure AI and Hexagon Embrace Simulation and Synthetic Data Generation
August 11, 2025—SIGGRAPH—NVIDIA

Collaborating With Robots: How AI Is Enabling the Next Generation of Cobots
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Collaborative robots, or cobots, are reshaping how we interact with machines. Designed to operate safely in shared environments, AI-enabled cobots are now embedded across manufacturing, logistics, healthcare, and even the home. But their role goes beyond automation—they