Software for Embedded Vision

Physical AI: 8 Questions Every Engineering Leader Is Asking
This blog post was originally published at Geisel Software’s website. It is reprinted here with the permission of Geisel Software. Jensen Huang called it at CES 2025: “The next frontier of AI is physical.” Since then, the phrase has been everywhere — in investor decks, conference keynotes, and vendor pitches. But for the software engineering managers, directors, VPs, and

Airy3D Announces Support for MediaTek Genio SoCs for Edge 3D Vision Applications
Montreal, Canada – May 11, 2026 – Airy3D today announced that its DepthIQ™ SDK is supported on the MediaTek Genio Series of System-on-Chips, enabling compact and cost-efficient passive 3D vision solutions for embedded AI vision applications across robotics, industrial, retail, and smart devices. Airy3D’s DepthIQ technology enables simultaneous capture of high-quality 2D images and depth

Face Super Resolution for Better Video Experiences
This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. Video has become the primary medium for communication — from hybrid meetings to live events and social media. At the same time, expectations have risen. Faces need to look sharp, expressive, and natural — even when captured from

Case Study: How an Enterprise Tech Team Went from Dozens to 2,000+ Fine-Tuning Configurations
This blog post was originally published in expanded form at RapidFire AI’s website. It is reprinted here with the permission of RapidFire AI. The Use Case An AI-forward team at a Fortune 500 enterprise tech company builds intelligent autocomplete for enterprise form data entry: predicting what a user will select next across product dimensions, pricing fields,

Physical AI: From ST Sensors to a Robotics Platform, How Innovation Can Only Happen Through Collaboration
This blog post was originally published at STMicroelectronics’s website. It is reprinted here with the permission of STMicroelectronics. As technology aims to enable Physical AI, ST is sharing today how collaboration brought our sensors into a Holoscan Sensor Bridge module from Leopard Imaging, enabling developers to feed multi-modal sensing data to the NVIDIA Jetson Thor or

How We Built a 100% Effective Multi-Layer Safety Filter for Enterprise AI Agents
How Rapidflare’s multi-layer safety filter achieved 100% protection against harmful content while maintaining zero false positives on legitimate queries. This blog post was originally published at Rapidflare’s website. It is reprinted here with the permission of Rapidflare. When you deploy an AI agent to a public developer community, the threat model changes completely. In a

Beyond the Bench: Reinventing Embedded Hardware with Grinn
This video was originally published at Peridio’s website. It is reprinted here with the permission of Peridio. In this episode of Beyond the Bench from Peridio, Bill Brock sits down with Robert Otręba, Founder & CEO of Grinn, a Poland-based embedded engineering company operating for nearly 18 years. Robert shares how Grinn grew from a two-person

MPEG-5 LCEVC: A practical shift for industrial AI video pipelines
This blog post was originally published at V-Nova’s website. It is reprinted here with the permission of V-Nova. In Industrial and Defense environments, I hear the same story. More cameras. Higher resolutions. Stricter latency targets. Infrastructure that cannot be replaced easily. And increasing pressure around storage, bandwidth, compute, and privacy. This is why MPEG-5 LCEVC is becoming even more relevant. It improves compression

Key Trends Shaping the Semiconductor Industry in 2026
This blog post was originally published at HTEC’s website. It is reprinted here with the permission of HTEC. The hardware boom is slowing down. What comes next is a software, power, and inference problem—and most of the industry isn’t ready for any of it. AI chips are now 0.2% of all chips manufactured, but

Texas Instruments, D3 Embedded, Lattice and NVIDIA Show a Practical Radar-Camera Fusion Stack for Robotics
TI’s new application brief and companion demo outline how mmWave radar, camera input, FPGA-based sensor bridging and NVIDIA Holoscan can be combined into a low-latency perception pipeline for humanoids and other autonomous machines. Texas Instruments, D3 Embedded, Lattice Semiconductor and NVIDIA are outlining a concrete radar-camera fusion stack for robotics rather than just talking

Building Robotics Applications with Ryzen AI and ROS 2
This blog post was originally published at AMD’s website. It is reprinted here with the permission of AMD. This blog showcases how to deploy power-efficient Ryzen AI perception models with ROS 2 – the Robot Operating System. We utilize the Ryzen AI Max+ 395 (Strix-Halo) platform, which is equipped with an efficient Ryzen AI NPU and

BrainChip Unveils Radar Reference Platform to Bridge the ‘Identification Gap’ in Edge AI
LAGUNA HILLS, Calif. — April 6, 2026 — BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, BCHPY), the world’s first commercial producer of ultra-low-power, neuromorphic AI technology, today announced the launch of its Radar Reference Platform. This fully validated hardware and AI stack is designed to provide real-time object classification at the edge, solving the critical “identification gap” that limits traditional radar

Gemma 4 Models Optimized for Intel Hardware: Enabling Instant Deployment from Day Zero
We’re excited to announce Intel’s strategic partnership with Google to deliver optimized Gemma 4 models on Intel hardware from day one. This collaboration enables developers to leverage the power of Google’s latest AI models on Intel hardware: Intel® Core™ Ultra processors, Intel® Xeon® CPUs, and Intel® Arc™ GPUs. Developers can create AI applications that run

Upcoming Webinar on NVIDIA IGX Thor
On April 15, 2026, at 9:00 am PDT (12:00 pm EDT), NVIDIA will deliver the webinar “Unlock Real-Time Physical AI for the Industrial Edge.” From the event page: Join us to learn how IGX Thor’s Blackwell-powered architecture is powering autonomous robots, surgical systems, and high-performance industrial automation at the edge. NVIDIA experts will walk through

2026: The Year Intelligence Gets Physical
This article was originally published at Analog Devices’ website. It is reprinted here with the permission of Analog Devices. Artificial intelligence is entering a new phase where models interpret contextual data whilst interacting with the physical world in real time. At Analog Devices, Inc. (ADI), we call this Physical Intelligence: intelligent systems that can perceive, reason

Why Night HDR Is More Challenging Than Daytime HDR
This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. High Dynamic Range (HDR) imaging has become a standard feature in modern cameras, from smartphones to automotive and surveillance systems. While daytime HDR is already a complex task, nighttime HDR introduces a completely different level of difficulty. The same

AI-Assisted Coding: The Next Step in Abstraction
This blog post was originally published at Geisel Software’s website. It is reprinted here with the permission of Geisel Software. I’ve been using AI-assisted coding for my work a lot recently, and I’ll admit, I wasn’t sure how I felt about it. Was I cheating? How do I know it’s right? Do I admit to using

NVIDIA, T-Mobile and Partners Integrate Physical AI Applications on AI-RAN-Ready Infrastructure
News Summary: T-Mobile pilots NVIDIA RTX PRO 6000 Blackwell Server Edition AI infrastructure to demonstrate physical AI applications at the edge, complementing the AI-RAN Innovation Center’s distributed network Physical AI developers including Fogsphere, LinkerVision, Levatas, Vaidio and Siemens Energy are bringing reasoning and vision AI agents to the edge using the NVIDIA Metropolis Blueprint for
