Development Tools for Embedded Vision
ENCOMPASSING MOST OF THE STANDARD ARSENAL USED FOR DEVELOPING REAL-TIME EMBEDDED PROCESSOR SYSTEMS
The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, with the addition of specialized vision libraries and, in some cases, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
Both general-purpose and vendor-specific tools
Many vendors of vision devices use integrated CPUs based on the same instruction set (ARM, x86, etc.), allowing a common set of tools for software development. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals with unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires a customized version of standard development tools. Most CPU vendors develop their own optimized software tool chain, while also working with third-party software tool suppliers to ensure that the CPU components are broadly supported.
Heterogeneous software development in an integrated development environment
Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.
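The core idea above can be sketched in code. The class and method names below are hypothetical, not any vendor's actual API: a thin dispatch layer that registers per-backend kernels and falls back in priority order is the pattern that vendor SDKs (and cross-vendor standards such as OpenCL) use to hide heterogeneous accelerators behind a single interface.

```python
from typing import Callable, Dict, List

class HeterogeneousPipeline:
    """Hypothetical dispatch layer: registers per-backend kernels for each
    operation and runs the highest-priority backend that implements it."""

    def __init__(self, backend_priority: List[str]):
        self.backend_priority = backend_priority          # e.g. ["npu", "gpu", "cpu"]
        self.kernels: Dict[str, Dict[str, Callable]] = {} # op name -> backend -> impl

    def register(self, op: str, backend: str, fn: Callable) -> None:
        self.kernels.setdefault(op, {})[backend] = fn

    def run(self, op: str, frame):
        impls = self.kernels[op]
        for backend in self.backend_priority:
            if backend in impls:
                return backend, impls[backend](frame)
        raise RuntimeError(f"no backend implements {op}")

# The CPU kernel is the portable fallback; a "gpu" or "npu" entry would stand
# in for a vendor-accelerated path that the real toolchain compiles separately.
pipe = HeterogeneousPipeline(backend_priority=["gpu", "cpu"])
pipe.register("threshold", "cpu", lambda f: [1 if px > 128 else 0 for px in f])

backend, result = pipe.run("threshold", [10, 200, 130, 50])
print(backend, result)  # falls back to cpu: no gpu kernel is registered
```

In a real toolchain, each accelerated kernel is built by a different compiler in the vendor's suite, which is exactly why an integrated development environment that coordinates them matters.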