Development Tools for Embedded Vision

The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, with the addition of specialized vision libraries and possibly vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
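As a concrete illustration of the kind of per-pixel kernels such vision libraries supply, here is a minimal pure-Python sketch of grayscale conversion and thresholding. The function names are illustrative, not tied to any particular library; a production library would implement these stages in optimized C/assembly or offload them to an accelerator.

```python
# Two common early-pipeline vision kernels, sketched in plain Python.
# Names and structure are hypothetical; real embedded vision libraries
# provide heavily optimized equivalents.

def to_grayscale(rgb_image):
    """Convert a list-of-rows RGB image to 8-bit grayscale (BT.601 weights)."""
    return [
        [min(255, round(0.299 * r + 0.587 * g + 0.114 * b)) for (r, g, b) in row]
        for row in rgb_image
    ]

def threshold(gray_image, cutoff=128):
    """Binarize a grayscale image: 255 where pixel >= cutoff, else 0."""
    return [
        [255 if px >= cutoff else 0 for px in row]
        for row in gray_image
    ]

# Usage: a 1x2 "image" with one dark and one bright pixel.
frame = [[(10, 10, 10), (200, 200, 200)]]
gray = to_grayscale(frame)   # [[10, 200]]
mask = threshold(gray)       # [[0, 255]]
```

Even this toy example hints at why vendor tooling matters: per-pixel loops like these dominate vision workloads and are the first candidates for acceleration.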

Both general-purpose and vendor-specific tools

Many vendors of vision devices use integrated CPUs based on the same instruction set (ARM, x86, etc.), allowing a common set of software development tools to be used. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals that have unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires a customized version of standard development tools. Most CPU vendors develop their own optimized software tool chain, while also working with third-party software tool suppliers to make sure that the CPU components are broadly supported.
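The extended programming model described above can be sketched with a trivial dispatch pattern: the application codes against one interface, and the toolchain or runtime selects the fastest available backend. This is a hypothetical sketch under assumed names ("dsp", "gpu", "cpu"); it is not any vendor's actual API.

```python
# Hypothetical sketch of backend dispatch in a heterogeneous vision stack.
# The application calls one function; the runtime picks the best available
# implementation. Only a portable CPU fallback is defined here.

def _convolve_cpu(signal, kernel):
    """Portable fallback: direct 1-D convolution ("valid" mode)."""
    n, k = len(signal), len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(n - k + 1)
    ]

# In a real toolchain these would be DSP/GPU/FPGA entry points discovered
# at runtime; here only the CPU fallback is registered.
_BACKENDS = {"cpu": _convolve_cpu}

def convolve(signal, kernel, preferred=("dsp", "gpu", "cpu")):
    """Dispatch to the first available backend in preference order."""
    for name in preferred:
        impl = _BACKENDS.get(name)
        if impl is not None:
            return impl(signal, kernel)
    raise RuntimeError("no convolution backend available")

edges = convolve([1, 2, 4, 8], [1, -1])  # simple difference filter
```

The design choice this illustrates: keeping one stable application-facing interface lets the vendor swap in accelerator-specific implementations without forcing the developer to rewrite against each device's native toolchain.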

Heterogeneous software development in an integrated development environment

Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.

Chips&Media Completes Development of Next-Gen ‘AV2’ HW Decoder IP

Key Takeaways:
- Implementing AOMedia's latest AV2 standard as world-class HW IP, leading the next-gen video ecosystem
- Targeting the North American Big Tech-led OTT market (YouTube, Netflix) and solidifying the justification for global standard adoption
- HW RTL release in May, with active commercial licensing talks underway with major North American clients

May 12, 2026 – Chips&Media

Nota AI Wins Grand Prize at NVIDIA Nemotron Hackathon, Proving MoE Quantization Prowess with Synthetic Data Technology

- Took 1st place in Track C and Grand Prize among all 20 competing teams with synthetic data generation technology specialized for MoE quantization
- Built a dataset using an agent based on Nemotron 3 Super120B, presenting a data-centric rather than algorithm-centric optimization approach

SEOUL, South Korea, April 24, 2026 /PRNewswire/ — Nota AI, a leading AI model compression and optimization company,

Physical AI: From ST Sensors to a Robotics Platform, How Innovation Can Only Happen Through Collaboration

This blog post was originally published at STMicroelectronics’s website. It is reprinted here with the permission of STMicroelectronics. As technology aims to enable Physical AI, ST is sharing today how collaboration brought our sensors into a Holoscan Sensor Bridge module from Leopard Imaging, enabling developers to feed multi-modal sensing data to the NVIDIA Jetson Thor or

Upcoming Webinar on Sony’s IMX925/935 Sensor Series and High Performance SLVS-EC Interface

On May 12, 2026, at 10:00 am CEST, RESTAR FRAMOS will deliver a webinar, "Reaching High-Speed and High-Resolution Architecture with IMX925/935 and SLVS-EC." From the event page: From sensor architecture to real-world integration — join the engineers behind the technology. High-speed and high-resolution machine vision systems are pushing the limits of data throughput, latency, and

MPEG-5 LCEVC: A practical shift for industrial AI video pipelines

This blog post was originally published at V-Nova’s website. It is reprinted here with the permission of V-Nova. In Industrial and Defense environments, I hear the same story. More cameras. Higher resolutions. Stricter latency targets. Infrastructure that cannot be replaced easily. And increasing pressure around storage, bandwidth, compute, and privacy. This is why MPEG-5 LCEVC is becoming even more relevant. It improves compression

Upcoming Webinar on Building an Object Detection Pipeline

On May 27, 2026, at 10:00 am PDT (1:00 pm EDT) Intel will deliver a webinar “From Annotation to Deployment: Building an Object Detection Pipeline with Geti, YOLO26, and OpenVINO™” From the event page: Learn from Ultralytics and Intel® AI experts working side by side in this hands-on session and discover how to build production-ready

Texas Instruments, D3 Embedded, Lattice and NVIDIA Show a Practical Radar-Camera Fusion Stack for Robotics

TI's new application brief and companion demo outline how mmWave radar, camera input, FPGA-based sensor bridging and NVIDIA Holoscan can be combined into a low-latency perception pipeline for humanoids and other autonomous machines. Texas Instruments, D3 Embedded, Lattice Semiconductor and NVIDIA are outlining a concrete radar-camera fusion stack for robotics rather than just talking

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411