Development Tools for Embedded Vision

The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, with the addition of specialized vision libraries and, possibly, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
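
To make the software side concrete, here is a minimal sketch of the kind of real-time capture-and-process loop these tool chains are used to build. OpenCV is assumed purely as a representative vision library; any comparable library plays the same role.

```cpp
// Minimal real-time vision loop: grab frames, process, display.
// Assumes OpenCV as the vision library and camera index 0.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture cap(0);               // open the default camera
    if (!cap.isOpened()) {
        std::fprintf(stderr, "camera not available\n");
        return 1;
    }
    cv::Mat frame, edges;
    while (cap.read(frame)) {              // pull frames until the stream ends
        cv::cvtColor(frame, edges, cv::COLOR_BGR2GRAY);
        cv::Canny(edges, edges, 50, 150);  // stand-in for the real vision workload
        cv::imshow("edges", edges);
        if (cv::waitKey(1) == 27) break;   // ESC exits
    }
    return 0;
}
```

On a development host this runs as-is; for an embedded target, the same source is typically cross-compiled with the vendor's tool chain, with the display stage replaced by the product's actual output path.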

Both general-purpose and vendor-specific tools

Many vendors of vision devices use integrated CPUs based on the same instruction sets (ARM, x86, etc.), allowing a common set of software development tools. However, even when the base instruction set is the same, each CPU vendor integrates a different set of peripherals with unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires customized versions of the standard development tools. Most CPU vendors develop their own optimized software tool chains while also working with third-party tool suppliers to ensure that their CPU components are broadly supported.
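
As a hedged sketch of this extended programming model, the example below uses OpenCV's transparent API (cv::UMat), one concrete abstraction of this kind: the same source runs on the host CPU, or is offloaded through the vendor's OpenCL runtime to an integrated GPU or other accelerator when one is present.

```cpp
// One code path, two execution targets: with cv::UMat, OpenCV dispatches
// supported operations to an OpenCL-capable accelerator if the vendor's
// runtime exposes one, and otherwise falls back to the CPU implementation.
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <cstdio>

int main() {
    std::printf("OpenCL accelerator available: %s\n",
                cv::ocl::haveOpenCL() ? "yes" : "no");

    cv::UMat src(1080, 1920, CV_8UC3), dst;
    cv::randu(src, 0, 255);                           // synthetic test frame
    cv::GaussianBlur(src, dst, cv::Size(7, 7), 1.5);  // may run on the accelerator
    return 0;
}
```

The vendor-specific part is hidden in the OpenCL runtime and kernels supplied with the device's tool chain, which is exactly why those tool chains need customized versions of the standard tools.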

Heterogeneous software development in an integrated development environment

Since vision applications often require a mix of processing architectures, the development tools become more complicated: they must handle multiple instruction sets and additional system-debugging challenges. Most vendors therefore provide a suite of tools that integrates the development tasks into a single interface, simplifying software development and testing.
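
One way such a suite keeps heterogeneous development manageable is by letting a single code base be retargeted across compute engines. The sketch below assumes OpenCV's DNN module as an illustration; "model.onnx" is a hypothetical placeholder for any pre-trained network, and the backend/target constants are stock OpenCV options rather than any particular vendor's tool chain.

```cpp
// Retargeting one application across heterogeneous compute: the network
// and application code stay the same, only the preferred backend/target
// changes. "model.onnx" is a hypothetical placeholder model file.
#include <opencv2/dnn.hpp>
#include <opencv2/core.hpp>

int main() {
    cv::dnn::Net net = cv::dnn::readNet("model.onnx");

    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);  // or DNN_TARGET_CPU

    // Feed a dummy 224x224 frame and run inference on the selected target.
    cv::Mat blob = cv::dnn::blobFromImage(
        cv::Mat::zeros(224, 224, CV_8UC3), 1.0 / 255.0, cv::Size(224, 224));
    net.setInput(blob);
    cv::Mat out = net.forward();
    return 0;
}
```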

Microchip Technology Demonstration of AI-powered Face ID on the PolarFire SoC FPGA Using the VectorBlox SDK

Avery Williams, Channel Marketing Manager for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Williams demonstrates ultra-efficient AI-powered facial recognition on Microchip’s PolarFire SoC FPGA using the VectorBlox Accelerator SDK. Pre-trained neural networks are quantized to INT8 and compiled to run directly on

How to Think About Large Language Models on the Edge

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. ChatGPT was released to the public on November 30th, 2022, and the world – at least, the connected world – has not been the same since. Surprisingly, almost three years later, despite massive adoption, we do not

3LC Demonstration of Catching Synthetic Slip-ups with 3LC

Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates the investigation of a curious embryo classification study from Norway, where synthetic data was supposed to help train a model – but something didn’t quite hatch right. Using 3LC to

Software-defined Vehicles Drive Next-generation Auto Architectures

Figure: SDV level chart comparing major OEMs.

The automotive industry is undergoing a foundational shift toward Software-Defined Vehicles (SDVs), where vehicle functionality, user experience, and monetization opportunities are increasingly governed by software rather than hardware. This evolution, captured comprehensively in the latest IDTechEx report, “Software-Defined Vehicles, Connected Cars, and AI in Cars 2026-2036: Markets, Trends, and

3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector

Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset – like unlabeled forks and phantom objects – using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on a

VeriSilicon Demonstration of the Open Se Cura Project

Chris Wang, VP of Multimedia Technologies and a member of the CTO office at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Wang demonstrates examples from the Open Se Cura Project, a joint effort between VeriSilicon and Google. The project showcases a scalable, power-efficient, and
