Development Tools for Embedded Vision

The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, supplemented by specialized vision libraries and, in some cases, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
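
For illustration, below is a minimal sketch of the kind of vision-library code such toolchains must build and debug, written against OpenCV (a widely used open-source vision library). The input file name frame.png is a placeholder assumption, not something from the original text.

// Minimal vision-pipeline sketch using OpenCV. Assumes OpenCV is
// installed and a test image named "frame.png" exists (placeholder).
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdio>

int main() {
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_COLOR);
    if (frame.empty()) {
        std::fprintf(stderr, "could not load frame.png\n");
        return 1;
    }

    // A typical early-stage pipeline: grayscale conversion
    // followed by edge detection.
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50.0, 150.0);

    cv::imwrite("edges.png", edges);
    return 0;
}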

Both general-purpose and vendor-specific tools

Many vendors of vision devices use integrated CPUs based on the same instruction set (ARM, x86, etc.), allowing a common set of software development tools. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals with unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires a customized version of standard development tools. Most CPU vendors develop their own optimized software tool chain, while also working with third-party software tool suppliers to make sure that the CPU components are broadly supported.
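
As a hedged illustration of how one programming model can span such heterogeneous devices, the sketch below enumerates the compute devices an OpenCL runtime exposes. It assumes an OpenCL driver is installed (link with -lOpenCL) and uses only the standard OpenCL C API; it is a sketch, not any particular vendor's toolchain.

// Enumerate heterogeneous compute devices via the OpenCL C API.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       16, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof(type), &type, nullptr);

            // CPU, GPU, and ACCELERATOR map onto the mix of devices
            // described above (CPUs, GPUs, DSPs, FPGAs).
            const char *kind =
                (type & CL_DEVICE_TYPE_GPU)         ? "GPU" :
                (type & CL_DEVICE_TYPE_ACCELERATOR) ? "accelerator" :
                (type & CL_DEVICE_TYPE_CPU)         ? "CPU" : "other";
            std::printf("platform %u device %u: %s (%s)\n",
                        p, d, name, kind);
        }
    }
    return 0;
}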

Heterogeneous software development in an integrated development environment

Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.
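
One concrete example of this single-interface approach is OpenCV's "transparent API": the sketch below uses the same algorithm calls as the CPU-only version earlier, but cv::UMat lets the library dispatch work to an OpenCL-capable accelerator when one is present and fall back to the CPU otherwise. The file names are placeholder assumptions.

// Single-source heterogeneous execution via OpenCV's transparent API.
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdio>

int main() {
    cv::ocl::setUseOpenCL(true);  // request the accelerated path
    std::printf("OpenCL available: %s\n",
                cv::ocl::haveOpenCL() ? "yes" : "no");

    cv::UMat frame, gray, edges;
    cv::imread("frame.png", cv::IMREAD_COLOR).copyTo(frame);
    if (frame.empty()) return 1;

    // Identical algorithm calls; only the backend chosen at
    // run time differs.
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50.0, 150.0);
    cv::imwrite("edges.png", edges);
    return 0;
}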

Nota AI Demonstration of Revolutionizing Driver Monitoring Systems

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates Nota DMS, his company’s state-of-the-art driver monitoring system. The solution enhances driver safety by monitoring attention and detecting drowsiness in real time. Cutting-edge AI techniques make Nota DMS

Read More »

Steering a Revolution: Optimized Automated Driving with Heterogeneous Compute

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Qualcomm Technologies’ latest whitepaper explores the advantages of Snapdragon Ride Solutions based on heterogeneous compute SoCs. As the automotive industry continues to progress toward automated driving, advanced driver assistance systems (ADAS) are in high demand. These systems

Read More »

Nextchip Demonstration of the APACHE5 ADAS SoC

Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s APACHE5 ADAS SoC. APACHE5 is ready for market with an accompanying SDK, and has passed all qualifications for production such as PPAP (the Production Part

Read More »

Nextchip Demonstration of the APACHE6 ADAS SoC

Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s APACHE6 ADAS SoC. With advanced computing power, APACHE6 makes your vehicle smarter, avoiding risk while driving and parking.

Read More »

Lattice Semiconductor Demonstration of a Low-latency Edge AI Sensor Bridge for NVIDIA’s Holoscan

Kambiz Khalilian, Director of Strategic Initiatives and Ecosystem Alliances for Lattice Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Khalilian demonstrates a low-latency edge AI sensor bridge solution for NVIDIA’s Holoscan. The Lattice FPGA-based Holoscan Sensor Bridge enables high-throughput and low-latency sensor aggregation and

Read More »

Lumotive Demonstration of a Sensor Hub for LiDAR, Radar and Camera Fusion

Kevin Camera, Vice President of Product for Lumotive, demonstrates the company’s latest edge AI and vision technologies and products in Lattice Semiconductor’s booth at the 2024 Embedded Vision Summit. Specifically, Camera demonstrates a sensor hub for concurrently fusing and processing data from multiple sources: Velodyne’s VLP-16 LiDAR, Lumotive’s M30 solid state LiDAR, Texas Instruments’

Read More »

Develop Generative AI-powered Visual AI Agents for the Edge

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. An exciting breakthrough in AI technology—Vision Language Models (VLMs)—offers a more dynamic and flexible method for video analysis. VLMs enable users to interact with image and video input using natural language, making the technology more accessible and

Read More »
