
Development Tools for Embedded Vision

ENCOMPASSING MOST OF THE STANDARD ARSENAL USED FOR DEVELOPING REAL-TIME EMBEDDED PROCESSOR SYSTEMS

The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, supplemented with specialized vision libraries and, in some cases, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
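
As a concrete (and deliberately generic) illustration, the sketch below shows how a specialized vision library layers on top of an otherwise standard embedded C++ toolchain. OpenCV, the camera index, and the Canny workload are assumptions chosen for familiarity, not a recommendation of any particular vendor’s stack.

```cpp
// Minimal sketch: a vision-library processing loop built with a standard
// embedded C++ toolchain. OpenCV is assumed to be available on the target;
// the camera index (0) and the Canny thresholds are illustrative only.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture cap(0);              // open the default camera device
    if (!cap.isOpened()) {
        std::fprintf(stderr, "camera open failed\n");
        return 1;
    }
    cv::Mat frame, gray, edges;
    for (int i = 0; i < 100; ++i) {       // bounded loop for a simple test run
        if (!cap.read(frame)) break;      // grab one frame of real-time video
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);  // placeholder vision workload
    }
    return 0;
}
```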

Both general-purpose and vendor-specific tools

Many vendors of vision devices use integrated CPUs based on the same instruction set (ARM, x86, etc.), allowing a common set of software development tools. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals that have unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires a customized version of standard development tools. Most CPU vendors develop their own optimized software tool chain, while also working with third-party software tool suppliers to make sure that the CPU components are broadly supported.
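
To make the “same base instruction set, different acceleration” point concrete, here is a minimal sketch of one pixel kernel with per-architecture fast paths. The function name and 16-pixel block size are hypothetical; production tool chains typically ship tuned libraries for exactly this kind of routine.

```cpp
// Minimal sketch: a saturating pixel-add kernel with architecture-specific
// fast paths selected at compile time, plus a portable scalar fallback.
#include <cstdint>
#include <cstddef>
#if defined(__ARM_NEON)
  #include <arm_neon.h>
#elif defined(__SSE2__)
  #include <emmintrin.h>
#endif

void add_saturate_u8(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n) {
    size_t i = 0;
#if defined(__ARM_NEON)
    for (; i + 16 <= n; i += 16)          // ARM path: NEON intrinsics
        vst1q_u8(out + i, vqaddq_u8(vld1q_u8(a + i), vld1q_u8(b + i)));
#elif defined(__SSE2__)
    for (; i + 16 <= n; i += 16)          // x86 path: SSE2 intrinsics
        _mm_storeu_si128((__m128i*)(out + i),
                         _mm_adds_epu8(_mm_loadu_si128((const __m128i*)(a + i)),
                                       _mm_loadu_si128((const __m128i*)(b + i))));
#endif
    for (; i < n; ++i) {                  // scalar tail / generic fallback
        unsigned s = unsigned(a[i]) + unsigned(b[i]);
        out[i] = s > 255 ? 255 : uint8_t(s);
    }
}
```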

Heterogeneous software development in an integrated development environment

Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.
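
One widely used example of hiding a heterogeneous target behind a single code base is OpenCV’s transparent API, sketched below: the same source compiles for the CPU, and the work may be dispatched to an OpenCL-capable GPU or DSP at run time if one is present. The image size and filter parameters are placeholders.

```cpp
// Minimal sketch: OpenCV's transparent API (cv::UMat). Whether the blur
// runs on the CPU or an OpenCL device is decided at run time.
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <cstdio>

int main() {
    std::printf("OpenCL device available: %s\n",
                cv::ocl::haveOpenCL() ? "yes" : "no");
    cv::UMat src(1080, 1920, CV_8UC3, cv::Scalar(0)), dst;
    cv::GaussianBlur(src, dst, cv::Size(5, 5), 1.5);  // may be offloaded
    return 0;
}
```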

Accelerating Innovation in Low Power AI Applications with Lattice FPGAs

This blog post was originally published at Lattice Semiconductor’s website. It is reprinted here with the permission of Lattice Semiconductor. On-device AI inference capability is expected to reach 60% of all devices by 2024, according to ABI Research. This underscores the rapid pace of AI innovation that has taken place over the last few years…


“How Transformers are Changing the Direction of Deep Learning Architectures,” a Presentation from Synopsys

Tom Michiels, System Architect for DesignWare ARC Processors at Synopsys, presents the “How Transformers are Changing the Direction of Deep Learning Architectures” tutorial at the May 2022 Embedded Vision Summit. The neural network architectures used in embedded real-time applications are evolving quickly. Transformers are a leading deep learning approach for…


“Introduction to Computer Vision with Convolutional Neural Networks,” a Presentation from Intel

Mohammad Haghighat, Senior AI Software Product Manager at Intel, presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2022 Embedded Vision Summit. This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques and…


How to Build a Custom Embedded Stereo System for Depth Perception

This article was originally published at Teledyne FLIR’s website. It is reprinted here with the permission of Teledyne FLIR. There are various 3D sensor options for developing depth perception systems, including stereo vision with cameras, lidar, and time-of-flight sensors. Each option has its strengths and weaknesses. A stereo system is typically low cost, rugged enough…
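
For readers who want a starting point, the sketch below computes a disparity map with OpenCV’s generic StereoBM block matcher. This is an illustrative stand-in, not Teledyne FLIR’s pipeline; the file names and matcher parameters are placeholders, and rectified grayscale inputs are assumed.

```cpp
// Minimal sketch: block-matching stereo depth with OpenCV's StereoBM.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;
    auto bm = cv::StereoBM::create(64, 15);   // numDisparities, blockSize
    cv::Mat disparity;
    bm->compute(left, right, disparity);      // fixed-point disparity (CV_16S)
    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));
    cv::imwrite("disparity.png", disp8);      // closer objects appear brighter
    return 0;
}
```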


“Are Neuromorphic Vision Technologies Ready for Commercial Use?,” an Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, European Correspondent for EE Times, moderates the “Are Neuromorphic Vision Technologies Ready for Commercial Use?” Expert Panel at the May 2022 Embedded Vision Summit. Other panelists include Garrick Orchard, Research Scientist at Intel Labs, James Marshall, Chief Scientific Officer at Opteran, Ryad Benosman, Professor at the University of…


Arm Achieves Record Revenue and Shipments in Q1 FY 2022

August 8, 2022 – In Q1 FY 2022 Arm reported a record Q1 total revenue of $719 million, up 6% year-over-year, and a record quarterly royalty revenue of $453 million, up 22% year-over-year. This is the first time quarterly royalty revenue has exceeded $400 million. Arm’s strategy of diversifying into markets beyond mobile…


“Building Embedded Vision Products: Management Lessons From The School of Hard Knocks,” a Presentation from the Edge AI and Vision Alliance

Phil Lapsley, Vice President of the Edge AI and Vision Alliance, presents the “Building Embedded Vision Products: Management Lessons From The School of Hard Knocks” tutorial at the May 2022 Embedded Vision Summit. It’s hard to build embedded AI and vision products, and the challenges aren’t just technical. In this…

