
Processors for Embedded Vision


This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.

(Diagram: a typical embedded vision processing pipeline)
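The pipeline above can be sketched in miniature. This is only an illustration in plain Python; the stage names and functions are placeholders, not a real vision library API.

```python
# A minimal sketch of a vision pipeline in plain Python. The stage
# functions are illustrative placeholders, not a real vision library API.

def preprocess(frame):
    """Pixel-level pre-processing: normalize 8-bit values to 0..1."""
    return [[p / 255.0 for p in row] for row in frame]

def extract_features(frame):
    """Feature extraction: here, simply mean brightness per row."""
    return [sum(row) / len(row) for row in frame]

def classify(features):
    """Heuristic decision stage -- the kind of work suited to a CPU."""
    return "bright" if sum(features) / len(features) > 0.5 else "dark"

def run_pipeline(frame):
    # The early, compute-intensive stages are the usual targets for
    # GPU/DSP/FPGA acceleration; the final decision stays on the CPU.
    return classify(extract_features(preprocess(frame)))

frame = [[200, 220], [180, 240]]  # tiny synthetic 2x2 grayscale frame
print(run_pipeline(frame))  # -> bright
```

The split mirrors the division of labor discussed below: early stages touch every pixel and dominate the compute budget, while the final stages are branchy control logic.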

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.

General-purpose CPUs

While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control, and it may be paired with a vision-specialized device for better performance on pixel-level processing.

Graphics Processing Units

High-performance GPUs deliver massive parallel computing throughput, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPUs (GPGPUs) have primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
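The kind of work that maps well to a GPU can be illustrated in plain Python: the same small per-pixel "kernel" runs independently on every pixel, so a GPU can launch thousands of such invocations concurrently. The kernel here is a hypothetical brightness/contrast adjustment, not any particular library's API; a real design would use CUDA, OpenCL, or a GPU-enabled vision library.

```python
# Conceptual sketch of GPU-style data parallelism. The same small
# "kernel" runs independently on every pixel; on a GPU these
# invocations execute concurrently. This pure-Python stand-in is
# sequential and only models the independence of each pixel.

def kernel(pixel):
    """Hypothetical per-pixel brightness/contrast adjustment."""
    return min(255, int(pixel * 1.2 + 10))

def apply_kernel(frame):
    # map() applies the kernel to each pixel with no data dependencies
    # between pixels -- exactly the property a GPU exploits.
    return [list(map(kernel, row)) for row in frame]

print(apply_kernel([[100, 200], [0, 255]]))  # -> [[130, 250], [10, 255]]
```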

Digital Signal Processors

DSPs are very efficient for processing streaming data, since the bus and memory architecture are optimized to process high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for processing general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
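The streaming multiply-accumulate (MAC) workload that DSP datapaths are built for can be sketched as an FIR filter applied to samples as they arrive. This plain-Python model only stands in for what DSP hardware does with single-cycle MAC units and zero-overhead loops.

```python
# Sketch of the streaming multiply-accumulate (MAC) pattern DSPs
# optimize: an FIR filter processing a stream of incoming samples.

from collections import deque

def fir_stream(samples, taps):
    # Sliding window of the most recent samples, newest first.
    window = deque([0.0] * len(taps), maxlen=len(taps))
    for s in samples:
        window.appendleft(s)
        # One multiply-accumulate per tap for each incoming sample;
        # DSP hardware performs each MAC in a single cycle.
        yield sum(w * t for w, t in zip(window, taps))

# A 3-tap moving-average filter over a short sample stream.
out = list(fir_stream([3, 6, 9], [1/3, 1/3, 1/3]))
print([round(v, 2) for v in out])  # -> [1.0, 3.0, 6.0]
```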

Field Programmable Gate Arrays (FPGAs)

Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because this parallelism offers such an advantage for accelerating computer vision, many common vision algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
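The stage-level parallelism an FPGA provides can be emulated in software to make the idea concrete: each stage below is its own worker processing a different item at the same time, the way independent FPGA logic blocks operate on successive frames. Threads and queues only approximate true parallel hardware, and the two stages here are made-up stand-ins (e.g., for debayering and filtering).

```python
# Software emulation of FPGA-style pipelined parallelism: each stage
# runs concurrently on its own data, connected by FIFOs (queues),
# much like hardware blocks connected by streaming interfaces.

import threading, queue

def stage(fn, q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:       # poison pill: shut the stage down
            q_out.put(None)
            break
        q_out.put(fn(item))

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
# Two illustrative stages standing in for real pipeline blocks.
threading.Thread(target=stage, args=(lambda f: f * 2, q0, q1)).start()
threading.Thread(target=stage, args=(lambda f: f + 1, q1, q2)).start()

for frame in [1, 2, 3]:        # feed three "frames" into the pipeline
    q0.put(frame)
q0.put(None)

results = []
while (item := q2.get()) is not None:
    results.append(item)
print(results)  # -> [3, 5, 7]
```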

Vision-Specific Processors and Cores

Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make them more difficult to program than other kinds of processors, and some ASSPs are not user-programmable at all.

Efinix Demonstration of Using Titanium FPGAs with Quantum Acceleration to Optimize Edge AI Performance while Reducing Time to Market

Roger Silloway, Director of North American Sales at Efinix, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Silloway demonstrates how to use the company’s Titanium FPGAs with quantum acceleration to optimize edge AI performance while reducing time to market. Efinix Quantum Acceleration provides a predefined

Read More »

Morpho and Qualcomm Technologies, Inc. Join Forces, Enabling AI and Image Processing on Snapdragon Compute Platforms

Tokyo, Japan – December 10th, 2021 – Morpho, Inc. (hereinafter, “Morpho”), a global leader in image processing solutions and imaging AI solutions, announced today its collaboration with Qualcomm Technologies, Inc., a leading semiconductor company, to implement Morpho’s AI and image processing technologies in upcoming Snapdragon® compute platforms. Due to the surge in online meetings driven by

Read More »

Codeplay Software Demonstration of the Differences Between RISC-V and Arm Vector Extensions

Andrew Richards, CEO, President and Founder of Codeplay Software, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Richards demonstrates the differences in code complexity and execution between RISC-V and Arm vector extensions. The RISC-V Foundation recently announced the release of its vector (RVV) extension. Earlier

Read More »

AMD Demonstration of MIVisionX and rocAL, Two of the Company’s Computer Vision and Machine Learning Solutions

Pavel Tcherniaev, Senior Software Development Engineer at Advanced Micro Devices (AMD), demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Tcherniaev demonstrates MIVisionX and rocAL, two of the company’s computer vision and machine learning solutions. AMD’s MIVisionX is a set of comprehensive computer vision and machine

Read More »

Allegro DVT Demonstration of Its Advanced Encoder and Decoder IP for Applications Demanding Highest Video Quality

Doug Ridge, former Strategic Marketing Manager at Allegro DVT, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Ridge demonstrates the company’s advanced encoder and decoder IP for applications demanding highest video quality. This demo covers the development (and integration in a SoC) of video encoder

Read More »

Imagination Launches RISC-V CPU Family

Catapult CPUs are based on RISC-V ISA and designed for heterogeneous solutions

London, England – 6th December 2021 – Imagination Technologies announces Catapult, a RISC-V CPU product line designed from the ground-up for next-generation heterogeneous compute needs. Based on RISC-V, the open-source CPU architecture, which is transforming processor design, Imagination’s Catapult CPUs can be configured for

Read More »

OpenFive Demonstration of How to Customize an Edge AI Vision SoC

David Lee, Director of Product Management at OpenFive, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Lee demonstrates how to customize an edge AI vision SoC. Deploying a custom accelerator in a new embedded application is difficult without the rest of the IP also required

Read More »

LAON PEOPLE Demonstration of Surface Inspection Using a Deep Learning Solution

Henry Sang, Director of Business Development at LAON PEOPLE, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Sang demonstrates fully automatic inspection of complex surfaces (e.g., automobiles) using LAON PEOPLE’s advanced machine vision camera and deep learning algorithms. The solution is able to spot defects

Read More »

LAON PEOPLE Demonstration of Traffic Analysis Using a Deep Learning Solution

Luke Faubion, Traffic Solution Director at LAON PEOPLE, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Faubion demonstrates traffic analysis using the company’s deep learning solution. The traffic analysis program Faubion demonstrates doesn’t require installing a new IP camera. LAON PEOPLE’s AI solution provides vehicle,

Read More »

Imagination Technologies Demonstration of Deploying Hardware-accelerated Long Short-term Memory Neural Networks on Edge Devices

Gilberto Rodriguez, Director of AI Product Management at Imagination Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Rodriguez demonstrates deploying hardware-accelerated long short-term memory (LSTM) neural networks on edge devices, using Imagination Technologies’ neural network acceleration (NNA) IP. Rodriguez shows how to unroll and

Read More »

EdgeCortix Demonstration of the Dynamic Neural Accelerator F-series and MERA Compiler for Low-latency Deep Neural Network Inference

Hamid Zohouri, Director of Product at EdgeCortix, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Zohouri demonstrates the company’s Dynamic Neural Accelerator (DNA) F-series architecture and MERA compiler for low-latency deep neural network (DNN) inference. EdgeCortix’s DNA architecture is a runtime-reconfigurable, highly scalable and power-efficient

Read More »

STMicroelectronics Streamlines Machine-Learning Software Development for Connected Devices and Industrial Equipment with Upgrades to NanoEdge™ AI Studio

New algorithms to better predict equipment anomalies and future behavior
New capabilities to ease use of industrial sensor data acquisition and management using an ST development board
Enhanced user interface to make machine-learning implementation easier for embedded developers with no data-science skills

STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of

Read More »

ADLINK Releases its First SMARC Module Based on Qualcomm QRB5165 Enabling High Performance Robots and Drones at Low Power

Integrated IoT technologies provide on-device AI capabilities at the edge

Summary: The LEC-RB5 SMARC is a high-performance module, built with the Qualcomm® QRB5165 processor, allowing on-device AI and 5G connectivity capabilities for consumer, enterprise and industrial robots. It features a high-performance NPU, an octa-core (8x Arm Cortex-A77 cores) CPU, low power consumption and support for

Read More »

Flex Logix Joins the Edge AI and Vision Alliance

Membership will help support Flex Logix’s rapid growth in the edge vision market with its inference accelerator chips and boards

MOUNTAIN VIEW, Calif. – December 1, 2021 – Flex Logix® Technologies, Inc., supplier of the most-efficient AI edge inference accelerator and the leading supplier of eFPGA IP, today announced that it has joined the Edge

Read More »

Coherent Logix Demonstration of Virtual Surround View Fire Detection Using a Deep Neural Network and the HyperX Processor

Martin Hunt, Director of Applications Engineering at Coherent Logix, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Hunt demonstrates virtual surround view fire detection using a deep neural network (DNN) and the company’s HyperX processor. In this demo, Hunt shows the detection of fire using

Read More »

Coherent Logix Demonstration of Ultra-low Latency Industrial Inspection at the Edge Using the HyperX Processor

Martin Hunt, Director of Applications Engineering at Coherent Logix, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Hunt demonstrates ultra-low latency industrial inspection at the edge using the company’s HyperX processor. In this demo, Hunt shows how to use the HyperX Memory Network parallel processor

Read More »

MicroSys Puts Hailo AI Performance on Its SoM Platforms with NXP S32G Vehicle Network Processors

Performance boost for artificial intelligence in situational awareness, autonomous driving and predictive maintenance applications

Columbia, MD / Munich, Germany, and Tel Aviv, Israel, November 30th, 2021 – NXP Gold partner MicroSys Electronics announced that its new embedded SoM (System-on-Module) platform miriac AIP-S32G274A, which is based on NXP S32G vehicle network processors, now supports Hailo-8 AI accelerator modules.

Read More »

BrainChip Demonstration of How the Akida Neural Processor Solves Problems At the Edge

Todd Vierra, Director of Customer Engagements at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Vierra demonstrates how the company’s Akida event-based neural processor (NPU) solves problems at the edge. Utilizing BrainChip’s Akida NPU, you can leverage advanced neuromorphic computing as the engine for

Read More »

BrainChip Demonstration of AI at the Sensor with 3D Point Cloud Solutions Based on the Akida Neural Processor

Todd Vierra, Director of Customer Engagements at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Vierra demonstrates AI at the sensor with 3D point cloud solutions based on the company’s Akida event-based neural processor (NPU). Utilizing BrainChip’s Akida NPU, you can leverage advanced neuromorphic

Read More »

