Processors for Embedded Vision

This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.

[Figure: typical embedded vision pipeline]
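As a concrete illustration of these stages, the sketch below maps a pipeline to a few lines of C++ using OpenCV (an assumption; no particular library is prescribed here): acquisition from a camera, preprocessing, feature extraction, and a simple decision stage. The thresholds and the edge-count heuristic are illustrative only.

```cpp
// Minimal sketch of a vision pipeline, assuming OpenCV is available.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                            // image acquisition
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, edges;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);  // preprocessing
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);
        cv::Canny(gray, edges, 50, 150);                // feature extraction
        int activity = cv::countNonZero(edges);         // simple decision stage
        if (activity > 10000)
            std::cout << "edge-rich frame: " << activity << " pixels\n";
        cv::imshow("edges", edges);
        if (cv::waitKey(1) == 27) break;                // Esc exits
    }
    return 0;
}
```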

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.

General-purpose CPUs

While computer vision algorithms can run on most general-purpose CPUs, desktop processors often cannot meet the power, cost, and size constraints of embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control, and it may be paired with a vision-specialized device for better performance on pixel-level processing.
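A hedged sketch of that division of labor appears below (plain C++17, no vendor APIs; meanLuma is a hypothetical stand-in for a pixel-level stage that could later move to a specialized device). The pixel kernel runs asynchronously while the CPU thread stays free for control work, and the CPU then applies a simple heuristic to the result.

```cpp
#include <future>
#include <vector>
#include <cstdint>
#include <iostream>

// Pixel-level stage (hypothetical): compute the mean luma of a grayscale frame.
double meanLuma(const std::vector<uint8_t>& frame) {
    uint64_t sum = 0;
    for (uint8_t px : frame) sum += px;
    return frame.empty() ? 0.0 : double(sum) / frame.size();
}

int main() {
    std::vector<uint8_t> frame(640 * 480, 128);  // stand-in for a captured frame

    // Run the compute-intensive stage asynchronously; the CPU remains
    // available for control tasks (UI, network, storage) in the meantime.
    auto pending = std::async(std::launch::async, meanLuma, std::cref(frame));

    // ... control work would happen here ...

    // Heuristic decision on the result: a simple exposure check.
    double luma = pending.get();
    std::cout << (luma < 64 ? "scene too dark\n" : "exposure OK\n");
    return 0;
}
```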

Graphics Processing Units

High-performance GPUs deliver massive parallel computing capability, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPU (GPGPU) computing has primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, bringing this approach within the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
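One low-friction way to experiment with GPU offload is OpenCV's transparent API, sketched below; with cv::UMat, OpenCV dispatches supported operations to an OpenCL device when one is present and falls back to the CPU otherwise. This assumes an OpenCV build with OpenCL support enabled.

```cpp
// Hedged sketch: offloading pixel-parallel stages via OpenCV's transparent API.
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <iostream>

int main() {
    std::cout << "OpenCL available: "
              << (cv::ocl::haveOpenCL() ? "yes" : "no") << "\n";

    cv::UMat src(1080, 1920, CV_8UC3, cv::Scalar(32, 64, 96)), gray, blurred;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);       // runs on GPU if possible
    cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2.0);
    cv::Mat result = blurred.getMat(cv::ACCESS_READ);  // map back for CPU use
    std::cout << "mean luma: " << cv::mean(result)[0] << "\n";
    return 0;
}
```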

Digital Signal Processors

DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This makes DSPs an excellent choice for processing image pixel data as it streams from a sensor. Many DSPs for vision have been enhanced with coprocessors optimized for handling video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes them inefficient at general-purpose software workloads, however, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
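The access pattern that makes DSPs efficient (small rolling windows, multiply-accumulate inner loops, fixed-point arithmetic) can be sketched in portable C++, as below; a real DSP implementation would add vendor intrinsics and DMA-driven line buffering, which are omitted here.

```cpp
// Hedged illustration of DSP-style streaming: a 3-tap horizontal filter
// applied line by line as pixels arrive, with no whole-frame buffers.
#include <cstdint>
#include <vector>
#include <iostream>

void filterLine(const uint8_t* in, uint8_t* out, int width) {
    // 3-tap kernel [1 2 1]/4 in fixed point; edge pixels passed through.
    out[0] = in[0];
    for (int x = 1; x < width - 1; ++x) {
        uint16_t acc = in[x - 1] + 2 * in[x] + in[x + 1];  // MAC-style inner loop
        out[x] = uint8_t(acc >> 2);
    }
    out[width - 1] = in[width - 1];
}

int main() {
    const int width = 16;
    std::vector<uint8_t> line(width, 0), smoothed(width);
    line[8] = 200;  // an impulse to smooth
    filterLine(line.data(), smoothed.data(), width);
    for (uint8_t v : smoothed) std::cout << int(v) << ' ';
    std::cout << '\n';
    return 0;
}
```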

Field Programmable Gate Arrays (FPGAs)

Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which must time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs is so advantageous for accelerating computer vision, many common algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
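The sketch below shows the shape such an accelerator often takes in high-level synthesis (HLS) flows: standard C++ with tool-specific pipelining pragmas. The Vivado HLS-style pragma is an assumption about the toolchain; ordinary compilers ignore it, so the same code also builds and runs on a CPU for verification.

```cpp
// Hedged HLS-style sketch: a streaming threshold stage written as standard C++.
#include <cstdint>
#include <iostream>

void thresholdStream(const uint8_t* in, uint8_t* out, int n, uint8_t t) {
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1  // one pixel per clock once the pipeline fills
        out[i] = (in[i] > t) ? 255 : 0;
    }
}

int main() {
    uint8_t in[8] = {10, 200, 30, 180, 90, 250, 5, 128};
    uint8_t out[8];
    thresholdStream(in, out, 8, 100);
    for (uint8_t v : out) std::cout << int(v) << ' ';
    std::cout << '\n';
    return 0;
}
```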

Vision-Specific Processors and Cores

Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.

“PyTorch Deep Learning Framework: Status and Directions,” a Presentation from Facebook

Joseph Spisak, Product Manager at Facebook, delivers the presentation “PyTorch Deep Learning Framework: Status and Directions” at the Embedded Vision Alliance’s December 2019 Vision Industry and Technology Forum. Spisak gives an update on the PyTorch deep learning framework and where it’s heading.

“Current and Planned Standards for Computer Vision and Machine Learning,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, delivers the presentation “Current and Planned Standards for Computer Vision and Machine Learning” at the Embedded Vision Alliance’s December 2019 Vision Industry and Technology Forum. Trevett shares updates on recent, current and planned Khronos standardization…

“Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product,” a Presentation from Cocoon Health

Pavan Kumar, Co-founder and CTO of Cocoon Cam (formerly Cocoon Health), delivers the presentation “Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Kumar explains how his company is evolving its use of edge and cloud vision computing…

“Quantizing Deep Networks for Efficient Inference at the Edge,” a Presentation from Facebook

Raghuraman Krishnamoorthi, Software Engineer at Facebook, delivers the presentation “Quantizing Deep Networks for Efficient Inference at the Edge” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Krishnamoorthi gives an overview of practical deep neural network quantization techniques and tools.

“Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors,” a Presentation from IHS Markit

Tom Hackenberg, Principal Analyst at IHS Markit, presents the “Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors” tutorial at the May 2019 Embedded Vision Summit. Artificial intelligence is not a new concept. Machine learning has been used for decades in large server and…

“Five+ Techniques for Efficient Implementation of Neural Networks,” a Presentation from Synopsys

Bert Moons, Hardware Design Architect at Synopsys, presents the “Five+ Techniques for Efficient Implementation of Neural Networks” tutorial at the May 2019 Embedded Vision Summit. Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate…

“Building Complete Embedded Vision Systems on Linux — From Camera to Display,” a Presentation from Montgomery One

Clay D. Montgomery, Freelance Embedded Multimedia Developer at Montgomery One, presents the “Building Complete Embedded Vision Systems on Linux—From Camera to Display” tutorial at the May 2019 Embedded Vision Summit. There’s a huge wealth of open-source software components available today for embedding vision on the latest SoCs from suppliers such…

Rapid Prototyping on NVIDIA Jetson Platforms with MATLAB

This article was originally published at NVIDIA’s website; it is reprinted here with the permission of NVIDIA. It discusses how an application developer can prototype and deploy deep learning algorithms on hardware like the NVIDIA Jetson Nano Developer Kit with MATLAB. In previous posts, we explored how you can…

“Selecting the Right Imager for Your Embedded Vision Application,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “Selecting the Right Imager for Your Embedded Vision Application” tutorial at the May 2019 Embedded Vision Summit. The performance of your embedded vision product is inextricably linked to the imager and lens it uses. Selecting these critical components is…

“Game Changing Depth Sensing Technique Enables Simpler, More Flexible 3D Solutions,” a Presentation from Magik Eye

Takeo Miyazawa, Founder and CEO of Magik Eye, presents the “Game Changing Depth Sensing Technique Enables Simpler, More Flexible 3D Solutions” tutorial at the May 2019 Embedded Vision Summit. Magik Eye is a global team of computer vision veterans who have developed a new method to determine depth from light…

“Machine Learning at the Edge in Smart Factories Using TI Sitara Processors,” a Presentation from Texas Instruments

Manisha Agrawal, Software Applications Engineer at Texas Instruments, presents the “Machine Learning at the Edge in Smart Factories Using TI Sitara Processors” tutorial at the May 2019 Embedded Vision Summit. Whether it’s called “Industry 4.0,” the “industrial internet of things” (IIoT) or “smart factories,” a fundamental shift is underway in manufacturing…

“Using High-level Synthesis to Bridge the Gap Between Deep Learning Frameworks and Custom Hardware Accelerators,” a Presentation from Mentor

Michael Fingeroff, HLS Technologist at Mentor, presents the “Using High-level Synthesis to Bridge the Gap Between Deep Learning Frameworks and Custom Hardware Accelerators” tutorial at the May 2019 Embedded Vision Summit. Recent years have seen an explosion in machine learning/AI algorithms, with a corresponding need to use custom hardware for…

“Fundamental Security Challenges of Embedded Vision,” a Presentation from Synopsys

Mike Borza, Principal Security Technologist at Synopsys, presents the “Fundamental Security Challenges of Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. As facial recognition, surveillance and smart vehicles become an accepted part of our daily lives, product and chip designers are coming to grips with the business need…

“Introduction to Optics for Embedded Vision,” a Presentation from Jessica Gehlhar

Jessica Gehlhar, formerly an imaging engineer at Edmund Optics, presents the “Introduction to Optics for Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. This talk provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics…

“Practical Approaches to Training Data Strategy: Bias, Legal and Ethical Considerations,” a Presentation from Samasource

Audrey Jill Boguchwal, Senior Product Manager at Samasource, presents the “Practical Approaches to Training Data Strategy: Bias, Legal and Ethical Considerations” tutorial at the May 2019 Embedded Vision Summit. Recent McKinsey research cites the top five limitations that prevent companies from adopting AI technology. Training data strategy is a common…

“OpenCV: Current Status and Future Plans,” a Presentation from OpenCV.org

Satya Mallick, Interim CEO of OpenCV.org, presents the “OpenCV: Current Status and Future Plans” tutorial at the May 2019 Embedded Vision Summit. With over two million downloads per week, OpenCV is the most popular open source computer vision library in the world. It implements over 2500 optimized algorithms, works…

“Improving the Safety and Performance of Automated Vehicles Through Precision Localization,” a Presentation from VSI Labs

Phil Magney, Founder of VSI Labs, presents the “Improving the Safety and Performance of Automated Vehicles Through Precision Localization” tutorial at the May 2019 Embedded Vision Summit. How does a self-driving car know where it is? Magney explains how autonomous vehicles localize themselves against their surroundings through the use of…

“AI Reliability Against Adversarial Inputs,” a Presentation from Intel

Gokcen Cilingir, AI Software Architect, and Li Chen, Data Scientist and Research Scientist, both at Intel, present the “AI Reliability Against Adversarial Inputs” tutorial at the May 2019 Embedded Vision Summit. As artificial intelligence solutions become ubiquitous, the security and reliability of AI algorithms are becoming an important consideration…

“Distance Estimation Solutions for ADAS and Automated Driving,” a Presentation from AImotive

Gergely Debreczeni, Chief Scientist at AImotive, presents the “Distance Estimation Solutions for ADAS and Automated Driving” tutorial at the May 2019 Embedded Vision Summit. Distance estimation is at the heart of advanced driver assistance systems (ADAS) and automated driving (AD). Simply stated, safe operation of vehicles requires robust distance estimation…