Processors for Embedded Vision
This technology category includes any device that executes vision algorithms or vision system control software. In a typical computer vision pipeline, processors are often optimized for the compute-intensive portions of the software workload.

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
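To make this division of labor concrete, the sketch below tags each stage of a simple vision pipeline with the processor type that commonly runs it. The mapping is illustrative only; the actual assignment varies by workload and device.

    #include <cstdio>

    // Illustrative stage-to-processor mapping in a heterogeneous vision SoC.
    enum class Unit { SensorDSP, GpuOrNpu, Cpu };

    struct Stage { const char* name; Unit unit; };

    int main() {
        const Stage pipeline[] = {
            {"capture, demosaic, denoise", Unit::SensorDSP},  // streaming pixel work
            {"resize, color convert",      Unit::SensorDSP},
            {"CNN inference",              Unit::GpuOrNpu},   // massively parallel
            {"tracking and heuristics",    Unit::Cpu},        // branching logic
            {"UI, network, storage",       Unit::Cpu},
        };
        for (const Stage& s : pipeline)
            std::printf("%-28s -> unit %d\n", s.name, static_cast<int>(s.unit));
        return 0;
    }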
General-purpose CPUs
While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the size, power, and cost constraints of some embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interface, storage management, and overall control. A general-purpose CPU may be paired with a vision-specialized device for better performance on pixel-level processing.
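As a minimal illustration of this division of labor, the C++ sketch below shows the kind of branching decision logic a general-purpose CPU handles well. The Detection structure and class IDs are hypothetical stand-ins for results produced by a vision-specialized coprocessor.

    #include <iostream>
    #include <vector>

    // Hypothetical per-frame output from a vision coprocessor or accelerator.
    struct Detection { int classId; float confidence; };

    // Heuristic decision logic: the CPU's typical role in a vision system.
    bool shouldRaiseAlert(const std::vector<Detection>& detections) {
        for (const Detection& d : detections)
            if (d.classId == 0 && d.confidence > 0.8f)  // class 0 = "person" (illustrative)
                return true;
        return false;
    }

    int main() {
        std::vector<Detection> fromCoprocessor = {{0, 0.92f}, {3, 0.40f}};
        std::cout << (shouldRaiseAlert(fromCoprocessor) ? "alert" : "ok") << "\n";
        return 0;
    }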
Graphics Processing Units
High-performance GPUs deliver massive parallel computing potential, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPU (GPGPU) computing has primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
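As one example of this offload model, the C++ sketch below uses OpenCV's transparent API (cv::UMat), which dispatches filtering and edge detection to an OpenCL-capable GPU when OpenCV is built with OpenCL support, and otherwise falls back to the CPU. The file names are placeholders.

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat frame = cv::imread("frame.png");   // hypothetical input image
        if (frame.empty()) return 1;

        cv::UMat gpuFrame, blurred, edges;
        frame.copyTo(gpuFrame);                    // upload to device memory if available

        // Pixel-parallel stages run on the GPU via OpenCL when present.
        cv::GaussianBlur(gpuFrame, blurred, cv::Size(5, 5), 1.5);
        cv::Canny(blurred, edges, 50.0, 150.0);

        cv::Mat result;
        edges.copyTo(result);                      // download result back to the CPU
        cv::imwrite("edges.png", result);
        return 0;
    }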
Digital Signal Processors
DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient at general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
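The C++ sketch below illustrates the multiply-accumulate (MAC) pattern that DSP datapaths are built to execute efficiently, applying a 5-tap smoothing filter to a scanline of pixels as it streams through. The kernel taps and sizes are illustrative.

    #include <array>
    #include <cstdint>
    #include <vector>

    // 5-tap horizontal smoothing filter: one multiply-accumulate per tap,
    // the inner-loop operation that DSP hardware is optimized to execute.
    std::vector<uint8_t> filterScanline(const std::vector<uint8_t>& in) {
        static const std::array<int, 5> taps = {1, 4, 6, 4, 1};  // binomial kernel, sums to 16
        std::vector<uint8_t> out(in.size(), 0);
        for (size_t x = 2; x + 2 < in.size(); ++x) {
            int acc = 0;
            for (size_t k = 0; k < taps.size(); ++k)
                acc += taps[k] * in[x + k - 2];                  // one MAC per tap
            out[x] = static_cast<uint8_t>(acc >> 4);             // normalize by 16
        }
        return out;
    }

    int main() {
        std::vector<uint8_t> line(16);
        for (size_t i = 0; i < line.size(); ++i)
            line[i] = static_cast<uint8_t>(i * 16);              // synthetic ramp input
        std::vector<uint8_t> smoothed = filterScanline(line);
        return smoothed.empty() ? 1 : 0;
    }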
Field Programmable Gate Arrays (FPGAs)
Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs is such an advantage for accelerating computer vision, many common vision algorithms are available as optimized libraries from semiconductor vendors. These libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
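As a rough illustration of this spatial parallelism, the C++ sketch below is written in the style of high-level synthesis (HLS) code, with vendor pragmas shown as comments. In actual HLS code, the two stages would be synthesized into separate hardware blocks that process different pixels of the stream concurrently, and hardware stream types would replace std::vector; this version is only a software model.

    #include <cstdint>
    #include <vector>

    void threshold_stage(const std::vector<uint8_t>& in, std::vector<uint8_t>& mid) {
        // #pragma HLS PIPELINE II=1  (accept one pixel per clock)
        for (size_t i = 0; i < in.size(); ++i)
            mid[i] = in[i] > 128 ? 255 : 0;
    }

    void invert_stage(const std::vector<uint8_t>& mid, std::vector<uint8_t>& out) {
        // #pragma HLS PIPELINE II=1
        for (size_t i = 0; i < mid.size(); ++i)
            out[i] = static_cast<uint8_t>(255 - mid[i]);
    }

    void vision_pipeline(const std::vector<uint8_t>& in, std::vector<uint8_t>& out) {
        // #pragma HLS DATAFLOW  (both stages become concurrent hardware blocks)
        std::vector<uint8_t> mid(in.size());
        threshold_stage(in, mid);
        invert_stage(mid, out);
    }

    int main() {
        std::vector<uint8_t> in(64, 200), out(64, 0);
        vision_pipeline(in, out);
        return out[0] == 0 ? 0 : 1;
    }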
Vision-Specific Processors and Cores
Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.

Accelerating Innovation in Low Power AI Applications with Lattice FPGAs
This blog post was originally published at Lattice Semiconductor’s website. It is reprinted here with the permission of Lattice Semiconductor. On-device AI inference capability is expected to reach 60% of all devices by 2024, according to ABI Research. This underscores the rapid pace of AI innovation over the last few years…

Measuring NPU Performance
This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. There is a lot more to understanding the capabilities of an AI engine than TOPS per watt. A rather arbitrary measure of the number of operations of an engine per unit of power, this metric completely misses…

“How Transformers are Changing the Direction of Deep Learning Architectures,” a Presentation from Synopsys
Tom Michiels, System Architect for DesignWare ARC Processors at Synopsys, presents the “How Transformers are Changing the Direction of Deep Learning Architectures” tutorial at the May 2022 Embedded Vision Summit. The neural network architectures used in embedded real-time applications are evolving quickly. Transformers are a leading deep learning approach for…

“Introduction to Computer Vision with Convolutional Neural Networks,” a Presentation from Intel
Mohammad Haghighat, Senior AI Software Product Manager at Intel, presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2022 Embedded Vision Summit. This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques and…

How to Build a Custom Embedded Stereo System for Depth Perception
This article was originally published at Teledyne FLIR’s website. It is reprinted here with the permission of Teledyne FLIR. There are various 3D sensor options for developing depth perception systems, including stereo vision with cameras, lidar, and time-of-flight sensors. Each option has its strengths and weaknesses. A stereo system is typically low cost, rugged enough…

NVIDIA Jetson AGX Orin 32GB Production Modules Now Available; Partner Ecosystem Appliances and Servers Arrive
Nearly three dozen partners are offering feature-packed systems based on the new Jetson Orin module to help customers accelerate AI and robotics deployments. Bringing new AI and robotics applications and products to market, or supporting existing ones, can be challenging for developers and enterprises. The NVIDIA Jetson AGX Orin 32GB production module — available now…

“Are Neuromorphic Vision Technologies Ready for Commercial Use?,” An Embedded Vision Summit Expert Panel Discussion
Sally Ward-Foxton, European Correspondent for EE Times, moderates the “Are Neuromorphic Vision Technologies Ready for Commercial Use?” Expert Panel at the May 2022 Embedded Vision Summit. Other panelists include Garrick Orchard, Research Scientist at Intel Labs, James Marshall, Chief Scientific Officer at Opteran, Ryad Benosman, Professor at the University of…

CEVA Celebrates 15 Billionth CEVA-powered Chip Shipped
Milestone underscores CEVA’s central role in the IoT era, enabling wireless connectivity and intelligence in billions of smartphones, consumer electronics, wearables, IoT endpoints and edge AI devices ROCKVILLE, MD, August 9, 2022 – CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies and co-creation solutions, announced today that cumulative royalty-bearing…

“Embedded Vision in Robotics, Biotech and Education,” a Conversation with Dean Kamen
Dean Kamen, Founder of DEKA Research and Development, talks with Jeff Bier, Founder of the Edge AI and Vision Alliance, for the “Embedded Vision in Robotics, Biotech and Education” fireside chat at the May 2022 Embedded Vision Summit. In his 2018 keynote presentation at the Embedded Vision Summit, legendary inventor…

Arm Achieves Record Revenue and Shipments in Q1 FY 2022
August 8, 2022 – In Q1 FY 2022 Arm reported: A record Q1 total revenue of $719 million, up 6% year-over-year. A record quarterly royalty revenue of $453 million, up 22% year-over-year. This is the first time the quarterly royalty revenue has been higher than $400 million. Arm’s strategy of diversifying into markets beyond mobile…

“Building Embedded Vision Products: Management Lessons From The School of Hard Knocks,” a Presentation from the Edge AI and Vision Alliance
Phil Lapsley, Vice President of the Edge AI and Vision Alliance, presents the “Building Embedded Vision Products: Management Lessons From The School of Hard Knocks” tutorial at the May 2022 Embedded Vision Summit. It’s hard to build embedded AI and vision products, and the challenges aren’t just technical. In this…

May 2022 Embedded Vision Summit Opening Remarks (May 18)
Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2022 Embedded Vision Summit on May 18, 2022. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the…

May 2022 Embedded Vision Summit Opening Remarks (May 17)
Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2022 Embedded Vision Summit on May 17, 2022. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the…

Sequitur Labs First to Provide Chip-to-Cloud Embedded Security in Support of New NVIDIA Jetson Orin Platform
EmSPARK Security Suite for Jetson AGX Orin helps protect autonomous machines at the edge SEATTLE, August 3, 2022 /Business Wire/ – Sequitur Labs today announced that its EmSPARK™ Security Suite for the NVIDIA Jetson™ edge AI platform has been qualified with the new Jetson AGX Orin™ 32 GB module to support trial deployments of the…

Edge Impulse Releases Deployment Support for BrainChip Akida Neuromorphic IP
The tech firms’ collaboration augments brain-mimicking Spiking Neural Networks. Edge Impulse, the leading platform for enabling ML at the edge, and BrainChip, the leading provider of neuromorphic AI IP technology, announced support for deploying Edge Impulse projects on the BrainChip MetaTF platform. Edge Impulse enables developers to rapidly build enterprise-grade ML algorithms, trained on real…

“Event-Based Neuromorphic Perception and Computation: The Future of Sensing and AI,” a Keynote Presentation from Ryad Benosman
Ryad Benosman, Professor at the University of Pittsburgh and Adjunct Professor at the CMU Robotics Institute, presents the “Event-Based Neuromorphic Perception and Computation: The Future of Sensing and AI” tutorial at the May 2022 Embedded Vision Summit. We say that today’s mainstream computer vision technologies enable machines to “see,” much…

“A New AI Platform Architecture for the Smart Toys of the Future,” a Presentation from Xperi
Gabriel Costache, Senior R&D Director at Xperi, presents the “New AI Platform Architecture for the Smart Toys of the Future” tutorial at the May 2022 Embedded Vision Summit. From a parent’s perspective, toys should be safe, private, entertaining and educational, with the ability to adapt and grow with the child…

The AI Semiconductor Market 1st Half 2022
This market research report was originally published at Woodside Capital Partners’ website. It is reprinted here with the permission of Woodside Capital Partners. Palo Alto – July 29, 2022 – Woodside Capital Partners (WCP) is pleased to share our Industry Report on the AI Semiconductor Market 1st Half 2022, authored by Managing Director Shusaku Sumida.

“Build Smarter, Safer and Efficient Autonomous Robots and Mobile Machines,” a Presentation from Texas Instruments
Manisha Agrawal, Product Marketing Manager at Texas Instruments, presents the “Build Smarter, Safer and Efficient Autonomous Robots and Mobile Machines” tutorial at the May 2022 Embedded Vision Summit. Automation is expanding rapidly from the factory floor to the consumer’s front door. Examples include autonomous mobile robots used in warehouses and…

“Intelligent Vision for the Industrial, Automotive and IoT Edge with the i.MX 8M Plus Applications Processor,” a Presentation from NXP Semiconductors
Ali Osman Örs, Director of AI ML Strategy and Technologies for Edge Processing at NXP Semiconductors, presents the “Intelligent Vision for the Industrial, Automotive and IoT Edge with the i.MX 8M Plus Applications Processor” tutorial at the May 2022 Embedded Vision Summit. Today’s edge-based ML solutions need a powerful multicore…