Processors

Processors for Embedded Vision

This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.

[Figure: A typical embedded vision processing pipeline]
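
To make the pipeline stages concrete, the following minimal sketch runs a capture, pre-processing, and feature-extraction loop of the kind shown in the diagram. It assumes OpenCV 4.x and a camera at index 0; the filter size and Canny thresholds are illustrative placeholders, and the later stages (detection, tracking, decision-making) are only indicated as comments.

    // Minimal sketch of the pipeline stages above, using OpenCV 4.x (assumed).
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cam(0);                     // image acquisition from camera 0
        if (!cam.isOpened()) return 1;

        cv::Mat frame, gray, edges;
        while (cam.read(frame)) {                    // frames stream in from the sensor
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);      // pre-processing
            cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);  // noise reduction
            cv::Canny(gray, edges, 50, 150);         // feature extraction (compute-intensive)
            // ...object detection, tracking and decision-making would follow here
            cv::imshow("edges", edges);
            if (cv::waitKey(1) == 27) break;         // ESC to exit
        }
        return 0;
    }

The compute-intensive middle stages of this loop are exactly the portions that the processor types described below are designed to accelerate.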

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
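
One way to see this heterogeneity in practice is to enumerate the compute devices a system exposes through OpenCL; on many vision SoCs the CPU, GPU, and DSP/accelerator blocks appear as separate devices under a single platform. The sketch below assumes an OpenCL 1.2+ SDK and ICD are installed; not every SoC exposes its accelerators this way.

    // Minimal sketch: list the compute devices (CPU, GPU, accelerators) visible
    // through OpenCL. Assumes an OpenCL 1.2+ SDK/ICD is installed.
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);
        if (num_platforms > 8) num_platforms = 8;

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id devices[16];
            cl_uint num_devices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);
            if (num_devices > 16) num_devices = 16;

            for (cl_uint d = 0; d < num_devices; ++d) {
                char name[256] = {0};
                cl_device_type type = 0;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
                clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, nullptr);
                const char* kind = (type & CL_DEVICE_TYPE_GPU)         ? "GPU"
                                 : (type & CL_DEVICE_TYPE_ACCELERATOR) ? "accelerator"
                                 : (type & CL_DEVICE_TYPE_CPU)         ? "CPU"
                                                                       : "other";
                std::printf("%-12s %s\n", kind, name);   // one line per compute device
            }
        }
        return 0;
    }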

General-purpose CPUs

While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interface, storage management, and overall control, and it is often paired with a vision-specialized device for better performance on pixel-level processing.
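
The conceptual sketch below (not any particular vendor's API) shows this division of labor: the CPU thread keeps control, user interface, and decision-making, while the pixel-level stage is dispatched asynchronously. run_on_vision_accelerator() is a hypothetical stand-in for work offloaded to a GPU, DSP, or other vision-specialized device; here it simply runs on another CPU thread so the example stays self-contained.

    // Conceptual CPU/accelerator partitioning. run_on_vision_accelerator() is a
    // hypothetical placeholder for pixel-level work offloaded to a specialized
    // device; in this sketch it just runs on another CPU thread.
    #include <future>
    #include <vector>
    #include <opencv2/opencv.hpp>

    struct Detection { cv::Rect box; float score; };

    std::vector<Detection> run_on_vision_accelerator(cv::Mat frame) {
        std::vector<Detection> results;
        // ...pixel-level processing (filtering, detection) would execute on the
        // accelerator; results return as compact metadata, not pixels.
        return results;
    }

    int main() {
        cv::VideoCapture cam(0);
        cv::Mat frame;
        while (cam.read(frame)) {
            // Dispatch the compute-intensive stage.
            auto pending = std::async(std::launch::async,
                                      run_on_vision_accelerator, frame.clone());

            // Meanwhile the CPU handles heuristics, UI, networking, storage, control...

            // Consume the results with general-purpose decision-making logic.
            for (const Detection& d : pending.get())
                if (d.score > 0.8f)
                    cv::rectangle(frame, d.box, cv::Scalar(0, 255, 0), 2);

            cv::imshow("detections", frame);
            if (cv::waitKey(1) == 27) break;
        }
        return 0;
    }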

Graphics Processing Units

High-performance GPUs offer massive parallel computing capacity, and graphics processors can accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPU (GPGPU) computing has primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D graphics in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
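
The sketch below shows one low-friction way to exercise a GPU for this kind of data-parallel pixel work: OpenCV's transparent API dispatches operations on cv::UMat to an OpenCL device (such as an integrated or mobile graphics core) when one is present, and falls back to the CPU otherwise. It assumes an OpenCV 4.x build with OpenCL support; dedicated frameworks such as CUDA or Vulkan compute are alternatives.

    // GPU offload via OpenCV's transparent API (T-API): operations on cv::UMat
    // run on an OpenCL device when available. Assumes OpenCV 4.x with OpenCL.
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        std::cout << "OpenCL device available: "
                  << (cv::ocl::haveOpenCL() ? "yes" : "no") << "\n";

        cv::VideoCapture cam(0);
        cv::UMat frame, gray, edges;                 // UMat keeps data on the device
        while (cam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);      // runs on the GPU if possible
            cv::GaussianBlur(gray, gray, cv::Size(7, 7), 2.0);  // data-parallel pixel work
            cv::Canny(gray, edges, 50, 150);
            cv::imshow("edges", edges);              // copies back to the host for display
            if (cv::waitKey(1) == 27) break;
        }
        return 0;
    }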

Digital Signal Processors

DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient at general-purpose software workloads, however, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
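
The fragment below illustrates the kind of inner loop a vision DSP is built for: a fixed-point multiply-accumulate filter applied to each scanline as it streams in. It is written as portable C++ for clarity; on an actual DSP the same loop would typically be expressed with the vendor's SIMD intrinsics and fed by DMA from the sensor interface rather than from a plain array.

    // Illustrative streaming kernel: a 3-tap low-pass filter over one scanline,
    // the multiply-accumulate pattern DSPs execute very efficiently. Portable C++
    // stand-in; a real DSP implementation would use vendor intrinsics and DMA.
    #include <cstdint>
    #include <cstddef>

    void filter_scanline(const uint8_t* in, uint8_t* out, std::size_t width) {
        if (width < 2) return;
        static const int taps[3] = {1, 2, 1};        // coefficients sum to 4
        out[0] = in[0];                              // borders passed through
        for (std::size_t x = 1; x + 1 < width; ++x) {
            int acc = taps[0] * in[x - 1]            // multiply-accumulate core
                    + taps[1] * in[x]
                    + taps[2] * in[x + 1];
            out[x] = static_cast<uint8_t>(acc >> 2); // divide by 4 with a shift
        }
        out[width - 1] = in[width - 1];
    }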

Field Programmable Gate Arrays (FPGAs)

Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs is so well suited to accelerating computer vision, many common algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
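
As a flavor of how such acceleration is expressed, the sketch below is a high-level-synthesis (HLS) style kernel; the #pragma HLS directive follows AMD/Xilinx Vitis HLS syntax (an assumption here; other vendors use equivalent attributes), and as plain C++ the pragma is simply ignored. Synthesized to hardware, the loop becomes a pipeline that accepts one pixel per clock, and several such kernels can run concurrently as separate stages of the vision pipeline.

    // HLS-style sketch of a streaming threshold stage for an FPGA. The pragma is
    // Vitis HLS syntax (assumed toolchain); standard compilers ignore it.
    #include <cstdint>

    constexpr int WIDTH  = 640;
    constexpr int HEIGHT = 480;

    void threshold_stream(const uint8_t in[HEIGHT * WIDTH],
                          uint8_t out[HEIGHT * WIDTH],
                          uint8_t level) {
        for (int i = 0; i < HEIGHT * WIDTH; ++i) {
    #pragma HLS PIPELINE II=1                        // accept one pixel per clock cycle
            out[i] = (in[i] > level) ? 255 : 0;      // becomes dedicated hardware
        }
    }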

Vision-Specific Processors and Cores

Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.
