Processors for Embedded Vision
This technology category includes any device that executes vision algorithms or vision system control software. In a typical computer vision pipeline, processors are often optimized for the compute-intensive portions of the software workload.

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
General-purpose CPUs
While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control. A general-purpose CPU may be paired with a vision-specialized device for better performance on pixel-level processing.
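The division of labor described above can be sketched in a few lines. This is an illustrative partitioning only, not a real driver API: the "accelerator" function stands in for pixel-level work that would be offloaded to a vision-specialized device, while decision logic stays on the general-purpose CPU.

```python
# Hypothetical sketch: partitioning a vision workload between a
# general-purpose CPU and a vision-specialized coprocessor.

def accelerator_pixel_stage(frame):
    """Stand-in for offloaded pixel-level work (filtering, thresholding)."""
    # Trivial binary threshold; on real hardware this runs on the coprocessor.
    return [[1 if px > 128 else 0 for px in row] for row in frame]

def cpu_decision_stage(mask):
    """Heuristics and overall control remain on the general-purpose CPU."""
    coverage = sum(map(sum, mask)) / (len(mask) * len(mask[0]))
    return "object_present" if coverage > 0.25 else "empty"

frame = [[200, 90], [210, 220]]   # tiny 2x2 stand-in for an image
decision = cpu_decision_stage(accelerator_pixel_stage(frame))
print(decision)                   # prints "object_present"
```

In a real system the boundary between the two stages would be a DMA transfer or a vendor runtime call, but the control/compute split is the same.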
Graphics Processing Units
High-performance GPUs deliver massive parallel computing potential, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPUs (GPGPUs) have primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
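The pattern GPUs exploit is applying the same small operation independently to every pixel. The sketch below shows that per-pixel pattern in plain Python; on a GPU, the per-pixel function would become a kernel launched across the whole image. The luma weights are the standard BT.601 coefficients.

```python
# Data-parallel per-pixel pattern: on a GPU, to_gray() would be a kernel
# launched once per pixel; here we simply map it over the image.

def to_gray(rgb):
    r, g, b = rgb
    return int(0.299 * r + 0.587 * g + 0.114 * b)   # BT.601 luma weights

image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]

# Each pixel is independent of every other: embarrassingly parallel.
gray = [[to_gray(px) for px in row] for row in image]
print(gray)
```

Because no pixel depends on any other, the same code maps directly onto thousands of GPU threads with no synchronization.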
Digital Signal Processors
DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
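The core operation a DSP accelerates is the streaming multiply-accumulate (MAC). As an illustrative sketch only, here is a 3-tap FIR filter consuming samples one at a time, the way pixel data arrives from a sensor; a DSP would issue each tap's MAC in dedicated hardware rather than in a Python loop.

```python
from collections import deque

# Streaming MAC pattern: a 3-tap FIR filter applied to samples as they
# arrive, modeling pixel data streaming in from a sensor.

def fir_stream(samples, taps=(0.25, 0.5, 0.25)):
    window = deque([0.0] * len(taps), maxlen=len(taps))  # delay line
    for s in samples:
        window.appendleft(s)
        # One multiply-accumulate per tap; DSPs execute these in
        # dedicated MAC units, often several per cycle.
        yield sum(c * x for c, x in zip(taps, window))

print(list(fir_stream([4, 8, 4])))   # prints [1.0, 4.0, 6.0]
```

The fixed-length delay line and the tight MAC loop are exactly the structures a DSP's address generators and MAC units are built around.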
Field Programmable Gate Arrays (FPGAs)
Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which must time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs offers such an advantage for accelerating computer vision, many algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
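The "multiple pipeline stages at once" advantage can be sketched with chained generators. This is a software model only, with made-up stage bodies: on an FPGA, all three stages are separate hardware blocks each processing a different frame in the same clock cycle, rather than interleaved as Python generators are.

```python
# Software model of FPGA-style pipeline parallelism: three stages chained
# so that, in hardware, each stage would work on a different frame
# simultaneously. The stage bodies are placeholders, not real vision code.

def debayer(frames):
    for f in frames:
        yield f * 3          # stand-in for raw-to-RGB conversion

def edge_detect(frames):
    for f in frames:
        yield f + 1          # stand-in for a convolution stage

def classify(frames):
    for f in frames:
        yield "edge" if f % 2 else "flat"

results = list(classify(edge_detect(debayer(range(3)))))
print(results)               # prints ['edge', 'flat', 'edge']
```

A CPU would run these stages by time-slicing; the FPGA lays them out side by side, so pipeline throughput is set by the slowest stage, not the sum of all stages.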
Vision-Specific Processors and Cores
Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make them more difficult to program than other kinds of processors; some ASSPs are not user-programmable at all.

LM Studio Accelerates LLM Performance With NVIDIA GeForce RTX GPUs and CUDA 12.8
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Latest release of the desktop application brings enhanced dev tools and model controls, as well as better performance for RTX GPUs. As AI use cases continue to expand — from document summarization to custom software agents —

See the 2025 Best Edge AI Processor IP at the Embedded Vision Summit
Burlingame, CA – May 12, 2025 – Quadric® today announced that it will showcase its Chimera™ general-purpose neural processing unit (GPNPU) at the Embedded Vision Summit, May 20-22, 2025, at the Santa Clara Convention Center. The Chimera GPNPU was recently named the 2025 Best Edge AI Processor IP by the Edge AI and Vision Alliance. Quadric

Lattice to Showcase Innovative Edge AI and Vision Solutions at Embedded Vision Summit 2025
HILLSBORO, Ore. – May 8, 2025 – Lattice Semiconductor (NASDAQ: LSCC), the low power programmable leader, today announced its exhibition plan for Embedded Vision Summit 2025. Lattice will discuss its latest FPGA technology during an expert-led track session, in addition to a demo-filled booth display focused on Edge AI, embedded vision, sensor fusion, and robotics.

Advancing Generative AI at the Edge During CES 2025
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. For this year’s CES, our theme was Your GenAI Edge—highlighting how Ambarella’s AI SoCs continue to redefine what’s possible with generative AI at the edge. Building on last year’s edge GenAI demos, we debuted a new 25-stream,

AI Chips for Data Center and Cloud to Exceed $400 Billion by 2030
AI Chips Market by AI Chip Type. For full data, refer to “AI Chips for Data Centers and Cloud 2025-2035: Technologies, Market, Forecasts”. By 2030, IDTechEx forecasts that the deployment of AI data centers, commercialization of AI, and the increasing performance requirements from large AI models will perpetuate the already soaring market size of AI

Image Sensor Selection: Five Tradeoffs Every Vision Engineer Should Nail Before Tapeout
This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica. Choosing an image sensor isn’t just a line item on the BOM – it defines how well your camera, robot or inspection system will perform for the next decade. Our new whitepaper, “Image Sensor Selection: Key Factors to

Imagination Announces E-Series: A New Era of On-Device AI and Graphics
Massive gains in Imagination E-Series establish the GPU as the principal accelerator for both graphics and AI at the edge London, UK – 8 May 2025 – Imagination Technologies redefines edge AI and graphics system design with the launch of Imagination E-Series GPU IP. E-Series leverages its highly efficient parallel processing architecture to provide exceptional graphics

Cadence Accelerates Physical AI Applications with Tensilica NeuroEdge 130 AI Co-processor
New class of processor raises the bar for performance efficiency, delivering 30% area savings and 20% lower power 08 May 2025 — SAN JOSE, Calif.— Cadence (Nasdaq: CDNS) today announced the Cadence® Tensilica® NeuroEdge 130 AI Co-Processor (AICP), a new class of processor designed to complement any neural processing unit (NPU) and enable end-to-end execution

Optimizing Transformer-based Diffusion Models for Video Generation with NVIDIA TensorRT
This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. State-of-the-art image diffusion models take tens of seconds to process a single image. This makes video diffusion even more challenging, requiring significant computational resources and high costs. By leveraging the latest FP8 quantization features on NVIDIA Hopper GPUs

STMicroelectronics Smart Vision Solutions at the 2025 Embedded Vision Summit
STMicroelectronics continues to revolutionize the world of imaging and edge-AI technologies with its innovative ST BrightSense Imaging solutions, ST Flightsense Time-of-Flight technologies and its new Arm® Cortex®-M55-based MCU. Leveraging cutting-edge advancements in CMOS image sensors, in mini-LiDAR with flood illumination and in the ST Neural-ART Accelerator, STMicroelectronics offers demos that highlight the capabilities of their

Enable Pose Detection on Snapdragon X Elite: Step-by-step Tutorial
This article was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. I know why you’re here; you’ve decided to buy your first device with Snapdragon X Elite processor, awesome choice! You now ventured over to Qualcomm AI Hub, grabbed a model and excitedly watched as it downloaded. “Hmmm okay…

R²D²: Adapting Dexterous Robots with NVIDIA Research Workflows and Models
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Robotic arms are used today for assembly, packaging, inspection, and many more applications. However, they are still preprogrammed to perform specific and often repetitive tasks. To meet the increasing need for adaptability in most environments, perceptive arms

Microchip Expands Connectivity, Storage and Compute Portfolios to Meet the Growing Demands of AI Data Center Applications
Company delivers an innovative, secure and scalable ecosystem to support modern servers CHANDLER, Ariz., April 28, 2025 — The rapid growth of artificial intelligence (AI) is transforming data centers, creating an unprecedented demand for high-performance, secure, reliable and innovative solutions. Microchip Technology (Nasdaq: MCHP) is addressing these evolving market needs by developing advanced technologies for data center

Closing the Gap: How Autofocus Empowers Mixed Reality to Rival the Human Eye
This blog post was originally published at Inuitive’s website. It is reprinted here with the permission of Inuitive. The human eye is an engineering marvel, seamlessly adjusting focus between near and far objects with speed and precision. For mixed reality (MR) to achieve true immersion, it must replicate this natural capability. Autofocus (AF) isn’t just

Andes Technology, Baya Systems and Imagination Technologies to Present on Heterogeneous Compute Architectures at Andes RISC-V CON Silicon Valley
Technical session to explore memory hierarchy, CPU-GPU interaction and real-world integration strategies for accelerating AI and edge workloads SANTA CLARA, CA – April 24, 2025 – Baya Systems, a leader in high-performance system architecture and design tools, today announced it will participate in Andes RISC-V CON Silicon Valley. In a joint developer track session with

AI Chips for Data Centers and Cloud 2025-2035: Technologies, Market, Forecasts
For more information, visit https://www.idtechex.com/en/research-report/ai-chips-for-data-centers-and-cloud-2025-2035-technologies-market-forecasts/1095. AI chips market to reach US$453 billion by 2030 at a CAGR of 14% from 2025 Frontier AI attracts hundreds of billions in global investment, with governments and hyperscalers racing to lead in domains like drug discovery and autonomous infrastructure. Graphics processing units (GPUs) and other AI chips have been

Intelligence Everywhere: How Particle Created the Equivalent of a Raspberry Pi Powered by Qualcomm Dragonwing
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Particle is at the forefront of helping companies bring smarts and connectivity to their products — it’s going a step further with Tachyon Key Takeaways: Particle is breaking into a new industry with Tachyon, a 5G and

BrainChip Extends RISC-V Reach with Andes Technology Integration
April 23, 2025–LAGUNA HILLS, Calif.–(BUSINESS WIRE)–BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, brain-inspired AI, today announced the integration of its NPUs with RISC-V cores from Andes Technology, the industry leading provider of RISC-V embedded cores. The companies will demonstrate BrainChip’s Akida™ AKD1500

Andes Technology and Imagination Technologies Showcase Android 15 on High-Performance RISC-V Based Platform
San Jose, CA – April 23, 2025 – Andes Technology (TWSE: 6533; SIN: US03420C2089; ISIN: US03420C1099), the leading supplier of high-efficiency, low-power 32/64-bit RISC-V processor cores and Founding Premier member of RISC-V International, in collaboration with Imagination Technologies, today announces the successful demonstration of Android 15 (Vanilla Ice Cream) running on a high-performance RISC-V-based hardware system. This

Quadric’s Chimera GPNPU Named Best Edge AI Processor IP by Edge AI and Vision Alliance
Burlingame, CA – April 21, 2025 – Quadric® today announced that its Chimera™ QC general-purpose neural processing unit (GPNPU) was named the 2025 best edge AI processor IP by the Edge AI and Vision Alliance™. The annual Edge AI and Vision Product of the Year Awards celebrate the top building-block components that enable edge AI