Processors for Embedded Vision
This technology category includes any device that executes vision algorithms or vision system control software. A typical computer vision pipeline runs from image capture through preprocessing and pixel-level feature extraction to higher-level analysis and decision-making; processors are often optimized for the compute-intensive portions of this workload.
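To make the stages concrete, here is a minimal sketch of such a pipeline using OpenCV's C++ API. The capture, filtering, and edge-detection calls are standard OpenCV functions; the camera index and parameter values are arbitrary placeholders, not a recommendation.

```cpp
// Minimal computer vision pipeline sketch: capture -> preprocess -> extract.
// Requires OpenCV; camera index and thresholds are illustrative placeholders.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cam(0);              // image capture stage
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray, blurred, edges;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);    // preprocessing
        cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);
        cv::Canny(blurred, edges, 50, 150);               // pixel-level feature extraction
        // Higher-level analysis and decision-making would consume `edges` here.
        cv::imshow("edges", edges);
        if (cv::waitKey(1) == 27) break;                  // Esc exits
    }
    return 0;
}
```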

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
General-purpose CPUs
While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control, and it may be paired with a vision-specialized device for better performance on pixel-level processing.
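As a hedged illustration of this division of labor, the sketch below has the CPU run the decision-making loop while pixel-level work is delegated to an accelerator behind a made-up interface. `VisionAccelerator` and its `detect_objects` method are hypothetical stand-ins for a vendor driver API, not a real library.

```cpp
// Hypothetical pairing: general-purpose CPU for control and decisions,
// a vision-specialized device for pixel-level processing.
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative stand-in for a vendor-specific accelerator driver API.
struct VisionAccelerator {
    // Pretend this offloads object detection to dedicated hardware.
    std::vector<int> detect_objects(const std::vector<uint8_t>& frame) {
        return {};  // placeholder result
    }
};

int main() {
    VisionAccelerator accel;
    std::vector<uint8_t> frame(640 * 480);  // stand-in for a captured frame

    // The CPU's role: sequencing, heuristics, and overall control.
    auto detections = accel.detect_objects(frame);  // offloaded pixel work
    if (detections.size() > 3) {
        std::cout << "crowded scene: switch to high-power mode\n";  // decision logic
    }
    return 0;
}
```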
Graphics Processing Units
High-performance GPUs deliver massive parallel computing throughput, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPUs (GPGPUs) have primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
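One low-effort way to exploit such a GPU from application code is OpenCV's transparent OpenCL API (the "T-API"): operations on `cv::UMat` are dispatched to an OpenCL-capable GPU when one is present and fall back to the CPU otherwise. A minimal sketch, using a synthetic frame in place of real camera data:

```cpp
// GPU offload via OpenCV's transparent OpenCL API (T-API).
// cv::UMat operations run on an OpenCL-capable GPU when one is present.
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <iostream>

int main() {
    std::cout << "OpenCL available: " << cv::ocl::haveOpenCL() << "\n";

    cv::UMat src(1080, 1920, CV_8UC3, cv::Scalar(30, 60, 90));  // synthetic frame
    cv::UMat gray, blurred;

    // These calls are dispatched to the GPU where supported.
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2.0);

    cv::Mat result = blurred.getMat(cv::ACCESS_READ);  // copy back for CPU-side use
    std::cout << "mean intensity: " << cv::mean(result)[0] << "\n";
    return 0;
}
```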
Digital Signal Processors
DSPs are very efficient for processing streaming data, since the bus and memory architecture are optimized to process high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for processing general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
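The sketch below captures the flavor of such a kernel: a fixed-point multiply-accumulate (MAC) loop applied to one scanline of pixels as it streams past, which is the kind of inner loop a DSP's MAC units and DMA-fed memory system are built to sustain. The Q15 filter taps are illustrative values, not taken from any particular device.

```cpp
// DSP-flavored sketch: fixed-point 1-D convolution over a streaming scanline.
// A real DSP kernel maps this MAC loop onto hardware MAC units and DMA buffers.
#include <cstdint>
#include <vector>

// 5-tap smoothing kernel in Q15 fixed point (illustrative; taps sum to ~1.0).
static const int16_t kTaps[5] = {3277, 6554, 13107, 6554, 3277};

void filter_scanline(const uint8_t* in, uint8_t* out, int width) {
    for (int x = 2; x < width - 2; ++x) {
        int32_t acc = 0;                     // accumulator, as in a DSP MAC unit
        for (int k = -2; k <= 2; ++k) {
            acc += static_cast<int32_t>(in[x + k]) * kTaps[k + 2];  // multiply-accumulate
        }
        out[x] = static_cast<uint8_t>(acc >> 15);  // Q15 product back to 8-bit pixel
    }
}

int main() {
    std::vector<uint8_t> line(640, 128), smoothed(640, 0);
    filter_scanline(line.data(), smoothed.data(), 640);
    return 0;
}
```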
Field Programmable Gate Arrays (FPGAs)
Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs is so advantageous for accelerating computer vision, many common algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
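High-level synthesis (HLS) tools let designers describe such hardware in C++. The sketch below is written in the style of AMD/Xilinx Vitis HLS, where the PIPELINE pragma asks the tool to generate a circuit that accepts one pixel per clock cycle; a standard C++ compiler simply ignores the pragma, so the same code can be verified on a CPU. The function and its parameters are illustrative, not from any vendor library.

```cpp
// FPGA-flavored sketch: an HLS-style thresholding stage.
// Under an HLS tool (e.g., Vitis HLS) the pragma pipelines the loop so the
// synthesized circuit consumes one pixel per clock; a normal C++ compiler
// ignores the pragma, so this also runs on a CPU for verification.
#include <cstdint>

void threshold_stream(const uint8_t* in, uint8_t* out,
                      int num_pixels, uint8_t thresh) {
    for (int i = 0; i < num_pixels; ++i) {
#pragma HLS PIPELINE II=1
        // Each iteration becomes one slot of the hardware pipeline.
        out[i] = (in[i] > thresh) ? 255 : 0;
    }
}

int main() {
    uint8_t in[8] = {10, 200, 50, 255, 0, 128, 130, 90};
    uint8_t out[8];
    threshold_stream(in, out, 8, 127);
    return 0;
}
```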
Vision-Specific Processors and Cores
Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make them more difficult to program than other kinds of processors, and some ASSPs are not user-programmable at all.

NVIDIA and Intel to Develop AI Infrastructure and Personal Computing Products
Intel to design and manufacture custom data center and client CPUs with NVIDIA NVLink; NVIDIA to invest $5 billion in Intel common stock
September 18, 2025 – NVIDIA (NASDAQ: NVDA) and Intel Corporation (NASDAQ: INTC) today announced a collaboration to jointly develop multiple generations of custom data center and PC products that accelerate applications and workloads…

Shifting AI Inference from the Cloud to Your Phone Can Reduce AI Costs
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Every AI query has a cost, and not just in dollars. Study shows distributing AI workloads to your devices — such as your smartphone — can reduce costs and decrease water consumption. What you should know: Study…

“Enabling Ego Vision Applications on Smart Eyewear Devices,” a Presentation from EssilorLuxottica
Francesca Palermo, Research Principal Investigator at EssilorLuxottica, presents the “Enabling Ego Vision Applications on Smart Eyewear Devices” tutorial at the May 2025 Embedded Vision Summit. Ego vision technology is revolutionizing the capabilities of smart eyewear, enabling applications that understand user actions, estimate human pose and provide spatial awareness through simultaneous…

2025 Andes RISC-V CON Debuts in Seoul
Showcasing AI and Automotive Solutions Powered by RISC-V
September 12, 2025 – Seoul, South Korea – As AI and automotive systems evolve at unprecedented speed, engineers are seeking more flexible, efficient, and secure computing solutions. RISC-V, with its open and extensible architecture, is fast becoming the preferred foundation for next-generation SoC designs. To explore this…

Altera Closes Silver Lake Investment to Become World’s Largest Pure-play FPGA Solutions Provider
Independence accelerates innovation, enables customer focus, and drives long-term value creation
SAN JOSE, Calif. and MENLO PARK, Calif., 2025-09-15 – Altera Corporation, a leader in FPGA innovations, today announced that Silver Lake, a global leader in technology investing, has completed its acquisition of a 51% stake in the company from Intel Corporation, which will…

LLiMa: SiMa.ai’s Automated Code Generation Framework for LLMs and VLMs for <10W
This blog post was originally published at SiMa.ai’s website. It is reprinted here with the permission of SiMa.ai. In our blog post titled “Implementing Multimodal GenAI Models on Modalix”, we describe how SiMa.ai’s MLSoC Modalix enables Generative AI models to be implemented for Physical AI applications with low latency and low power consumption. We implemented…

“Deep Sentinel: Lessons Learned Building, Operating and Scaling an Edge AI Computer Vision Company,” a Presentation from Deep Sentinel
David Selinger, CEO of Deep Sentinel, presents the “Deep Sentinel: Lessons Learned Building, Operating and Scaling an Edge AI Computer Vision Company” tutorial at the May 2025 Embedded Vision Summit. Deep Sentinel’s edge AI security cameras stop some 45,000 crimes per year. Unlike most security camera systems, they don’t just…

Axelera AI Boosts LLMs at the Edge by 2x with Metis M.2 Max Introduction
First-of-its-kind performance for LLMs and VLMs on low-power, embedded devices.
EINDHOVEN, NL – September 8, 2025 – Axelera AI, the leading provider of purpose-built AI hardware acceleration technology, today announced Metis® M.2 Max, a new addition to the company’s class-leading Metis AI processor unit (AIPU) family. Delivering the performance of a…

Automated Driving for All: Snapdragon Ride Pilot System Brings State-of-the-art Safety and Comfort Features to Drivers Across the Globe
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Qualcomm Technologies, Inc. introduces Snapdragon Ride Pilot at IAA Mobility 2025. What you should know: Qualcomm Technologies, Inc. has introduced Snapdragon Ride Pilot to help make driving more safety-focused and convenient for people around the world. Features…

Smarter, Faster, More Personal AI Delivered on Consumer Devices with Arm’s New Lumex CSS Platform, Driving Double-digit Performance Gains
News Highlights: Arm Lumex CSS platform unlocks real-time on-device AI use cases like assistants, voice translation and personalization, with new SME2-enabled Arm CPUs delivering up to 5x faster AI performance. Developers can access SME2 performance with KleidiAI, now integrated into all major mobile OSes and AI frameworks, including PyTorch ExecuTorch, Google LiteRT, Alibaba MNN and…

Qualcomm and Google Cloud Deepen Collaboration to Bring Agentic AI Experiences to the Auto Industry
Highlights: Landmark technical collaboration brings together the strengths of two industry leaders with Google Gemini models and Qualcomm Snapdragon Digital Chassis solutions to help automakers create deeply personalized and advanced AI agents that will redefine customers’ experiences at every point in their journeys. Combines the best of both worlds – powerful on-device AI for instant…

Accelerate Autonomous Vehicle Development with the NVIDIA DRIVE AGX Thor Developer Kit
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Autonomous vehicle (AV) technology is rapidly evolving, fueled by ever-larger and more complex AI models deployed at the edge. Modern vehicles now require not only advanced perception and sensor fusion, but also end-to-end deep learning pipelines that…

Qualcomm and BMW Group Unveil Groundbreaking Automated Driving System with Jointly Developed Software Stack
Highlights: AI-enabled Snapdragon Ride Pilot Automated Driving System, powered by Snapdragon Ride system-on-chips and a new jointly developed automated driving software stack, debuts in the all-new BMW iX3 at IAA Mobility 2025. System is validated in 60 countries worldwide and is targeted to be available in more than 100 countries by 2026. Scalable platform enabling…

“Taking Computer Vision Products from Prototype to Robust Product,” an Interview with Blue River Technology
Chris Padwick, Machine Learning Engineer at Blue River Technology, talks with Mark Jamtgaard, Director of Technology at RetailNext, for the “Taking Computer Vision Products from Prototype to Robust Product” interview at the May 2025 Embedded Vision Summit. When developing computer vision-based products, getting from a proof of concept to a…

GenAI Firsts: Redefining What’s Possible At the Edge
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How our pioneering research and leading proof-of-concepts are paving the way for generative AI to scale. What you should know: Qualcomm AI Research is pioneering research and inventing novel techniques to deliver efficient, high-performance GenAI solutions. Our…

Software-defined Vehicles: Built For Users, or For the Industry?
SDV Level Chart: IDTechEx defines SDV performance using six levels.
Most consumers still have limited awareness of the deeper value behind “software-defined” capabilities. The concept of the Software-Defined Vehicle (SDV) has rapidly emerged as a transformative trend reshaping the automotive industry. Yet, despite widespread use of the term, there remains significant confusion around its core…

Andes Technology Announces D23-SE: A Functional Safety RISC-V Core with DCLS and Split-lock for ASIL-B/D Automotive Applications
Hsinchu, Taiwan – September 03, 2025 – Andes Technology, a leading supplier of high-efficiency, low-power 32/64-bit RISC-V processor cores, today announced the launch of its new D23-SE core, a compact and secure processor designed for functional safety applications. Based on the production-proven D23, the D23-SE is engineered to meet the stringent safety and performance requirements of…

How to Support Multi-planar Format in Python V4L2 Applications on i.MX8M Plus
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. The default Python V4L2 library module contains critical details related to the V4L2 capture method. Learn how to implement basic definitions (missing from the default library module) and capture images in the V4L2 multi-planar format. Python…

Upcoming Presentation and Demonstrations Showcase Autonomous Mobile Robots and Machine Vision
On Wednesday, October 15 from 11:45 AM – 12:15 PM PT, Alliance Member company eInfochips will deliver the presentation “Real-time Vision AI System on Edge AI Platforms” at the RoboBusiness and DeviceTalks West 2025 Conference in Santa Clara, California. From the event page: This session presents a real-time, edge-deployed Vision AI system for automated quality…

Advanced Packaging Market Set to Reach $79.4 Billion by 2030
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Yole Group releases its annual report, Status of the Advanced Packaging Industry 2025, featuring exclusive market ranking and in-depth analysis of leading players. KEY TAKEAWAYS: Advanced packaging market reached $46 billion in…