Processors for Embedded Vision
This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.
The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
General-purpose CPUs
While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control. A general-purpose CPU may be paired with a vision-specialized device for better performance on pixel-level processing.
Graphics Processing Units
High-performance GPUs deliver massive parallel computing throughput, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPUs (GPGPUs) have primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can be used to assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
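To illustrate why pixel workloads map so well to GPUs, here is a minimal sketch (plain scalar Python, for illustration only) of a luma conversion using the ITU-R BT.601 weights. The same three multiplies and two adds run independently at every pixel, which is exactly the data-parallel pattern a GPGPU spreads across thousands of threads; the frame data below is hypothetical.

```python
# Grayscale (luma) conversion with ITU-R BT.601 weights. Every output
# pixel depends only on its own input pixel, so a GPGPU can execute the
# loop body for all pixels simultaneously; this scalar version just
# makes the per-pixel independence visible.
def luma(rgb_frame):
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_frame]

# A hypothetical 2x2 RGB frame: red, green, blue, white.
frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
gray = luma(frame)  # same arithmetic at every pixel, no cross-pixel dependencies
```

Real deployments would express this as a GPU kernel (for example via CUDA or OpenCL) rather than a Python loop; the structure of the computation is what carries over.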
Digital Signal Processors
DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This architecture makes DSPs an excellent choice for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
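The canonical streaming workload a DSP is built for is the multiply-accumulate (MAC) chain, as in a FIR filter: each output sample is a short dot product over a sliding window of the input stream. The sketch below (plain Python, illustrative only; the tap values are hypothetical) shows the operation a DSP executes with single-cycle MACs and zero-overhead looping.

```python
# FIR filter over a sample stream: each output is a chain of
# multiply-accumulate operations over a sliding window. DSP hardware
# retires one MAC per tap per cycle while DMA streams samples in.
def fir(samples, taps):
    out = []
    for n in range(len(samples) - len(taps) + 1):
        acc = 0.0
        for k, tap in enumerate(taps):   # one MAC per tap
            acc += tap * samples[n + k]
        out.append(acc)
    return out

# 3-tap smoothing kernel over a short hypothetical stream.
smoothed = fir([1, 2, 3, 4, 5], [0.25, 0.5, 0.25])  # -> [2.0, 3.0, 4.0]
```

The same sliding-window MAC structure underlies 2-D convolution on image rows, which is why vision-oriented DSPs are benchmarked in MACs per second.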
Field Programmable Gate Arrays (FPGAs)
Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable hardware-acceleration solution. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which must time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs offers so much advantage for accelerating computer vision, many of the algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
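The throughput advantage of simultaneous pipeline stages can be made concrete with a toy cycle-by-cycle simulation (plain Python standing in for hardware behavior; the stage functions and frame values are hypothetical, not any vendor's library). Every stage holds its own in-flight frame and all stages advance in the same cycle, so steady-state throughput is one frame per cycle regardless of pipeline depth.

```python
# Toy model of FPGA-style pipelining: N frames through a D-stage
# pipeline finish in roughly N + D cycles, not N * D, because all
# stages work on different frames at the same time.
def simulate(frames, stages):
    slots = [None] * len(stages)   # one in-flight frame per stage
    cycles, done = 0, []
    while frames or any(s is not None for s in slots):
        if slots[-1] is not None:          # last stage emits its frame
            done.append(slots[-1])
        # All stages hand off in the same cycle (shift right to left).
        for i in range(len(slots) - 1, 0, -1):
            slots[i] = stages[i](slots[i - 1]) if slots[i - 1] is not None else None
        slots[0] = stages[0](frames.pop(0)) if frames else None
        cycles += 1
    return done, cycles

# 3 hypothetical frames through 3 stages (each stage just increments):
# finishes in 6 cycles instead of the 9 a fully serialized processor needs.
done, cycles = simulate([1, 2, 3], [lambda x: x + 1] * 3)
```

A time-sliced CPU would run the three stages back-to-back per frame; the FPGA's spatial parallelism is what this toy model is capturing.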
Vision-Specific Processors and Cores
Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.
Amol Borkar, Director of Product Management and Marketing for Tensilica Vision and AI DSPs at Cadence, presents the “Designing the Next Ultra-Low-Power Always-On Solution” tutorial at the May 2022 Embedded Vision Summit. Increasingly, users expect their systems to be ready to respond at any time—for example, using a voice command…
News Highlights: New flagship Immortalis GPU will supercharge the Android gaming experience, including hardware-based ray tracing for the first time. Latest Armv9 CPUs deliver new levels of peak and efficient performance. New Arm Total Compute Solutions address every level of performance, efficiency and scalability for specialized processing across all consumer device markets. This time last…
VVDN Technologies, a global provider of Engineering, Manufacturing, Digital Solutions, and Services, launched the NVIDIA Jetson-powered AI development system. The system is a combination of a custom carrier board developed by VVDN and the Jetson Xavier™ NX module. The NVIDIA JetPack™ SDK supports the entire development system with a customized BSP. The development system is backed…
Meet Your Next-generation Visual AI Intelligence Application Requirement with the VVDN-QCS610/410 Development Kit
VVDN Technologies, a global provider of digital engineering, manufacturing, solutions, and services, has released the VVDN-QCS610/410 Development Kit, a compact and powerful solution for advanced visual intelligence applications. The development kit is a combination of the powerful…
“TensorFlow Lite for Microcontrollers (TFLM): Recent Developments,” a Presentation from BDTI and Google
David Davis, Senior Embedded Software Engineer, and John Withers, Automation and Systems Engineer, both of BDTI, present the “TensorFlow Lite for Microcontrollers (TFLM): Recent Developments” tutorial at the May 2022 Embedded Vision Summit. TensorFlow Lite Micro (TFLM) is a generic inference framework designed to run TensorFlow models on digital signal processors (DSPs), microcontrollers and other…
Flex Logix and CEVA Announce First Working Silicon of a DSP with Embedded FPGA to Allow a Flexible/Changeable ISA
Flex Logix® EFLX embedded FPGA brings reconfigurable computing to the CEVA-X2 DSP as an instruction extension, supporting demanding and changing workloads. MOUNTAIN VIEW, Calif. – June 27, 2022 – Flex Logix Technologies, Inc., the leading supplier of reconfigurable computing solutions, architecture and software, and CEVA, Inc. (NASDAQ:CEVA), the leading licensor of wireless connectivity and smart sensing technologies…
“Jumpstart Your Edge AI Vision Application with New Development Kits from Avnet,” a Presentation from Avnet
Monica Houston, Technical Solutions Manager at Avnet, presents the “Jumpstart Your Edge AI Vision Application with New Development Kits from Avnet” tutorial at the May 2022 Embedded Vision Summit. Choosing the right processing solution for your embedded vision application can make or break your next development effort. This presentation introduces…
“Arm Cortex-M Series Processors Spark a New Era of Use Cases, Enabling Low-cost, Low-power Computer Vision and Machine Learning,” A Presentation from Arm
Stephen Su, Senior Product Manager at Arm, presents the “Arm Cortex-M Series Processors Spark a New Era of Use Cases, Enabling Low-cost, Low-power Computer Vision and Machine Learning” tutorial at the May 2022 Embedded Vision Summit. The Arm Cortex-M processor family of microcontrollers is designed and optimized for cost- and…
Join Capital leads Seed+ as Opteran delivers genuine brain biomimicry inspired by insects. London, June 23 2022 – Opteran, the Natural Intelligence company, has secured $12 million in funding led by Join Capital, with additional funding from IQ Capital, Northern Gritstone, Seraphim, Episode 1 and Schauenburg Ventures. In the next two years, Opteran will expand…
“Introducing the Kria Robotics Starter Kit: Robotics and Machine Vision for Smart Factories,” a Presentation from AMD
Chetan Khona, Director of Industrial, Vision, Healthcare and Sciences Markets at AMD, presents the “Introducing the Kria Robotics Starter Kit: Robotics and Machine Vision for Smart Factories” tutorial at the May 2022 Embedded Vision Summit. A robot is a system of systems with diverse sensors and embedded processing nodes focused…
Improvements in Feature Extraction and Data Training Create Even Lower Power Dissipation for Remote Monitoring of Industrial Machinery. Irvine, Calif., June 21, 2022 — Syntiant Corp., a provider of deep learning solutions making edge AI a reality for always-on applications, and Denmark-based CeramicSpeed, one of the world’s leading manufacturers of ceramic bearing products, today announced advancements…
Autonomous Trucking Pioneer Inceptio Technology Partners With Ambarella to Deliver Level 3 Automated Driving, Including Surround Camera and Front ADAS Perception With AI Compute
Inceptio Technology Selects Ambarella’s Edge AI SoCs for Multi-Camera Perception Processing in the Automotive-Grade Central Computing Platform. SANTA CLARA, Calif., June 22, 2022 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, and Inceptio Technology, an autonomous driving truck technology and operation company, today announced that Inceptio selected two each of Ambarella’s CV2FS and…
The company’s latest chipset offers device makers another high-performance option for flagship smartphones. HSINCHU, Taiwan – June 22, 2022 – MediaTek today announced the Dimensity 9000+, an enhancement to the company’s top-of-the-line 5G smartphone chipset. This new high-end offering delivers a boost in performance over the Dimensity 9000 to make the next generation of flagship…
Kristof Denolf, Principal Engineer, and Bader Alam, Director of Software Engineering, both of AMD, present the “Programming Vision Pipelines on AMD’s AI Engines” tutorial at the May 2022 Embedded Vision Summit. AMD’s latest generation of Adaptive Compute Acceleration Platforms (ACAP), Versal AI Core and Versal AI Edge, include an array…
Rajy Rawther, PMTS Software Architect at AMD, presents the “Is Your AI Data Pre-processing Fast Enough? Speed It Up Using rocAL” tutorial at the May 2022 Embedded Vision Summit. AMD’s rocAL (ROCm Augmentation Library) is an open-source library for decoding and augmenting images, video and audio to accelerate the loading…
Configurable CPU for mainstream smart embedded and networking devices including storage controllers and packet management solutions. London, England – 21st June 2022 – Imagination Technologies announces IMG RTXM-2200, its first real-time embedded RISC-V CPU, a highly scalable, feature-rich, 32-bit embedded solution with a flexible design for a wide range of high-volume devices. IMG RTXM-2200 is…
“Optimization Techniques with Intel’s OpenVINO to Enhance Performance on Your Existing Hardware,” a Presentation from Intel
Nico Galoppo, Principal Engineer (substituting for Ansley Dunn, Product Marketing Manager), and Ryan Loney, Technical Product Manager, both of Intel, present the “Optimization Techniques with Intel’s OpenVINO to Enhance Performance on Your Existing Hardware” tutorial at the May 2022 Embedded Vision Summit. Whether you’re using TensorFlow, PyTorch or another framework,…
Basler Announces Elite-Level Status in NVIDIA Partner Network to Expand Support for Jetson Edge AI Platform
After a long-standing, successful membership in the NVIDIA Partner Network, Basler AG is elevated to Elite partner-level status. The collaboration provides Basler customers with the opportunity to combine the NVIDIA Jetson platform with vision AI technology even more seamlessly and with an intensified level of support. Ahrensburg, June 16, 2022 – Basler offers fully integrated…
“Intel Video AI Box—Converging AI, Media and Computing in a Compact and Open Platform,” a Presentation from Intel
Richard Chuang, Principal AI Engineer at Intel, presents the “Intel Video AI Box—Converging AI, Media and Computing in a Compact and Open Platform” tutorial at the May 2022 Embedded Vision Summit. As a system integrator, solution provider or AI developer, you need to run your AI applications efficiently at the…
“High-Efficiency Edge Vision Processing Based on Dynamically Reconfigurable TPU Technology,” a Presentation from Flex Logix
Cheng Wang, Senior Vice President and Co-founder of Flex Logix, presents the “High-Efficiency Edge Vision Processing Based on Dynamically Reconfigurable TPU Technology” tutorial at the May 2022 Embedded Vision Summit. To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. Additionally,…