Processors for Embedded Vision
This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.
The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
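To make such a pipeline concrete, here is a minimal sketch in Python. The stage names and the trivial "mean intensity" feature are illustrative inventions, not drawn from any particular product's pipeline; the point is the progression from pixel-level work to a high-level decision:

```python
import numpy as np

def capture(height=4, width=4):
    """Simulate a sensor frame of 8-bit grayscale pixels."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 256, size=(height, width), dtype=np.uint8)

def preprocess(frame):
    """Pixel-level stage: normalize to [0, 1] floats (compute-intensive at scale)."""
    return frame.astype(np.float32) / 255.0

def extract_features(frame):
    """Mid-level stage: reduce pixels to a compact descriptor (here, mean intensity)."""
    return {"mean_intensity": float(frame.mean())}

def decide(features, threshold=0.5):
    """High-level stage: heuristic decision, well suited to a general-purpose CPU."""
    return "bright" if features["mean_intensity"] > threshold else "dark"

frame = capture()
result = decide(extract_features(preprocess(frame)))
print(result)  # "bright" or "dark", depending on the simulated frame
```

The early stages touch every pixel and dominate the compute cost; the final decision touches only a handful of values, which is why the two ends of the pipeline suit different processor types.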
General-purpose CPUs
While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interface, storage management, and overall control. A general-purpose CPU may be paired with a vision-specialized device for better performance on pixel-level processing.
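This division of labor can be sketched as follows. NumPy's vectorized comparison stands in for a hypothetical vision accelerator handling the per-pixel work, while an ordinary Python function plays the CPU's heuristic role; the thresholds are made-up values for illustration:

```python
import numpy as np

def segment_bright_regions(frame, threshold=128):
    """Pixel-level work (one comparison per pixel) -- the kind of data-parallel
    task typically offloaded to a vision-specialized device. NumPy's vectorized
    comparison stands in for that accelerator here."""
    return frame > threshold

def control_logic(mask, min_fraction=0.25):
    """Heuristic decision-making -- a natural fit for the general-purpose CPU."""
    fraction_bright = mask.mean()
    return "alert" if fraction_bright >= min_fraction else "idle"

frame = np.array([[200, 10], [220, 30]], dtype=np.uint8)
mask = segment_bright_regions(frame)
print(control_logic(mask))  # → "alert" (2 of 4 pixels exceed the threshold)
```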
Graphics Processing Units
High-performance GPUs offer massive parallel computing capacity, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While General Purpose GPUs (GPGPUs) have primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability—meeting the power constraints for a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can be used to assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
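The GPGPU execution model can be sketched roughly as follows: the per-pixel kernel below is the kind of function a GPU would run in thousands of concurrent threads, one per pixel. The sequential loop is only a CPU-side stand-in for the parallel launch, and the simple gain operation is an illustrative example, not any specific API:

```python
import numpy as np

def kernel(pixel):
    """Per-pixel work item: on a GPGPU, this function body would execute in
    thousands of parallel threads, one thread per pixel."""
    return min(255, int(pixel) * 2)  # simple 2x gain, clamped to 8 bits

def launch_over_frame(frame):
    """Sequential stand-in for a parallel kernel launch: on a GPU, every
    element would be processed concurrently rather than in this loop."""
    out = np.empty_like(frame)
    for idx in np.ndindex(frame.shape):
        out[idx] = kernel(frame[idx])
    return out

frame = np.array([[10, 200], [64, 128]], dtype=np.uint8)
print(launch_over_frame(frame))  # [[ 20 255] [128 255]] after gain + clamp
```

Because each pixel's result is independent of every other pixel's, the loop order does not matter, which is precisely the property that lets a GPU process all of them at once.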
Digital Signal Processors
DSPs are very efficient for processing streaming data, since the bus and memory architecture are optimized to process high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for processing general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
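A minimal sketch of DSP-style streaming, assuming a hypothetical three-tap smoothing filter applied to pixels as they arrive: no full frame is ever buffered, only a small sliding window, which mirrors how a DSP processes data traversing the system:

```python
from collections import deque

def streaming_filter(pixel_stream, taps=(0.25, 0.5, 0.25)):
    """Process pixels as they arrive, DSP-style: a small FIR smoothing
    kernel is applied over a sliding window, so no full frame is buffered."""
    window = deque(maxlen=len(taps))
    for pixel in pixel_stream:
        window.append(pixel)
        if len(window) == len(taps):
            yield sum(t * p for t, p in zip(taps, window))

stream = iter([0, 0, 100, 100, 100, 0, 0])  # a bright pulse in the stream
print(list(streaming_filter(stream)))  # [25.0, 75.0, 100.0, 75.0, 25.0]
```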
Field Programmable Gate Arrays (FPGAs)
Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates/sec (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Since the parallel nature of FPGAs offers such an advantage for accelerating computer vision, many of the algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
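To illustrate the idea of concurrent pipeline stages, the sketch below models two hardware stages as chained Python generators. In FPGA fabric, both stages would occupy their own gates and process different pixels during the same clock cycle; here they merely interleave on a CPU, so this is a conceptual model, not an FPGA workflow:

```python
def gain_stage(pixels, gain=2):
    """Stage 1: fixed gain. In an FPGA, this logic occupies its own gates
    and accepts a new pixel every clock cycle."""
    for p in pixels:
        yield min(255, p * gain)

def threshold_stage(pixels, level=128):
    """Stage 2: binarization, running concurrently with stage 1 in hardware
    (here the generators merely interleave on a CPU)."""
    for p in pixels:
        yield 1 if p >= level else 0

pixel_stream = [10, 60, 90, 200]
pipeline = threshold_stage(gain_stage(pixel_stream))
print(list(pipeline))  # [0, 0, 1, 1]
```

Once the pipeline is full, every stage produces a result on every cycle, so throughput is one pixel per clock regardless of how many stages are chained.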
Vision-Specific Processors and Cores
Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.
Powerful new AMD Ryzen and Radeon products deliver leadership performance and gaming experiences for notebook and desktop PCs SANTA CLARA, Calif. – 01/04/2022 – Today in its 2022 Product Premiere livestream, AMD (NASDAQ: AMD) announced new products that deliver leadership productivity, content creation, and gaming experiences. During the livestream, AMD President and CEO Dr. Lisa
AMD Unveils New Ryzen Mobile Processors Uniting “Zen 3+” core with AMD RDNA 2 Graphics in Powerhouse Design
Ryzen 6000 Series processors offer a huge generational uplift with up to 11% more single-threaded performance, up to 28% more multi-threaded performance, and up to 2x more graphics performance compared to the Ryzen 5000 Series. New AMD Ryzen 7 5800X3D desktop processors with powerful 3D V-Cache technology elevate gaming performance. SANTA CLARA, Calif. – 01/04/2022
AMD Unveils New Power-Efficient, High-Performance Mobile Graphics for Premium and Thin-and-Light Laptops, and New Desktop Graphics Cards
Expanded AMD Radeon RX 6000M Series mobile graphics offer 20 percent faster performance on average than the current lineup; new AMD Radeon RX 6000S Series mobile graphics deliver world-class, high-performance gaming for next-gen thin-and-light laptops. AMD Radeon RX 6500 XT desktop graphics cards, starting at $199 SEP USD, make incredible 1080p gaming accessible to more gamers;
ASPEED and CEVA Collaborate to Enable Superior Voice Experience on 2nd Generation Cupola360 SoC for Smart Cameras and Video Conferencing Systems
CEVA-BX1 DSP powers audio/voice workloads in ASPEED’s AST1230 smart camera SoC; CEVA ClearVox voice front-end software is available to ASPEED’s customers to address the most challenging multi-microphone conferencing use cases. LAS VEGAS, Jan. 3, 2022 /PRNewswire/ — Consumer Electronics Show – CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies and integrated
TI Edge AI Cloud is a free service that lets you evaluate accelerated deep learning inference using TDA4x processors from Texas Instruments that you access via the cloud. No hardware purchase or software installation is required. TI Edge AI Cloud lets you:
- Connect to a Jacinto™ TDA4x processor evaluation board via the cloud
- Experience software
Afshin Niktash, Senior Principal Member of the Technical Staff at Maxim Integrated (now part of Analog Devices) demonstrates the keyword spotting and other audio recognition capabilities of the MAX78000 in a fun snake game application.
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. An ISP is a key component in an embedded camera system since a sensor provides the output only in the RAW format. An ISP (Image Signal Processor) is a dedicated processor that converts this RAW
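The RAW-to-RGB conversion an ISP performs can be hinted at with a deliberately crude sketch: a hypothetical 2x2 averaging demosaic for an RGGB Bayer quad. A real ISP's demosaic uses far more sophisticated interpolation, so this is only a toy model of the concept:

```python
import numpy as np

def naive_demosaic(raw):
    """Collapse each 2x2 Bayer quad (R, G / G, B) into one RGB pixel by
    simple averaging -- a deliberately crude stand-in for a real ISP's
    demosaic stage."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2].astype(np.uint16) + raw[1::2, 0::2]) // 2
    b = raw[1::2, 1::2]
    return np.dstack([r, g.astype(np.uint8), b])

raw = np.array([[100, 50],
                [70, 200]], dtype=np.uint8)  # one RGGB quad
print(naive_demosaic(raw))  # [[[100  60 200]]]
```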
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. The road to vehicular autonomy will take a detour between Levels 2 and 3. Hands-on, actively supervised self-driving features, including automated steering, acceleration, and braking, have seen a surge in popularity in recent years. Pioneered by Tesla’s
SmarteCAM, a Ready-to-deploy Smart AI Camera from e-con Systems, is Now Listed On the AWS Partner Device Catalog
San Jose and Chennai (November 29, 2021) – e-con Systems’ SmarteCAM, a ready-to-deploy smart AI camera system, has now been validated by the AWS Device Qualification Program (DQP) under the AWS IoT Greengrass category. The camera has also been listed in the AWS Partner device catalog. SmarteCAM has been certified by AWS owing
Leading Edge AI Chipmaker Hailo Partners with NXP to Launch High-Performance, Scalable, AI Solutions for the Automotive Industry
NXP’s automotive processors, combined with the Hailo-8™ AI processor, offer powerful, scalable, safe, and efficient deep learning processing for automotive ECUs. TEL AVIV, Israel, Dec. 16, 2021 /PRNewswire/ — Hailo, the leading edge AI chipmaker, today announced its partnership with NXP® Semiconductors, an automotive market innovator, to launch a number of joint AI solutions for
News Highlights:
- Armv9 architecture at the foundation of MediaTek Dimensity 9000 premium SoC
- Specialized processing empowers MediaTek to redefine the flagship mobile experience
- First partner adoption of Total Compute solution with Arm Cortex-X2
2021 has been a year of continued innovation for Arm and its partners as we look to empower the ultimate digital experiences
MediaTek Officially Launches Dimensity 9000 Flagship Chip And Announces Adoption by Global Device Makers
Built on the leading TSMC N4 process, Dimensity 9000 brings full flagship performance and power-efficiency to smartphones. First MediaTek powered devices will be available in Q1 of 2022 HSINCHU, Taiwan – December 16, 2021 – MediaTek today launched its Dimensity 9000 5G smartphone chip for next-generation flagship smartphones, and announced device maker adoption and endorsements
Lattice Expands Automate Solution Stack and Propel Design Tool Capabilities to Accelerate Industrial Application Development
Improved user experience and more performance for applications in robotics, smart factory, and motion control HILLSBORO, Ore – Dec. 15, 2021 – Lattice Semiconductor Corporation (NASDAQ: LSCC), the low power programmable leader, today launched the latest version of its Lattice Automate™ solution stack for industrial automation systems featuring new real-time networking capabilities, AI-based predictive maintenance, increased
Manny Singh, Principal Product Marketing Manager at Renesas, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Singh demonstrates the company’s RZ/V microprocessor with a power-efficient AI accelerator. In this demo of object detection and recognition on Renesas’ proprietary Dynamically Reconfigurable Processor (DRP-AI), Singh shows
Microchip Technology Demonstration of Its VectorBlox Software Development Kits and PolarFire FPGAs for Artificial Intelligence and Machine Learning
Avery Williams, Technical Marketing Engineer at Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Williams demonstrates the company’s VectorBlox software development kits and PolarFire FPGAs for artificial intelligence and machine learning. Williams demonstrates the steps required to quickly get started with evaluating artificial
This blog post was originally published at Opteran Technologies’ website. It is reprinted here with the permission of Opteran Technologies. Opteran enables robust, fast, GPS free, verifiable autonomy for aerial systems, at previously unimaginable size, weight (~30g), power (~3W) and hardware costs using consumer 2D cameras Today we’re happy to share a first glance at our
Inuitive Demonstration of Its Multi-core Processor For 3D Imaging, Deep Learning and Computer Vision
Dor Zepeniuk, Chief Technology Officer and Vice President of Products at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Zepeniuk demonstrates the company’s NU4000, a multi-core processor for 3D imaging, deep learning and computer vision. Zepeniuk demos the diverse edge computing and other capabilities
Immervision Demonstration of How Its Super-wide-angle Camera and Pixel Processing Can Improve Machine Perception
Patrice Roulet, Vice President of Technology and Co-founder of Immervision, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Roulet demonstrates how the company’s super-wide-angle camera and pixel processing can improve machine perception. Roulet describes how to specify, simulate and design using cameras equipped with super-wide-angle
eYs3D Microelectronics Demonstration of Stereo Vision for Robotic Automation and Depth Sensor Fusion
James Wang, Technical General Manager at eYs3D Microelectronics, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Wang demonstrates stereo vision for robotic automation and depth sensor fusion. Depth-sensing technology is now being widely adopted commercially in various consumer and industrial products. It’s commonly recognized that
Efinix Demonstration of Using Titanium FPGAs with Quantum Acceleration to Optimize Edge AI Performance while Reducing Time to Market
Roger Silloway, Director of North American Sales at Efinix, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Silloway demonstrates how to use the company’s Titanium FPGAs with quantum acceleration to optimize edge AI performance while reducing time to market. Efinix Quantum Acceleration provides a predefined