Vision Algorithms for Embedded Vision
Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. With today's broader embedded vision deployments, however, existing high-level algorithms may not fit within the system constraints, requiring new innovation to achieve the desired results.
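To make the pixel-level nature of such operations concrete, here is a minimal spatial-filtering sketch in plain Python: a 3x3 box (mean) filter over a grayscale image. The function name and the scalar nested-loop structure are our own illustration, not production embedded code; real implementations would vectorize or parallelize these loops.

```python
def box_filter_3x3(img):
    """Apply a 3x3 mean (box) filter to a grayscale image.

    img: list of rows of pixel intensities (0-255).
    Border pixels are left unchanged for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total // 9  # mean of the 3x3 neighborhood
    return out

flat = [[10] * 5 for _ in range(5)]
flat[2][2] = 100  # a single bright pixel
smoothed = box_filter_3x3(flat)
print(smoothed[2][2])  # → 20: the spike is averaged with its neighbors
```

The inner neighborhood sum is exactly the kind of regular, data-parallel work that embedded vision processors are designed to accelerate.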
Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.
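One common example of such a hardware-oriented rewrite is replacing floating-point kernel normalization with integer arithmetic and bit shifts, which maps well to FPGAs and fixed-point DSPs. The sketch below (our own illustration, not taken from any vendor library) applies a 3x3 Gaussian-like kernel whose weights sum to 16, so normalization becomes a right shift instead of a divide:

```python
# Integer-only 3x3 smoothing: weights [1 2 1; 2 4 2; 1 2 1] sum to 16,
# so normalization is a right shift (>> 4) rather than a float divide.
KERNEL = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]

def gaussian3x3_fixed(img):
    """Fixed-point 3x3 smoothing; border pixels left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = acc >> 4  # divide by 16 with a shift
    return out
```

Because every operation is an integer multiply-accumulate followed by a shift, this form can be implemented directly in FPGA logic or DSP MAC units with no floating-point hardware at all.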
This section refers to both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.
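As a reference point for the general-purpose case, edge detection is commonly implemented with the Sobel operator. The plain-Python sketch below (our own illustration, not drawn from any particular library) computes the widely used |Gx| + |Gy| approximation of gradient magnitude:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel kernels.

    img: list of rows of grayscale intensities (0-255).
    Border pixels are left at 0 for simplicity.
    """
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = img[y + dy][x + dx]
                    gx += gx_k[dy + 1][dx + 1] * p
                    gy += gy_k[dy + 1][dx + 1] * p
            out[y][x] = min(255, abs(gx) + abs(gy))  # clamp to 8 bits
    return out
```

A hardware-optimized equivalent would compute many of these 3x3 windows in parallel, which is precisely the kind of restructuring discussed above.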
General-purpose computer vision algorithms
One of the most popular sources of computer vision algorithms is the OpenCV Library. OpenCV is open source; originally written in C, the library has since moved to a C++ implementation and API. For more information, see the Alliance's interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
Hardware-optimized computer vision algorithms
Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created algorithms that are accelerated by GPGPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision module library. And Xilinx provides customers with an optimized computer vision library delivered as plug-and-play IP cores for creating hardware-accelerated vision algorithms in an FPGA.
Other vision libraries
- Matrox Imaging Library (MIL)
- Cognex VisionPro