Vision Algorithms for Embedded Vision
Most computer vision algorithms were developed on general-purpose computer systems with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today's broader embedded vision implementations, existing high-level algorithms may not fit within the system constraints, requiring fresh innovation to achieve the desired results.
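As a concrete illustration (not from the original article), a 3x3 spatial filter of the kind mentioned above can be sketched in a few lines of NumPy. The function name and the zero-padding border policy are choices made here purely for illustration:

```python
import numpy as np

def box_filter_3x3(image):
    """Apply a 3x3 box (mean) filter to a 2D grayscale image.

    This is the classic spatial-filtering operation: each output
    pixel is the average of its 3x3 neighborhood. Border pixels
    are handled by zero-padding.
    """
    padded = np.pad(image.astype(np.float64), 1, mode="constant")
    out = np.zeros_like(image, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            # Shifted views of the padded image sum the neighborhood.
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / 9.0

# A flat region stays flat after mean filtering (away from borders).
flat = np.full((5, 5), 90.0)
print(box_filter_3x3(flat)[2, 2])  # interior pixel: 90.0
```

The shifted-view formulation avoids an explicit per-pixel inner loop, which is the kind of restructuring that matters once the same operation moves to a constrained embedded target.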
Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.
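One common flavor of such restructuring is exploiting kernel separability: a 3x3 box filter can be computed as two 1-D passes, cutting per-pixel arithmetic roughly in half and mapping naturally onto line-buffered hardware pipelines. A minimal NumPy sketch, assuming zero-padded borders for illustration:

```python
import numpy as np

def box_filter_separable(image):
    """3x3 mean filter computed as a horizontal pass followed by a
    vertical pass. The separable form reduces per-pixel adds from
    8 to 4 and suits line-buffer architectures (e.g., in an FPGA).
    Zero-padding at the borders matches a direct 3x3 version.
    """
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="constant")
    h, w = img.shape
    # Horizontal pass: each pixel plus its left/right neighbors.
    horiz = padded[:, 0:w] + padded[:, 1:w + 1] + padded[:, 2:w + 2]
    # Vertical pass over the horizontal sums.
    vert = horiz[0:h, :] + horiz[1:h + 1, :] + horiz[2:h + 2, :]
    return vert / 9.0

flat = np.full((5, 5), 90.0)
print(box_filter_separable(flat)[2, 2])  # interior pixel: 90.0
```

The output is numerically identical to the direct 3x3 formulation; only the arithmetic structure changes, which is exactly the kind of hardware-friendly substitution the text describes.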
This section refers to both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.
General-purpose computer vision algorithms
One of the most popular sources of computer vision algorithms is the OpenCV Library. OpenCV is open source; originally written in C, it is now developed primarily in C++. For more information, see the Alliance's interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
Hardware-optimized computer vision algorithms
Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created algorithms that are accelerated by GPGPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision module library. And Xilinx provides customers with an optimized computer vision library in the form of plug-and-play IP cores for building hardware-accelerated vision algorithms in an FPGA.
Other vision libraries
- Matrox Imaging Library (MIL)
- Cognex VisionPro