
Vision Algorithms for Embedded Vision

Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today's broader embedded vision implementations, however, existing high-level algorithms may not fit within the system's constraints, requiring new innovation to achieve the desired results.
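To make the idea of a spatial-filtering operation concrete, here is a minimal, stdlib-only Python sketch of a 3x3 box (mean) filter, the kind of kernel that has changed little since the mainframe era. The function name and the list-of-lists image representation are illustrative choices, not from any particular library.

```python
def box_filter_3x3(image):
    """Average each interior pixel with its 8 neighbours.

    `image` is a list of equal-length rows of grayscale values;
    border pixels are copied through unchanged for simplicity.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy input, preserving the border
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(image[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total // 9  # mean of the 3x3 neighbourhood
    return out

flat = [[9] * 5 for _ in range(5)]
print(box_filter_3x3(flat)[2][2])  # → 9 (a constant image is unchanged)
```

Production implementations differ mainly in how this same inner loop is vectorized, tiled, or pipelined for the target hardware, not in the arithmetic itself.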

Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. Given the broad range of processors available for embedded vision, algorithm analysis will likely focus on maximizing pixel-level throughput within the system's constraints.
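One common example of such a hardware-oriented rework is replacing floating-point kernel weights with integer weights whose sum is a power of two, so normalization becomes a bit shift rather than a divide. The sketch below (illustrative, not from any specific vendor library) applies a Gaussian-like 3x3 kernel this way, as a fixed-point DSP or FPGA datapath would:

```python
# Gaussian-like 3x3 kernel with integer weights summing to 16,
# so normalisation is a right shift (>> 4) instead of a divide --
# arithmetic that maps directly onto fixed-point DSPs and FPGA fabric.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def gaussian_3x3_fixed(image):
    """Integer-only 3x3 smoothing; border pixels pass through unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += KERNEL[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = acc >> 4  # divide by 16 via shift
    return out
```

Choosing a power-of-two weight sum is the key design decision: it trades a small amount of kernel fidelity for an inner loop with no division and no floating-point hardware at all.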

This section covers both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.

General-purpose computer vision algorithms

One of the most popular sources of computer vision algorithms is the OpenCV library. OpenCV is open source; originally implemented in C, it is now written primarily in C++. For more information, see the Alliance's interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
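As a stdlib-only illustration of the kind of primitive OpenCV packages up (in OpenCV itself, gradient-based edge detection is a single call such as cv2.Sobel or cv2.Canny), here is a sketch of Sobel gradient-magnitude computation. The function name and image representation are illustrative, not OpenCV's actual implementation:

```python
# Sobel edge magnitude in plain Python -- a sketch of the gradient
# step behind edge detectors such as OpenCV's Sobel/Canny functions.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_magnitude(image):
    """Return per-pixel gradient magnitude; borders are left at 0."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = image[y + dy][x + dx]
                    gx += GX[dy + 1][dx + 1] * p
                    gy += GY[dy + 1][dx + 1] * p
            out[y][x] = abs(gx) + abs(gy)  # L1 approximation of |gradient|
    return out
```

A vertical intensity step produces large responses along the edge and zero in the flat regions, which is exactly the behavior a thresholding stage (as in Canny) then exploits.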

Hardware-optimized computer vision algorithms

Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created GPU-accelerated versions of many of its algorithms. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms in its Vision System Toolbox, and also allows vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision Development Module. And Xilinx is another example of a vendor with an optimized computer vision library, provided to customers as plug-and-play IP cores for building hardware-accelerated vision algorithms in an FPGA.

Other vision libraries

  • HALCON
  • Matrox Imaging Library (MIL)
  • Cognex VisionPro
  • VXL
  • CImg
  • Filters

