Vision Algorithms for Embedded Vision

Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today's broader range of embedded vision implementations, however, existing high-level algorithms may not fit within the system constraints, requiring new innovation to achieve the desired results.
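
To make the pixel-level nature of such operations concrete, here is a minimal sketch of a 3x3 box (averaging) filter over an 8-bit grayscale buffer in plain C++. It is only an illustrative baseline, not taken from any particular library; the row-major buffer layout, the border handling and the function name are assumptions chosen for clarity:

    #include <cstdint>
    #include <vector>

    // Illustrative 3x3 box (mean) filter over an 8-bit grayscale image stored
    // row-major in a std::vector. Border pixels are simply copied unchanged.
    std::vector<uint8_t> boxFilter3x3(const std::vector<uint8_t>& src,
                                      int width, int height)
    {
        std::vector<uint8_t> dst(src);  // start from a copy so borders keep their values
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                int sum = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        sum += src[(y + dy) * width + (x + dx)];
                dst[y * width + x] = static_cast<uint8_t>(sum / 9);  // average of the 3x3 neighborhood
            }
        }
        return dst;
    }

Even this simple kernel touches nine input pixels per output pixel, which is why embedded implementations concentrate on memory access patterns and parallelism when mapping such operations to constrained hardware.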

Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.

This section covers both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.

General-purpose computer vision algorithms

Figure 1. Introduction to OpenCV

One of the most popular sources of computer vision algorithms is the OpenCV library. OpenCV is open source and written primarily in C++ (earlier releases were implemented in C), with interfaces for languages such as Python and Java. For more information, see the Alliance’s interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
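
As a small example of calling a general-purpose OpenCV routine such as the edge detection mentioned above, the C++ sketch below runs the Canny detector on a grayscale image. The file names, blur kernel size and hysteresis thresholds are placeholders chosen for illustration:

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    int main()
    {
        // "input.png" is a placeholder path; load the image as 8-bit grayscale.
        cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
        if (img.empty())
            return 1;

        cv::Mat blurred, edges;
        cv::GaussianBlur(img, blurred, cv::Size(5, 5), 1.5);  // suppress noise before edge detection
        cv::Canny(blurred, edges, 50.0, 150.0);               // hysteresis thresholds are illustrative

        cv::imwrite("edges.png", edges);
        return 0;
    }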

Hardware-optimized computer vision algorithms

Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created algorithms that are accelerated on its GPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its Vision Development Module for LabVIEW. And Xilinx provides customers with an optimized computer vision library in the form of plug-and-play IP cores for creating hardware-accelerated vision algorithms in an FPGA.
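
As one hedged illustration of how such hardware-optimized libraries are used, the sketch below runs the same Canny edge detection on a GPU through OpenCV's CUDA modules, assuming an OpenCV build that includes them; the file names and thresholds are again placeholders:

    #include <opencv2/core/cuda.hpp>
    #include <opencv2/cudaimgproc.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main()
    {
        cv::Mat host = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // placeholder file name
        if (host.empty())
            return 1;

        cv::cuda::GpuMat d_src, d_edges;
        d_src.upload(host);  // copy the image into GPU memory

        // GPU-accelerated Canny detector; the thresholds are illustrative.
        cv::Ptr<cv::cuda::CannyEdgeDetector> canny =
            cv::cuda::createCannyEdgeDetector(50.0, 150.0);
        canny->detect(d_src, d_edges);

        cv::Mat edges;
        d_edges.download(edges);  // copy the result back to host memory
        cv::imwrite("edges_gpu.png", edges);
        return 0;
    }

The algorithm itself is unchanged; only the execution target differs, which is the general pattern when replacing a general-purpose library call with a vendor-accelerated equivalent.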

Other vision libraries

  • MVTec HALCON
  • Matrox Imaging Library (MIL)
  • Cognex VisionPro
  • VXL
  • CImg
  • Filters

“Current and Planned Standards for Computer Vision and Machine Learning,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, delivers the presentation “Current and Planned Standards for Computer Vision and Machine Learning” at the Embedded Vision Alliance’s December 2019 Vision Industry and Technology Forum. Trevett shares updates on recent, current and planned Khronos standardization activities aimed at streamlining the…

A Guide to Video Analytics: Applications and Opportunities

This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. In the past few years, video analytics, also known as video content analysis or intelligent video analytics, has attracted increasing interest from both industry and the academic world. Thanks to the enormous advances made in deep learning…

“Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product,” a Presentation from Cocoon Health

Pavan Kumar, Co-founder and CTO of Cocoon Health (formerly Cocoon Cam), delivers the presentation “Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Kumar explains how his company is evolving its use of edge and cloud vision computing in continuing to bring new…

“Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors,” a Presentation from IHS Markit

Tom Hackenberg, Principal Analyst at IHS Markit, presents the "Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors" tutorial at the May 2019 Embedded Vision Summit. Artificial intelligence is not a new concept. Machine learning has been used for decades in large server and high performance computing environments. Why…

“Five+ Techniques for Efficient Implementation of Neural Networks,” a Presentation from Synopsys

Bert Moons, Hardware Design Architect at Synopsys, presents the "Five+ Techniques for Efficient Implementation of Neural Networks" tutorial at the May 2019 Embedded Vision Summit. Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate these demands by modifying deep…

“Building Complete Embedded Vision Systems on Linux — From Camera to Display,” a Presentation from Montgomery One

Clay D. Montgomery, Freelance Embedded Multimedia Developer at Montgomery One, presents the "Building Complete Embedded Vision Systems on Linux—From Camera to Display" tutorial at the May 2019 Embedded Vision Summit. There’s a huge wealth of open-source software components available today for embedding vision on the latest SoCs from suppliers such as NXP, Broadcom, TI and…

Rapid Prototyping on NVIDIA Jetson Platforms with MATLAB

This article was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA. It discusses how an application developer can prototype and deploy deep learning algorithms on hardware like the NVIDIA Jetson Nano Developer Kit with MATLAB. In previous posts, we explored how you can design and train deep learning…

“Creating Efficient, Flexible and Scalable Cloud Computer Vision Applications: An Introduction,” a Presentation from GumGum

Nishita Sant, Computer Vision Manager, and Greg Chu, Senior Computer Vision Scientist, both of GumGum, present the "Creating Efficient, Flexible and Scalable Cloud Computer Vision Applications: An Introduction" tutorial at the May 2019 Embedded Vision Summit. Given the growing utility of computer vision applications, how can you deploy these services in high-traffic production environments? Sant…

“Selecting the Right Imager for Your Embedded Vision Application,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the "Selecting the Right Imager for Your Embedded Vision Application" tutorial at the May 2019 Embedded Vision Summit. The performance of your embedded vision product is inextricably linked to the imager and lens it uses. Selecting these critical components is sometimes overwhelming due to the…

“Machine Learning at the Edge in Smart Factories Using TI Sitara Processors,” a Presentation from Texas Instruments

Manisha Agrawal, Software Applications Engineer at Texas Instruments, presents the "Machine Learning at the Edge in Smart Factories Using TI Sitara Processors" tutorial at the May 2019 Embedded Vision Summit. Whether it’s called “Industry 4.0,” “industrial internet of things” (IIoT) or “smart factories,” a fundamental shift is underway in manufacturing: factories are becoming smarter. This…

“Using High-level Synthesis to Bridge the Gap Between Deep Learning Frameworks and Custom Hardware Accelerators,” a Presentation from Mentor

Michael Fingeroff, HLS Technologist at Mentor, presents the "Using High-level Synthesis to Bridge the Gap Between Deep Learning Frameworks and Custom Hardware Accelerators" tutorial at the May 2019 Embedded Vision Summit. Recent years have seen an explosion in machine learning/AI algorithms with a corresponding need to use custom hardware for best performance and power efficiency.
