Vision Algorithms for Embedded Vision

Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today’s broader embedded vision implementations, however, existing high-level algorithms may not fit within the system constraints, requiring new innovation to achieve the desired results.
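To make the idea of a pixel-processing operation concrete, here is a minimal sketch of the kind of spatial filter mentioned above: a 3×3 box (mean) filter written in plain Python. This is an illustration only, not code from any particular library; the function name `box_filter_3x3` and the choice to leave border pixels unchanged are simplifying assumptions.

```python
def box_filter_3x3(img):
    """Apply a 3x3 box (mean) filter to a grayscale image.

    `img` is a list of rows of pixel intensities. Border pixels are
    left unchanged for simplicity; real implementations pick a border
    policy (replicate, reflect, zero-pad, etc.).
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy so border pixels are preserved
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):       # sum the 3x3 neighborhood
                for dx in (-1, 0, 1):
                    acc += img[y + dy][x + dx]
            out[y][x] = acc // 9        # integer mean
    return out
```

The nested per-pixel loop is exactly the structure that embedded implementations restructure — via SIMD, line buffers, or hardware pipelines — because it touches every pixel of every frame.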

Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.
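As a simplified illustration of such a replacement (not from the original article; both function names are hypothetical), the floating-point weighted average below can be swapped for a fixed-point equivalent that uses only additions and a bit shift — operations that map far more efficiently onto integer DSPs and FPGA fabric than floating-point multiplies:

```python
# General-purpose version: 1-D smoothing with kernel [0.25, 0.5, 0.25]
def smooth_float(row):
    return [row[i - 1] * 0.25 + row[i] * 0.5 + row[i + 1] * 0.25
            for i in range(1, len(row) - 1)]

# Hardware-friendly equivalent: integer weights [1, 2, 1] with a
# divide-by-4 implemented as a right shift -- no floats, no multipliers.
def smooth_fixed(row):
    return [(row[i - 1] + 2 * row[i] + row[i + 1]) >> 2
            for i in range(1, len(row) - 1)]
```

Both functions produce the same result on this kernel because the weights are exact powers of two; for arbitrary kernels, fixed-point versions trade a small, quantifiable precision loss for a large gain in throughput and power efficiency.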

This section refers to both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.
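For reference, a general-purpose edge detector of the kind mentioned above can be sketched in a few lines. The following is an illustrative Sobel-gradient implementation in plain Python (the function name `sobel_edges` and the threshold default are assumptions, and the gradient magnitude is approximated as |gx| + |gy| to stay in integer arithmetic):

```python
def sobel_edges(img, thresh=128):
    """Mark edge pixels of a grayscale image using Sobel gradients."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal gradient (responds to vertical edges)
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]) \
               - (img[y-1][x-1] + 2*img[y][x-1] + img[y+1][x-1])
            # Vertical gradient (responds to horizontal edges)
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]) \
               - (img[y-1][x-1] + 2*img[y-1][x] + img[y-1][x+1])
            if abs(gx) + abs(gy) >= thresh:  # |g| ~ |gx| + |gy|
                edges[y][x] = 255
    return edges
```

A hardware-optimized version of the same operator would typically stream pixels through line buffers and compute many such windows in parallel, rather than iterating pixel by pixel as above.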

General-purpose computer vision algorithms

Figure 1. Introduction to OpenCV.

One of the most popular sources of computer vision algorithms is the OpenCV library. OpenCV is open source; originally written in C, it is now implemented primarily in C++, with bindings for languages such as Python and Java. For more information, see the Alliance’s interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.

Hardware-optimized computer vision algorithms

Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created algorithms that are accelerated by GPGPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision module library. And Xilinx is another example of a vendor with an optimized computer vision library, provided to customers as plug-and-play IP cores for creating hardware-accelerated vision algorithms in an FPGA.

Other vision libraries

  • Halcon
  • Matrox Imaging Library (MIL)
  • Cognex VisionPro
  • VXL
  • CImg
  • Filters

Develop Generative AI-powered Visual AI Agents for the Edge

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. An exciting breakthrough in AI technology—Vision Language Models (VLMs)—offers a more dynamic and flexible method for video analysis. VLMs enable users to interact with image and video input using natural language, making the technology more accessible and …

DEEPX Demonstration of Its DX-M1 AIoT Booster

Taisik Won, the President of DEEPX USA, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Won demonstrates the company’s AIoT Booster, the DX-M1. The DX-M1 is DEEPX’s flagship AI chip, meticulously engineered for seamless integration into any AIoT application. This cutting-edge chip can simultaneously process …

DEEPX Demonstration of Its DX-V1 and DX-V3 AI Vision Processors

Aiden Song, PR Manager at DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Song demonstrates the company’s DX-V1 and DX-V3 AI vision processors. The DX-V1 and DX-V3 are AI enabler chips for vision systems. The DX-V1 is a standalone edge AI chip that can …

What’s Next in On-device Generative AI?

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Upcoming generative AI trends and Qualcomm Technologies’ role in enabling the next wave of innovation on-device: the generative artificial intelligence (AI) era has begun. Generative AI innovations continue at a rapid pace and are being woven into …

DEEPX Demonstration of Its DX-H1 Green AI Computing Card

Aiden Song, PR Manager at DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Song demonstrates the company’s latest innovation, the DX-H1 Green AI Computing Card. The DX-H1 is designed for eco-friendly data centers, delivering 10 times better power and cost efficiency than GPGPU solutions.


Cadence Demonstration of Time-of-Flight Decoding on the Tensilica Vision Q7 DSP

Amol Borkar, Director of Product Marketing for Cadence Tensilica DSPs and Automotive Segment Director, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Borkar demonstrates the use of a Tensilica Vision Q7 DSP for Time-of-Flight (ToF) decoding. In this demonstration, the Tensilica Vision Q7 DSP integrated …

BrainChip Demonstration of the Power of Temporal Event-based Neural Networks (TENNs)

Todd Vierra, Vice President of Customer Engagement at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Vierra demonstrates the efficient processing of generative text using Temporal Event-based Neural Networks (TENNs) compared to ChatGPT. The TENN, an innovative, lightweight neural network architecture, combines convolution in …

BrainChip Demonstration of Analyzing Head Pose, Eye Gaze and Emotion with Human Behavior AI

Todd Vierra, Vice President of Customer Engagement at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Vierra demonstrates how BrainChip’s Akida AKD1000 neuromorphic processor detects human emotion. Partnered with BeEmotion.ai, the system monitors the state of the user through real-time observation and perception of …

Navigating the LiDAR Revolution: Trends and Innovations Ahead

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. There are today two distinct LiDAR markets: China and the rest of the world. In China, approximately 128 car models equipped with LiDAR are expected to be released by Chinese OEMs in …

Axelera AI Demonstration of Fast and Efficient Workplace Safety with the Metis AIPU

Bram Verhoef, Co-founder of Axelera AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Verhoef demonstrates how his company’s Metis AIPU can accelerate computer vision applications. Axelera AI, together with its partner FogSphere, has developed a computer vision system that detects if people are wearing …

AMD to Acquire Silo AI to Expand Enterprise AI Solutions Globally

  • Europe’s largest private AI lab to accelerate the development and deployment of AMD-powered AI models and software solutions
  • Enhances open-source AI software capabilities for efficient training and inference on AMD compute platforms

SANTA CLARA, Calif. — July 10, 2024 — AMD (NASDAQ: AMD) today announced the signing of a definitive agreement to acquire Silo AI, …

Decoding How the Generative AI Revolution BeGAN

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA Research’s GauGAN demo set the scene for a new wave of generative AI apps supercharging creative workflows. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more …
