Vision Algorithms for Embedded Vision

Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today’s broader range of embedded vision implementations, however, existing high-level algorithms may not fit within system constraints, requiring new innovation to achieve the desired results.
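
To make the idea of pixel-level processing concrete, here is a minimal sketch of a spatial filter: a 3×3 box (averaging) filter applied to an 8-bit grayscale image stored as a raw, row-major buffer. The buffer layout and border handling are illustrative assumptions, not taken from any particular library.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch: 3x3 box (averaging) filter over an 8-bit grayscale image
// stored row-major in a std::vector. Border pixels are left unfiltered.
// Image layout and border handling are illustrative assumptions.
std::vector<uint8_t> boxFilter3x3(const std::vector<uint8_t>& src,
                                  int width, int height)
{
    std::vector<uint8_t> dst(src);  // copy so the one-pixel border is preserved
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += src[(y + dy) * width + (x + dx)];
            dst[y * width + x] = static_cast<uint8_t>(sum / 9);
        }
    }
    return dst;
}
```

Even a kernel this small touches every pixel nine times, which is why embedded implementations so often move such loops onto vector units, GPUs, or FPGA fabric.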

Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.

This section covers both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms; the Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.

General-purpose computer vision algorithms

[Figure 1: Introduction to OpenCV]

One of the most popular sources of computer vision algorithms is the OpenCV library. OpenCV is open source; originally implemented in C, it is now written primarily in C++ and also provides bindings for languages such as Python and Java. For more information, see the Alliance’s interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
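
As a minimal example of using the library, the sketch below loads an image and runs OpenCV’s Canny edge detector from the C++ API; the input file name and threshold values are placeholders chosen for illustration.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // "input.png" is a placeholder file name.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return 1;

    // Smooth to reduce noise, then detect edges.
    // Thresholds are arbitrary illustrative values.
    cv::Mat blurred, edges;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);
    cv::Canny(blurred, edges, 50, 150);

    cv::imwrite("edges.png", edges);
    return 0;
}
```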

Hardware-optimized computer vision algorithms

Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has contributed implementations that are accelerated on its GPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers a vision library through its LabVIEW Vision Development Module. And Xilinx provides customers with an optimized computer vision library in the form of plug-and-play IP cores for building hardware-accelerated vision algorithms in an FPGA.
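
To illustrate what a hardware-optimized path can look like in practice, the sketch below uses OpenCV’s CUDA module to run Gaussian smoothing on a GPU. This assumes an OpenCV build with CUDA support and an NVIDIA GPU; other vendors’ toolchains (MATLAB/Simulink blocks, LabVIEW VIs, FPGA IP cores) expose their optimized functions through different interfaces.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/cudafilters.hpp>  // available only in CUDA-enabled OpenCV builds

int main()
{
    // "input.png" is a placeholder file name.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return 1;

    // Upload to GPU memory, filter on the GPU, then download the result.
    cv::cuda::GpuMat d_src, d_dst;
    d_src.upload(gray);

    cv::Ptr<cv::cuda::Filter> gauss =
        cv::cuda::createGaussianFilter(d_src.type(), d_src.type(),
                                       cv::Size(5, 5), 1.5);
    gauss->apply(d_src, d_dst);

    cv::Mat result;
    d_dst.download(result);
    cv::imwrite("smoothed.png", result);
    return 0;
}
```

Note that the application code stays close to the portable OpenCV version; the acceleration comes from where the data lives and which implementation of the filter runs.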

Other vision libraries

  • Halcon
  • Matrox Imaging Library (MIL)
  • Cognex VisionPro
  • VXL
  • CImg
  • Filters

Gimlet Labs Demonstration of Instant Deployment of Custom AI to a Device

Natalie Serrino, cofounder of Gimlet Labs, demonstrates the company’s latest edge AI and vision technologies and products at the September 2024 Edge AI and Vision Alliance Forum. Specifically, Serrino demonstrates Gimlet’s capabilities for deploying and monitoring edge AI workloads. She deploys a custom pipeline for hardhat compliance detection to an Intel NUC, and shows live

Lotus Deploys Ambarella’s Oculii AI 4D Imaging Radar Technology in L2+ Semi-Autonomous Systems for Eletre SUV and Emeya Hyper-GT Electric Vehicles

Lotus Achieves Ultra-Long-Range Detection of Over 300 Meters With High Angular Resolution for Automated Safety and Autopilot Features at Racetrack Speeds Using Fewer Radar Antennas SANTA CLARA, Calif., Sept. 24, 2024 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced in advance of AutoSens Europe that its Oculii™ AI 4D imaging radar

“Practical Strategies for Successful Implementation and Deployment of AI-based Solutions,” a Presentation from Globus Medical

Ritesh Agarwal, Computer Vision Lead at Globus Medical, presents the “Practical Strategies for Successful Implementation and Deployment of AI-based Solutions” tutorial at the May 2024 Embedded Vision Summit. AI models that produce accurate results on test data are a necessary component of successful applications, but by themselves they are insufficient.…

“Using Synthetic Data to Train Computer Vision Models,” a Presentation from Geisel Software

Brian Geisel, CEO of Geisel Software, presents the “Using Synthetic Data to Train Computer Vision Models” tutorial at the May 2024 Embedded Vision Summit. Developers of machine-learning based computer vision applications often face difficulties obtaining sufficient data for training and evaluating models. In this talk, Geisel explores the use of…

“Introduction to Computer Vision with Convolutional Neural Networks,” a Presentation from eBay

Mohammad Haghighat, Senior Manager for CoreAI at eBay, presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2024 Embedded Vision Summit. This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques and then…

“Building Meaningful Products Using Complex Sensor Systems,” a Presentation from DEKA Research & Development

Dirk van der Merwe, Autonomous Robotics Lead at DEKA Research & Development, presents the “Building Meaningful Products Using Complex Sensor Systems” tutorial at the May 2024 Embedded Vision Summit. Most complex sensor systems begin with a simple goal—ensuring safety and efficiency. Whether it’s avoiding collisions between vehicles or predicting future…

“Latest Trends in AI Semiconductors,” an Interview with D2D Advisory

Jay Goldberg, CEO and Founder of D2D Advisory, talks with Phil Lapsley, Co-Founder and Vice President of BDTI and Vice President of Business Development at the Edge AI and Vision Alliance, for the “Latest Trends in AI Semiconductors” interview at the May 2024 Embedded Vision Summit. In this wide-ranging, insightful…

“Entering the Era of Multimodal Perception,” a Presentation from Connected Vision Advisors

Simon Morris, Serial Tech Entrepreneur and Start-Up Advisor at Connected Vision Advisors, presents the “Entering the Era of Multimodal Perception” tutorial at the May 2024 Embedded Vision Summit. Humans rely on multiple senses to quickly and accurately obtain the most important information we need. Similarly, developers have begun using multiple…

Elevate Your Video Conferencing with Visidon AI Upscale

As remote work and hybrid meetings continue to shape our professional landscape, the need for high-quality, engaging video conferencing has never been more critical. Traditional digital zoom solutions often fall short, resulting in blurry, pixelated images that can detract from the meeting experience. Enter Visidon AI Upscale, an AI-powered technology designed to work with embedded

“Federated ML Architecture for Computer Vision in the IoT Edge,” a Presentation from Cisco

Akram Sheriff, Senior Manager for Software Engineering at Cisco, presents the “Federated ML Architecture for Computer Vision in the IoT Edge” tutorial at the May 2024 Embedded Vision Summit. In this talk, Sheriff begins by introducing federated learning (FL) for computer vision in IoT edge applications. Federated learning is an…

b<>com *Sublima* Implemented on Synaptics VS680 SoC for First AI-enabled Frame-accurate SDR-to-HDR Video Conversion for Set-top Boxes

Algorithm fully leverages VS680’s optimized NPU and market-leading TOPS for the AI efficiency, performance, and security required to enhance protected video in real time on edge devices. Amsterdam, The Netherlands, September 12, 2024 – b<>com and Synaptics® Incorporated (Nasdaq: SYNA) announced today that b<>com has implemented its market-proven *Sublima*™ algorithm on Synaptics’ VS680 multimedia system

What on Earth is a Copilot+ PC?

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Everything you need to know about this new class of Windows PCs powered by Snapdragon X Series processors Copilot+ PCs are an entirely new class of Windows PCs powered today exclusively by Snapdragon X Elite and Snapdragon
