
Industrial Applications for Embedded Vision


Computer vision processing-based products have established themselves in a number of industrial applications, with the most prominent one being factory automation (where the application is commonly referred to as machine vision). Primary factory automation sectors include:

  • Automotive — motor vehicle and related component manufacturing
  • Chemical & Pharmaceutical — chemical and pharmaceutical manufacturing plants and related industries
  • Packaging — packaging machinery, packaging manufacturers and dedicated packaging companies not aligned to any one industry
  • Robotics — guidance of robots and robotic machines
  • Semiconductors & Electronics — semiconductor machinery makers, semiconductor device manufacturers, electronic equipment manufacturing and assembly facilities

What are the primary embedded vision products used in factory automation applications?

The primary embedded vision products used in factory automation applications are:

  • Smart Sensors — A single unit that is designed to perform a single machine vision task. Smart sensors require little or no configuring and have limited on-board processing. Frequently a lens and lighting are also incorporated into the unit.
  • Smart Cameras — This is a single unit that incorporates a machine vision camera, a processor and I/O in a compact enclosure. Smart cameras are configurable and so can be used for a number of different applications. Most have the facility to change lenses and are also available with built-in LED lighting.
  • Compact Vision System — This is a complete machine vision system, not based on a PC, consisting of one or more cameras and a processor module. Some products have an LCD screen incorporated as part of the processor module. This obviates the need to connect the devices to a monitor for set-up. The principal feature that distinguishes compact vision systems (CVS) from smart cameras is their ability to take information from a number of cameras. This can be more cost-effective where an application requires multiple images.
  • Machine Vision Cameras (MV Cameras) — These are devices that convert an optical image into an analogue or digital signal. The signal may be stored in random-access memory within the device, but it is not processed there.
  • Frame Grabbers — This is a device (usually a PCB card) for interfacing the video output from a camera with a PC or other control device. Frame grabbers are sometimes called video-capture boards or cards. They vary from being a simple interface to a more complex device that can handle many functions including triggering, exposure rates, shutter speeds and complex signal processing.
  • Machine Vision Lighting — This refers to any device that is used to light a scene being viewed by a machine-vision camera or sensor. This report considers only those devices that are designed and marketed for use in machine-vision applications in an industrial automation environment.
  • Machine Vision Lenses — This category includes all lenses used in a machine-vision application, whether sold with a camera or as a spare or additional part.
  • Machine Vision Software — This category includes all software that is sold as a product in its own right, and is designed specifically for machine-vision applications. It is split into:
    • Library Software — allows users to develop their own MV system architecture. There are many different types, some offering great flexibility. They are often called SDKs (Software Development Kits).
    • System Software — which is designed for a particular application. Some are very comprehensive and require little or no set-up.
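
Library software in this sense exposes primitive image operations from which an integrator composes an inspection pipeline. As a toy sketch of the kind of check such a pipeline performs (pure NumPy here, not any vendor's SDK; all names are illustrative), consider a simple presence/absence test:

```python
import numpy as np

def presence_check(gray, threshold=128, min_pixels=50):
    """Toy inspection step: report whether a bright part is present in a
    grayscale image by counting pixels above an intensity threshold."""
    mask = gray > threshold
    return int(mask.sum()) >= min_pixels

# Synthetic 64x64 "image": dark background with a bright 10x10 part.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:30, 20:30] = 200

print(presence_check(img))                               # part present -> True
print(presence_check(np.zeros((64, 64), np.uint8)))      # empty scene  -> False
```

A real machine-vision SDK wraps calibrated acquisition, region-of-interest handling and tolerancing around the same basic idea.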

“Machine Learning at the Edge in Smart Factories Using TI Sitara Processors,” a Presentation from Texas Instruments

Manisha Agrawal, Software Applications Engineer at Texas Instruments, presents the “Machine Learning at the Edge in Smart Factories Using TI Sitara Processors” tutorial at the May 2019 Embedded Vision Summit. Whether it’s called “Industry 4.0,” “industrial internet of things” (IIOT) or “smart factories,” a fundamental shift is underway in manufacturing:… “Machine Learning at the Edge

Read More »

“Sensory Fusion for Scalable Indoor Navigation,” a Presentation from Brain Corp

Oleg Sinyavskiy, Director of Research and Development at Brain Corp, presents the “Sensory Fusion for Scalable Indoor Navigation” tutorial at the May 2019 Embedded Vision Summit. Indoor autonomous navigation requires using a variety of sensors in different modalities. Merging together RGB, depth, lidar and odometry data streams to achieve autonomous… “Sensory Fusion for Scalable Indoor

Read More »

“Deep Learning for Manufacturing Inspection Applications,” a Presentation from FLIR Systems

Stephen Se, Research Manager at FLIR Systems, presents the “Deep Learning for Manufacturing Inspection Applications” tutorial at the May 2019 Embedded Vision Summit. Recently, deep learning has revolutionized artificial intelligence and has been shown to provide the best solutions to many problems in computer vision, image classification, speech recognition and… “Deep Learning for Manufacturing Inspection

Read More »

May 2019 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 20-23, 2019 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2019 Embedded Vision Summit

Read More »

Multi-sensor Fusion for Robust Device Autonomy

While visible light image sensors may be the baseline "one sensor to rule them all" included in all autonomous system designs, they're not necessarily a sole panacea. They can be combined with other sensor technologies: "situational awareness" sensors (standard and high-resolution radar, LiDAR, infrared and UV, ultrasound and sonar, etc.) and "positional awareness" sensors such as
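
As a minimal sketch of why fusing redundant sensors pays off (an illustration assumed for this discussion, not taken from the article), two independent range measurements of the same target can be merged by inverse-variance weighting, yielding an estimate with lower variance than either sensor alone:

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two independent measurements of
    the same quantity (e.g. range from lidar and range from ultrasound)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)   # weighted mean
    var = 1.0 / (w1 + w2)                 # fused variance < min(var1, var2)
    return z, var

# Lidar says 2.10 m (var 0.04); ultrasound says 1.90 m (var 0.16).
z, var = fuse(2.10, 0.04, 1.90, 0.16)
print(z, var)  # estimate leans toward the lower-variance sensor
```

The fused estimate lands at 2.06 m with variance 0.032, better than either input; a Kalman filter applies the same principle recursively over time.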

Read More »

“Computer Vision for Industrial Inspection: From PCs to Embedded,” a Presentation from NET GmbH

Thomas Däubler, CTO of NET New Electronic Technology GmbH, presents the “Computer Vision for Industrial Inspection: The Evolution from PCs to Embedded Solutions” tutorial at the May 2018 Embedded Vision Summit. In this presentation, Däubler introduces current industrial inspection computer vision applications and solutions, and explores how vision solutions are evolving for this market. In

Read More »

“A New Generation of Camera Modules: A Novel Approach and Its Benefits for Embedded Systems,” a Presentation from Allied Vision Technologies

Paul Maria Zalewski, Product Line Manager at Allied Vision Technologies, presents the “A New Generation of Camera Modules: A Novel Approach and Its Benefits for Embedded Systems” tutorial at the May 2018 Embedded Vision Summit. Embedded vision systems have typically relied on low-cost image sensor modules with a MIPI CSI-2 interface. Now, machine vision camera

Read More »

“High-end Multi-camera Technology, Applications and Examples,” a Presentation from XIMEA

Max Larin, CEO of XIMEA, presents the “High-end Multi-camera Technology, Applications and Examples” tutorial at the May 2018 Embedded Vision Summit. For OEMs and system integrators, many of today’s applications in VR/AR/MR, ADAS, measurement and automation require multiple coordinated high performance cameras. Current generic components are not optimized to achieve the desired traits in terms

Read More »

May 2018 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 21-24, 2018 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2018 Embedded Vision Summit

Read More »

Beyond-visible Light Applications in Computer Vision

Computer vision systems aren’t necessarily restricted to solely analyzing the portion of the electromagnetic spectrum that is visually perceivable by humans. Expanding the analysis range to encompass the infrared and/or ultraviolet spectrum, either broadly or selectively and either solely or in conjunction with visible spectrum analysis, can be of great benefit in a range of

Read More »

“PCI Express – A High-bandwidth Interface for Multi-camera Embedded Systems,” a Presentation from XIMEA

Max Larin, CEO of XIMEA, presents the "PCI Express – A High-bandwidth Interface for Multi-camera Embedded Systems" tutorial at the May 2017 Embedded Vision Summit. In this presentation, Larin provides an overview of existing camera interfaces for embedded systems and explores their strengths and weaknesses. He also examines the differences between integration of a sensor

Read More »

Visual Intelligence Opportunities in Industry 4.0

In order for industrial automation systems to meaningfully interact with the objects they're identifying, inspecting and assembling, they must be able to see and understand their surroundings. Cost-effective and capable vision processors, fed by depth-discerning image sensors and running robust software algorithms, continue to transform longstanding industrial automation aspirations into reality. And, with the emergence

Read More »

“Embedded Vision Made Smart: Introduction to the HALCON Embedded Machine Vision Library,” a Presentation from MVTec

Olaf Munkelt, Co-founder and Managing Director at MVTec Software GmbH, presents the "Embedded Vision Made Smart: Introduction to the HALCON Embedded Machine Vision Library" tutorial at the May 2017 Embedded Vision Summit. In this presentation, Munkelt demonstrates how easy it is to develop an embedded vision (identification) application based on the HALCON Embedded standard software

Read More »

“Developing Real-time Video Applications with CoaXPress,” A Presentation from Euresys

Jean-Michel Wintgens, Vice President of Engineering at Euresys, presents the "Developing Real-time Video Applications with CoaXPress" tutorial at the May 2017 Embedded Vision Summit. CoaXPress is a modern, high-performance video transport interface. Using a standard coaxial cable, it provides a point-to-point connection that is reliable, scalable and versatile. Wintgens shows, using real application cases and

Read More »

“A Multi-purpose Vision Processor for Embedded Systems,” a Presentation from Allied Vision

Michael Melle, Sales Development Manager at Allied Vision, and Felix Nikolaus, Firmware Designer at Allied Vision, present the "A Multi-purpose Vision Processor for Embedded Systems" tutorial at the May 2017 Embedded Vision Summit. This presentation gives an overview of an innovative vision processor that delivers the superior image quality of industrial cameras while enabling the

Read More »

May 2017 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 1-3, 2017 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2017 Embedded Vision Summit

Read More »

May 2016 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 2-4, 2016 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2016 Embedded Vision Summit

Read More »

Deep Learning Use Cases for Computer Vision (Download)

Six Deep Learning-Enabled Vision Applications in Digital Media, Healthcare, Agriculture, Retail, Manufacturing, and Other Industries. The enterprise applications for deep learning have only scratched the surface of their potential applicability and use cases. Because it is data agnostic, deep learning is poised to be used in almost every enterprise vertical… Deep Learning Use Cases for

Read More »

“Enabling the Factory of the Future with Embedded Vision,” a Presentation from National Instruments

Andy Chang, Senior Manager of Academic Research at National Instruments, presents the "Enabling the Factory of the Future with Embedded Vision" tutorial at the May 2015 Embedded Vision Summit. Manufacturing has changed dramatically over the past few decades and is now changing even faster. Embedded vision is a key enabler for improved efficiency, quality, flexibility,

Read More »

High-Performance Machine Vision Systems Using Xilinx 7 Series Technology

This is a reprint of a Xilinx white paper also found here (PDF). Xilinx and its Alliance partner Sensor to Image have created hardware, software, IP, and whole turn-key system solutions for the growing high-performance Machine Vision market. By Mark Timmons Xilinx, Inc. and Werner Feith Sensor to Image, GmbH Abstract The Machine Vision market

Read More »

Industrial Automation and Embedded Vision: A Powerful Combination

A version of this article was originally published at InTech Magazine. It is reprinted here with the permission of the International Society of Automation. In order for manufacturing robots and other industrial automation systems to meaningfully interact with the objects they're assembling, as well as to deftly and safely move about in their environments, they

Read More »

October 2013 Embedded Vision Summit Technical Presentation: “Better Image Understanding Through Better Sensor Understanding,” Michael Tusch, Apical

Michael Tusch, Founder and CEO of Apical Imaging, presents the "Better Image Understanding Through Better Sensor Understanding" tutorial within the "Front-End Image Processing for Vision Applications" technical session at the October 2013 Embedded Vision Summit East. One of the main barriers to widespread use of embedded vision is its reliability. For example, systems which detect

Read More »

October 2013 Embedded Vision Summit Technical Presentation: “Embedded Lucas-Kanade Tracking: How it Works, How to Implement It, and How to Use It,” Goksel Dedeoglu, Texas Instruments

Goksel Dedeoglu, Embedded Vision R&D Manager at Texas Instruments, presents the "Embedded Lucas-Kanade Tracking: How it Works, How to Implement It, and How to Use It" tutorial within the "Algorithms and Implementations" technical session at the October 2013 Embedded Vision Summit East. This tutorial is intended for technical audiences interested in learning about the Lucas-Kanade
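
As a generic illustration of the Lucas-Kanade idea the talk covers (a pure-NumPy sketch, not TI's optimized implementation), a single LK iteration solves a small least-squares problem built from image gradients inside a tracking window:

```python
import numpy as np

def lucas_kanade_step(I, J, x, y, half=4):
    """Single Lucas-Kanade iteration: estimate the (dx, dy) displacement of
    the window centred at (x, y) between grayscale frames I and J."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    Ix = np.gradient(I, axis=1)[win].ravel()   # horizontal gradient
    Iy = np.gradient(I, axis=0)[win].ravel()   # vertical gradient
    It = (J - I)[win].ravel()                  # temporal difference
    A = np.stack([Ix, Iy], axis=1)
    # Least-squares solution of the brightness-constancy equations A d = -It.
    d, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return d

# Synthetic check: a smooth blob shifted one pixel to the right.
yy, xx = np.mgrid[0:32, 0:32].astype(float)
I = np.exp(-((xx - 15) ** 2 + (yy - 15) ** 2) / 20.0)
J = np.exp(-((xx - 16) ** 2 + (yy - 15) ** 2) / 20.0)
dx, dy = lucas_kanade_step(I, J, 15, 15)   # dx close to 1, dy close to 0
```

Production trackers repeat this step to convergence and run it over an image pyramid to handle large motions, which is where the embedded-implementation challenges discussed in the talk arise.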

Read More »

October 2013 Embedded Vision Summit Technical Presentation: “Using FPGAs to Accelerate 3D Vision Processing: A System Developer’s View,” Ken Lee, VanGogh Imaging

Ken Lee, CEO of VanGogh Imaging, presents the "Using FPGAs to Accelerate 3D Vision Processing: A System Developer's View" tutorial within the "Implementing Vision Systems" technical session at the October 2013 Embedded Vision Summit East. Embedded vision system designers must consider many factors in choosing a processor. This is especially true for 3D vision systems,

Read More »

October 2013 Embedded Vision Summit Technical Presentation: “Feature Detection: How It Works, When to Use It, and a Sample Implementation,” Marco Jacobs, videantis

Marco Jacobs, Technical Marketing Director at videantis, presents the "Feature Detection: How It Works, When to Use It, and a Sample Implementation" tutorial within the "Object and Feature Detection" technical session at the October 2013 Embedded Vision Summit East. Feature detection and tracking are key components of many computer vision applications. In this talk, Jacobs
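
As a generic example of feature detection (the classic Harris corner detector, sketched here in NumPy; not necessarily the specific detector discussed in the talk), corners are points where the local structure tensor has two large eigenvalues:

```python
import numpy as np

def box_sum3(a):
    """Sum over each pixel's 3x3 neighbourhood (zero padding at borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(gray, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients summed over a 3x3 window."""
    Iy, Ix = np.gradient(gray.astype(float))
    Sxx = box_sum3(Ix * Ix)
    Syy = box_sum3(Iy * Iy)
    Sxy = box_sum3(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# A bright square on a dark background: corners respond more strongly
# than edge midpoints or flat regions.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

Thresholding R and keeping local maxima yields the corner list that a tracker (such as Lucas-Kanade) then follows from frame to frame.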

Read More »

“Embedding Computer Vision in Everyday Life,” a Keynote Presentation from iRobot

Mario E. Munich, Vice President of Advanced Development at iRobot, presents the "Embedding Computer Vision in Everyday Life" keynote at the October 2013 Embedded Vision Summit East. Munich speaks about adapting highly complex computer vision technologies to cost-effective consumer robotics applications. Munich currently manages iRobot's research and advanced development efforts. He was formerly the CTO

Read More »

“High Speed Vision and Its Applications,” a Presentation from Professor Masatoshi Ishikawa

Professor Masatoshi Ishikawa of the University of Tokyo delivers the keynote presentation, "High Speed Vision and Its Applications — Sensor Fusion, Dynamic Image Control, Vision Architecture, and Meta-Perception," at the July 2013 Embedded Vision Alliance Member Meeting. High speed vision processing and various applications based on it are expected to become increasingly common due to continued improvement

Read More »

April 2013 Embedded Vision Summit Overview Presentation: “What Can You Do With Embedded Vision?,” Jeff Bier, Embedded Vision Alliance

Jeff Bier, founder of the Embedded Vision Alliance and co-founder and President of BDTI, presents the "What Can You Do With Embedded Vision?" overview presentation at the April 2013 Embedded Vision Summit. This presentation is intended for those new to embedded vision, and those seeking ideas for new embedded vision applications and technologies. Jeff Bier

Read More »

“Machine Learning,” a Presentation from UT Austin

Professor Kristen Grauman of the University of Texas at Austin presents the keynote on machine learning at the December 2012 Embedded Vision Alliance Member Summit. Grauman is a rising star in computer vision research. Among other distinctions, she was recently recognized with a Regents' Outstanding Teaching Award and, along with Devi Parikh, received the prestigious

Read More »

Embedded Vision Alliance Conversation with Daniel Wilding of National Instruments

Jeff Bier, founder of the Embedded Vision Alliance, interviews Daniel Wilding, National Instruments Digital Hardware Engineer. Jeff and Daniel discuss National Instruments' history, current status and future plans in the embedded vision application space, the applicability and advantages of FPGAs as embedded vision processors, and National Instruments' development tools for simplifying and otherwise optimizing FPGA-based

Read More »

September 2012 Embedded Vision Summit Presentation: “Introduction to Embedded Vision,” Jeff Bier, Embedded Vision Alliance

Jeff Bier, Founder of the Embedded Vision Alliance and co-founder and president of BDTI, presents the day-opening "Introduction to Embedded Vision" tutorial at the September 2012 Embedded Vision Summit. Topics discussed by Bier in his presentation include a technology overview, application examples, hardware, software and development tool trends, and an overview of the Embedded Vision

Read More »

“Get Smart” With TI’s Embedded Analytics Technology

By Gaurav Agarwal, Frank Brill, Bruce Flinchbaugh, Branislav Kisacanin, Mukesh Kumar, and Jacek Stachurski Texas Instruments This is a reprint of a Texas Instruments-published white paper, which is also available here (2.1 MB PDF). Overview When a driver starts a car, he doesn’t think about starting an intelligent analytics system; sometimes, that’s precisely what he’s

Read More »

