How to Choose an Embedded Camera for AMRs

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems.

Choosing a camera for autonomous mobile robots is easier said than done. It requires you to consider a multitude of factors. But we have made it easy for you. Whether you are developing a warehouse robot, patrol robot, or agricultural robot, this article outlines everything you need to know about selecting a camera for robots.

Autonomous Mobile Robots (AMRs) have been revolutionizing the way various tasks are performed across multiple industries including retail, medical, industrial, smart city, agriculture, and more. They automate both mundane and intelligent tasks to reduce human labor, thereby improving productivity on factory floors and in warehouses, hospitals, office buildings, agricultural fields, and more.

And most modern-day robots are powered by embedded cameras that help them detect obstacles, identify objects, provide surround view, etc. However, choosing and integrating a camera into an AMR involves considering multiple factors. So in this article, we intend to help you understand in detail the factors to be considered while picking a camera module for your AMR.

Before we start, we need to understand that every robot is different. And at times, it is difficult to have a generalized approach to camera integration for all of them. This requires you to understand:

  • The end application/function of the robot.
  • The environment it is used in.

Given that the above parameters are specific to a particular use case or type of robot, it is always recommended to take advice from an imaging expert like e-con Systems before you start your camera integration journey.

Nevertheless, here we will look at everything you need to be aware of in general to make an informed choice of camera.

Two broad types of cameras used in AMRs

The types of cameras used in robots fall into two broad categories:

  • 3D vision – 3D depth-based imaging for mapping, localization, navigation, etc.
  • 2D vision – 2D imaging to read barcodes, capture images of objects, monitor the surroundings, etc.

We will try to understand the factors involved in camera selection along these two camera types.

Key factors to consider while choosing a 3D vision camera

When it comes to 3D vision, cameras are used for the following functions:

  • Mapping – the process of an AMR creating a map of its territory.
  • Localization – the method by which an AMR finds its location in its territory.
  • Navigation/path planning – this involves the AMR defining the most optimal path to get from point A to point B.
  • Obstacle detection and avoidance – this is the technique of detecting obstacles to avoid any possible accidents or damage.
  • Odometry – this is about finding the change in the position of a robot over a specific period.

To learn more about how 3D camera technology helps an AMR with these functions, please read the article How does an Autonomous Mobile Robot use time of flight technology?

AMRs need to use what are called depth cameras to perform the above functions. And three of the most popular technologies used for depth mapping are:

  • Stereo cameras
  • Time of flight cameras
  • Structured light cameras

A stereo camera uses the stereo disparity technique, processing the images from multiple cameras to measure the depth of the target object. A time of flight camera, on the other hand, measures depth by calculating the time taken by light to travel to and from the target object. Structured light also uses a light source, projecting a known pattern onto the target object; depth is then measured by analyzing the deformations seen in the reflected pattern.
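To make the stereo disparity idea concrete, here is a minimal sketch of the underlying depth calculation. The focal length, baseline, and disparity values used below are illustrative assumptions, not the specifications of any particular camera.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Estimate the depth (in meters) of a point seen by a stereo pair.

    Z = f * B / d, where f is the focal length in pixels, B is the baseline
    between the two cameras in meters, and d is the disparity (the horizontal
    pixel shift of the same point between the left and right images).
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a valid depth")
    return focal_length_px * baseline_m / disparity_px


# Illustrative values: 700 px focal length, 6 cm baseline, 20 px disparity
print(depth_from_disparity(700.0, 0.06, 20.0))  # -> 2.1 meters
```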

Now, let us look at the key factors to consider while choosing a depth camera for your autonomous mobile robot. They are:

  • Processing capability of the host platform
  • Lighting conditions and sensitivity
  • ROS support
  • Generic camera parameters such as level of detail required, area to be captured, and optics.

Processing capability of the host platform

You would already know that choosing the right processor is one of the basic requirements when it comes to building an embedded vision system. For 3D vision, this is even more important because a stereo camera relies on a stereo matching algorithm running on the host platform to compute the depth of objects in the scene, so the host must have enough processing power to handle it. This is less of a concern with time of flight or structured light cameras, since they deliver depth data directly.
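To give a feel for the host-side computation a stereo camera demands, here is a minimal sketch that computes a disparity map from an already rectified stereo pair using OpenCV's block-matching algorithm. The file names are placeholders, and a real pipeline would also need calibration and rectification steps.

```python
import cv2

# Load a rectified stereo pair (placeholder file names).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Block matching searches for the best correspondence for every pixel.
# This per-pixel search is what makes stereo depth heavier on the host
# than time of flight or structured light cameras that output depth directly.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# OpenCV returns fixed-point disparities scaled by 16; convert to float pixels.
disparity = disparity.astype("float32") / 16.0
```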

Depending on the processing requirement, you can choose from the various processors available in the market today – the NVIDIA Jetson series being the most popular of them given its performance and the SDKs it offers.

Also read A quick guide to selecting the best-fit processor for your robotic vision system.

Lighting conditions and sensitivity

The ability of a robot camera to capture depth information accurately will depend heavily on the ambient lighting and the lighting mechanism used in the camera. A passive stereo camera should be used only in applications where there is enough ambient light in the environment the AMR operates in. For instance, a passive stereo camera wouldn’t fit a patrol robot that has to do night-time surveillance.

When the availability of light is limited, an active stereo camera, time of flight camera, or structured light camera is recommended. For instance, Depth Vista, a 3D time of flight camera from e-con Systems, comes with a VCSEL (vertical-cavity surface-emitting laser) illuminator that enables image capture even with zero ambient light.

ROS support

Considering that most robotic systems use ROS (Robot Operating System), it is essential for the camera to support it. This helps product developers and engineers shorten development cycles and take their robots to market faster.

To learn more about what ROS is and why it is important, please check out the article What is Robotic Operating System? What are the e-con cameras compatible with ROS?
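As a minimal sketch of what ROS support looks like from the application side, the snippet below subscribes to a camera image topic using ROS 2 (rclpy) and cv_bridge. The topic name /camera/image_raw is an assumption; a ROS-compatible camera driver typically publishes sensor_msgs/Image on a topic of this kind.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class CameraListener(Node):
    """Subscribes to frames published by a ROS 2 camera driver."""

    def __init__(self):
        super().__init__("camera_listener")
        self.bridge = CvBridge()
        # "/camera/image_raw" is an assumed topic name; check your driver's docs.
        self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        self.get_logger().info(f"Received frame {frame.shape[1]}x{frame.shape[0]}")


def main():
    rclpy.init()
    rclpy.spin(CameraListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```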

Generic camera parameters

As with camera selection for any other application, the features an AMR needs depend on its functions. The camera parameters you need to consider include:

  • Level of detail required – here, you need to consider factors such as resolution, frame rate, interface, etc. For high-quality imaging, you typically have to go with a camera that offers high resolution, a high frame rate, and a long-distance interface like GMSL2 or FPD Link.
  • Area to be covered – this involves choosing the right field of view, in addition to deciding the number of cameras and the type of synchronization to be used (if it is a multi-camera system). A simple field-of-view calculation is sketched after this list.
  • Optics – when it comes to choosing a lens, you have to consider factors like aperture, depth of field, and focal length. In addition, you need to account for lens artifacts such as vignetting and barrel distortion.
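As promised above, here is a rough way to check whether a lens and sensor combination covers the area you need: the horizontal field of view of a rectilinear lens follows directly from the sensor width and the focal length. The sensor and lens figures below are illustrative assumptions.

```python
import math


def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Approximate horizontal field of view of a rectilinear (non-fisheye) lens.

    HFOV = 2 * atan(sensor_width / (2 * focal_length))
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))


# Illustrative example: a 1/2.3" sensor (~6.17 mm active width) behind a 3.6 mm lens
print(round(horizontal_fov_deg(6.17, 3.6), 1))  # ~81.2 degrees
```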

Key factors to consider while choosing a camera for 2D vision

Before we look at the factors, we need to first understand what 2D vision is used for in AMRs. Some of the functions of 2D vision in AMRs include:

  • Barcode reading
  • Detecting obstacles
  • Capturing images for the AI algorithm to identify objects
  • Creating a 360-degree view
  • Capturing images and videos for data collection and preview

It is interesting to note that 2D vision is also used for mapping, localization, and navigation when these functions are marker based. 3D vision is used when they have to be performed without the help of markers.
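For the marker-based case, a plain 2D camera combined with fiducial markers placed in the environment is often enough for localization. Below is a minimal detection sketch using OpenCV's ArUco module, assuming OpenCV 4.7 or later; the image file name is a placeholder.

```python
import cv2

# Assumes OpenCV >= 4.7, where the ArUco module is part of the main package.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Placeholder image; in a robot this would be a frame from the 2D camera.
frame = cv2.imread("floor_marker.png", cv2.IMREAD_GRAYSCALE)
corners, ids, _rejected = detector.detectMarkers(frame)

if ids is not None:
    # Each detected marker ID can be looked up against a map of known marker poses.
    print("Detected marker IDs:", ids.ravel().tolist())
```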

With these in mind, let us look at the various considerations while picking and embedding a 2D camera into AMRs.

Lighting conditions

In 3D vision, we discussed the importance of having a camera that can capture images in low lighting conditions. This applies to 2D vision as well. In addition, for robots used in outdoor environments, we need to account for bright lighting conditions. In such scenarios, HDR cameras are recommended. The camera should also deliver images with low noise even under a limited light supply.

To understand this better, consider an agricultural robot, say an automated weeder. It has to operate in open fields where the likelihood of having bright sunlight is high. To avoid any washout in the output image owing to this, an HDR camera will have to be used. Similarly, imagine a cleaning robot that operates in areas of a warehouse that have a limited light supply. In such a scenario, the robot will have to use low light cameras to meet the desired image output levels.

Type of target

Depending on whether the object is stationary or moving, the camera’s shutter type has to be chosen accordingly. If the object is static or moving at a very low speed, a rolling shutter camera would suffice. However, if the object is moving fast, there are two possible distortions that could appear in the output image:

  1. Motion blur
  2. Rolling shutter artifact

In the case of the former, a rolling shutter camera with a high frame rate (and hence a short exposure time) would do the job. In the latter case, a global shutter camera is required.
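To see why exposure time matters for a moving target, you can estimate how many pixels an object smears across during a single exposure. All of the numbers below are illustrative assumptions, not measurements from any specific robot or camera.

```python
def motion_blur_px(speed_m_s, exposure_s, scene_width_m, image_width_px):
    """Approximate motion blur in pixels for an object crossing the frame.

    The object travels speed * exposure meters during the exposure; dividing by
    the real-world width covered by a single pixel gives the blur in pixels.
    """
    meters_per_pixel = scene_width_m / image_width_px
    return (speed_m_s * exposure_s) / meters_per_pixel


# Illustrative: an AMR moving at 1.5 m/s, a 10 ms exposure, and 1.2 m of scene
# mapped onto a 1920-pixel-wide image -> about 24 pixels of blur.
print(round(motion_blur_px(1.5, 0.010, 1.2, 1920)))
```

Halving the exposure time (which a higher frame rate typically forces) halves the blur, which is why a high frame rate rolling shutter camera can be enough when motion blur is the only concern.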

An example of an AMR application where a high frame rate rolling shutter or a global shutter camera will have to be used is a warehouse robot that has to read barcodes on the move.

To understand more about these two concepts, please visit the article Differences between rolling shutter artifacts and motion blur.  

Level of detail

Depending on the level of detail needed in the output image, you need to choose the right:

  • Resolution – for high quality imaging, pick a high-resolution camera (such as a 4K camera)
  • Frame rate/exposure time – if the object is static and you wish to collect as much light as possible, a lower frame rate or a longer exposure time would be recommended.

Interface

If your AMR is capturing high-resolution images at high frame rates, you need to use an interface that can handle this bandwidth. In applications where the throughput is high, a MIPI/GMSL2/FPD Link-III interface is preferred. A choice between MIPI, GMSL2, and FPD Link-III can be made based on the distance to which you want to transmit image or video data.
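A quick way to judge whether a given interface can keep up is to estimate the raw throughput of the stream. The resolution, frame rate, and bit depth below are example assumptions; the actual bandwidth also depends on the pixel format and any compression used.

```python
def raw_bandwidth_gbps(width_px, height_px, fps, bits_per_pixel):
    """Uncompressed video bandwidth in gigabits per second."""
    return width_px * height_px * fps * bits_per_pixel / 1e9


# Example: 4K (3840 x 2160) at 30 fps with 10-bit RAW output -> ~2.49 Gbps
print(round(raw_bandwidth_gbps(3840, 2160, 30, 10), 2))
```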

Area to be captured and optics

To make sure your robotic vision system is able to cover the desired field of view (FOV), you need to have a lens and a sensor that can accommodate that. This would also determine whether you need to go with a single or multi-camera system. You will also have to decide the method of synchronization (whether you should go with hardware-level or software-level synchronization).
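As a back-of-the-envelope illustration of how the field of view drives the camera count, the sketch below estimates how many cameras a full 360-degree surround view needs, allowing some overlap between neighboring cameras for stitching. The overlap figure is an assumed value.

```python
import math


def cameras_for_surround_view(hfov_deg, overlap_deg=10):
    """Minimum number of cameras for 360-degree coverage.

    Each camera effectively contributes (hfov - overlap) degrees once the
    overlap needed for stitching is accounted for.
    """
    effective = hfov_deg - overlap_deg
    if effective <= 0:
        raise ValueError("Overlap must be smaller than the field of view")
    return math.ceil(360 / effective)


print(cameras_for_surround_view(120))  # 4 cameras with 120-degree lenses
print(cameras_for_surround_view(70))   # 6 cameras with 70-degree lenses
```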

To deep dive into how to implement multi-camera integration the right way, please have a look at the article What are the crucial factors to consider while integrating multi-camera solutions?

In addition to the FOV, there are a few other parameters you need to look at while picking a lens for your robot camera. They include:

  • Focus type
  • Aperture
  • Focal length
  • Depth of field

As in the case of 3D vision, you also need to make sure that your camera can correct for or minimize lens artifacts such as vignetting and barrel distortion.
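For the aperture, focal length, and depth of field trade-offs listed above, a useful back-of-the-envelope check is the hyperfocal distance: the focus distance beyond which everything out to infinity stays acceptably sharp. The circle-of-confusion value below is an assumption that depends on the sensor and pixel size.

```python
def hyperfocal_distance_m(focal_length_mm, f_number, coc_mm=0.005):
    """Hyperfocal distance in meters for a given lens setting.

    H = f^2 / (N * c) + f, where f is the focal length, N the f-number, and
    c the circle of confusion (assumed 0.005 mm here; sensor dependent).
    """
    h_mm = (focal_length_mm ** 2) / (f_number * coc_mm) + focal_length_mm
    return h_mm / 1000.0


# Example: a 4 mm lens at f/2.8 -> hyperfocal distance of roughly 1.15 m,
# so focusing there keeps everything from about 0.6 m to infinity acceptably sharp.
print(round(hyperfocal_distance_m(4.0, 2.8), 2))
```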

ROS support and compatibility

This is critical in 2D vision as well since the cameras have to be compatible with the Robot Operating System.

e-con Systems’ contribution to robotic vision

e-con Systems has been in the embedded vision space for 18+ years now. We have developed multiple camera solutions for autonomous mobile robots including GMSL2 cameras, 3D depth cameras, synchronized multi-camera systems, and FPD Link-III cameras – just to name a few. With extensive experience in working with customers across multiple industries, e-con Systems has integrated cameras into warehouse robots, agricultural robots, telepresence robots, delivery robots, patrol robots, etc.

To learn more about our camera solutions for AMRs, please visit the AMR markets page. You could also have a look at the Camera Selector to browse through the complete portfolio of our cameras. In case you are looking for help in integrating camera modules into your AMRs, please write to us at [email protected].

Gomathi Sankar
Head of the Industrial Camera Business Unit, e-con Systems
