“Strategies and Methods for Sensor Fusion,” a Presentation from Sensor Cortek

Robert Laganiere, CEO of Sensor Cortek, presents the “Strategies and Methods for Sensor Fusion” tutorial at the May 2022 Embedded Vision Summit.

Highly autonomous machines require advanced perception capabilities. They are generally equipped with three main sensor types: cameras, lidar and radar. The intrinsic limitations of each sensor affect the performance of the perception task. One way to increase overall performance is to combine the information coming from the different sensor types. This is the objective of sensor fusion: by combining the data from multiple sensors, the system's perceptual ability improves, and it can better operate under challenging environmental conditions (e.g., poor lighting, adverse weather) by relying on whichever sensor data is least affected by the current situation.

In this talk, Laganiere reviews the main sensor fusion strategies for combining heterogeneous sensor data. In particular, he explores the three primary fusion methods that can be applied in a perception system: early fusion, late fusion and mid-level fusion (a rough illustration of the first two follows below).
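The presentation itself covers these strategies in depth; purely as a hypothetical illustration (not taken from the talk), the sketch below contrasts early fusion, where low-level data from different sensors is combined before a single model processes it, with late fusion, where each sensor runs its own detector and only the resulting detections are merged. All function names, box formats and thresholds here are assumptions made for the example.

```python
# Illustrative sketch only (not from the presentation): toy early vs. late fusion.
import numpy as np


def early_fusion(camera_feat: np.ndarray, radar_feat: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate low-level features from both sensors into one
    representation that a single downstream detector would consume."""
    return np.concatenate([camera_feat, radar_feat], axis=-1)


def iou(a, b) -> float:
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0


def late_fusion(camera_dets, radar_dets, iou_thresh: float = 0.3):
    """Late fusion: each sensor has its own detector; merge the detections,
    here by adding radar detections that no camera box already covers."""
    fused = list(camera_dets)
    for r in radar_dets:
        if all(iou(r["box"], c["box"]) < iou_thresh for c in camera_dets):
            fused.append(r)  # radar-only detection (e.g., camera hurt by poor lighting)
    return fused


# Toy usage with made-up detections.
camera_dets = [{"box": [10, 10, 50, 50], "score": 0.9, "source": "camera"}]
radar_dets = [{"box": [12, 11, 48, 52], "score": 0.7, "source": "radar"},
              {"box": [100, 100, 140, 140], "score": 0.6, "source": "radar"}]
print(late_fusion(camera_dets, radar_dets))
```

Mid-level fusion, the third strategy named in the talk, sits between these two: per-sensor networks produce intermediate feature maps that are combined before the final detection stage.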

See here for a PDF of the slides.

