“Using a Collaborative Network of Distributed Cameras for Object Tracking,” a Presentation from Invision AI

Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, presents the “Using a Collaborative Network of Distributed Cameras for Object Tracking” tutorial at the May 2023 Embedded Vision Summit.

Using multiple fixed cameras to track objects requires a careful solution design. To scale the number of cameras, the solution must avoid sending all images across the network. Where camera views overlap only slightly or not at all, input from multiple cameras must be combined to extend the coverage of the tracking area. Where an object is visible to multiple cameras, this should be exploited to increase tracking accuracy.
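
To illustrate the bandwidth point, here is a minimal sketch (not Invision AI's implementation; all names are hypothetical) of a per-camera node that publishes compact detection records rather than raw frames, so network traffic grows with the number of detected objects instead of with image size:

```python
# Hypothetical sketch: each camera runs detection locally and sends only
# lightweight detection metadata over the network, never the raw frame.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    camera_id: str      # which camera produced the detection
    timestamp: float    # capture time in seconds (shared clock assumed)
    u: float            # image-plane x of the object's reference point (pixels)
    v: float            # image-plane y of the object's reference point (pixels)
    label: str          # e.g. "vehicle" or "pedestrian"
    score: float        # detector confidence in [0, 1]

def serialize(detections: list[Detection]) -> bytes:
    """Encode a batch of detections as a small JSON payload (a few hundred
    bytes) instead of shipping a multi-megabyte frame across the network."""
    return json.dumps([asdict(d) for d in detections]).encode("utf-8")

if __name__ == "__main__":
    batch = [Detection("cam_03", time.time(), 412.5, 288.0, "vehicle", 0.93)]
    payload = serialize(batch)
    print(len(payload), "bytes instead of a full frame")
```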

Multiple cameras can collaborate more effectively if they share a common coordinate system; therefore, environment mapping and accurate calibration are necessary. Moreover, the tracking algorithm must scale properly with the number of tracked objects, which can be achieved with a distributed approach. In this talk, Örn covers practical ways of addressing these issues, presents his company’s multiple-camera tracking solution used for vehicle and pedestrian tracking, and shares some of its results.
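
As a rough illustration of the shared-coordinate-system idea (a sketch under simplifying assumptions, not the method presented in the talk), each camera's image-plane detections can be mapped to a common ground plane with a calibration-derived homography, after which observations of the same object from different cameras land near each other and can be fused:

```python
# Illustrative sketch: project per-camera detections into a shared ground-plane
# frame via a 3x3 homography from calibration, then greedily associate and
# average nearby observations from two cameras. Gate and matrices are made up.
import numpy as np

def to_ground_plane(H: np.ndarray, points_uv: np.ndarray) -> np.ndarray:
    """Apply an image-to-ground homography H to Nx2 pixel coordinates."""
    uv1 = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous
    xyw = (H @ uv1.T).T
    return xyw[:, :2] / xyw[:, 2:3]                             # dehomogenize

def fuse(obs_a: np.ndarray, obs_b: np.ndarray, gate_m: float = 1.5) -> list:
    """Pair ground-plane observations from two cameras that fall within
    `gate_m` metres of each other and average them; keep the rest as-is."""
    fused, used_b = [], set()
    for pa in obs_a:
        d = np.linalg.norm(obs_b - pa, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate_m and j not in used_b:
            fused.append((pa + obs_b[j]) / 2.0)
            used_b.add(j)
        else:
            fused.append(pa)
    fused.extend(obs_b[j] for j in range(len(obs_b)) if j not in used_b)
    return fused

# Example: two cameras observe the same vehicle; identity homographies are
# used here purely for brevity, standing in for real calibration results.
g1 = to_ground_plane(np.eye(3), np.array([[10.0, 5.0]]))
g2 = to_ground_plane(np.eye(3), np.array([[10.4, 5.2]]))
print(fuse(g1, g2))  # one fused ground-plane position
```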

See here for a PDF of the slides.
