Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo shows an embedded platform extracting a depth map from live video in real time, enabling devices to understand complex, dynamic 3D scenes and to react to human gestures and other movements. Even though it runs on a relatively old embedded platform, the optimized pipeline is three times faster than non-optimized OpenCV algorithms.
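The demo's depth-map extraction technique isn't described in detail, but one common approach is stereo block matching, as implemented in OpenCV's StereoBM. As a purely illustrative sketch (not Luxoft's pipeline), here is a toy Python version that computes a disparity map — disparity is inversely proportional to depth — by sum-of-absolute-differences matching between a rectified left/right image pair:

```python
def disparity_map(left, right, max_disp=4, win=1):
    """Toy stereo block matching: for each left-image pixel, find the
    horizontal shift d that best aligns a (2*win+1)^2 window with the
    right image, using sum of absolute differences (SAD).
    left/right: 2D lists of grayscale intensities (a rectified pair)."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(win, h - win):
        for x in range(win, w - win):
            best_sad, best_d = float("inf"), 0
            # Only try shifts that keep the window inside the right image.
            for d in range(min(max_disp, x - win) + 1):
                sad = sum(
                    abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                    for dy in range(-win, win + 1)
                    for dx in range(-win, win + 1)
                )
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y][x] = best_d
    return disp


# Synthetic example: a bright patch that appears 2 pixels further left in
# the right image, i.e. a nearby object with disparity 2.
left = [[0] * 10 for _ in range(8)]
right = [[0] * 10 for _ in range(8)]
for y in (3, 4):
    for x in (5, 6):
        left[y][x] = 255      # patch at columns 5-6 in the left image
        right[y][x - 2] = 255  # same patch at columns 3-4 in the right image

d = disparity_map(left, right)
print(d[3][5])  # → 2
```

Production implementations add sub-pixel refinement, texture and uniqueness checks, and heavy SIMD/vectorization; that last step is where hand-optimized embedded pipelines gain their speed advantage over generic code.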



