
Computer Vision and Power Savings: An Effective Combination


This blog post was originally published at Vision Systems Design's website. It is reprinted here with the permission of PennWell.

Conventional thinking might suggest that adding more hardware to a system design, or boosting the performance of existing hardware, would cause the system to consume more power. While this cause-and-effect relationship holds in many cases, the opposite can also be true: the right added hardware can reduce a system's overall energy consumption.

Here's one example. An autonomous robotic vacuum cleaner needs to run as long as possible between battery recharges in order to maximize cleaning capability. There's a limit, though, to how much battery capacity you can pack into such a device; more batteries mean more weight and a larger form factor. And because it's a vacuum cleaner, effectiveness in picking up dirt is also a critical design parameter; here again, more powerful motors tend to be heavier, bigger, and faster to drain the battery. Add in the additional power draw of the motors needed to propel the device across the floor, and you've got quite an engineering challenge on your hands.

Enter vision processing, which several vacuum cleaner manufacturers have recently added to their latest products. By using computer vision as the primary localization and navigation technology, versus the more rudimentary pattern-based cleaning algorithms historically employed, these new models are able to cover a floor efficiently, minimizing the number of times they pass over any particular stretch, as well as avoiding obstacles en route. This enables them to focus the available battery charge on the primary task at hand, vacuuming, while also optimizing the size and weight of the system's batteries and motors for maximum run time.
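To make the energy argument concrete, here is a minimal Python sketch (a toy model, not any vendor's actual algorithm) that treats unguided, bump-and-turn cleaning as straight-line motion with random turns and compares the number of cell-moves it needs against a single-pass, lawn-mower-style path that a vision-localized robot could follow. The grid size, turn probability, and coverage target are all illustrative assumptions.

import random

def random_coverage_steps(width, height, coverage=0.95, turn_prob=0.05, seed=0):
    """Toy model of an unguided cleaner: drive straight, turn randomly at walls
    (and occasionally mid-floor). Returns the cell-moves needed to visit the
    requested fraction of the floor -- a rough proxy for locomotion energy."""
    rng = random.Random(seed)
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = width // 2, height // 2
    dx, dy = rng.choice(directions)
    visited = {(x, y)}
    steps = 0
    target = coverage * width * height
    while len(visited) < target:
        if rng.random() < turn_prob:
            dx, dy = rng.choice(directions)   # occasional mid-floor redirection
        nx, ny = x + dx, y + dy
        if not (0 <= nx < width and 0 <= ny < height):
            dx, dy = rng.choice(directions)   # bumped a wall: pick a new heading
            continue
        x, y = nx, ny
        visited.add((x, y))
        steps += 1
    return steps

def planned_coverage_steps(width, height):
    """With vision-based localization the robot can follow a boustrophedon
    (lawn-mower) path, visiting each cell roughly once."""
    return width * height

if __name__ == "__main__":
    w, h = 30, 20
    unguided = random_coverage_steps(w, h)
    planned = planned_coverage_steps(w, h)
    print(f"Unguided: ~{unguided} moves, planned: ~{planned} moves "
          f"(~{unguided / planned:.1f}x more motion without a map)")

The exact ratio depends heavily on the assumed floor size and turn behavior, but the qualitative point stands: every redundant pass is battery energy spent without cleaning anything new.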

Or consider drones. Maximizing flight time (like maximizing vacuuming time) is a key design objective. But you also want the drone to be decent-sized, both to enable it to withstand wind gusts and to be capable of toting relevant payloads. The bigger the drone, though, and the bigger the drone's motors, the heavier its weight and consequently the shorter its potential flight time for a given-sized battery array. Once again, computer vision can assist in solving your design problem. Modern drones, such as the new Phantom 4 from DJI, leverage their on-board cameras not only to capture in-flight video footage for later viewing but also for efficient navigation. A drone able to autonomously avoid obstacles means less wasted energy en route, not to mention preventing potential collisions with other drones and objects in the flight path.

Note in both of these cases that to yield net energy savings, the added vision subsystem must save more overall system power than it consumes. It also must not push up the bill-of-materials cost inordinately. Fortunately, practical computer vision processing is becoming increasingly feasible at both low power and low price points. Larry Matthies, Senior Scientist at NASA's Jet Propulsion Laboratory, will explore these and other topics in his upcoming Embedded Vision Summit keynote "Using Vision to Enable Autonomous Land, Sea and Air Vehicles," as will Embedded Vision Alliance founder Jeff Bier in his plenary session, "Computer Vision 2.0: Where We Are and Where We're Going."
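As a quick back-of-the-envelope check of that break-even condition, the sketch below compares the energy a single cleaning run might consume with and without a vision subsystem. Every figure in it (motor power, vision-processor draw, cleaning times) is an illustrative assumption, not a measured value.

# Quick energy-budget sanity check for adding a vision subsystem.
# All numbers below are illustrative assumptions, not measurements.

p_motion_w   = 30.0   # power drawn by drive motors and suction while cleaning
p_vision_w   = 1.5    # assumed extra draw of the camera + vision processor
t_unguided_h = 2.0    # assumed time to cover the floor with pattern-based cleaning
t_planned_h  = 1.2    # assumed time with map-based, near-single-pass coverage

e_without = p_motion_w * t_unguided_h                  # Wh per run, no vision
e_with    = (p_motion_w + p_vision_w) * t_planned_h    # Wh per run, with vision

print(f"Energy per cleaning run without vision: {e_without:.1f} Wh")
print(f"Energy per cleaning run with vision:    {e_with:.1f} Wh")
print("Net savings" if e_with < e_without else "Vision costs more than it saves")

With these placeholder numbers the vision-equipped run comes out well ahead, but the inequality only holds if the vision subsystem's draw stays small relative to the locomotion energy it eliminates, which is exactly why low-power vision processing matters.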

In addition, multiple technical presentations at the Embedded Vision Summit will explore the subject of minimizing power consumption, including the software optimization talks "Dataflow: Where Power Budgets Are Won and Lost" from Cormac Brick of Movidius, and "Making Computer Vision Software Run Fast on Your Embedded Platform" from Alexey Rybakov of LUXOFT. And heterogeneous computing, which enables energy efficient implementation of demanding algorithms, will also be discussed in a number of Summit technical sessions. For example, both Bill Jenkins in "Accelerating Deep Learning Using Altera FPGAs" and Auviz Systems' Nagesh Gupta in "Semantic Segmentation for Scene Understanding: Algorithms and Implementations" will cover the acceleration of deep learning algorithms for vision processing in programmable logic devices.

The Embedded Vision Summit, an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, takes place in Santa Clara, California May 2-4, 2016. Register now, as space is limited and seats are filling up! I look forward to seeing you there.

Regards,

Brian Dipert
Editor-in-Chief, Embedded Vision Alliance
[email protected]

