
Quadrotors And Other Drones: Gesture Interfaces Send Them Off And Bring Them Home


The July issue of Wired Magazine, which I received in the mail just the other day, contains an excellent cover story that I commend to your attention. Entitled "How I Accidentally Kickstarted the Domestic Drone Boom," it's written by the publication's Editor-in-Chief, Chris Anderson, and discusses (in Anderson's usual humble fashion…ahem…) the flourishing interest in autonomous flying machines of various shapes and sizes, aided in part by the DIY Drones online community that Anderson founded five years ago.

The importance of embedded vision in enabling autonomous machines to discern where they are (and how to get to where they're going next, avoiding objects along the way), as well as to transmit information about their surroundings back to 'home base', is perhaps obvious but nonetheless bears stating. It's a topic I've regularly covered in the past, both from military-specific and more general-purpose perspectives. The Parrot AR.Drone got a shout-out here when it received a CES-timed "version 2.0" upgrade focused largely on improved video. I also wrote about the topic in a recently published article at Electronic Products Magazine. And speaking of video, make sure you check out Jim Donlon's EVA Summit keynote on the subject, if you haven't already done so.

One of the more mind-blowing demonstrations of quadrotor vision-enabled sentience that I've come across in recent times occurred in the midst of University of Pennsylvania professor Vijay Kumar's presentation at a recent TED conference. I encourage you to check out the whole thing, as it also gives a good overview of "robotic flying machine" technology's origins, as well as Kumar's observations on its current status and predictions for its continued evolution:

As a stepping-stone to complete autonomy, many of today's drones are unmanned but human-controlled, often by operators on the other side of the world; an Air Force base north of Las Vegas, for example, is reportedly responsible for managing many of the drones operating in Afghanistan and other countries in that region.

And what about ship-based drones? They might also require gesture-based assistance in taking off and landing, much as controllers on the flight deck guide manned jets today. For more on that particular concept, I'll direct you to the article "Giving drones a thumbs up" from a recent issue of The Economist. Make sure you also check out the embedded video published at the article link. From the writeup, which discusses an MIT research project championed by computer scientists Yale Song, David Demirdjian, and Randall Davis:

In much the same way that spoken language is actually a continuous stream of sound (perceived gaps between words are, in most cases, an audio illusion), so the language of gestures to pilots is also continuous, with one flowing seamlessly into the next. And the algorithm could not cope with that.

To overcome this difficulty Mr Song imposed gaps by chopping the videos up into three-second blocks. That allowed the computer time for reflection. Its accuracy was also increased by interpreting each block in light of those immediately before and after it, to see if the result was a coherent message of the sort a deck officer might actually wish to impart.

The result is a system that gets it right three-quarters of the time. Obviously that is not enough: you would not entrust the fate of a multi-million-dollar drone to such a system. But it is a good start.
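The description above suggests a simple two-stage structure: chop the continuous stream of per-frame observations into fixed-length blocks, classify each block in isolation, then reinterpret each block's label in light of the blocks immediately before and after it. Below is a minimal Python sketch of that structure; it is not the MIT team's actual code, and the 30 fps frame rate, the classify_block stub, and the majority-vote smoothing rule are assumptions made purely for illustration.

# Sketch of block-based gesture recognition with neighbor smoothing.
# All names and parameters here are hypothetical illustrations of the
# approach described in The Economist writeup, not the researchers' code.

from collections import Counter
from typing import Callable, List, Sequence

FPS = 30                    # assumed camera frame rate
BLOCK_FRAMES = 3 * FPS      # three-second blocks, as in the article

def split_into_blocks(frames: Sequence, block_len: int = BLOCK_FRAMES) -> List[Sequence]:
    """Chop a continuous stream of per-frame features into fixed-length blocks."""
    return [frames[i:i + block_len] for i in range(0, len(frames), block_len)]

def smooth_with_neighbors(labels: List[str]) -> List[str]:
    """Reinterpret each block's label using the blocks immediately before and
    after it: keep a label only if at least one neighbor agrees, otherwise
    fall back to the original label for that block."""
    smoothed = []
    for i, label in enumerate(labels):
        window = labels[max(0, i - 1):i + 2]
        best, best_count = Counter(window).most_common(1)[0]
        smoothed.append(best if best_count > 1 else label)
    return smoothed

def recognize(frames: Sequence, classify_block: Callable[[Sequence], str]) -> List[str]:
    """End-to-end: block the stream, classify each block, then smooth."""
    blocks = split_into_blocks(frames)
    raw_labels = [classify_block(b) for b in blocks]
    return smooth_with_neighbors(raw_labels)

if __name__ == "__main__":
    # Toy stand-in for a real per-block classifier: label a block by the
    # gesture code that dominates its (fake) per-frame annotations.
    fake_frames = ["wave_off"] * 180 + ["thumbs_up"] * 90 + ["wave_off"] * 90
    toy_classifier = lambda block: Counter(block).most_common(1)[0][0]
    print(recognize(fake_frames, toy_classifier))

In a real deck-handling system the per-block classifier would operate on extracted body- and hand-pose features rather than on pre-labeled strings, and the neighbor-aware step might use a probabilistic model over label sequences rather than a simple vote; the sketch only captures the blocking-plus-context structure the article describes.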
