
We in the embedded vision industry live in amazing times, I'm regularly (and thankfully) reminded. Not a single day goes by lately that I'm not archiving an information tidbit (or, usually, multiple ones) for future consideration in a news writeup, an article, or a video interview. And the breakthroughs aren't just being covered by narrowly focused sites; they're now also getting picked up by leading technology blogs and other media outlets.

Take a San Francisco startup called Leap Motion, for example. As first picked up by CNET (click on the link to see another demonstration video) 10 days ago, the company has developed a prototype USB-based gesture-sensing peripheral, compellingly shown in the above video, which it hopes to have in production late this year or early next year for a price of around $70. According to Michael Buckwald, Leap Motion's CEO, the product is 200 times more accurate than anything currently available on the market (although the FAQ contradictorily states in different places both that it's "200x more sensitive than existing touch-free products and technologies" and that "The Leap is ~100x more accurate than any other motion sensing/natural user interface on Earth"), with accuracy down to around 1/100th of a millimeter, impressively low latency, and the ability to track individual fingertips, an entire hand, and inanimate objects.

The company is tight-lipped about how the device implements depth discernment, although the combination of image sensor and infrared transmitter technologies is reportedly key to the design. A time-of-flight algorithm is one possibility, although the device's small size would make such an implementation a notable achievement. The 3-D gesture interface interaction space supported by the device is approximately 8 cubic feet in size. Leap Motion earlier this month received $12.75 million in Series A funding, led by Highland Capital Partners. The company is hoping to create a vibrant third-party software ecosystem for its hardware, and is currently accepting developer applications. The company also granted Engadget a hands-on evaluation, along with letting Wired do some testing, captured in the videos below:
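To make the time-of-flight possibility concrete: a ToF sensor times how long an emitted infrared pulse takes to bounce off a target and return, then converts that round-trip time to distance using the speed of light. Leap Motion has not confirmed that this is how its device works, so the sketch below is purely illustrative of the general principle, with hypothetical function and variable names:

```python
# Illustrative sketch of the time-of-flight distance principle.
# Not Leap Motion's actual (undisclosed) algorithm.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to a target given the round-trip time of a light pulse.

    The pulse travels out and back, so the one-way distance is
    half of (speed of light x elapsed time).
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A target roughly 30 cm away returns the pulse in about 2 nanoseconds:
distance_m = tof_distance(2e-9)  # ~0.2998 m
```

The numbers hint at why a small, cheap ToF implementation would be "a notable achievement": resolving depth differences of 1/100th of a millimeter would require timing precision on the order of tens of femtoseconds, far beyond inexpensive timing electronics, which is why practical ToF sensors typically measure phase shift of a modulated signal rather than raw pulse timing.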

For more information, check out the following additional product coverage:

See, I told you embedded vision was getting extensive coverage nowadays!
