By Jeff Bier
Founder, Embedded Vision Alliance
President, BDTI

This blog post was originally published at EE Times' Prototyping Design Line. It is reprinted here with the permission of EE Times.

The last two embedded vision guest entries by Vin Ratford (In Embedded Vision, Sensors Rule and In Embedded Vision, Sensors Rule, Part 2) focused on new vision sensor technologies, high-performance system architectures, and algorithms that are gaining acceptance in robotics and automotive applications. This week, I want to turn to the consumer market.

In the consumer market, one of the most interesting uses of new vision technologies is the creation of new types of user interfaces that are more natural for users. The biggest success in this space so far is Microsoft's Kinect for its Xbox game console, which has sold more than 20 million units. Recently I spoke with Tim Droz, who previously headed entertainment systems sensor development at Canesta. (Canesta was acquired by Microsoft, and its technology is being used in the upcoming second-generation Kinect.) Tim is now VP and GM of US operations for the 3D sensor development company SoftKinetic.

I asked Tim about Intel's "Perceptual Computing" initiative, which aims to bring gesture control and other new user interface technology to personal computers. As part of Intel's Perceptual Computing initiative, Creative Technologies recently announced the Senz3D camera, which is a PC accessory that senses close-range 3D to track fingers, hands, face, and torso — in contrast with the Kinect, which works at longer ranges and tracks large-scale movements of players' bodies, such as kicking and jumping.

The Senz3D camera is based on SoftKinetic's DepthSense time-of-flight sensor, which provides a 320×240 depth array and outputs depth maps at 30 to 60 frames per second, covering PC users from 15 cm to 1 m away. (The camera also sports a 720p HD color sensor and stereo microphones.)
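To give a feel for what such a depth map looks like to software, here is a minimal sketch of a primitive a finger tracker might build on: scan a 320×240 frame of per-pixel distances and find the nearest in-range point. The frame layout, millimeter units, and function name are all assumptions for illustration, not the DepthSense or Senz3D data format.

```python
# Illustrative only: a hypothetical 320x240 depth frame of per-pixel
# distances in millimeters -- NOT the actual DepthSense output format.
WIDTH, HEIGHT = 320, 240
MIN_RANGE_MM, MAX_RANGE_MM = 150, 1000  # the camera's 15 cm to 1 m range

def nearest_point(frame):
    """Return (x, y, depth_m) of the closest in-range pixel, or None."""
    best = None  # (x, y, depth_mm)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            d = frame[y][x]
            if MIN_RANGE_MM <= d <= MAX_RANGE_MM and (best is None or d < best[2]):
                best = (x, y, d)
    if best is None:
        return None
    x, y, d = best
    return (x, y, d / 1000.0)  # convert millimeters to meters

# Toy frame: everything out of range except one "fingertip" at 20 cm.
frame = [[0] * WIDTH for _ in range(HEIGHT)]
frame[120][160] = 200  # 200 mm = 0.2 m
print(nearest_point(frame))  # (160, 120, 0.2)
```

In a real close-range tracker, this nearest-point search would be the trivial first stage; the SoftKinetic gesture library layered on top (discussed below) is what turns raw depth like this into fingers, hands, and gestures.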

Building on this camera hardware, the Perceptual Computing software platform senses hand and finger gestures using a custom gesture algorithm library (also from SoftKinetic) and also adds augmented reality, facial analysis, and speech recognition. Intel presents these libraries in an SDK API, which eliminates the need for application developers to manipulate raw sensor data, freeing them to focus on designing rich natural user experiences.
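The point of such an SDK layer is that applications subscribe to high-level gesture events instead of parsing depth frames themselves. The sketch below illustrates the general shape of that abstraction with an event-callback pattern; every class, method, and gesture name here is hypothetical and is not the actual Perceptual Computing SDK API.

```python
# Hypothetical event-driven gesture interface, illustrating the kind of
# abstraction an SDK provides over raw sensor data. These names are
# invented for this sketch; they are NOT Intel's or SoftKinetic's API.
class GesturePipeline:
    def __init__(self):
        self._handlers = {}

    def on(self, gesture, handler):
        """Register a callback for a named gesture (e.g. 'swipe_left')."""
        self._handlers.setdefault(gesture, []).append(handler)

    def dispatch(self, gesture, **data):
        """Called by the (simulated) tracking layer when a gesture fires."""
        for handler in self._handlers.get(gesture, []):
            handler(**data)

pipeline = GesturePipeline()
events = []
pipeline.on("swipe_left", lambda hand: events.append(("prev_page", hand)))

# Simulate the tracking layer recognizing a left swipe by the right hand:
pipeline.dispatch("swipe_left", hand="right")
print(events)  # [('prev_page', 'right')]
```

The application code above never sees a depth pixel; it reacts to "swipe_left" and maps it to an action, which is exactly the division of labor the SDK approach is meant to enable.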

The Senz3D camera will be shipping in October. The core camera module is now also integrated into Intel's new Portable All-In-One (pAIO) reference design, which was announced and demonstrated at the Intel Developer Forum in September.

Will gesture user interfaces become mainstream for PCs, as they are becoming for game consoles? Only time will tell, but it seems clear that the combined efforts of companies like Intel, Creative Technologies, and SoftKinetic increase the chances of 3D sensing catching on in PCs — and elsewhere.

If you want to learn more about incorporating visual intelligence into products, please join me on October 2 for the Embedded Vision Summit in the Boston area. The Summit is a technical educational forum for engineers building vision-enabled electronic systems and software, and will include a full day of presentations and demos.
