Dear Colleague,
Microsoft's Kinect peripheral for the Xbox 360 game console and Windows 7-based PCs singlehandedly brought awareness of vision-based applications such as gesture interfaces and facial recognition to the masses. It's also the embedded vision foundation for a plethora of other system implementations, either based on Microsoft's O/S and thereby leveraging the official Kinect for Windows SDK, or harnessing unofficial third-party toolsets. Not a day seemingly goes by without news of some cool new Kinect-based implementation: pipe organ control, for example, or augmented reality-augmented (pun intended) magic tricks, or Force-tapping video games, or holographic videoconferencing systems, or navigation assistance for the blind among us. Were I to even briefly mention each of the ones I've heard about in just the past few months, let alone explain them in depth, this introductory letter alone would run several pages. Instead, at least for the purposes of this particular newsletter, I'll focus on Microsoft-announced Kinect advancements.
- Later this month, the company will release v1.5 of the Kinect SDK. According to the blog post revealing the news, "Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, playback and debug clips of users engaging with their applications. Also coming is what we call 'seated' or '10-joint' skeletal tracking, which provides the capability to track the head, neck and arms of either a seated or standing user." The enhancements will work in both standard and "near mode", and won't require new hardware.
- Last November, the company announced that it was co-creating (with TechStars) an accelerator program intended to promote startups that are harnessing Kinect for commercial applications. Applications were accepted through late January; the victors will take part in a three-month incubation program at Microsoft, as well as receive $20,000 in seed funding. Early last month, the company unveiled the 11 winners, selected from nearly 500 applicants with concepts spanning nearly 20 different industries, including healthcare, education, retail, and entertainment.
- Kinect, at least in its Xbox 360 form, will likely soon show up in a lot more homes. That's because Microsoft, taking a page from cellular service providers, just announced a subsidized version of the 4 GByte console-plus-peripheral bundle. You pay only $99 upfront, but commit to a two-year Xbox LIVE Gold subscription at $14.99/month. At the end of the two-year term, you've shelled out roughly $100 more than if you had bought the console-plus-subscription in one shot, but it's an attractive entry to the Kinect experience for folks without a lot of extra cash on hand.
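As a sanity check on the subsidy arithmetic above, here's a minimal sketch; only the $99 upfront price, the $14.99/month Gold rate, and the two-year term come from the announcement (the variable names are mine), and the total is what you'd compare against whatever the console and subscription would cost purchased outright:

```python
# Total cost of the subsidized Xbox 360 + Kinect bundle over the
# mandatory two-year Xbox LIVE Gold commitment.
upfront = 99.00        # subsidized console-plus-Kinect price
monthly_gold = 14.99   # required Gold subscription, per month
term_months = 24       # two-year commitment

subsidized_total = upfront + monthly_gold * term_months
print(f"Subsidized total over two years: ${subsidized_total:.2f}")
# prints: Subsidized total over two years: $458.76
```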
- And this last one should be treated as a rumor, at least for the moment. The most recent upgrade of the Xbox 360 user interface, which rolled out last December, focused the bulk of its Kinect attention on the peripheral's array microphone audio input subsystem. Persistent speculation fueled by unnamed insiders, however, suggests that the next Xbox 360 UI upgrade, currently being tested, will showcase numerous vision enhancements. Specifically, while the console currently supports Bing search engine-powered media explorations on various websites, Microsoft will supposedly soon bring a full-featured Internet Explorer browsing experience to the Xbox 360, powered by both voice commands and gestures.
There's plenty more where those came from; the best ways to track Microsoft's ongoing Kinect developments are to regularly monitor the company blog (via RSS if you wish), Twitter feed and Facebook page.
I'm curious: how many of you are planning on using Kinect (either sanctioned on the Xbox 360 or PC, or unsanctioned on another platform via enthusiast-developed SDKs) as the basis for your embedded vision implementations? And how many others of you, while you might not be harnessing Kinect directly, are still leveraging one or several of its technology building blocks: the PrimeSense depth-map processor, for example, or the structured light depth-discerning technique? I look forward to hearing from you; I'll certainly keep your comments anonymous if you wish.
Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
FEATURED VIDEOS
Tobii Eye-Tracking User Interface Demonstration
Introducing Analog Devices' Blackfin ADSP-BF60x Processors
FEATURED ARTICLES
Improve Perceptual Video Quality: Skin-Tone Macroblock Detection
Embedded Vision In Medicine: Let Smartphone Apps Inspire Your Design Decisions
FEATURED NEWS
Samsung's Galaxy S III: Embedded Vision In Smartphones Goes Mainstream
Gesture Interfaces Via Sound: Clever Ideas Abound
Panorama Mode: Embedded Vision Processing Blends Pixels Together Via Microcode
Image Analysis With Cloud-Based Cerebral Cortex Assistance
Makeup Selection: An Embedded Vision-Based Determination