After a few months of development, Microsoft has released the Kinect for Windows software development kit, a toolkit that lets programmers build PC applications driven by the motion-sensing video game controller.
The free SDK is a beta product, and developers may use it only to create noncommercial applications. But there's little doubt that it moves computing a small step closer to an era of natural user interfaces, in which users tell computers what to do with voice and gestures.
"This SDK really helps people move toward that," says Anoop Gupta, a distinguished scientist at Microsoft Research who is overseeing the project. "We will see a world where computing becomes much more natural and intuitive."
The SDK works with Windows 7 and gives developers access to raw data from a depth sensor, a color camera, and a four-element microphone array. It can track skeleton images of one or two people, enabling gesture-driven apps. It also provides noise suppression, echo cancellation, and beam forming to identify sound sources, and it integrates with the Windows speech recognition programming interface.
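The beam-forming feature locates a speaker by comparing when the same sound reaches different microphones in the array. As a rough illustration of that idea (this is a generic signal-processing sketch, not the Kinect SDK's actual API; the sample rate, microphone spacing, and function names are assumptions for the example):

```python
import math

def estimate_delay(a, b, max_lag):
    """Find the lag (in samples) at which signal b best lines up with a,
    by brute-force cross-correlation over lags in [-max_lag, max_lag]."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i + lag]
                    for i in range(max_lag, len(a) - max_lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def source_angle(delay_samples, sample_rate=16000, mic_spacing=0.1, c=343.0):
    """Convert an inter-microphone delay into a bearing in degrees,
    using the far-field approximation sin(theta) = c * tau / d."""
    tau = delay_samples / sample_rate
    ratio = max(-1.0, min(1.0, c * tau / mic_spacing))
    return math.degrees(math.asin(ratio))

# Synthetic test: the same tone arrives at microphone B three samples later.
signal = [math.sin(0.3 * n) for n in range(200)]
mic_a = signal
mic_b = [0.0] * 3 + signal[:-3]

lag = estimate_delay(mic_a, mic_b, max_lag=10)
print(lag)                        # recovered delay in samples
print(round(source_angle(lag), 1))  # estimated bearing in degrees
```

A real array with four elements repeats this pairwise comparison across microphones to steer a listening "beam" toward the strongest, most consistent source, which is what lets the SDK hand cleaner audio to the speech recognizer.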
Read More/Source: C|Net