A new technique developed by Microsoft Research shows the potential of the Kinect's depth-sensing camera to create detailed, real-time 3D models of entire rooms, captured from multiple angles.
The KinectFusion demonstration, shown in a recent SIGGRAPH 2011 presentation, fuses multiple arbitrary viewpoints of an environment into a volumetric 3D model in a matter of seconds.
The system uses the Kinect's point-based depth data to estimate the unit's position and orientation in the room, then uses a GPU to fuse that data with what it already knows about the space. In this way, the Kinect can be swept over an environment to 'scan' the space from multiple angles in real time.
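The core idea of that fusion step can be illustrated with a toy example. The sketch below is a minimal, hypothetical illustration of volumetric depth fusion using a truncated signed distance function (TSDF) grid and a weighted running average per voxel; the grid size, voxel scale, and simplified straight-down-z "depth image" are all assumptions for demonstration. The real KinectFusion system also tracks camera pose with ICP and runs the integration in parallel on the GPU, neither of which is shown here.

```python
import numpy as np

GRID = 32     # voxels per axis (toy size; real systems use much larger grids)
VOXEL = 0.05  # voxel edge length in metres (assumed)
TRUNC = 0.1   # truncation distance for the signed distance (assumed)

tsdf = np.ones((GRID, GRID, GRID), dtype=np.float32)  # +1 = empty space
weight = np.zeros_like(tsdf)                          # per-voxel confidence

def integrate(depth_along_z):
    """Fuse one simplified 'depth image' (one depth value per (x, y) ray
    travelling along +z) into the volume with a weighted running average."""
    zs = (np.arange(GRID) + 0.5) * VOXEL              # voxel centre depths
    for x in range(GRID):
        for y in range(GRID):
            d = depth_along_z[x, y]                   # observed surface depth
            sdf = d - zs                              # signed distance per voxel
            valid = sdf > -TRUNC                      # ignore voxels far behind surface
            new = np.clip(sdf / TRUNC, -1.0, 1.0)
            w_old = weight[x, y, :]
            tsdf[x, y, :] = np.where(
                valid,
                (tsdf[x, y, :] * w_old + new) / (w_old + 1),
                tsdf[x, y, :],
            )
            weight[x, y, :] += valid.astype(np.float32)

# Fuse two slightly different observations of a flat wall ~0.8 m away;
# averaging the two measurements is what smooths sensor noise over time.
integrate(np.full((GRID, GRID), 0.80))
integrate(np.full((GRID, GRID), 0.82))

# The zero crossing of the TSDF along z marks the fused surface.
ray = tsdf[0, 0, :]
surface_idx = int(np.argmin(np.abs(ray)))
print((surface_idx + 0.5) * VOXEL)
```

Each new depth frame nudges the stored distance values toward the latest observation in proportion to accumulated confidence, which is why the reconstruction gets cleaner the longer the camera lingers on a surface.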
Once generated, the 3D model can be manipulated with arbitrary lighting and texture maps, and virtual characters and objects can be superimposed accurately onto a video image of the space.
In the demonstration video, researchers show the KinectFusion system being used for robust augmented reality applications, including a finger-tracking demonstration that lets users virtually draw on arbitrary surfaces around a room.