Recently I decided to take another crack at getting the Microsoft Kinect to work with the Raspberry Pi. My last attempt was about a year ago. After a grueling failure to get the depth information (that is, what actually lets the Kinect see in 3D), the online consensus seemed to be that the Pi's USB drivers were simply not up to snuff when it came to pulling all of that data out of the Kinect.
This time, however, there was a new library available called 'librekinect' that supposedly exposes the depth stream as a camera device right on the Pi. This means it can essentially be used exactly like a USB webcam and, more importantly, be quickly integrated into any Python OpenCV code. After giving the Pi almost a whole day to compile the code and working through various build errors, I was able to get the final 'make load' command to execute. The result was a sparkling grayscale depth feed out of the Pi. To help the human eye distinguish the different depth levels, I can also feed the image through an OpenCV color map. Check out the VNC remote view of the feed (note that the views are not from the same time):
This will hopefully make it easier for our robotics team to use the Kinect on small Linux platforms in the future, or for whatever other project I decide to try!