- The Kinect will be a very big deal over the next couple of years. It has been successful so far, but once it is officially supported on Windows, it will reach a very large number of users.
- I am a big fan of computer vision algorithms, and the Kinect provides input that isn't easy to get from other devices. Its unique mix of color and depth input images can make for some interesting new algorithms.
- The GPU is a natural place to perform many image processing algorithms, so mapping the Kinect input data into resources that Direct3D 11 can consume is a good idea.
Many of my favorite courses in my undergraduate and Master's degrees were related to image and/or signal processing, so the Kinect gives me a great opportunity to scratch that particular itch. This sample provides a fairly simple introduction to working with the Kinect by mapping the color and depth buffer contents into appropriately created Texture2D resources. I have already seen forum posts asking how to properly map the data from the Kinect to D3D11 GPU resources, so I built the sample in a way that should be fairly easy to port to another framework.
My intention in working with these resources is to provide some new functionality beyond the standard human pose acquisition. The Kinect SDK provides methods for obtaining the poses of the people within view of the Kinect cameras. While I am sure I will use this too at some point, I want to push the limits of what people are doing with the raw depth and color images. More details on exactly what I mean will follow down the road.
I am also going to be doing some work with OpenCV, a computer vision library available under the BSD license. While I normally like to implement any algorithms I use myself, I'm also realistic about how much time I will have to get something done. I'll pick my battles as best I can, and spend my time implementing features that use D3D11 rather than general algorithms that are already available (via OpenCV). Things like camera calibration shouldn't be the focus of my work.
Because working with the Kinect requires another SDK to be installed, and has to run on Windows 7 or 8, I have been careful to ensure that the dependencies on the Kinect SDK are isolated to the sample applications. Instead of building it into the engine itself, I am adding the Kinect functionality at the sample program level. The result is that if you don't have the appropriate prerequisites, only the Kinect sample will fail to compile while the others remain buildable and usable.
So if you are a Kinect fan and want to see some particular algorithms, features, or applications, please post a comment either here, or in the Hieroglyph 3 discussion pages!