I’ve managed to get a bit of breathing time, so I thought I’d post a few nerdy bits and pieces. I’m currently doing an MA in Adaptive Architecture and Computation at UCL, which is great but keeps me pretty busy. I’ve been picking up a lot of new skills there, among them working with the Kinect sensor.
In this post I’ll show a few things I’ve learned, starting with a quick technical demo of what I was able to achieve using Kinect and Processing. It demonstrates the following:
-stereo calibration (matching RGB pixels with depth data)
-hand tracking (in 2D and 3D)
-skeleton tracking (without the ‘cactus’ calibration pose)
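To give a taste of the stereo-calibration part, this is the kind of pinhole-camera maths that maps a depth pixel into 3D space. The intrinsic values below are ones commonly quoted for the Kinect’s depth camera, but treat them, and the class and method names, as an illustrative sketch of mine rather than values from any particular library:

```java
// Sketch: back-projecting a depth-image pixel into 3D camera space,
// the core maths behind matching depth data with RGB pixels.
public class DepthProjector {
    // Assumed focal lengths and principal point of the depth camera, in
    // pixels. Commonly quoted values; a real sketch would calibrate these.
    static final double FX = 594.21, FY = 591.04;
    static final double CX = 339.31, CY = 242.74;

    // Convert a depth pixel (u, v) with measured depth z (in metres)
    // to a 3D point {x, y, z} in the depth camera's coordinate frame.
    public static double[] toWorld(int u, int v, double z) {
        double x = (u - CX) * z / FX;
        double y = (v - CY) * z / FY;
        return new double[] { x, y, z };
    }
}
```

Once a depth pixel lives in 3D, projecting it through the RGB camera’s intrinsics (plus the small rigid transform between the two cameras) gives you the matching colour pixel.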
Although there is an official Microsoft driver for the Kinect, it’s for Windows only (no surprise there), so I’ve used the open-source drivers. There are plenty of wrapper libraries for various languages; so far I’ve used ones for Processing (Daniel Shiffman’s OpenKinect Processing library and SimpleOpenNI), openFrameworks (ofxKinect) and MaxMSP (jit.freenect.grab). Each library has its pros and cons, but I won’t go into much detail in this post. Here’s a list of the data you can get from a Kinect:
-RGB image stream
-Depth map
-Accelerometer data (accessible with some of the libraries)
-Audio data (currently only supported by the official Kinect SDK)
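To give a flavour of what the raw depth stream looks like: it arrives as 11-bit values per pixel, and a widely circulated approximation from the OpenKinect community converts a raw reading into metres. A minimal sketch, with class and method names of my own:

```java
// Sketch: converting a raw 11-bit Kinect depth reading to metres using
// the approximation circulated in the OpenKinect community. The constants
// are the commonly quoted fit, not values from any wrapper library.
public class RawDepth {
    public static double rawToMeters(int raw) {
        if (raw >= 2047) return -1.0; // 2047 means "no reading" (shadow, out of range)
        return 1.0 / (raw * -0.0030711016 + 3.3309495161);
    }
}
```

Note the response is non-linear: depth resolution is much finer up close than it is a few metres out, which matters when you design interactions around precise hand positions.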
Plenty can be done with the data above. Currently I’m keen to learn more about manipulating the raw data rather than relying on OpenNI, to see what sort of interactions can be achieved. I tend to gravitate toward unusual ideas lately (think Aphex Twin), hence this image, which shows how skeleton tracking and user isolation can be used to duplicate parts of the body. In the bounding-box view, the grey forearm is the copied version.
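A minimal sketch of the duplication idea, assuming the user mask and the joint-derived bounding box come from elsewhere (in my case SimpleOpenNI’s skeleton and user map); every name and parameter here is hypothetical:

```java
// Sketch: duplicating a limb by copying user-isolated pixels inside a
// bounding box (e.g. around the elbow and hand joints) and pasting them
// at an offset. pixels and userMask are w*h arrays in row-major order;
// userMask is non-zero where the tracked user is.
public class LimbCopier {
    public static int[] copyLimb(int[] pixels, int[] userMask, int w, int h,
                                 int x0, int y0, int x1, int y1,
                                 int dx, int dy) {
        int[] out = pixels.clone();
        int minX = Math.min(x0, x1), maxX = Math.max(x0, x1);
        int minY = Math.min(y0, y1), maxY = Math.max(y0, y1);
        for (int y = minY; y <= maxY; y++) {
            for (int x = minX; x <= maxX; x++) {
                int tx = x + dx, ty = y + dy;
                if (tx < 0 || ty < 0 || tx >= w || ty >= h) continue;
                if (userMask[y * w + x] != 0) { // only copy the user's pixels
                    out[ty * w + tx] = pixels[y * w + x];
                }
            }
        }
        return out;
    }
}
```

The user mask is what keeps the copy clean: without it you’d drag a rectangle of background along with the forearm.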
One unusual idea might be turning people into trees. It seems the Greeks beat me to it a few thousand years back, as the myth of the Heliades portrays the same idea. The second image on the side shows a tracked figure morphing into a tree by recursively copying forearms. You can see the full video here. It’s split into 3 parts: context, prototyping and final piece. I’m using SimpleOpenNI with skeleton tracking, but with the unstable release of the drivers, which allows for a more responsive output, as the calibration pose is not required. See you in part 2!
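As a footnote, the recursive geometry behind the tree effect can be sketched without any Kinect input at all: treat the tracked forearm as a root segment and recursively spawn smaller, rotated copies from its tip. The branch angle and scale factor below are arbitrary choices of mine:

```java
import java.util.List;

// Sketch: generating the segments of a "forearm tree". Each segment is
// {x0, y0, x1, y1}; drawing them (or pasting forearm pixels along them)
// would happen in the Processing sketch.
public class ForearmTree {
    public static void grow(double x, double y, double angle, double len,
                            int depth, List<double[]> out) {
        if (depth == 0 || len < 1e-3) return;
        double x1 = x + len * Math.cos(angle);
        double y1 = y + len * Math.sin(angle);
        out.add(new double[] { x, y, x1, y1 });
        // Two smaller copies branch off the tip, like duplicated forearms.
        grow(x1, y1, angle - 0.4, len * 0.7, depth - 1, out);
        grow(x1, y1, angle + 0.4, len * 0.7, depth - 1, out);
    }
}
```

Seeding the root segment from the live elbow-to-hand joint positions each frame is what makes the whole tree wave around with the performer.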