As if the previous post wasn’t geeky enough, here’s a quick look at a project that also ties in a bit of computer vision and neural networks.
On one of the courses, related to Programming for Architecture and Design, we had, among other things, a lecture and tutorial on Neural Networks. There are multiple types of Neural Networks, mainly classified as supervised or unsupervised, based on how these networks learn.
Kohonen networks (which is what this post focuses on) are unsupervised networks, also known as self-organizing maps (SOMs). As opposed to supervised networks, where neurons are trained on what the output should be (what they should weigh towards), this type of network is based on competitive learning - the outputs/neurons organize themselves towards the closest inputs. This idea of competitive learning is based on how the hippocampus (the part of our brain responsible for navigation) is thought to work. In a sense, the outputs display a particle-spring-like behaviour towards the inputs, which makes this type of network useful for surface fitting, dimensionality reduction, and the like.
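To make the competitive-learning idea concrete, here is a minimal sketch of a single SOM update step, written as plain Java rather than a full Processing sketch. The grid size, learning rate, and Gaussian neighbourhood function are illustrative choices, not the project's actual parameters:

```java
// Minimal sketch of a Kohonen (SOM) update step -- not the project's code.
// A grid of output neurons competes for each input point: the closest neuron
// (the "winner") and its grid neighbours are pulled towards that input.
class Som {
    double[][] weights;   // one 2D position per output neuron
    int gridW, gridH;

    Som(int gridW, int gridH) {
        this.gridW = gridW;
        this.gridH = gridH;
        weights = new double[gridW * gridH][2];
        // Spread the neurons over the unit square as a starting grid
        for (int i = 0; i < weights.length; i++) {
            weights[i][0] = (i % gridW) / (double) gridW;
            weights[i][1] = (i / gridW) / (double) gridH;
        }
    }

    // Train on one input: find the winner, then pull its neighbourhood
    // towards the input, weighted by distance on the grid.
    void train(double[] input, double learnRate, double radius) {
        int winner = 0;
        double best = Double.MAX_VALUE;
        for (int i = 0; i < weights.length; i++) {
            double d = dist2(weights[i], input);
            if (d < best) { best = d; winner = i; }
        }
        int wx = winner % gridW, wy = winner / gridW;
        for (int i = 0; i < weights.length; i++) {
            int x = i % gridW, y = i / gridW;
            double gridDist2 = (x - wx) * (x - wx) + (y - wy) * (y - wy);
            // Gaussian neighbourhood: neurons near the winner move more
            double h = Math.exp(-gridDist2 / (2 * radius * radius));
            weights[i][0] += learnRate * h * (input[0] - weights[i][0]);
            weights[i][1] += learnRate * h * (input[1] - weights[i][1]);
        }
    }

    static double dist2(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }
}
```

Feeding the input points in repeatedly, while slowly shrinking the learning rate and radius, is what produces the particle-spring-like settling described above (here in 2D; the face-fitting version works with 3D points).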
Initially a dataset of 3D points was given, but I thought it would be more fun to fit a surface to my face (or any face, for that matter). This is what the video illustrates:
Computer vision (OpenCV’s Haar cascade feature) is used to detect faces and isolate an area in the Kinect depth map
Depth pixels belonging to the face are converted to 3D coordinates
Once a point cloud is selected, the points are fed as the inputs of the neural net, and the outputs become the vertices of the surface
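For the curious, the depth-to-3D step above can be sketched as the standard pinhole-camera back-projection. The focal lengths and principal point below are typical published figures for the original Kinect depth camera, not calibrated values, and libraries like OpenKinect generally offer their own conversion helpers:

```java
// Hedged sketch: back-projecting a Kinect depth pixel to a 3D point with the
// pinhole-camera model. Intrinsics are assumed typical values, not calibrated.
class DepthToWorld {
    static final double FX = 594.21, FY = 591.04;  // focal lengths in pixels (assumed)
    static final double CX = 339.5,  CY = 242.7;   // principal point (assumed)

    // depthMeters: metric depth measured at pixel (u, v)
    static double[] toWorld(int u, int v, double depthMeters) {
        double x = (u - CX) * depthMeters / FX;
        double y = (v - CY) * depthMeters / FY;
        return new double[] { x, y, depthMeters };
    }
}
```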
The number of outputs is variable, so a low-poly mesh can also be calculated. The mesh can also be saved to AutoCAD (.dxf) format, which is what I’ve used to render a creepy theatre-like mask based on Max’s face. Currently the default surface is a rectangular grid, which is a good start, but not ideal for fitting a face. If you can imagine a face unwrapped into 2D space, it would not look like a perfect rectangle, but that’s something to explore at a later time.
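As a rough idea of what saving to .dxf involves, here is a hedged sketch that writes a quad mesh as DXF 3DFACE entities. This is a minimal subset of the format that CAD packages accept, not necessarily how the project's exporter works:

```java
import java.util.Locale;

// Hedged sketch of a DXF export: each grid quad becomes a 3DFACE entity.
// verts is a gridW x gridH array of {x, y, z} points (e.g. the SOM outputs).
class DxfWriter {
    static String toDxf(double[][][] verts) {
        StringBuilder sb = new StringBuilder();
        sb.append("0\nSECTION\n2\nENTITIES\n");
        int w = verts.length, h = verts[0].length;
        for (int i = 0; i < w - 1; i++) {
            for (int j = 0; j < h - 1; j++) {
                sb.append("0\n3DFACE\n8\n0\n");  // entity type, on layer "0"
                appendCorner(sb, 0, verts[i][j]);
                appendCorner(sb, 1, verts[i + 1][j]);
                appendCorner(sb, 2, verts[i + 1][j + 1]);
                appendCorner(sb, 3, verts[i][j + 1]);
            }
        }
        sb.append("0\nENDSEC\n0\nEOF\n");
        return sb.toString();
    }

    // Group codes 10/20/30 hold the first corner, 11/21/31 the second, etc.
    static void appendCorner(StringBuilder sb, int corner, double[] p) {
        sb.append(String.format(Locale.US, "%d\n%f\n%d\n%f\n%d\n%f\n",
                10 + corner, p[0], 20 + corner, p[1], 30 + corner, p[2]));
    }
}
```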
In the meantime, if you would like to have a play with the code, the source is included. If this leads to something interesting, let us know. The code is written in Processing and uses OpenKinect and OpenCV. If this is something you would like explained further, leave a comment below and we’ll post more details on the wiki.