Categories
Portfolio Project

BodyLenses

BodyLenses explores the idea of embodied interaction in front of a large, interactive display wall. Each user has their own personal Magic Lens that follows them as they move in front of the display. We experimented with different lens shapes, from abstract ones to more body-shaped ones. Furthermore, we explored how the space in front of the wall can be used, e.g., for proxemic interaction. BodyLenses was developed at the Interactive Media Lab Dresden and resulted in a publication at ACM ITS 2015.

General

The BodyLenses prototype was developed mostly by a colleague and myself, with several student projects contributing specific features.

The prototype is written in Python using the libavg graphics framework. The positions of users are tracked by a Microsoft Kinect that is placed behind them and connected to a dedicated tracking PC. The resulting skeleton data is transmitted to the display wall's PC via UDP using Open Sound Control (OSC). The tracking application uses the Kinect SDK and is written in C++. As OSC libraries, we use pyOSC on the Python side and oscpack on the C++ side.
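
To give a rough idea of the receiving side, here is a minimal sketch of an OSC skeleton receiver using pyOSC. The address pattern and message layout (user id, joint name, then x/y/z coordinates) are assumptions for illustration; the prototype's actual protocol may differ.

    import OSC  # pyOSC

    users = {}  # user id -> {joint name: (x, y, z)}

    def handle_joint(addr, tags, data, source):
        # Assumed message layout: user id, joint name, x, y, z.
        user_id, joint, x, y, z = data
        users.setdefault(user_id, {})[joint] = (x, y, z)

    server = OSC.OSCServer(('0.0.0.0', 9000))  # UDP socket for the tracking PC
    server.addMsgHandler('/skeleton/joint', handle_joint)
    server.serve_forever()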

Details and Contribution

BodyLenses was a follow-up project to MultiLens. Since we had run into performance issues during that project, most likely due to using WPF as the GUI framework, we wanted to make sure this time that we could handle a large number of objects on screen. Therefore, we tested several options: I wrote smaller prototypes in Cinder and raw Direct3D. In the end, we decided on libavg, as it was fast and also provided high-level input capabilities.
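
To illustrate, a node-count stress test in libavg might look roughly like the following: spawn a few thousand circle nodes and move them every frame so the scene is continuously redrawn. The node count and resolution are made up; this is only a sketch of the kind of test we ran, not the actual benchmark code.

    import random
    from libavg import app, avg

    class StressTest(app.MainDiv):
        def onInit(self):
            self.nodes = [
                avg.CircleNode(pos=(random.uniform(0, 1920),
                                    random.uniform(0, 1080)),
                               r=3, fillcolor='ffffff', fillopacity=1,
                               parent=self)
                for _ in range(5000)]

        def onFrame(self):
            # Jitter every node a little so each frame is a full redraw.
            for node in self.nodes:
                node.pos += avg.Point2D(random.uniform(-1, 1),
                                        random.uniform(-1, 1))

    app.App().run(StressTest(), app_resolution='1920x1080')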

As a first step, I ported our graph and lens structures from MultiLens and C# over to Python for the new prototype. Instead of MVVM, I used a more generalized MVC structure without data bindings but with callbacks, so that views could easily be updated whenever something in the model changed. First tests were promising, and we managed to render considerably larger graphs with smoother performance than in the WPF prototype.
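
The callback structure itself is plain observer-pattern Python; a minimal sketch (class and method names are illustrative, not the actual prototype code):

    class LensModel(object):
        def __init__(self):
            self._observers = []
            self._pos = (0, 0)

        def subscribe(self, callback):
            self._observers.append(callback)

        @property
        def pos(self):
            return self._pos

        @pos.setter
        def pos(self, new_pos):
            self._pos = new_pos
            for callback in self._observers:
                callback(self)  # notify all registered views

    class LensView(object):
        def __init__(self, model):
            model.subscribe(self.on_model_changed)

        def on_model_changed(self, model):
            print('redraw lens at %s' % (model.pos,))

    model = LensModel()
    view = LensView(model)
    model.pos = (100, 200)  # triggers the view update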

While I was occupied with the graphs and lenses, my colleague implemented the Kinect tracking based on a previous prototype and connected it to our Python front end. After that, we experimented with lenses of different shapes, including more body-like ones. While that was interesting and fun, I have to say that from a usefulness perspective the classic shapes, i.e., circles and rectangles, worked best.
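
In libavg, the two kinds of lens shapes could be represented along these lines; the function names and the idea of feeding a projected body contour into a polygon node are assumptions for illustration:

    from libavg import avg

    def make_circle_lens(parent, pos, radius):
        # Classic lens shape: a simple circle outline.
        return avg.CircleNode(pos=pos, r=radius, color='ffffff',
                              strokewidth=2, parent=parent)

    def make_body_lens(parent, contour_points):
        # Body-shaped lens: a polygon from (x, y) points, e.g. a body
        # contour projected onto wall coordinates.
        return avg.PolygonNode(pos=contour_points, color='ffffff',
                               strokewidth=2, parent=parent)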

Furthermore, we experimented with using the distance to the wall to control how the lenses behaved. I implemented a mode that essentially switched through a large stack of images based on the user's position. With a video or similar time-based data set, this gave the impression of moving forward through time when stepping closer to the wall. It was a rather neat effect and always went over well when demonstrating the system to visitors.
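
The mapping behind that mode is simple; a sketch, with the interaction range in metres being an assumed value rather than the prototype's actual one:

    MIN_DIST, MAX_DIST = 0.8, 4.0  # assumed interaction range in metres

    def image_index(distance, num_images):
        # Clamp to the interaction range, then map linearly so that
        # stepping closer to the wall moves forward through the stack.
        d = max(MIN_DIST, min(MAX_DIST, distance))
        t = (MAX_DIST - d) / (MAX_DIST - MIN_DIST)  # 0 = far, 1 = close
        return int(t * (num_images - 1))

    # e.g. a user at 2.4 m in front of a 100-image stack:
    print(image_index(2.4, 100))  # -> 49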