AvatAR is an Augmented Reality visualization and analysis environment that uses a combination of virtual human avatars, 3D trajectories, and additional techniques to visualize human motion data in great detail. Visualizations can be manipulated either through gesture interaction or through an accompanying tablet, which also acts as an overview device. AvatAR was developed at Autodesk Research, published as a full paper, and presented at the ACM Conference on Human Factors in Computing Systems 2022 (CHI ’22).
This is a curated list of the projects I have worked on, to give an impression of my experience and skills. To see a list of all projects, click here.
PARVis is a framework for extending 2D visualizations on large interactive displays with personal Augmented Reality overlays, in order to show additional data, address perception issues, and guide users’ attention. Users can interact with the 2D visualizations using touch input on the display. We present several visualization techniques and demonstrate how they can enhance the experience for users working with visualizations on large displays. PARVis was developed at the Interactive Media Lab Dresden and builds upon the u2vis framework. It was published as a full paper in the IEEE Transactions on Visualization and Computer Graphics (TVCG) journal and presented at the VIS ’20 conference.
This generic Unity framework for information visualization was developed at the Interactive Media Lab Dresden. It supports a variety of 2D and 3D visualization types, including bar charts, scatter plots, line charts, parallel coordinates, pie charts, and more. The framework can be used on traditional displays as well as in immersive Mixed Reality environments such as the MS HoloLens. Furthermore, visualizations can be interacted with using mouse, touch, and pen input. The framework is designed to be highly modular and easy to extend or adapt to custom solutions. The source code is available on GitHub.
Voxplosion is an experimental voxel framework for Unity, started mostly out of curiosity. It represents the virtual world as a matrix of voxels, where each voxel is associated with a global material and color palette. The world is further divided into chunks, which in turn are triangulated to be rendered by Unity. Voxplosion also includes a world builder within Unity and can import voxel models from MagicaVoxel.
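The core data layout described above can be sketched roughly as follows. This is a minimal illustration in Python, not Voxplosion’s actual (Unity/C#) code; the names `World`, `CHUNK_SIZE`, and `dirty_chunks` are assumptions made for the example:

```python
CHUNK_SIZE = 16  # voxels per chunk edge (assumed value)

class World:
    """Maps voxel coordinates to indices into a global material/color
    palette; absent keys mean empty space."""

    def __init__(self):
        self.voxels = {}  # (x, y, z) -> palette index

    def set_voxel(self, pos, palette_index):
        self.voxels[pos] = palette_index

    def chunk_of(self, pos):
        """Return the chunk coordinate containing a voxel position."""
        x, y, z = pos
        return (x // CHUNK_SIZE, y // CHUNK_SIZE, z // CHUNK_SIZE)

    def dirty_chunks(self, edited_positions):
        """Chunks whose meshes must be re-triangulated after edits."""
        return {self.chunk_of(p) for p in edited_positions}

world = World()
edits = [(3, 3, 3), (17, 3, 3)]
for p in edits:
    world.set_voxel(p, 5)
print(world.dirty_chunks(edits))  # the edits span two neighbouring chunks
```

Keeping track of which chunks were touched by an edit is what makes this design practical: only those chunks need to be re-triangulated, rather than the whole world.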
I used this little engine to create a twin-stick shooter with zombies and destructible terrain while organizing and simultaneously taking part in a 48h game jam.
DesignAR is an immersive 3D modeling application that combines a large, interactive design workstation with head-mounted Augmented Reality. Users manipulate the model using touch and pen interaction, while the model itself is displayed in stereoscopic Augmented Reality above the display. We also demonstrated how the space beyond the borders of the display can be used to show orthogonal views and offloaded menus. DesignAR was developed at the Interactive Media Lab Dresden and resulted in two publications: a full paper at ACM ISS 2019, which won the best paper award, and a demo at ACM CHI 2020.
While large display walls support multiple users interacting at the same time, most systems cannot recognize which touch belongs to which user. YouTouch! tracks users interacting in front of the wall and can distinguish the touches of each individual user. It thereby enables applications to offer personalized touch support. To demonstrate this principle, we developed a multi-user paint application in which each user has their own individual color palette. YouTouch! was developed at the Interactive Media Lab Dresden and resulted in a publication at ACM AVI 2016.
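The central idea, attributing each touch to a tracked user, can be sketched as a nearest-neighbour assignment. This is an illustrative simplification, not the project’s actual implementation; the function name `assign_touch` and the flat 2D wall coordinates are assumptions:

```python
import math

def assign_touch(touch, users):
    """touch: (x, y) position on the wall in metres.
    users: user id -> tracked position projected onto the wall plane.
    Returns the id of the user closest to the touch point."""
    def dist(uid):
        ux, uy = users[uid]
        tx, ty = touch
        return math.hypot(ux - tx, uy - ty)
    return min(users, key=dist)

# Two users tracked in front of the wall; a touch lands at x = 2.1 m
users = {"alice": (0.8, 1.2), "bob": (2.3, 1.1)}
print(assign_touch((2.1, 1.3), users))  # -> bob
```

The real system has to be more robust than this (users cross arms, stand close together, or reach across each other), but distance between the touch and the tracked body is the basic signal.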
This project uses a tracked tablet and spatial navigation to explore 3D information visualizations by physically moving the tablet. Furthermore, the user’s head was also tracked, enabling Head-Coupled Perspective. We explored how well this interaction style performs in comparison to a touch-based interface in a qualitative user study. The project was developed at the Interactive Media Lab Dresden and resulted in three publications: a workshop paper and a poster at ACM ISS 2016, and a full paper at ACM ISS 2017.
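Head-Coupled Perspective means the rendering frustum follows the tracked head, so the screen behaves like a window into the 3D scene. A minimal sketch of the underlying math, assuming a screen centred at the origin of its own coordinate space (the function name and parameters are illustrative, not from the project’s code):

```python
def off_axis_frustum(head, screen_w, screen_h, near):
    """head = (x, y, z): head position in screen space, with z the
    distance to the screen plane. Returns (left, right, bottom, top)
    extents of the asymmetric frustum at the near plane."""
    hx, hy, hz = head
    scale = near / hz  # project the screen edges onto the near plane
    left   = (-screen_w / 2 - hx) * scale
    right  = ( screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top    = ( screen_h / 2 - hy) * scale
    return left, right, bottom, top

# Head centred in front of a 0.4 m x 0.3 m screen: symmetric frustum
print(off_axis_frustum((0, 0, 0.5), 0.4, 0.3, 0.1))
```

Feeding these extents into an off-axis projection matrix each frame, as the tracker reports new head positions, is what produces the parallax effect.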
BodyLenses explores the idea of embodied interaction in front of a large, interactive display wall. Each user has their own personal Magic Lens that follows them as they move in front of the display. We experimented with different lens shapes, ranging from abstract ones to more body-shaped ones. Furthermore, we explored how the space in front of the wall can be used for, e.g., proxemic interaction. BodyLenses was developed at the Interactive Media Lab Dresden and resulted in a publication at ACM ITS 2015.
MultiLens is a multi-touch-controlled WPF application that uses Magic Lenses to help explore large node-link diagrams. The goal was to provide an alternative to traditional menu-based, mouse-controlled systems. We therefore use multi-touch input to quickly manipulate the lenses, change filters, and adjust parameters. Furthermore, lenses can be combined with each other to create generic multi-purpose tools. MultiLens was developed at the Interactive Media Lab Dresden as part of the DFG GEMS project and resulted in two publications: a demo at ACM ITS 2014 and a full paper at ACM ITS 2016.
P.I.P.E. is an arcade game for two players, who fly their ships through a randomly generated pipe-like maze while avoiding various obstacles. They can collect power-ups scattered throughout the tunnels for effects like shields or speed-ups. Every collision with an obstacle damages a player, and when the health bars of both players reach zero, the game ends. Points are awarded depending on how long and how fast the players managed to move through the maze before dying.
What makes P.I.P.E. special is that players control their ships using a Microsoft Kinect and the game is displayed on a back-projection wall in stereoscopic 3D.
The binaries of the game can be downloaded here.