Portfolio Project

AvatAR: Visualizing Human Motion Data in Augmented Reality

AvatAR is an Augmented Reality visualization and analysis environment that combines virtual human avatars, 3D trajectories, and additional techniques to visualize human motion data in great detail. The visualizations can be manipulated either through gesture interaction or via an accompanying tablet, which also acts as an overview device. AvatAR was developed at Autodesk Research and was published as a full paper and presented at the ACM Conference on Human Factors in Computing Systems 2022 (CHI’22).


AvatAR started with the core question of how to gain insights into the way people use the environment they work in. This quickly led to the topic of visualizing human motion data in an immersive and intuitive way that is easy to understand and provides immediate details about a person’s movements, behavior, and interaction with the environment. The baseline for this was 3D trajectories, which offer a good overview of movement over time but don’t provide many details. We had the idea of combining them with virtual humanoid avatars whose detailed poses are reconstructed from 3D skeletal data. These avatars provide a very detailed image of a person at a certain point in time, but no overview across different points in time. Combining them with trajectories yields the advantages of both techniques while balancing their respective disadvantages. The use of head-mounted augmented reality allows us to visualize this data directly within the environment in which it was recorded, and thus to explore in situ the movement of one or more people.

Building on this base, we added further techniques that allow an even more detailed analysis of how people interact with their surrounding environment: visualizing the locations where a person’s gaze meets the environment, rendering footprints where a person’s feet have touched the ground, and highlighting all locations that a person has touched with their hands. All these techniques culminate in an analysis framework that we call AvatAR.
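To illustrate the gaze-projection idea described above, the sketch below intersects a recorded gaze ray (head position plus gaze direction) with a ground plane to find the point where the gaze meets the environment. This is a simplified Python stand-in with made-up sample data, not the actual Unity/HoloLens implementation, which casts rays against the reconstructed environment mesh rather than a single plane:

```python
import numpy as np

def gaze_hit_on_plane(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray with a plane; return the hit point or None.

    origin, direction: head position and normalized gaze direction (3-vectors).
    plane_point, plane_normal: any point on the plane and its unit normal.
    """
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-6:   # gaze is parallel to the plane: no hit
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:               # plane lies behind the viewer
        return None
    return origin + t * direction

# Hypothetical sample: a person at head height 1.7 m looking slightly downward;
# the environment is approximated by the ground plane y = 0.
head = np.array([0.0, 1.7, 0.0])
gaze = np.array([0.0, -0.5, 1.0])
gaze /= np.linalg.norm(gaze)
hit = gaze_hit_on_plane(head, gaze,
                        plane_point=np.array([0.0, 0.0, 0.0]),
                        plane_normal=np.array([0.0, 1.0, 0.0]))
print(hit)  # point on the floor where the gaze lands: [0, 0, 3.4]
```

Accumulating such hit points over a recording and rendering them as a heatmap-style overlay is one straightforward way to show where a person's attention was directed within the space.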


AvatAR was developed by me during an internship at Autodesk Research. While my co-authors and supervisors at Autodesk contributed substantial conceptual input, I developed the prototype entirely on my own.


AvatAR was developed using the Unity 3D engine for the Microsoft HoloLens 2. A Microsoft Surface Pro 6 served as the accompanying tablet.