Welcome to the
Spatial HCI Lab @ TU Vienna!
Our lab is an exceptional place where researchers and students meet and work together on creative, novel solutions to problems of spatial interaction. The research performed in this lab ranges from interaction with 3D data to gaze-based interaction in mixed reality environments.
3D Data Visualization and Interaction
Appropriate visualization is key to making sense of and analyzing data. Specifically, we argue that inherently 3D data are more easily understood and worked with when presented through a 3D visualization. Accordingly, in our Spatial-HCI Lab we focus on the conception, realization, and experimental evaluation of new visualization and interaction techniques for 3D data.
Outdoor Mixed Reality
Mixed and augmented reality are attracting increasing attention thanks to the recent availability of see-through head-mounted displays that offer seamless and robust experiences. Such devices rely on state-of-the-art techniques and algorithms for spatial mapping and spatial tracking, and they deliver impressive indoor experiences. However, the available solutions are not designed to work in outdoor environments. Our Spatial-HCI group aims to port and adapt the most innovative methodologies for geographic space to the mixed reality domain, as well as to devise novel approaches that bring mixed reality experiences outdoors.
Gaze-Based Interaction
Our eye movements and where we look while interacting with spatial elements provide important insights that can help us optimize (gaze-based) interaction dialogues. On the one hand, our research focuses on eye-movement and gaze analysis in order to understand how we interact with our surrounding environment (i.e., real environments, virtual reality, and mixed reality) and with spatial data visualizations. On the other hand, we utilize the user's eye movements and gaze to enable novel implicit and explicit interaction dialogues.
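One common explicit gaze-interaction dialogue is dwell-time selection: a target is "clicked" once the gaze rests on it long enough. The following is a minimal sketch of that idea; the class name, threshold, and target identifiers are illustrative assumptions, not the lab's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

DWELL_THRESHOLD_S = 0.8  # illustrative dwell time before a gaze "click" fires


@dataclass
class DwellSelector:
    """Accumulates gaze time on a target; fires a selection after the threshold."""
    threshold_s: float = DWELL_THRESHOLD_S
    _target: Optional[str] = None
    _elapsed: float = 0.0

    def update(self, gazed_target: Optional[str], dt: float) -> Optional[str]:
        """Feed one gaze sample (a target id, or None when gazing at nothing),
        sampled dt seconds apart. Returns the target id once its dwell
        threshold is reached, otherwise None."""
        if gazed_target != self._target:
            # Gaze moved to a different target (or away): restart the timer.
            self._target = gazed_target
            self._elapsed = 0.0
            return None
        if gazed_target is None:
            return None
        self._elapsed += dt
        if self._elapsed >= self.threshold_s:
            # Reset so the selection event fires only once per dwell.
            self._target = None
            self._elapsed = 0.0
            return gazed_target
        return None
```

Fed with samples at, say, 60 Hz, the selector fires after 0.8 s of steady fixation; any glance away resets the timer, which is the usual remedy for the "Midas touch" problem of unintended gaze selections.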
3D Cadastral Systems
One of the challenges in modern cities is the increasing density of use. It becomes necessary to enable different uses at different vertical levels. However, this requires a vertical separation of rights, which current (2D) cadastral systems cannot represent. This leads to several research questions:
- How to model 3D rights?
- How to collect data on already existing vertically restricted rights (e.g., condominium ownership)?
- How to visualize the data?
- How to use the new systems in other applications, e.g., visibility analysis for the valuation of a location?
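To make the first question concrete, one minimal way to model a vertically separated right is to attach a height interval to a 2D parcel footprint, so that stacked uses on the same parcel do not conflict. This is only a sketch under that assumption; the class, field names, and example holders are hypothetical, not an actual cadastral data model.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RightVolume:
    """A legal right bounded vertically: a 2D parcel footprint extruded
    between two heights (e.g., a condominium unit above a ground-floor shop)."""
    holder: str
    parcel_id: str
    z_min: float  # lower bound, metres above a reference datum
    z_max: float  # upper bound

    def conflicts_with(self, other: "RightVolume") -> bool:
        """Two rights conflict only if they lie on the same parcel AND
        their vertical intervals overlap (open-interval test, so rights
        that merely share a boundary height coexist)."""
        return (self.parcel_id == other.parcel_id
                and self.z_min < other.z_max
                and other.z_min < self.z_max)


shop = RightVolume("Retail GmbH", "P-101", 0.0, 4.0)
flat = RightVolume("A. Muster", "P-101", 4.0, 7.0)
cellar = RightVolume("City", "P-101", -3.0, 0.5)

shop.conflicts_with(flat)    # False: stacked uses coexist on one parcel
shop.conflicts_with(cellar)  # True: the height intervals intersect
```

A 2D cadastre effectively collapses every right on a parcel into one footprint, so the shop/flat case above would register as a conflict; the height interval is exactly the extra dimension the questions above ask to model, collect, and visualize.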