Resources

Data, code, and frameworks we develop will be published on this page.

COSIT 2024: Wayfinding Stages: The Role of Familiarity, Gaze Events, and Visual Attention

Published Dataset: will be added soon

Abstract: Understanding the cognitive processes involved in wayfinding is crucial for both theoretical advances and practical applications in navigation systems development. This study explores how gaze behavior and visual attention contribute to our understanding of cognitive states during wayfinding. Based on the model proposed by Downs and Stea, which segments wayfinding into four distinct stages (self-localization, route planning, monitoring, and goal recognition), we conducted an outdoor wayfinding experiment with 56 participants. Given the significant role of spatial familiarity in wayfinding behavior, each participant navigated six different routes in both familiar and unfamiliar environments while their eye movements were recorded. We provide a detailed examination of participants’ gaze behavior and the actual objects of focus.
Our findings reveal distinct patterns of gaze behavior and visual attention that differentiate the wayfinding stages while emphasizing the impact of spatial familiarity. This examination of visual engagement during wayfinding sheds light on adaptive cognitive processes, demonstrating how familiarity shapes navigation strategies. The results enhance our theoretical understanding of wayfinding and offer practical insights for developing navigation aids capable of predicting different wayfinding stages.


Feature Papers in Intelligent Sensors 2024: MYFix: Automated Fixation Annotation of Eye-Tracking Videos

Published Code and Sample Dataset: https://researchdata.tuwien.ac.at/records/msw0z-1hx87

Abstract: In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. This complexity is increased by the constant movement and dynamic nature of both the observer and their environment in urban spaces. This paper presents a novel approach that integrates the capabilities of two foundation models, YOLOv8 and Mask2Former, as a pipeline to automatically annotate fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO’s extensive training on the MS COCO dataset for object detection and Mask2Former’s training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations, even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments showcases its efficiency, achieving 89.05% accuracy in a controlled data-collection setting and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime of 1.61 ± 0.35 seconds per frame, our approach stands as a robust solution for automatic fixation annotation.
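The decision step at the heart of such a pipeline can be sketched as follows. This is a minimal illustration of fusing object detections with a semantic-segmentation mask to label one fixation; the function and variable names are ours for illustration, not the published code, and the detection/segmentation outputs are assumed to be already computed:

```python
import numpy as np

def annotate_fixation(fix_xy, boxes, seg_mask, seg_labels):
    """Assign a semantic label to one fixation point.

    fix_xy     -- (x, y) fixation in frame coordinates
    boxes      -- list of (label, x_min, y_min, x_max, y_max) object detections
    seg_mask   -- 2D array of class ids (semantic segmentation of the frame)
    seg_labels -- mapping from class id to class name
    """
    x, y = fix_xy
    # Prefer the object detector: pick the smallest box containing the
    # fixation, which resolves overlapping objects at different depths.
    hits = [(label, (x2 - x1) * (y2 - y1))
            for label, x1, y1, x2, y2 in boxes
            if x1 <= x <= x2 and y1 <= y <= y2]
    if hits:
        return min(hits, key=lambda h: h[1])[0]
    # Fall back to the segmentation mask for background classes
    # (road, sidewalk, sky, ...) that detectors do not box.
    return seg_labels[int(seg_mask[int(y), int(x)])]
```

Running the real models per frame (e.g. via the Ultralytics and Hugging Face APIs) then reduces to producing `boxes` and `seg_mask` and calling this fusion rule for each fixation.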


AGILE 24: Road Network Mapping from Multispectral Satellite Imagery: Leveraging Deep Learning and Spectral Bands

Published Code and Sample Dataset: https://researchdata.tuwien.ac.at/records/t4j1h-rfd81

Abstract: Updating road networks in rapidly changing urban landscapes is an important but difficult task, often challenged by the complexity and errors of manual mapping processes. Traditional methods that primarily use RGB satellite imagery struggle with obstacles in the environment and varying road structures, leading to limitations in global data processing. This paper presents an innovative approach that utilizes deep learning and multispectral satellite imagery to improve road network extraction and mapping. By exploring U-Net models with DenseNet backbones and integrating different spectral bands, we apply semantic segmentation and extensive post-processing techniques to create georeferenced road networks. We trained two identical models to evaluate the impact of using images created from specially selected multispectral bands rather than conventional RGB images. Our experiments demonstrate the positive impact of using multispectral bands, improving Intersection over Union (IoU) by 6.5%, F1 by 5.4%, and the newly proposed relative graph edit distance (relGED) and topology metrics by 2.2% and 2.6%, respectively.
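The input-preparation idea, replacing the conventional RGB composite with a stack of selected spectral bands, can be sketched as follows. This is an illustrative helper under our own naming, not the published code; which bands are selected is a modelling choice:

```python
import numpy as np

def stack_bands(scene, band_names):
    """Build an (H, W, C) model input from named spectral bands.

    scene      -- dict mapping band name to a 2D reflectance array
    band_names -- bands to stack, e.g. NIR/SWIR composites instead of RGB
    """
    bands = []
    for name in band_names:
        b = scene[name].astype(np.float32)
        # Per-band min-max normalisation so bands with different
        # radiometric ranges contribute comparably to the network input.
        b = (b - b.min()) / (b.max() - b.min() + 1e-8)
        bands.append(b)
    return np.stack(bands, axis=-1)
```

Training the "multispectral" and "RGB" variants on identical architectures then differs only in the `band_names` passed here.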


AGILE 2024: The Impact of Traffic Lights on Modal Split and Route Choice: A use-case in Vienna

Published Code and Sample Dataset: https://doi.org/10.48436/4whzs-fwq52

Abstract: The transportation dynamics within a European city, Vienna, are examined using a multi-graph representation of the city’s network. The focus is on time-optimized routing algorithms and the effects of altering the average waiting penalty at traffic lights. The impact of these modifications, whether an increase to 60, 90, or even 150 seconds or a decrease to 10 seconds, is observed in the selection of transportation modes and routes for identical origin and destination pairs. The investigation also extends to whether routes shift towards secondary street networks to avoid traffic lights as the waiting penalty increases. Experimental variations in average waiting time for cars aim to uncover detailed effects on transportation mode choices, route length and time changes, and variations in human energy expenditure. These findings could provide valuable insights into the transportation network and its possibilities and help in urban planning and policy development. The results indicate a shift in transportation mode as the waiting penalty for cars at traffic lights increases, and in some instances, routes are redirected to roads of lower importance such as residential or service roads.
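The core mechanism, time-optimal routing where crossing a signalized junction adds an average waiting penalty, can be sketched with a plain Dijkstra search. This is a minimal illustration under our own naming, not the published code, and it models one mode on a simple directed graph rather than the full multi-graph:

```python
import heapq

def shortest_time(edges, lights, start, goal, wait_penalty):
    """Time-optimal travel time on a directed graph with traffic-light waits.

    edges        -- {node: [(neighbour, travel_seconds), ...]}
    lights       -- set of nodes carrying a traffic light
    wait_penalty -- average waiting time (s) added when entering a light node
    """
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        if t > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, travel in edges.get(node, []):
            cost = travel + (wait_penalty if nxt in lights else 0.0)
            if t + cost < dist.get(nxt, float("inf")):
                dist[nxt] = t + cost
                heapq.heappush(queue, (t + cost, nxt))
    return float("inf")
```

Sweeping `wait_penalty` (e.g. 10, 60, 90, 150 seconds) on a fixed origin-destination pair shows how the optimum can flip from a signalized main road to a longer light-free route, which is the effect studied in the paper.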


ETRA ’22: Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking

Published Code and Sample Dataset: https://zenodo.org/record/8117333

Abstract: Saccadic eye movements are known to serve as a suitable proxy for task prediction. In mobile eye-tracking, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse gaze and head-movement signals measured by IMUs in order to simulate the gaze signal at head-level. Using image processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and then the respective frames are merged. Lastly, pairs of consecutive fixations (forming a saccade) are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades, and an error of approximately 37% introduced by employing saccades without head-movement consideration.
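The projection step can be sketched as follows: each fixation is mapped into the stitched image's coordinate system with its frame's homography before the saccade length is measured. This is a minimal illustration with our own function names, assuming the homographies are already available from the stitching algorithm:

```python
import numpy as np

def project_points(H, points):
    """Apply a 3x3 homography to 2D points via homogeneous coordinates."""
    pts = np.hstack([np.asarray(points, dtype=float),
                     np.ones((len(points), 1))])
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]  # divide out the homogeneous scale

def saccade_length(H1, H2, fix1, fix2):
    """Length of the saccade between two consecutive fixations after both
    are projected into the stitched image's coordinate system."""
    p1 = project_points(H1, [fix1])[0]
    p2 = project_points(H2, [fix2])[0]
    return float(np.linalg.norm(p2 - p1))
```

Comparing this length against the naive frame-level distance between the two fixations quantifies the head-movement error the paper reports.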


COSIT 2022: Spatial Familiarity Prediction by Turning Activity Recognition

Published Dataset: https://doi.org/10.48436/f0chy-11p06

Abstract: Spatial familiarity plays an essential role in the wayfinding decision-making process. Recent findings in the wayfinding activity recognition domain suggest that wayfinders’ turning behavior at junctions is strongly influenced by their spatial familiarity. By continuously monitoring wayfinders’ turning behavior as reflected in their eye movements during the decision-making period (i.e., immediately after an instruction is received until reaching the corresponding junction for which the instruction was given), we provide evidence that familiar and unfamiliar wayfinders can be distinguished. By applying a pre-trained XGBoost turning activity classifier to gaze data collected in a real-world wayfinding task with 33 participants, our results suggest that familiar and unfamiliar wayfinders show different onset and intensity of turning behavior. These variations are not only present between the two classes (familiar vs. unfamiliar) but also within each class. The differences in turning behavior within each class may stem from multiple sources, including different levels of familiarity with the environment.
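The continuous-monitoring idea, classifying sliding windows of gaze data over the decision-making period and summarizing onset and intensity of turning, can be sketched as follows. The window classifier below is a simple stand-in for the paper's pre-trained XGBoost model, and all names are ours for illustration:

```python
import numpy as np

def turning_profile(gaze_x, window, is_turning):
    """Per-window turning flags and overall turning intensity.

    gaze_x     -- horizontal gaze positions over the decision-making period
    window     -- non-overlapping window length in samples
    is_turning -- classifier mapping one window to True/False
                  (stand-in for a pre-trained turning-activity model)
    """
    flags = [bool(is_turning(gaze_x[i:i + window]))
             for i in range(0, len(gaze_x) - window + 1, window)]
    intensity = sum(flags) / max(len(flags), 1)  # fraction of turning windows
    return flags, intensity
```

The index of the first `True` flag gives the onset of turning behavior; comparing onset and intensity across participants separates the familiar and unfamiliar groups in the study.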


Free Choice Navigation

The results of the simulation study can be found here: https://zenodo.org/record/4724597

The corresponding source code will be published soon.

Abstract: The use of navigation assistance systems has become widespread, and scholars have tried to mitigate the potentially adverse effects these systems may have on spatial cognition due to the division of attention they require. In order to nudge the user to engage more with the environment, we propose a novel navigation paradigm called Free Choice Navigation, which balances the number of free choices, the route length, and the number of instructions given. We test the viability of this approach by means of an agent-based simulation for three different cities. Environmental spatial abilities and spatial confidence are the two most important modeled features of our agents. Our results are very promising: agents could decide freely at more than 50% of all junctions, and more than 90% of the agents reached their destination within an average distance of about 125% of the shortest-path length.
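The balancing act at each junction, grant a free choice or issue an instruction, can be sketched with a simple decision rule. This is an illustrative rule of our own, not the published model; the 1.25 detour budget merely echoes the reported ~125% average and the 0.5 confidence cutoff is an arbitrary placeholder:

```python
def choose_at_junction(confidence, on_route, detour_ratio, max_detour=1.25):
    """Decide between a free choice and an instruction for one agent.

    confidence   -- agent's spatial confidence in [0, 1]
    on_route     -- whether the agent can still reach the destination
    detour_ratio -- distance walked so far / shortest-path distance
    """
    if not on_route or detour_ratio > max_detour:
        return "instruction"  # steer the agent back before the budget bursts
    # Confident agents are left to engage with the environment on their own.
    return "free choice" if confidence >= 0.5 else "instruction"
```

Tuning the confidence threshold and detour budget trades off free choices against route length and instruction count, which is the balance the simulation study explores.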


UrbanCore

UrbanCore mailing list: https://list.tuwien.ac.at/sympa/info/urbancore

Effect of Currentness of Spatial Data on Routing Quality

Code and Data: https://doi.org/10.17605/osf.io/rxcgj

Route Selection Framework

A Docker container will be published soon.