Notice
This record has been edited as far as possible; missing data will be added when the version of record is issued.
Context-Adaptive Visual Cues for Safe Navigation in Augmented Reality Using Machine Learning
Metadata only
Date
2022
Type
- Journal Article
Abstract
Augmented reality (AR) using head-mounted displays (HMDs) is a powerful tool for user navigation. Existing approaches usually display navigational cues that are constantly visible (always-on). This limits real-world application, as visual cues can mask safety-critical objects. To address this challenge, we develop a context-adaptive system for safe navigation in AR using machine learning. Specifically, our system utilizes a neural network, trained to predict when to display visual cues during AR-based navigation. For this, we conducted two user studies. In User Study 1, we recorded training data from an AR HMD. In User Study 2, we compared our context-adaptive system to an always-on system. We find that our context-adaptive system enables task completion speeds on a par with the always-on system, promotes user autonomy, and facilitates safety through reduced visual noise. Overall, participants expressed their preference for our context-adaptive system in an industrial workplace setting.
Publication status
published
External links
Journal / series
International Journal of Human-Computer Interaction
Publisher
Taylor & Francis
Organisational unit
09501 - Netland, Torbjörn
09623 - Feuerriegel, Stefan (former)
03995 - von Wangenheim, Florian