Conference Paper
Subtitles play a crucial role in the cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods utilize text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentaries. Hence, we propose to leverage live gaze as an indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (n=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods, as they assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior compared to the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.
Book title: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20
Subject: eye tracking; gaze input; gaze-responsive display; multimedia; video captions; subtitles
Organizational unit: 03901 - Raubal, Martin
Notes: Conference cancelled due to the coronavirus (COVID-19).