Long Cheng



Last Name

Cheng

First Name

Long

Search Results

Publications 1 - 7 of 7
  • Cheng, Long; Gisler, Joy; Lao, Kenli; et al. (2025)
    2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
    This paper describes the potential of integrating generative artificial intelligence (GenAI) based chatbots into virtual reality (VR) for language learning. Existing VR and AI-based systems primarily focus on practicing vocabulary or written skills, often neglecting speaking practice. To address this gap, a VR-enabled conversation trainer, chatbot AI for VR (CHAI), is introduced. The system integrates speech-to-text, a large language model, text-to-speech, and an avatar to enable natural conversations with CHAI. It is evaluated for its language error detection capabilities in English and German. The results show high accuracy in detecting language errors and demonstrate substantial reproducibility in error detection.
  • Cheng, Long; Schachtler, Daniel; Schreiner, Michael; et al. (2024)
    2024 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
    This paper introduces a novel mixed reality (MR) remote guidance system for supporting laser alignment experiments. Replacing conventional laser safety glasses, a video see-through head-mounted display (VST-HMD) shields users from hazardous radiation. Nevertheless, the impact on human work performance when using a standalone VST system for laser alignment has not been fully explored. In this paper, we studied the effects of the MR guidance system in instructing users in laser alignment through a between-subject user study. Our results reveal that the developed MR guidance significantly enhances user experience and alignment system usability, and reduces the need for human intervention compared to traditional guidance from a human instructor. Despite a diminished level of social presence, users displayed a preference for MR guidance using a VST-HMD.
  • Ma, Xinwei; Tian, Xiaolin; Cui, Hongjun; et al. (2024)
    Transportation Research Part D: Transport and Environment
    Three types of intermodal trips are defined: bus-centric, metro-centric, and hybrid, representing combinations of bus, metro, and a mix of metro and bus with other travel modes for a trip, respectively. Using a household survey from Nanjing, China, comprising 162,191 trips, we applied multiple models to reveal the nonlinear effects of socio-demographic and travel-related attributes on intermodal travel choices. Results show that bus-centric intermodal choice accounts for 65.82% of the total among the three types. The optimal model, random forest, indicates the relative importance of travel-related, individual, and household attributes, contributing 46.28%, 31.14%, and 22.59%, respectively. Non-public transit travel time demonstrates an inverted V-shaped association with bus-centric intermodal choice, with a peak at around 5 min. Older individuals prefer bus-centric intermodal travel, while younger individuals lean towards metro-centric and hybrid intermodal travel. Compared to car ownership and motorcycle ownership, bike ownership and E-bike ownership exhibit a relatively high impact on intermodal travel choices.
  • Cheng, Long; Schachtler, Daniel; Schreiner, Michael; et al. (2024)
    2024 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
    User safety is a concern in the laser laboratory. Users must concentrate on small surfaces by watching telescope-generated images displayed on a screen to conduct measurements. Through a mixed reality head-mounted display (MR-HMD), they see the real world while being physically separated from hazardous laser radiation and can simultaneously view telescope images on the HMD. In this poster, we conducted a comparative analysis of an MR screen extension versus a traditional laser safety goggle (LG) during an optical observation task in a laser laboratory. Results from our user study indicate that the MR condition significantly outperformed the LG condition in terms of speed and user preference.
  • Welti, Damla; Lutfallah, Mathieu; Cheng, Long; et al. (2025)
    2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
    Visualization of complex architectural and interior design proposals demands spatial comprehension. Mixed Reality (MR) has opened new ways for architectural visualization through its ability to overlay virtual models onto already existing real environments. However, this approach alone restricts users to the physical constraints of their surroundings. We developed two approaches to augment MR with additional perspectives from physically inaccessible viewpoints: the World-in-Miniature (WIM) solution and the Cross Reality (CR) solution. The WIM solution provides a scaled-down 3D replica of the virtual environment, whereas the CR solution enables users to switch to Virtual Reality and teleport to physically inaccessible viewpoints. Our research aims to explore whether one solution outperforms the other.
  • Cheng, Long; Holzwarth, Valentin; Kunz, Andreas (2024)
    AHFE International ~ Human Systems Engineering and Design (IHSED2024): Future Trends and Applications
    Video see-through (VST) cameras, integrated in virtual reality (VR) headsets, offer a convenient means to bridge the real world and virtual objects. However, performing tasks within a VST system can deviate from real-world experiences. In this study, we assess the technical specifications of a commercially available consumer-grade virtual reality headset and investigate user performance across various tasks within a VST environment using a pilot user study and the prism adaptation method. Tasks include object relocation, drinking, screwing, and typing on a physical tablet. Our findings reveal a decline in performance for tasks requiring close-range interaction and screen-based operations, accompanied by user adaptability to the studied tasks. We also note mild motion sickness symptoms but find no discernible aftereffects associated with the tasks examined.
  • Gorobets, Valentina; Cheng, Long; Lussi, Helene; et al. (2025)
    Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, HUCAPP and IVAPP
    This paper presents findings on unimodal and multimodal feedback design for interaction with a virtual knob. Since a physical knob provides haptic feedback while being rotated, we also integrated haptic feedback into the virtual knob. Both the real and the virtual knob consist of a main body and a handle on top. For fine and coarse adjustments, the knob can be grabbed and rotated with a 'Grasp' or a 'Pinch' gesture. In a user study with 30 participants, we evaluated our system using objective measures and subjective metrics. The results show that participants reported a preference for having haptic feedback and perceived it as more natural.