Search Results
Work-in-Progress: Enhancing Training in Virtual Reality with Hand Tracking and a Real Tool
(2021) Proceedings of the 2021 7th International Conference of the Immersive Learning Research Network (iLRN)
The main goal of Virtual Training Environments (VTEs) is to maximize training success, which can be achieved by increasing the degree of immersion. While prior work mainly focused on the visual, auditory, and navigational aspects of immersion, proprioceptive aspects may be particularly important. In this work-in-progress paper, we explore this potential by implementing an industrial VTE, which can be interacted with using VR gloves and a ...
Conference Paper
BIM-Enabled Issue and Progress Tracking Services Using Mixed Reality
(2021) Progress in IS: Smart Services Summit - Digital as an Enabler for Smart Service Business Development
This paper reports on a case study in collaboration with an industry partner to explore the value creation potentials of Mixed Reality (MR) in construction. MR is a technology that allows for intuitive interaction with digital data through overlays on the real world, giving immediate access to the data of a construction site to quickly track or resolve possible issues. While prior research identified potential applications of MR in ...
Conference Paper
Subtle Attention Guidance for Real Walking in Virtual Environments
(2021) 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
Virtual reality is today being applied to an increasing number of fields such as education, industry, medicine, or gaming. Attention guidance methods are used in virtual reality to help users navigate the virtual environment without being overwhelmed by the overabundance of sensory stimuli. However, visual attention guidance methods can be overt, distracting, and confusing, as they often consist of artefacts placed in the center of the ...
Conference Paper
Border-SegGCN: Improving Semantic Segmentation by Refining the Border Outline using Graph Convolutional Network
(2021) 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
We present Border-SegGCN, a novel architecture to improve semantic segmentation by refining the border outline using graph convolutional networks (GCN). Semantic segmentation networks such as Unet or DeepLabV3+ are used as base networks to produce a pre-segmented output. This output is converted into a graphical structure and fed into the GCN to improve the border pixel prediction of the pre-segmented output. We explored and extensively ...
Conference Paper
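To make the pipeline sketched in the abstract above concrete, here is a minimal NumPy sketch of the refinement idea, assuming that border pixels of the pre-segmented mask become graph nodes, that 4-neighbouring border pixels are connected, and that each node carries the base network's class probabilities; the actual Border-SegGCN feature construction and GCN configuration may differ.

```python
# Minimal sketch: refine border-pixel labels of a pre-segmented mask with one
# GCN-style propagation step. Feature choice and graph construction are
# assumptions for illustration, not details from the paper.
import numpy as np

def border_pixels(mask):
    """Return (row, col) coordinates of pixels whose 4-neighbourhood
    contains a label different from their own."""
    coords = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] != mask[r, c]:
                    coords.append((r, c))
                    break
    return coords

def gcn_layer(adj, feats, weight):
    """One GCN step: symmetrically normalised adjacency (with self-loops)
    times node features times a weight matrix, followed by ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)

def refine_border(prob_map, mask):
    """prob_map: (H, W, C) class probabilities from the base network,
    mask: (H, W) argmax labels. Returns refined labels for border pixels."""
    nodes = border_pixels(mask)
    index = {p: i for i, p in enumerate(nodes)}
    n, c = len(nodes), prob_map.shape[2]
    adj = np.zeros((n, n))
    for (r, col), i in index.items():
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = index.get((r + dr, col + dc))
            if j is not None:
                adj[i, j] = adj[j, i] = 1.0
    feats = np.array([prob_map[r, col] for r, col in nodes])
    weight = np.random.randn(c, c) * 0.1  # untrained, for illustration only
    refined = gcn_layer(adj, feats, weight)
    return {p: int(refined[i].argmax()) for p, i in index.items()}

# Toy usage: a 2-class probability map with a noisy border.
rng = np.random.default_rng(0)
probs = rng.random((16, 16, 2))
labels = probs.argmax(axis=2)
print(list(refine_border(probs, labels).items())[:5])
```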
VXSlate: Combining Head Movement and Mobile Touch for Large Virtual Display Interaction
(2021) 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Virtual Reality (VR) headsets can open opportunities for users to accomplish complex tasks on large virtual displays using compact setups. However, interacting with large virtual displays using existing interaction techniques might cause fatigue, especially for precise manipulations, due to the lack of physical surfaces. We designed VXSlate, an interaction technique that uses a large virtual display as an expansion of a tablet. VXSlate ...
Conference Paper
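The title above suggests a coarse-to-fine split between head movement and tablet touch; the sketch below illustrates one way such a mapping could look, with the angular ranges, display dimensions, and the specific linear mapping being illustrative assumptions rather than VXSlate's actual design.

```python
# Hedged sketch of a coarse/fine pointing split: head orientation places a
# tablet-sized region on a large virtual display, touch addresses points
# within that region. All constants are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DisplayConfig:
    width: float = 6.0     # virtual display width in metres (assumed)
    height: float = 3.0    # virtual display height in metres (assumed)
    region_w: float = 0.8  # tablet-sized region covered by touch (assumed)
    region_h: float = 0.5

def cursor_position(head_yaw, head_pitch, touch_u, touch_v,
                    cfg=DisplayConfig(), yaw_range=60.0, pitch_range=30.0):
    """Combine coarse head pointing with fine touch offsets.

    head_yaw, head_pitch: degrees, 0 at the display centre.
    touch_u, touch_v: normalised touch coordinates in [0, 1] on the tablet.
    Returns an (x, y) position on the virtual display in metres.
    """
    # Coarse: head orientation selects the centre of a tablet-sized region.
    cx = (head_yaw / yaw_range) * (cfg.width - cfg.region_w) / 2.0
    cy = (head_pitch / pitch_range) * (cfg.height - cfg.region_h) / 2.0
    # Fine: the touch point picks an offset inside that region.
    x = cx + (touch_u - 0.5) * cfg.region_w
    y = cy + (touch_v - 0.5) * cfg.region_h
    return x, y

# Looking 20 degrees to the right and touching the centre of the tablet:
print(cursor_position(head_yaw=20.0, head_pitch=0.0, touch_u=0.5, touch_v=0.5))
```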
Indicators of Training Success in Virtual Reality Using Head and Eye Movements
(2021) 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
An essential aspect in the evaluation of Virtual Training Environments (VTEs) is the assessment of users’ training success, preferably in real time, e.g. to continuously adapt the training or to provide feedback. To achieve this, leveraging users’ behavioral data has been shown to be a valid option. Behavioral data include sensor data from eye trackers, head-mounted displays, and hand-held controllers, as well as semantic data like a ...
Conference Paper
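As a rough illustration of how head- and eye-movement streams could feed such an indicator of training success, the sketch below aggregates a few simple per-session features and fits an off-the-shelf classifier; the concrete features, labels, and model choice are assumptions, not the paper's method.

```python
# Hedged sketch: derive aggregate features from head/eye samples and fit a
# standard classifier to predict training success. Synthetic data stands in
# for logged VTE sessions; everything here is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def session_features(gaze_xy, head_yaw_deg, dt=1.0 / 60.0):
    """Aggregate per-session features from raw samples.

    gaze_xy: (T, 2) normalised gaze points, head_yaw_deg: (T,) head yaw.
    """
    gaze_speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / dt
    head_speed = np.abs(np.diff(head_yaw_deg)) / dt
    return np.array([
        gaze_speed.mean(),          # average gaze velocity
        (gaze_speed < 0.5).mean(),  # fraction of roughly fixational samples
        head_speed.mean(),          # average head rotation speed
    ])

# Synthetic sessions: head-movement noise level differs between classes.
rng = np.random.default_rng(1)
X = np.stack([
    session_features(rng.random((600, 2)), np.cumsum(rng.normal(0, s, 600)))
    for s in (0.2, 0.2, 0.8, 0.8, 0.3, 0.7)
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = training success (made-up labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X))
```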
BGT-Net: Bidirectional GRU Transformer Network for Scene Graph Generation
(2021) 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Scene graphs consist of nodes and edges representing objects and object-object relationships, respectively. Scene graph generation (SGG) aims to identify the objects and their relationships. We propose a bidirectional GRU (BiGRU) transformer network (BGT-Net) for scene graph generation for images. This model implements novel object-object communication to enhance the object information using a BiGRU layer. Thus, the information of all objects ...
Conference Paper
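To illustrate the object-object communication described above, here is a hedged PyTorch sketch in which detected-object features pass through a BiGRU and a transformer encoder before a toy pairwise relation head; dimensions, layer counts, and the relation head are assumptions, not BGT-Net's reported configuration.

```python
# Hedged sketch: BiGRU-based object communication followed by a transformer
# encoder and a toy pairwise relation scorer. Illustrative only.
import torch
import torch.nn as nn

class ObjectContextEncoder(nn.Module):
    def __init__(self, obj_dim=256, hidden=128, num_rel=51):
        super().__init__()
        # BiGRU lets each object exchange information with the others.
        self.bigru = nn.GRU(obj_dim, hidden, batch_first=True, bidirectional=True)
        layer = nn.TransformerEncoderLayer(d_model=2 * hidden, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Toy relation head: score subject-object pairs from concatenated features.
        self.rel_head = nn.Linear(4 * hidden, num_rel)

    def forward(self, obj_feats):
        # obj_feats: (batch, num_objects, obj_dim) from an object detector.
        ctx, _ = self.bigru(obj_feats)          # (batch, N, 2*hidden)
        ctx = self.transformer(ctx)             # refined object representations
        n = ctx.size(1)
        subj = ctx.unsqueeze(2).expand(-1, -1, n, -1)
        obj = ctx.unsqueeze(1).expand(-1, n, -1, -1)
        pairs = torch.cat([subj, obj], dim=-1)  # all subject-object pairs
        return self.rel_head(pairs)             # (batch, N, N, num_rel) scores

# Example: 8 detected objects with 256-dim features per image, batch of 2.
model = ObjectContextEncoder()
scores = model(torch.randn(2, 8, 256))
print(scores.shape)  # torch.Size([2, 8, 8, 51])
```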