Victor Schinazi



Publications 1 - 10 of 59
  • Grübel, Jascha; Weibel, Raphael; Jiang, Mike H.; et al. (2017)
    Lecture Notes in Computer Science ~ Spatial Cognition X
    EVE is a framework for the setup, implementation, and evaluation of experiments in virtual reality. The framework aims to reduce repetitive and error-prone steps that occur during experiment setup while providing data management and evaluation capabilities. EVE aims to assist researchers who do not have specialized training in computer science. The framework is based on the popular platforms Unity and MiddleVR. Database support, visualization tools, and scripting for R make EVE a comprehensive solution for research using VR. In this article, we illustrate the functions and flexibility of EVE in the context of an ongoing VR experiment called Neighbourhood Walk.
  • Grübel, Jascha; Gath Morad, Michal; Aguilar, Leonel; et al. (2021)
    MAB20: Media Architecture Biennale 20. Proceedings of the 5th Media Architecture Biennale Conference
    Recent advances in Augmented Reality (AR), the Internet of Things (IoT), cloud computing, and Digital Twins transform the types, rates, and volume of information generated in buildings as well as the mediums through which they can be perceived by users. These advances push the standard approach of media architecture to embed screens in the built environment to its limits because screens lack the immersive capacity that newer media afford. To bridge this gap, we propose a novel AR approach to media architecture that uses a Digital Twin as a platform for structuring and accessing data from various sources, including IoT and simulations. Our technical contribution to media architecture is threefold. First, we extend the possibilities of media architecture beyond embedded screens to three dimensions by presenting a Digital Twin using AR with a head-mounted display. This approach results in a shared and consistent augmented experience across large architectural spaces. Second, we use the Digital Twin to integrate and visualize real physical sensor information. Third, we make artificial occupancy simulations accessible to everyday users by presenting them within their natural context in the Digital Twin. Observing the Digital Twin in situ of the Physical Twin also has applications beyond media architecture. Fusing the two twins using AR can reduce the cognitive load of users from consuming big and complex information sources and enhance their experience. We present two use cases of the proposed Fused Twins in a university building at ETH Zürich. In the first use case, we visualize a dense indoor sensor network (DISN) with 390 IoT sensors that collected data from March 2020 to May 2021. In the second use case, we immerse visitors in agent-based simulations to enable insights into the real and projected uses of space. This work brings forward an ambitious vision for media architecture beyond traditional flat screens, and showcases its potential through fusing state-of-the-art simulations, sensor data integration, and augmented reality, finally making the jump from fiction to reality. (All three authors contributed equally to this research.)
  • Wampfler, Rafael; Klingler, Severin; Solenthaler, Barbara; et al. (2022)
    CHI '22: CHI Conference on Human Factors in Computing Systems
    Knowledge of users’ affective states can improve their interaction with smartphones by providing more personalized experiences (e.g., search results and news articles). We present an affective state classification model based on data gathered on smartphones in real-world environments. From touch events during keystrokes and the signals from the inertial sensors, we extracted two-dimensional heat maps as input into a convolutional neural network to predict the affective states of smartphone users. For evaluation, we conducted a data collection in the wild with 82 participants over 10 weeks. Our model accurately predicts three levels (low, medium, high) of valence (AUC up to 0.83), arousal (AUC up to 0.85), and dominance (AUC up to 0.84). We also show that using the inertial sensor data alone, our model achieves a similar performance (AUC up to 0.83), making our approach less privacy-invasive. By personalizing our model to the user, we show that performance increases by an additional 0.07 AUC.
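    As a rough illustration of the heat-map idea described in this abstract (a sketch, not the authors' actual pipeline; the function name and grid size are assumptions), normalized touch coordinates can be binned into a 2D count grid before being fed to a convolutional network:

    ```python
    def touch_heatmap(events, grid=(8, 8)):
        """Bin normalized (x, y) touch coordinates into a 2D count grid.

        events: iterable of (x, y) pairs with x, y in [0, 1].
        Returns a grid[0] x grid[1] list of lists of counts.
        """
        rows, cols = grid
        heat = [[0] * cols for _ in range(rows)]
        for x, y in events:
            # Clamp so that coordinates exactly at 1.0 fall in the last bin.
            r = min(int(y * rows), rows - 1)
            c = min(int(x * cols), cols - 1)
            heat[r][c] += 1
        return heat
    ```

    In the paper itself, such maps are built per keystroke window and combined with inertial-sensor signals before classification.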
  • Kwok, Tiffany C.K.; Kiefer, Peter; Schinazi, Victor; et al. (2024)
    Scientific Reports
    Unlike classic audio guides, intelligent audio guides can detect users’ level of attention and help them regain focus. In this paper, we investigate the detection of mind wandering (MW) from eye movements in a use case with a long focus distance. We present a novel MW annotation method for combined audio-visual stimuli and collect annotated MW data for the use case of audio-guided city panorama viewing. In two studies, MW classifiers are trained and validated, which are able to successfully detect MW in a 1-s time window. In study 1 (n = 27), MW classifiers from gaze features with and without eye vergence are trained (area under the curve of at least 0.80). We then re-validate the classifier with unseen data (study 2, n = 31) that are annotated using a memory task and find a positive correlation (repeated measure correlation = 0.49, p < 0.001) between incorrect quiz answering and the percentage of time users spent mind wandering. Overall, this paper contributes significant new knowledge on the detection of MW from gaze for use cases with audio-visual stimuli.
  • Naegelin, Mara; Weibel, Raphael; Kerr, Jasmine I.; et al. (2023)
    Journal of Biomedical Informatics
    Background and objective: Work-related stress affects a large part of today's workforce and is known to have detrimental effects on physical and mental health. Continuous and unobtrusive stress detection may help prevent and reduce stress by providing personalised feedback and allowing for the development of just-in-time adaptive health interventions for stress management. Previous studies on stress detection in work environments have often struggled to adequately reflect real-world conditions in controlled laboratory experiments. To close this gap, in this paper, we present a machine learning methodology for stress detection based on multimodal data collected from unobtrusive sources in an experiment simulating a realistic group office environment (N=90). Methods: We derive mouse, keyboard and heart rate variability features to detect three levels of perceived stress, valence and arousal with support vector machines, random forests and gradient boosting models using 10-fold cross-validation. We interpret the contributions of features to the model predictions with SHapley Additive exPlanations (SHAP) value plots. Results: The gradient boosting models based on mouse and keyboard features obtained the highest average F1 scores of 0.625, 0.631 and 0.775 for the multiclass prediction of perceived stress, arousal and valence, respectively. Our results indicate that the combination of mouse and keyboard features may be better suited to detect stress in office environments than heart rate variability, despite physiological signal-based stress detection being more established in theory and research. The analysis of SHAP value plots shows that specific mouse movement and typing behaviours may characterise different levels of stress. 
Conclusions: Our study fills different methodological gaps in the research on the automated detection of stress in office environments, such as approximating real-life conditions in a laboratory and combining physiological and behavioural data sources. Implications for field studies on personalised, interpretable ML-based systems for the real-time detection of stress in real office environments are also discussed.
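    The 10-fold cross-validation procedure mentioned in this abstract can be sketched generically in plain Python (the helper names are hypothetical; the study's actual models were support vector machines, random forests, and gradient boosting):

    ```python
    import random

    def kfold_indices(n, k=10, seed=0):
        """Split sample indices 0..n-1 into k shuffled, near-equal folds."""
        idx = list(range(n))
        random.Random(seed).shuffle(idx)
        return [idx[i::k] for i in range(k)]

    def cross_validate(features, labels, fit, predict, k=10):
        """Average accuracy over k train/test splits.

        fit(X, y) returns a model; predict(model, x) returns a label.
        """
        folds = kfold_indices(len(labels), k)
        scores = []
        for i, test in enumerate(folds):
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            model = fit([features[j] for j in train], [labels[j] for j in train])
            preds = [predict(model, features[j]) for j in test]
            correct = sum(p == labels[j] for p, j in zip(preds, test))
            scores.append(correct / len(test))
        return sum(scores) / k
    ```

    The paper reports macro F1 rather than accuracy, but the fold structure is the same.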
  • Weibel, Raphael; Kerr, Jasmine I.; Naegelin, Mara; et al. (2023)
    Computers in Human Behavior
    Heart rate variability biofeedback (HRV-BF) is frequently used for stress management. Recently, virtual reality technology has gained attention for delivery, promising higher immersion, motivation, and attention than classical screens. However, the effects of different technologies and breathing techniques are not yet understood. In this study, 107 healthy participants completed a session in one of four conditions: HRV-BF on a desktop screen, HRV-BF via head-mounted display (HMD), standardised paced breathing without feedback (sPB) on a screen, or sPB via HMD. All setups significantly reduced perceived stress and increased heart rate variability (HRV). Practising HRV-BF, however, led to significantly greater increases in the low frequency band of HRV and cardiac coherence than sPB, and using an HMD rather than a screen also led to greater increases in cardiac coherence. As for user experience, immersion adaptation and interface quality were higher for HMDs and facilitating conditions were better for screens. While all technique and technology combinations are feasible and effective for stress management, immersing oneself in virtual reality with an HMD for HRV-BF might yield increased benefits in terms of HRV target outcomes and several user experience measures. Future research is necessary to confirm any long-term effects of such a mode of delivery.
  • Kraemer, David J.M.; Schinazi, Victor; Cawkwell, Philip B.; et al. (2017)
    Journal of Experimental Psychology: Learning, Memory, and Cognition
  • Weisberg, Steven M.; Schinazi, Victor; Ferrario, Andrea; et al. (2021)
    PsyArXiv
    Relying on shared tasks and stimuli to conduct research can enhance the replicability of findings and allow a community of researchers to collect large data sets across multiple experiments. This approach is particularly relevant for experiments in spatial navigation, which often require the development of unfamiliar large-scale virtual environments to test participants. One challenge with shared platforms is that undetected technical errors, rather than being restricted to individual studies, become pervasive across many studies. Here, we discuss the discovery of a programming error (a bug) in a virtual environment platform used to investigate individual differences in spatial navigation: Virtual Silcton. The bug resulted in storing the absolute value of an angle in a pointing task rather than the signed angle. This bug was difficult to detect for several reasons, and it rendered the original sign of the angle unrecoverable. To assess the impact of the error on published findings, we collected a new data set for comparison. Our results revealed that the effect of the error on published data is likely to be minimal, partially explaining the difficulty in detecting the bug over the years. We also used the new data set to develop a tool that allows researchers who have previously used Virtual Silcton to evaluate the impact of the bug on their findings. We summarize the ways that shared open materials, shared data, and collaboration can pave the way for better science to prevent errors in the future.
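    A minimal sketch of the kind of error described above (the function names are hypothetical; Virtual Silcton's actual code is not shown in the abstract): storing the absolute value of a pointing error collapses clockwise and counterclockwise misses into a single number, so the original sign cannot be recovered from the stored data.

    ```python
    def signed_pointing_error(pointed_deg, correct_deg):
        """Signed angular difference in degrees, wrapped to (-180, 180]."""
        diff = (pointed_deg - correct_deg) % 360.0
        if diff > 180.0:
            diff -= 360.0
        return diff

    # Two opposite errors of equal magnitude...
    left = signed_pointing_error(350.0, 10.0)   # counterclockwise miss
    right = signed_pointing_error(30.0, 10.0)   # clockwise miss

    # ...become indistinguishable once only abs() is stored.
    stored_left, stored_right = abs(left), abs(right)
    ```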
  • Schinazi, Victor (2015)
    SILC Showcase
    Vespucci Institute 2014: Brain and Space (Item type: Other Publication)
  • Li, Hengshan; Thrash, Tyler; Hölscher, Christoph; et al. (2019)
    Journal of Environmental Psychology