Journal: International Journal of Social Robotics
Publisher: Springer

Search Results: Publications 1–9 of 9
- Softness, Warmth, and Responsiveness Improve Robot Hugs (Journal Article)
  Block, Alexis E.; Kuchenbecker, Katherine J. (2019)
  Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.

- Body Form Modulates the Prediction of Human and Artificial Behaviour from Gaze Observation (Journal Article)
  Scandola, Michele; Cross, Emily S.; Caruana, Nathan; et al. (2023)
  The future of human–robot collaboration relies on people’s ability to understand and predict robots' actions. The machine-like appearance of robots, as well as contextual information, may influence people’s ability to anticipate the behaviour of robots. We conducted six separate experiments to investigate how spatial cues and task instructions modulate people’s ability to understand what a robot is doing. Participants observed goal-directed and non-goal directed gaze shifts made by human and robot agents, as well as directional cues displayed by a triangle. We report that biasing an observer's attention, by showing just one object an agent can interact with, can improve people’s ability to understand what humanoid robots will do. Crucially, this cue had no impact on people’s ability to predict the upcoming behaviour of the triangle. Moreover, task instructions that focus on the visual and motor consequences of the observed gaze were found to influence mentalising abilities. We suggest that the human-like shape of an agent and its physical capabilities facilitate the prediction of an upcoming action. The reported findings expand current models of gaze perception and may have important implications for human–human and human–robot collaboration.

- Co-existing with Drones: A Virtual Exploration of Proxemic Behaviours and Users' Insights on Social Drones (Journal Article)
  Bretin, Robin; Cross, Emily S.; Khamis, Mohamed (2024)
  Numerous studies have investigated proxemics in the context of human-robot interactions, but little is known about whether these insights can be applied to human-drone interactions (HDI). As drones become more common in social settings, it is crucial to ensure they navigate in a socially acceptable and human-friendly way. Understanding how individuals position themselves around drones is vital to promote user well-being and drones' social acceptance. However, real-world constraints and risks associated with drones flying in close proximity to participants have limited research in this field. Virtual reality is a promising alternative for investigating HDI, as prior research suggests. This paper presents a proxemic user study (N = 45) in virtual reality, examining how drone height and framing influence participants' proxemic preferences. The study also explores participants' perceptions of social drones and their vision for the future of flying robots. Our findings show that drone height significantly impacted participants' preferred interpersonal distance, while framing had no significant effect. Participants' thoughts on how they envision social drones (e.g., interaction, design, applications) reveal interpersonal differences but also show overall consistency over time. While the study demonstrates the value of using virtual reality for HDI experiments, further research is necessary to determine the generalizability of our findings to real-world HDI scenarios.

- Building Long-Term Human–Robot Relationships: Examining Disclosure, Perception and Well-Being Across Time (Journal Article)
  Laban, Guy; Kappas, Arvid; Morrison, Val; et al. (2024)
  While interactions with social robots are novel and exciting for many people, one concern is the extent to which people’s behavioural and emotional engagement might be sustained across time, since during initial interactions with a robot, its novelty is especially salient. This challenge is particularly noteworthy when considering interactions designed to support people’s well-being, with limited evidence (or empirical exploration) of social robots’ capacity to support people’s emotional health over time. Accordingly, our aim here was to examine how long-term repeated interactions with a social robot affect people’s self-disclosure behaviour toward the robot, their perceptions of the robot, and how such sustained interactions influence factors related to well-being. We conducted a mediated long-term online experiment with participants conversing with the social robot Pepper 10 times over 5 weeks. We found that people self-disclose increasingly more to a social robot over time, and report the robot to be more social and competent over time. Participants’ moods also improved after talking to the robot, and across sessions, they found the robot’s responses increasingly comforting as well as reported feeling less lonely. Finally, our results emphasize that when the discussion frame was supposedly more emotional (in this case, framing questions in the context of the COVID-19 pandemic), participants reported feeling lonelier and more stressed. These results set the stage for situating social robots as conversational partners and provide crucial evidence for their potential inclusion in interventions supporting people’s emotional health through encouraging self-disclosure.

- Coping with Emotional Distress via Self-Disclosure to Robots: An Intervention with Caregivers (Journal Article)
  Laban, Guy; Morrison, Val; Kappas, Arvid; et al. (2025)
  People often engage in self-disclosure and social sharing when trying to cope with emotional distress. This study introduces a novel long-term intervention designed to help informal caregivers cope with emotional distress by self-disclosing towards a social robot. Research indicates that informal caregivers frequently face challenges in handling the emotional and practical demands of caregiving, often experiencing a lack of social support and limited social interaction. Accordingly, we explored the extent of informal caregivers’ self-disclosure behaviour towards a social robot (Pepper, SoftBank Robotics) over time, and how their perceptions of the robot evolved. Additionally, we examined how this intervention affected caregivers’ moods, perceptions of the robot as comforting, feelings of loneliness, stress levels, as well as its impact on their emotion regulation. We replicated a previous long-term experiment [1] with a dedicated sample of informal caregivers who interacted with Pepper 10 times over five weeks, discussing everyday topics. Our results show that caregivers increasingly self-disclosed to the robot over time, perceiving it as more social and competent. Participants’ moods improved following interactions, and they viewed the robot as increasingly comforting. They also reported feeling progressively less lonely and stressed. Thus, our findings with informal caregivers replicated those of [1]. Moreover, after the intervention, caregivers reported greater acceptance of their caregiving roles, reappraising them more positively, and reduced feelings of blame towards others. These results highlight the potential of social robots to provide emotional support for individuals coping with emotional distress.

- Quantifying the Human Likeness of a Humanoid Robot (Journal Article)
  von Zitzewitz, Joachim; Bösch, Patrick M.; Wolf, Peter; et al. (2013)

- Social Robots on a Global Stage: Establishing a Role for Culture During Human–Robot Interaction (Journal Article)
  Lim, Velvetina; Rooksby, Maki; Cross, Emily S. (2021)
  Robotic agents designed to assist people across a variety of social and service settings are becoming increasingly prevalent across the world. Here we synthesise two decades of empirical evidence from human–robot interaction (HRI) research to focus on cultural influences on expectations towards and responses to social robots, as well as the utility of robots displaying culturally specific social cues for improving human engagement. Findings suggest complex and intricate relationships between culture and human cognition in the context of HRI. The studies reviewed here transcend the often-studied and prototypical east–west dichotomy of cultures, and explore how people’s perceptions of robots are informed by their national culture as well as their experiences with robots. Many of the findings presented in this review raise intriguing questions concerning future directions for robotics designers and cultural psychologists, in terms of conceptualising and delivering culturally sensitive robots. We point out that such development is currently limited by heterogeneous methods and low statistical power, which contribute to a concerning lack of generalisability. We also propose several avenues through which future work may begin to address these shortcomings. In sum, we highlight the critical role of culture in mediating efforts to develop robots aligned with human users’ cultural backgrounds, and argue for further research into the role of culturally-informed robotic development in facilitating human–robot interaction.

- Dual-Path Transformer-Based GAN for Co-speech Gesture Synthesis (Journal Article)
  Qian, Xinyuan; Tang, Hao; Yang, Jichen; et al. (2025)
  Co-speech gestures have significant impacts on conveying information. For social agents, producing realistic and smooth gestures is crucial to enabling natural interactions with humans, a challenging task that depends on many factors (e.g., speech audio, content, and the interacting person). In this paper, we tackle the cross-modal fusion problem through a novel fusion mechanism for end-to-end learning-based co-speech gesture generation. In particular, we employ parallel directional cross-modal transformers, and an interactive and cascaded 2D attention module, to achieve selective fusion of the gesture-related cues. In addition, we propose new metrics to evaluate gesture diversity and speech-gesture correspondence without requiring 3D pose annotations. Experiments on a public dataset indicate that the proposed method can successfully produce diverse human-like poses, outperforming competitive state-of-the-art methods in both objective and subjective evaluations.

- Human-Robot Cooperation in Economic Games: People Show Strong Reciprocity but Conditional Prosociality Toward Robots (Journal Article)
  Hsieh, Te-Yi; Chaudhury, Bishakha; Cross, Emily S. (2023)
  Understanding how people socially engage with robots is becoming increasingly important as these machines are deployed in social settings. We investigated 70 participants' situational cooperation tendencies towards a robot using prisoner's dilemma games, manipulating the incentives for cooperative decisions to be high or low. We predicted that people would cooperate more often with the robot in high-incentive conditions. We also administered subjective measures to explore the relationships between people's cooperative decisions and their social value orientation, attitudes towards robots, and anthropomorphism tendencies. Our results showed incentive structure did not predict human cooperation overall, but did influence cooperation in early rounds, where participants cooperated significantly more in high-incentive conditions. Exploratory analyses further revealed that participants played a tit-for-tat strategy against the robot (whose decisions were random), and only behaved prosocially toward the robot when they had achieved high scores themselves. These findings highlight how people make social decisions when their individual profit is at odds with collective profit with a robot, and advance understanding of human-robot interaction in collaborative contexts.
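The tit-for-tat pattern reported in the last abstract (participants mirroring the robot's random choices) can be illustrated with a minimal simulation. This is a sketch only: the payoff values below are standard prisoner's-dilemma defaults assumed for illustration, not the incentive structures used in the study, and all function names are hypothetical.

```python
import random

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def random_robot(rng):
    """The robot in the study chose randomly each round."""
    return rng.choice(["C", "D"])

# Standard prisoner's dilemma payoff matrix (T > R > P > S); these exact
# point values are an assumption, not taken from the paper.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play_rounds(n_rounds, seed=0):
    rng = random.Random(seed)
    human_history, robot_history = [], []
    human_score = robot_score = 0
    for _ in range(n_rounds):
        h = tit_for_tat(robot_history)  # human mirrors robot's last move
        r = random_robot(rng)           # robot plays randomly
        human_history.append(h)
        robot_history.append(r)
        h_pay, r_pay = PAYOFFS[(h, r)]
        human_score += h_pay
        robot_score += r_pay
    return human_history, robot_history, human_score, robot_score

if __name__ == "__main__":
    human, robot, h_score, r_score = play_rounds(20)
    # Under tit-for-tat, every human move after the first equals the
    # robot's move from the previous round.
    assert all(human[i] == robot[i - 1] for i in range(1, len(human)))
    print(h_score, r_score)
```

Against a purely random opponent, tit-for-tat simply echoes noise one round late, which is why the signature of the strategy shows up in move-by-move correlations rather than in overall cooperation rates.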