Nadine Bienefeld
Last Name
Bienefeld
First Name
Nadine
ORCID
Organisational unit
01359 - Lehre Management, Technologie u. Ök.
Search Results
Publications 1 - 10 of 41
- Leadership, boundary-spanning, and voice in high-risk multiteam systems
  Item type: Doctoral Thesis
  Bienefeld, Nadine (2012)
- Shared Leadership in Multiteam Systems
  Item type: Journal Article
  Human Factors
  Bienefeld, Nadine; Grote, Gudela (2014)
- Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare
  Item type: Journal Article
  Risk Analysis
  Kerstan, Sophie; Bienefeld, Nadine; Grote, Gudela (2024)
  The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers for successful implementation. Therefore, it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, and knowledge about AI, relate to likelihood perceptions of risks and benefits of AI in healthcare and preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path for implicit comparative trust associations on relative preferences for AI over physicians is only significant through risk, but not through benefit perceptions. This finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed considering the conceptualization of trust as heuristic and dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted.
- Adaptive coordination and heedfulness make better cockpit crews
  Item type: Journal Article
  Ergonomics
  Grote, G.; Kolbe, M.; Zala-Mezö, Enikö; et al. (2010)
- Emergency at 35’000 ft.
  Item type: Conference Paper
  Proceedings of the 29th EAAP Conference on Performance, Safety and Well-being in Aviation
  Bienefeld, Nadine; Grote, Gudela (2010)
- Speaking up in Multiteam Systems: Effects of status, psychological safety and leadership
  Item type: Other Conference Item
  Bienefeld, Nadine; Grote, Gudela (2012)
- Leadership and Decision Making in Multiteam Systems under Stress
  Item type: Working Paper
  Bienefeld, Nadine; Grote, Gudela (2011)
- Human-AI teaming: leveraging transactive memory and speaking up for enhanced team effectiveness
  Item type: Journal Article
  Frontiers in Psychology
  Bienefeld, Nadine; Kolbe, Michaela; Camen, Giovanni; et al. (2023)
  In this prospective observational study, we investigate the role of transactive memory and speaking up in human-AI teams comprising 180 intensive care (ICU) physicians and nurses working with AI in a simulated clinical environment. Our findings indicate that interactions with AI agents differ significantly from human interactions, as accessing information from AI agents is positively linked to a team’s ability to generate novel hypotheses and demonstrate speaking-up behavior, but only in higher-performing teams. Conversely, accessing information from human team members is negatively associated with these aspects, regardless of team performance. This study is a valuable contribution to the expanding field of research on human-AI teams and team science in general, as it emphasizes the necessity of incorporating AI agents as knowledge sources in a team’s transactive memory system, as well as highlighting their role as catalysts for speaking up. Practical implications include suggestions for the design of future AI systems and human-AI team training in healthcare and beyond.
- Speaking up in ad hoc multiteam systems: Individual-level effects of psychological safety, status, and leadership within and across teams
  Item type: Journal Article
  European Journal of Work and Organizational Psychology
  Bienefeld, Nadine; Grote, Gudela (2014)
- From tools to teammates: The role of team mental models and adaptive coordination on performance in human-AI teams
  Item type: Other Conference Item
  Bienefeld, Nadine; Kolbe, Michaela; Bühler, Philipp (2022)