Nadine Bienefeld
Last Name: Bienefeld
First Name: Nadine
ORCID:
Organisational unit: 01359 - Lehre Management, Technologie u. Ök.
Publications 1 - 10 of 41
- Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals
  Item type: Journal Article. npj Digital Medicine. Bienefeld, Nadine; Boss, Jens Michael; Lüthy, Rahel; et al. (2023)
  Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare.
- Leadership in Airline Crews
  Item type: Report. Bienefeld, Nadine; Grote, Gudela (2009)
- Exceeding the Ordinary: A Framework for Examining Teams Across the Extremeness Continuum and Its Impact on Future Research
  Item type: Journal Article. Group & Organization Management. Schmutz, Jan; Bienefeld, Nadine; Maynard, Travis M.; et al. (2023)
  Work teams increasingly face unprecedented challenges in volatile, uncertain, complex, and often ambiguous environments. In response, team researchers have begun to focus more on teams whose work revolves around mitigating risks in these dynamic environments. Some highly insightful contributions to team research and organizational studies have originated from investigating teams that face unconventional or extreme events. Despite this increased attention to extreme teams, however, a comprehensive theoretical framework is missing. We introduce such a framework that envisions team extremeness as a continuous, multidimensional variable consisting of environmental extremeness (i.e., external team context) and task extremeness (i.e., internal team context). The proposed framework allows every team to be placed on the team extremeness continuum, bridging the gap between literature on extreme and more traditional teams. Furthermore, we present six propositions addressing how team extremeness may interact with team processes, emergent states, and outcomes using core variables for team effectiveness and the well-established input-mediator-output-input model to structure our theorizing. Finally, we outline some potential directions for future research by elaborating on temporal considerations (i.e., patterns and trajectories), measurement approaches, and consideration of multilevel relationships involving team extremeness. We hope that our theoretical framework and theorizing can create a path forward, stimulating future research within the organizational team literature to further examine the impact of team extremeness on team dynamics and effectiveness.
- Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare
  Item type: Journal Article. Risk Analysis. Kerstan, Sophie; Bienefeld, Nadine; Grote, Gudela (2024)
  The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers for successful implementation. Therefore, it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, and knowledge about AI, relate to likelihood perceptions of risks and benefits of AI in healthcare and preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path for implicit comparative trust associations on relative preferences for AI over physicians is only significant through risk, but not through benefit perceptions. This finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed considering the conceptualization of trust as heuristic and dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted.
- Leadership and Decision Making in Multiteam Systems under stress
  Item type: Working Paper. Bienefeld, Nadine; Grote, Gudela (2011)
- Teamwork in an emergency
  Item type: Conference Paper. Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting. Bienefeld, Nadine; Grote, Gudela (2011)
- Human-Centered Design of Future Work Systems: Examples of ICU Doctors and Nurses
  Item type: Conference Paper. Academy of Management Proceedings. Bienefeld, Nadine; Kerstan, Sophie; Grote, Gudela (2020)
- Silence That May Kill: When Aircrew Members Don't Speak up and Why
  Item type: Journal Article. Aviation Psychology and Applied Human Factors. Bienefeld, Nadine; Grote, Gudela (2012)
  Several accidents have shown that crew members’ failure to speak up can have devastating consequences. Despite decades of crew resource management (CRM) training, this problem persists and still poses a risk to flight safety. To resolve this issue, we need to better understand why crew members choose silence over speaking up. We explored past speaking up behavior and the reasons for silence in 1,751 crew members, who reported to have remained silent in half of all speaking up episodes they had experienced. Silence was highest for first officers and pursers, followed by flight attendants, and lowest for captains. Reasons for silence mainly concerned fears of damaging relationships, of punishment, or operational pressures. We discuss significant group differences in the frequencies and reasons for silence and suggest customized interventions to specifically and effectively foster speaking up.
- Researching Teams across the Extremeness Continuum: The Inherent Challenges and Opportunities
  Item type: Other Conference Item. EAWOP 2019: Book of Abstracts. Schmutz, Jan B.; Bienefeld, Nadine; Maynard, Travis M. (2019)
- Emergency at 35’000 ft.
  Item type: Conference Paper. Proceedings of the 29th EAAP Conference on Performance, safety and Well-being in aviation. Bienefeld, Nadine; Grote, Gudela (2010)