In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions

Open access
Date: 2020-09
Type: Journal Article
Abstract
The real engines of the artificial intelligence (AI) revolution, machine learning (ML) models and algorithms, are nowadays embedded in many of the services and products around us. We argue that, as a society, it is now necessary to transition to a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In this contribution, we focus on selected ethical investigations around AI by proposing an incremental model of trust that can be applied to both human-human and human-AI interactions. Starting with a quick overview of existing accounts of trust, with special attention to Taddeo’s concept of “e-trust,” we discuss all the components of the proposed model and the reasons to trust in human-AI interactions, using an example of relevance for business organizations. We end this contribution with an analysis of the epistemic and pragmatic reasons for trust in human-AI interactions and with a discussion of the kinds of normativity involved in the trustworthiness of AIs.
Permanent link: https://doi.org/10.3929/ethz-b-000383241
Publication status: published
Journal / series: Philosophy & Technology
Publisher: Springer
Subject: Artificial intelligence (AI); Trust; Trustworthiness; E-trust
Organisational unit: 02120 - Dep. Management, Technologie und Ökon. / Dep. of Management, Technology, and Ec.; 03995 - von Wangenheim, Florian