TY - JOUR
T1 - Responsible AI Through Conceptual Engineering
AU - Himmelreich, Johannes
AU - Köhler, Sebastian
N1 - Funding Information:
One of the authors held a postdoctoral fellowship funded by Apple Inc., researching the ethics of autonomous systems while spending time at the company. All other authors declare that they have no conflicts of interest and nothing to disclose regarding competing interests, funding, ethical approval, or consent to publish.
Funding Information:
This paper was presented at the AiTech Agora in Delft, the Robots and Responsibility Workshop organized by Leonhard Menges, and the FS Philosophy Forum. We are grateful for the discussions and suggestions we received at these venues. We thank Herman Veluwenkamp and Christine Tiefensee for extensive comments and discussions. S.K.’s research for this paper was conducted while he was a principal investigator of the project group Regulatory Theories of AI of the Centre Responsible Digitality (ZEVEDI).
Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer Nature B.V.
PY - 2022/9
Y1 - 2022/9
AB - The advent of intelligent artificial systems has sparked a dispute about who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. In support of this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate the conceptual choices we have in the light of a systematic understanding of why the concept is important in the first place—in short, the way forward is to engage in conceptual engineering. The paper then illustrates what approaching the responsibility gap problem as a conceptual engineering problem looks like. It outlines argumentative pathways out of the responsibility gap problem and relates these to existing contributions to the dispute.
KW - Artificial intelligence
KW - Conceptual engineering
KW - Ethics of AI
KW - Moral responsibility
KW - Responsibility gap
UR - http://www.scopus.com/inward/record.url?scp=85133392806&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133392806&partnerID=8YFLogxK
U2 - 10.1007/s13347-022-00542-2
DO - 10.1007/s13347-022-00542-2
M3 - Article
AN - SCOPUS:85133392806
SN - 2210-5433
VL - 35
JO - Philosophy and Technology
JF - Philosophy and Technology
IS - 3
M1 - 60
ER -