TY - JOUR
T1 - Good Robots, Bad Robots
T2 - Morally Valenced Behavior Effects on Perceived Mind, Morality, and Trust
AU - Banks, Jaime
N1 - Publisher Copyright:
© 2020, The Author(s).
PY - 2021/12
Y1 - 2021/12
AB - Both robots and humans can behave in ways that engender positive and negative evaluations of their behaviors and associated responsibility. However, extant scholarship on the link between agent evaluations and valenced behavior has generally treated moral behavior as a monolithic phenomenon and largely focused on moral deviations. In contrast, contemporary moral psychology increasingly considers moral judgments to unfold in relation to a number of moral foundations (care, fairness, authority, loyalty, purity, liberty) subject to both upholding and deviation. The present investigation seeks to discover whether social judgments of humans and robots emerge differently as a function of moral foundation-specific behaviors. This work is conducted in two studies: (1) an online survey in which agents deliver observed/mediated responses to moral dilemmas and (2) a smaller laboratory-based replication with agents delivering interactive/live responses. In each study, participants evaluate the goodness of and blame for six foundation-specific behaviors, and evaluate the agent for perceived mind, morality, and trust. Across these studies, results suggest that (a) moral judgments of behavior may be agent-agnostic, (b) all moral foundations may contribute to social evaluations of agents, and (c) physical presence and agent class contribute to the assignment of responsibility for behaviors. Findings are interpreted to suggest that bad behaviors denote bad actors, broadly, but machines bear a greater burden to behave morally, regardless of their credit- or blame-worthiness in a situation.
KW - Moral foundations
KW - Morality
KW - Ontological categories
KW - Social cognition
KW - Trust
UR - http://www.scopus.com/inward/record.url?scp=85090793534&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090793534&partnerID=8YFLogxK
DO - 10.1007/s12369-020-00692-3
M3 - Article
AN - SCOPUS:85090793534
SN - 1875-4791
VL - 13
SP - 2021
EP - 2038
JO - International Journal of Social Robotics
JF - International Journal of Social Robotics
IS - 8
ER -