TY - JOUR
T1 - Avoiding the Abject and Seeking the Script: Perceived Mind, Morality, and Trust in a Persuasive Social Robot
AU - Banks, Jaime
AU - Koban, Kevin
AU - Haggadone, Brad
N1 - Publisher Copyright:
© 2023 Copyright held by the owner/author(s).
PY - 2023/4/15
Y1 - 2023/4/15
N2 - Social robots are being groomed for human influence, including the implicit and explicit persuasion of humans. Humanlike characteristics are understood to enhance robots' persuasive impact; however, little is known of how perceptions of two key human capacities - mind and morality - function in robots' persuasive potential. This experiment tests the possibility that perceived robot mind and morality will correspond with greater persuasive impact, moderated by relational trustworthiness for a moral appeal and by capacity trustworthiness for a logical appeal. Via an online survey, a humanoid robot asks participants to help it learn to overcome CAPTCHA puzzles to access important online spaces - either on grounds that it is logical or moral to do so. Based on three performance indicators and one self-report indicator of compliance, analysis indicates that (a) seeing the robot as able to perceive and act on the world selectively improves compliance, and (b) perceiving agentic capacity diminishes compliance, though capacity trustworthiness can moderate that reduction. For logical appeals, social-moral mental capacities promote compliance, moderated by capacity trustworthiness. Findings suggest that, in this compliance scenario, the accessibility of schemas and scripts for engaging robots as social-moral actors may be central to whether/how perceived mind, morality, and trust function in machine persuasion.
KW - Social robots
KW - CAPTCHA
KW - abjection
KW - mentalizing
KW - moral agency
KW - persuasion
KW - schema
UR - http://www.scopus.com/inward/record.url?scp=85164235921&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164235921&partnerID=8YFLogxK
U2 - 10.1145/3572036
DO - 10.1145/3572036
M3 - Article
AN - SCOPUS:85164235921
SN - 2573-9522
VL - 12
JO - ACM Transactions on Human-Robot Interaction
JF - ACM Transactions on Human-Robot Interaction
IS - 3
M1 - 3572036
ER -