Abstract
Mentalizing is the process of inferring others’ mental states, and it contributes to an inferential system known as Theory of Mind (ToM)—a system that is critical to human interactions because it facilitates sense-making and the prediction of future behavior. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency—and are increasingly integrated into contemporary social life—it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inference, facial affect interpretation, vocal affect interpretation, and false-belief detection) were administered in relation to an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically copresent (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of nonliteral language, copresent interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.
| Original language | English (US) |
|---|---|
| Journal | Technology, Mind, and Behavior |
| Volume | 1 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2021 |
| Externally published | Yes |
Keywords
- heuristics
- human–machine communication
- mentalizing
- social presence
- social robots
ASJC Scopus subject areas
- Human-Computer Interaction
- Experimental and Cognitive Psychology
- Social Psychology
- Communication