Of Like Mind: The (Mostly) Similar Mentalizing of Robots and Humans

Research output: Contribution to journal › Article › peer-review

17 Scopus citations


Mentalizing is the process of inferring others' mental states, and it contributes to an inferential system known as Theory of Mind (ToM), a system critical to human interactions because it facilitates sense-making and the prediction of future behaviors. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency, and are increasingly integrated into contemporary social life, it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inference, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted for an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically copresent (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of nonliteral language, copresent interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.

Original language: English (US)
Journal: Technology, Mind, and Behavior
Issue number: 2
State: Published - 2021
Externally published: Yes


Keywords
  • heuristics
  • human–machine communication
  • mentalizing
  • social presence
  • social robots

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Experimental and Cognitive Psychology
  • Social Psychology
  • Communication
