Framing Effects on Judgments of Social Robots’ (Im)Moral Behaviors

Jaime Banks, Kevin Koban

Research output: Contribution to journal › Article › peer-review



Frames—discursive structures that make dimensions of a situation more or less salient—are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how they are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents—especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android’s (im)moral behavior, and experimentally testing how produced frames prime judgments about an android’s morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot’s morally ambiguous behavior. Evidence also suggests that technophobia-induced reactance may move people to reject a produced frame in favor of a divergent individual frame.

Original language: English (US)
Article number: 627233
Journal: Frontiers in Robotics and AI
State: Published - May 10 2021
Externally published: Yes


Keywords

  • framing theory
  • human–robot interaction
  • mental models
  • moral foundations
  • moral judgment
  • reactance
  • technophobia

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence


