Impact of misinformation from generative AI on user information processing: How people understand misinformation from generative AI

Donghee Shin, Amy Koerber, Joon Soo Lim

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

This study examines how users process and respond to misinformation in generative artificial intelligence (GenAI) contexts. Drawing on the heuristic–systematic model and the concept of diagnosticity, our approach develops a cognitive model for processing misinformation in GenAI. The findings revealed that users with a high-heuristic processing mechanism, which affects positive diagnostic perception, were more likely to proactively discern misinformation than users with low-heuristic processing and low perceived diagnosticity. When exposed to misinformation from GenAI, users' perceived diagnosticity of misinformation can be accurately predicted by the ways in which they perform heuristic–systematic evaluations. With this focus on misinformation processing, this study provides theoretical insights and relevant recommendations for firms seeking to be more resilient in protecting users from the detrimental impacts of misinformation.

Original language: English (US)
Journal: New Media and Society
DOIs
State: Accepted/In press - 2024

Keywords

  • Algorithmic effects on misinformation
  • algorithmic misinformation
  • ChatGPT
  • generative AI
  • heuristic–systematic process
  • misinformation-processing model

ASJC Scopus subject areas

  • Communication
  • Sociology and Political Science
