Human-algorithm interactions help explain the spread of misinformation

Killian L. McLoughlin, William J. Brady*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users' news feeds to maximize attentional capture, moral and emotional information is privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that, to be most effective, interventions aimed at combating misinformation require a dual-pronged approach combining person-centered and design-centered interventions. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.

Original language: English (US)
Article number: 101770
Journal: Current Opinion in Psychology
Volume: 56
DOIs
State: Published - Apr 2024

ASJC Scopus subject areas

  • General Psychology
