If you worry about humanity, you should be more scared of humans than of AI

Moran Cerf*, Adam Waytz

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Advances in artificial intelligence (AI) have prompted extensive public concern about this technology’s capacity to spread misinformation, entrench algorithmic bias, enable cybersecurity breaches, and potentially pose existential threats to humanity. We suggest that although these threats are real and important to address, the heightened attention to AI’s harms has distracted from human beings’ outsized role in perpetuating these same harms. We suggest the need to recalibrate standards for judging the dangers of AI in terms of their risks relative to those posed by human beings. Further, we suggest that, if anything, AI can aid human decision making aimed at improving social equality, safety, and productivity, and at mitigating some existential threats.

Original language: English (US)
Pages (from-to): 289-292
Number of pages: 4
Journal: Bulletin of the Atomic Scientists
Volume: 79
Issue number: 5
DOIs
State: Published - 2023

Keywords

  • algorithmic bias
  • artificial intelligence
  • cybersecurity
  • ethics
  • existential risk
  • nuclear decision making

ASJC Scopus subject areas

  • Political Science and International Relations
