AI, you can drive my car: How we evaluate human drivers vs. self-driving cars

Joo Wha Hong*, Ignacio Cruz, Dmitri Williams

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

This study tests how individuals attribute responsibility to an artificially intelligent (AI) agent or a human agent based on the agent's involvement in a negative or positive event. In an online, vignette-based, between-subjects experiment, participants (n = 230) completed a questionnaire measuring the level of responsibility and involvement they attributed to an AI agent or a human agent across rescue (i.e., positive) or accident (i.e., negative) driving scenarios. Results show that individuals are more likely to attribute responsibility to an AI agent during rescues, or positive events. We also find that individuals perceive the actions of AI agents similarly to those of human agents, supporting the CASA framework's claim that technologies can have agentic qualities. To explain why individuals do not always attribute full responsibility for an outcome to an AI agent, we draw on Expectancy Violation Theory to understand why people credit or blame artificial intelligence during unexpected events. Implications of the findings for theory and practical applications are discussed.

Original language: English (US)
Article number: 106944
Journal: Computers in Human Behavior
Volume: 125
DOIs
State: Published - Dec 2021

Keywords

  • Attribution theory
  • Computers-are-social-actors
  • Human-agent communication
  • Human-computer interaction
  • Schema theory
  • Self-driving cars

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Human-Computer Interaction
  • General Psychology
