Abstract
This study tests how individuals attribute responsibility to an artificially intelligent (AI) agent or a human agent based on the agent's involvement in a negative or positive event. In an online between-subjects vignette experiment, participants (n = 230) completed a questionnaire measuring the level of responsibility and involvement they attributed to an AI agent or a human agent across rescue (i.e., positive) and accident (i.e., negative) driving scenarios. Results show that individuals are more likely to attribute responsibility to an AI agent during rescue, or positive, events. We also find that individuals perceive the actions of AI agents similarly to those of human agents, supporting the Computers-Are-Social-Actors (CASA) framework's claim that technologies can have agentic qualities. To explain why individuals do not always attribute full responsibility for an outcome to an AI agent, we draw on Expectancy Violation Theory to understand why people credit or blame artificial intelligence during unexpected events. Implications of the findings for theory and practical applications are discussed.
| Original language | English (US) |
| --- | --- |
| Article number | 106944 |
| Journal | Computers in Human Behavior |
| Volume | 125 |
| DOIs | |
| State | Published - Dec 2021 |
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. No potential conflict of interest was reported by the authors.
Keywords
- Attribution theory
- Computers-are-social-actors
- Human-agent communication
- Human-computer interaction
- Schema theory
- Self-driving cars
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- Human-Computer Interaction
- General Psychology