Abstract
Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently, namely the extent to which a nonhuman agent is anthropomorphized with a humanlike mind, in a domain of practical importance: autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features: a name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 113-117 |
| Number of pages | 5 |
| Journal | Journal of Experimental Social Psychology |
| Volume | 52 |
| DOIs | |
| State | Published - May 2014 |
Funding
This research was funded by the University of Chicago's Booth School of Business and a grant from the General Motors Company. We thank Julia Hur for assistance with data coding.
Keywords
- Anthropomorphism
- Dehumanization
- Human-computer interaction
- Mind perception
- Moral responsibility
- Trust
ASJC Scopus subject areas
- Social Psychology
- Sociology and Political Science