Abstract
Although traditional models of decision making in AI have focused on utilitarian theories, there is considerable psychological evidence that these theories fail to capture the full spectrum of human decision making (e.g., Kahneman and Tversky 1979; Ritov and Baron 1999). Current theories of moral decision making extend beyond pure utilitarian models by relying on contextual factors that vary with culture. In particular, research on moral reasoning has uncovered a conflict between normative outcomes and intuitive judgments. This has led some researchers to propose the existence of deontological moral rules, that is, rules holding that some actions are immoral regardless of their consequences, which can block utilitarian motives. Consider the starvation scenario (from Ritov and Baron [1999]) that follows: A convoy of food trucks is on its way to a refugee camp during a famine in Africa. (Airplanes cannot be used.) You find that a second camp has even more refugees. If you tell the convoy to go to the second camp instead of the first, you will save one thousand people from death, but one hundred people in the first camp will die as a result. Would you send the convoy to the second camp? The utilitarian decision would send the convoy to the second camp, but 63 percent of participants did not divert the convoy. Making these types of decisions automatically requires an integrated approach, including natural language understanding, qualitative reasoning, analogical reasoning, and first-principles reasoning.
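To make the contrast in the scenario concrete, here is a minimal sketch, not the chapter's implementation, of how a pure utilitarian evaluation differs from one constrained by a deontological rule. The 1000/100 figures come from the scenario above; all names (`Option`, `utilitarian_choice`, `violates_do_no_harm`, `constrained_choice`) are illustrative assumptions.

```python
# Minimal sketch: a utilitarian evaluator vs. a deontological constraint on
# the starvation scenario. All identifiers are illustrative, not from the chapter.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    lives_saved: int      # people saved if this option is chosen
    deaths_caused: int    # deaths caused as a direct result of choosing it

# The two options in Ritov and Baron's scenario. Deaths at the second camp under
# "stay" are treated as omissions, not as deaths caused by the decision maker.
stay   = Option("keep convoy at first camp",  lives_saved=100,  deaths_caused=0)
divert = Option("send convoy to second camp", lives_saved=1000, deaths_caused=100)

def utilitarian_choice(options):
    """Pick the option with the best net outcome (lives saved minus deaths caused)."""
    return max(options, key=lambda o: o.lives_saved - o.deaths_caused)

def violates_do_no_harm(option):
    """Deontological rule: an action that itself causes deaths is impermissible."""
    return option.deaths_caused > 0

def constrained_choice(options):
    """Utilitarian choice restricted to options permitted by the deontological rule."""
    permitted = [o for o in options if not violates_do_no_harm(o)]
    return utilitarian_choice(permitted) if permitted else None

print(utilitarian_choice([stay, divert]).name)  # send convoy to second camp
print(constrained_choice([stay, divert]).name)  # keep convoy at first camp
```

Under these assumptions the utilitarian evaluator diverts the convoy (net gain of 900 lives), while the rule-constrained procedure refuses, matching the majority response reported in the abstract.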
| Original language | English (US) |
| --- | --- |
| Title of host publication | Machine Ethics |
| Publisher | Cambridge University Press |
| Pages | 422-441 |
| Number of pages | 20 |
| Volume | 9780521112352 |
| ISBN (Electronic) | 9780511978036 |
| ISBN (Print) | 9780521112352 |
| DOIs | |
| State | Published - Jan 1 2011 |
ASJC Scopus subject areas
- General Computer Science