Conclusions

Maria Vanina Martinez*, Cristian Molinaro, V. S. Subrahmanian, Leila Amgoud

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Past work on reasoning about inconsistency in AI has suffered from multiple flaws: (i) approaches apply to one logic at a time and are often reinvented for one logic after another; (ii) they assume that the AI researcher will legislate how applications resolve inconsistency, even though the researcher may know nothing about a specific application, which may be built in a completely different time frame and geography than the researcher's own work. In the real world, users are stuck with the consequences of their decisions and would often like to decide what to do with their data, including which data to consider, and which to ignore, when there are inconsistencies. An AI system for reasoning about inconsistent information must support users in their needs rather than force a particular resolution on them. (iii) Most existing frameworks use some form or another of maximal consistent subsets.
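
To make point (iii) concrete: a maximal consistent subset of a knowledge base is a consistent subset that cannot be enlarged with any further formula from the base without becoming inconsistent. The brute-force sketch below enumerates them for a toy knowledge base of propositional literals; the knowledge base, the "~" encoding of negation, and the function names are illustrative assumptions, not material from the book.

```python
from itertools import combinations

# Toy knowledge base of propositional literals; "~x" denotes the negation of x.
# This is a hypothetical illustration, not an example taken from the book.
KB = ["p", "~p", "q", "r"]

def consistent(subset):
    """A set of literals is consistent iff it contains no literal together
    with its complement."""
    lits = set(subset)
    return not any(("~" + l) in lits for l in lits if not l.startswith("~"))

def maximal_consistent_subsets(kb):
    """Enumerate consistent subsets that are not properly contained in any
    other consistent subset (brute force; fine only for tiny KBs)."""
    kb = list(kb)
    found = []
    # Scan from largest to smallest, so any proper superset is found first.
    for k in range(len(kb), 0, -1):
        for combo in combinations(kb, k):
            s = frozenset(combo)
            if consistent(s) and not any(s < bigger for bigger in found):
                found.append(s)
    return found

if __name__ == "__main__":
    for mcs in maximal_consistent_subsets(KB):
        print(sorted(mcs))
    # With KB = {p, ~p, q, r}, the maximal consistent subsets are
    # {p, q, r} and {~p, q, r}.
```

Classical maximal-consistent-subset approaches then reason with what holds in every such subset (or in at least one of them), which is precisely the kind of fixed, researcher-imposed policy the chapter argues against.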

Original language: English (US)
Title of host publication: SpringerBriefs in Computer Science
Publisher: Springer
Pages: 41-42
Number of pages: 2
Edition: 9781461467496
State: Published - 2013
Externally published: Yes

Publication series

Name: SpringerBriefs in Computer Science
Number: 9781461467496
Volume: 0
ISSN (Print): 2191-5768
ISSN (Electronic): 2191-5776

ASJC Scopus subject areas

  • General Computer Science
