BEEF: Balanced English Explanations of Forecasts

Sachin Grover, Chiara Pulice*, Gerardo I. Simari, V. S. Subrahmanian

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Understanding why different machine learning classifiers make specific predictions is difficult, mainly because the inner workings of the underlying algorithms do not lend themselves to the direct extraction of succinct explanations. In this paper, we address the problem of automatically extracting balanced explanations from the predictions of any classifier: explanations that cover not only why the prediction might be correct but also why it could be wrong. Our framework, called Balanced English Explanations of Forecasts (BEEF), generates such explanations in natural language. After showing that the problem of generating explanations is NP-complete, we develop a heuristic algorithm and empirically show that it produces high-quality results, both in terms of objective measures - with statistically significant effects for several parameter variations - and subjective evaluations based on a survey completed by 100 anonymous participants recruited via Amazon Mechanical Turk.
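
The abstract describes the approach only at a high level. As a rough illustration of what a "balanced" explanation could look like, the following is a minimal, hypothetical Python sketch; it is not the BEEF algorithm from the paper. It uses a simple model-agnostic perturbation heuristic: features whose neutralization lowers the classifier's confidence in the predicted class "support" the prediction, while features whose neutralization raises that confidence "oppose" it. The classifier, synthetic data, and feature names are all invented for the example.

```python
# Hypothetical sketch of a balanced explanation for a classifier prediction.
# NOT the BEEF algorithm; a model-agnostic perturbation heuristic for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # synthetic feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)
feature_names = ["f0", "f1", "f2", "f3"]           # invented feature names

clf = RandomForestClassifier(random_state=0).fit(X, y)

def balanced_explanation(clf, x, feature_names, baseline):
    """Split features into those supporting and those opposing clf's prediction on x."""
    pred = clf.predict([x])[0]
    conf = clf.predict_proba([x])[0][pred]
    supporting, opposing = [], []
    for j, name in enumerate(feature_names):
        x_pert = x.copy()
        x_pert[j] = baseline[j]                    # neutralize feature j (set to its mean)
        conf_pert = clf.predict_proba([x_pert])[0][pred]
        delta = conf - conf_pert                   # confidence lost without feature j
        if delta > 0:
            supporting.append((name, delta))
        elif delta < 0:
            opposing.append((name, -delta))
    return pred, supporting, opposing

x = X[0]
pred, sup, opp = balanced_explanation(clf, x, feature_names, X.mean(axis=0))
print(f"Predicted class {pred} might be correct because of: "
      + ", ".join(n for n, _ in sorted(sup, key=lambda t: -t[1])))
print("...but it could be wrong because of: "
      + ", ".join(n for n, _ in sorted(opp, key=lambda t: -t[1])))
```

Rendering both lists in ranked order mirrors the two-sided structure the abstract describes: one English sentence for the supporting evidence, one for the opposing evidence.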

Original language: English (US)
Article number: 8668423
Pages (from-to): 350-364
Number of pages: 15
Journal: IEEE Transactions on Computational Social Systems
Volume: 6
Issue number: 2
State: Published - Apr 2019
Externally published: Yes

Keywords

  • Decision support systems
  • knowledge engineering
  • machine learning

ASJC Scopus subject areas

  • Modeling and Simulation
  • Social Sciences (miscellaneous)
  • Human-Computer Interaction
