Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning

Ryan Jenkins*, Kristian Hammond, Sarah Spurlock, Leilani Gilpin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we outline a new method for evaluating the human impact of machine-learning (ML) applications. In partnership with Underwriters Laboratories Inc., we have developed a framework to evaluate the impacts of a particular use of machine learning that is based on the goals and values of the domain in which that application is deployed. By examining the use of artificial intelligence (AI) in particular domains, such as journalism, criminal justice, or law, we can develop more nuanced and practically relevant understandings of key ethical guidelines for artificial intelligence. By decoupling the extraction of the facts of the matter from the evaluation of the impact of the resulting systems, we create a framework for the process of assessing impact that has two distinctly different phases.

Original language: English (US)
Journal: AI and Society
DOIs
State: Accepted/In press - 2022

Keywords

  • Design for values
  • Impact assessment
  • Machine learning
  • Operationalizing
  • Practice dependence

ASJC Scopus subject areas

  • Philosophy
  • Human-Computer Interaction
  • Artificial Intelligence

