TY - GEN
T1 - Workshop on Trust and Reliance in AI-Human Teams (TRAIT)
AU - Bansal, Gagan
AU - Smith-Renner, Alison Marie
AU - Buçinca, Zana
AU - Wu, Tongshuang
AU - Holstein, Kenneth
AU - Hullman, Jessica
AU - Stumpf, Simone
N1 - Publisher Copyright:
© 2022 Owner/Author.
PY - 2022/4/27
Y1 - 2022/4/27
N2 - As humans increasingly interact (and even collaborate) with AI systems during decision-making, creative exercises, and other tasks, appropriate trust and reliance are necessary to ensure proper usage and adoption of these systems. Specifically, people should understand when to trust or rely on an algorithm's outputs and when to override them. While significant research has aimed to measure and promote trust in human-AI interaction, the field lacks synthesized definitions and a shared understanding of results across contexts. Indeed, conceptualizing trust and reliance, and identifying the best ways to measure these constructs and effectively shape them in human-AI interactions, remains a challenge. This workshop aims to establish building appropriate trust and reliance on (imperfect) AI systems as a vital, yet under-explored, research problem. The workshop will provide a venue for exploring three broad aspects of human-AI trust: (1) How do we clarify definitions and frameworks relevant to human-AI trust and reliance (e.g., what does trust mean in different contexts)? (2) How do we measure trust and reliance? And (3) how do we shape trust and reliance? As these problems and solutions involving humans and AI are interdisciplinary in nature, we invite participants with expertise in HCI, AI, ML, psychology, social science, or other relevant fields to foster closer communication and collaboration between multiple communities.
AB - As humans increasingly interact (and even collaborate) with AI systems during decision-making, creative exercises, and other tasks, appropriate trust and reliance are necessary to ensure proper usage and adoption of these systems. Specifically, people should understand when to trust or rely on an algorithm's outputs and when to override them. While significant research has aimed to measure and promote trust in human-AI interaction, the field lacks synthesized definitions and a shared understanding of results across contexts. Indeed, conceptualizing trust and reliance, and identifying the best ways to measure these constructs and effectively shape them in human-AI interactions, remains a challenge. This workshop aims to establish building appropriate trust and reliance on (imperfect) AI systems as a vital, yet under-explored, research problem. The workshop will provide a venue for exploring three broad aspects of human-AI trust: (1) How do we clarify definitions and frameworks relevant to human-AI trust and reliance (e.g., what does trust mean in different contexts)? (2) How do we measure trust and reliance? And (3) how do we shape trust and reliance? As these problems and solutions involving humans and AI are interdisciplinary in nature, we invite participants with expertise in HCI, AI, ML, psychology, social science, or other relevant fields to foster closer communication and collaboration between multiple communities.
KW - human-centered artificial intelligence
KW - reliance
KW - trust
KW - uncertainty
UR - http://www.scopus.com/inward/record.url?scp=85129749281&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85129749281&partnerID=8YFLogxK
U2 - 10.1145/3491101.3503704
DO - 10.1145/3491101.3503704
M3 - Conference contribution
AN - SCOPUS:85129749281
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2022 - Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
T2 - 2022 CHI Conference on Human Factors in Computing Systems, CHI EA 2022
Y2 - 30 April 2022 through 5 May 2022
ER -