Reasoning for Social Autonomous Agents

Project: Research project

Project Details

Description

The problems facing the nation are growing ever more complex, while the cognitive capabilities of people remain essentially unchanged. Artificial intelligence and machine learning software help somewhat, but today's systems can be brittle, require considerable human expertise to maintain and adapt, and need orders of magnitude more data than humans do for the same problems. This project is exploring reasoning for social autonomous agents, essentially software social organisms, to overcome these issues. The effort has three components:

1. Social reasoning via analogical reasoning over approximations to experience, including reasoning about social norms.
2. Cognitive control, where an agent maintains its own internal environment to guide its reasoning and learning. This includes computational models of internal signals such as surprise, curiosity, frustration, and boredom (a minimal illustrative sketch appears below).
3. Broad reasoning, the ability to handle open-ended questions with little problem-specific data. This includes computational versions of common human reasoning heuristics, as well as examination of the kinds of cognitive illusions that human analysts exhibit.

We will build on our Companion cognitive architecture for this research and will collaborate with AFRL's ACT3, both to transfer our technology to them and as a source of relevant problems and data. If successful, this work will be a step closer to systems that we can work with as partners rather than tools, and that provide strengths complementary to human reasoning.
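The second component refers to computational models of internal signals. As a rough sketch only, and not the project's actual Companion implementation, one simple way to approximate such signals is to treat surprise as prediction error that spikes above its recent average and boredom as sustained low error; the class, window size, and threshold below are illustrative assumptions.

    # Illustrative sketch only; not the Companion architecture's API.
    # Assumptions for illustration: surprise ~ error above the recent average,
    # boredom ~ sustained low average error.
    from collections import deque

    class InternalSignals:
        def __init__(self, window=20, boredom_threshold=0.1):
            self.errors = deque(maxlen=window)       # recent prediction errors
            self.boredom_threshold = boredom_threshold

        def observe(self, predicted, actual):
            error = abs(actual - predicted)
            self.errors.append(error)
            mean_error = sum(self.errors) / len(self.errors)
            surprise = max(0.0, error - mean_error)   # spike above recent baseline
            bored = mean_error < self.boredom_threshold  # nothing new to learn
            return {"surprise": surprise, "bored": bored}

    # Example: a large deviation from recent predictions yields high surprise.
    signals = InternalSignals()
    for predicted, actual in [(1.0, 1.05), (1.0, 1.02), (1.0, 3.0)]:
        print(signals.observe(predicted, actual))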
Status: Active
Effective start/end date: 7/15/20 – 7/14/25

Funding

  • Air Force Office of Scientific Research (FA9550-20-1-0091 P0003)
