Large Language Models Need Symbolic AI

Kristian Hammond, David Leake*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Peer-reviewed


Abstract

The capability of systems based on large language models (LLMs), such as ChatGPT, to generate human-like text has captured the attention of the public and the scientific community. It has prompted both predictions that systems such as ChatGPT will transform AI and enumerations of system problems with hopes of solving them by scale and training. This position paper argues that both over-optimistic views and disappointments reflect misconceptions of the fundamental nature of LLMs as language models. As such, they are statistical models of language production and fluency, with associated strengths and limitations; they are not—and should not be expected to be—knowledge models of the world, nor do they reflect the core role of language beyond the statistics: communication. The paper argues that realizing that role will require driving LLMs with symbolic systems based on goals, facts, reasoning, and memory.

Original language: English (US)
Pages (from-to): 204-209
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 3432
State: Published - 2023
Event: 17th International Workshop on Neural-Symbolic Learning and Reasoning, NeSy 2023 - Siena, Italy
Duration: Jul 3, 2023 to Jul 5, 2023

Funding

Funding for the first author's work was provided by UL Research Institutes through the Center for Advancing Safety of Machine Intelligence. The second author's work was funded by the US Department of Defense (Contract W52P1J2093009), and by the Department of the Navy, Office of Naval Research (Award N00014-19-1-2655).

Keywords

  • ChatGPT
  • Large language models
  • Natural Language Understanding
  • Neuro-Symbolic AI

ASJC Scopus subject areas

  • General Computer Science
