I have argued that the goal of human-level AI can be equivalently expressed as creating sufficiently smart software social organisms. This equivalence is useful because the latter formulation makes strong suggestions about how such systems should be evaluated. No single test suffices, a point made apparent by the limitations of Turing's test, whose shortcomings prompted the workshop that motivated the talk on which this article is based. More positively, this formulation provides a framework for organizing a battery of tests, namely the apprenticeship trajectory. An apprentice is initially a student, learning from instructors through carefully designed exercises. Apprentices then work as assistants to a mentor, taking on increasing responsibility as they learn. Eventually they work autonomously, communicating with peers at their own level and even taking on apprentices of their own. If we can learn how to build AI systems with these capabilities, it would be revolutionary. I hope the substrate capabilities for social organisms proposed here will encourage others to undertake this kind of research.

The fantasy of the Turing test, and of many of its proposed replacements, is that a single simple test can be found for measuring progress toward human-level AI. Part of the attraction of this view is that the alternative is both difficult and expensive: many tests, involving multiple capabilities and interactions with people over time, all require substantial investments in research, engineering, and evaluation. But given that we are tackling one of the deepest questions humanity has ever asked, namely what mind is, this should not be too surprising. And I believe it will be an extraordinarily productive investment.