The emergence of AGI is likely to be gradual rather than sudden, because its development is a complex and difficult task that will require significant advances in several different fields.
Beyond improvements and new applications for artificial intelligence (AI), most researchers agree the next quantum leap for AI will occur when artificial general intelligence (AGI) emerges. We loosely define AGI as the hypothetical ability of a machine or computer program to understand or learn any intellectual task that a human being can. There is very little consensus, however, on when and how this will actually happen.
One school of thought argues that if enough different AI applications could be built, each of which solves a specific problem, those apps would eventually grow together into a form of AGI. The problem with this approach is that such so-called “narrow” AI applications don’t store information in a generalized form, so their information can’t be used by other narrow AI applications to expand their breadth. While stitching together applications for, say, language processing and image processing might be possible, those apps cannot be integrated in the same way that a child’s mind integrates hearing and vision.
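A toy sketch can make the composition problem concrete. All names below are hypothetical stand-ins, not real systems: each "narrow" app keeps its knowledge in a private, task-specific form, and the only thing the two apps can exchange is an opaque output string.

```python
# Hypothetical narrow apps, each with its own private internal representation.

def narrow_image_app(pixels):
    # Pretend image classifier: an index into ITS OWN private label list.
    image_labels = ["cat", "dog"]
    return image_labels[sum(pixels) % 2]

def narrow_language_app(token_id):
    # Pretend language model: token ids from ITS OWN private vocabulary.
    vocab = {0: "the", 1: "dog", 2: "barked"}
    return vocab[token_id]

# Each app works in isolation ...
seen = narrow_image_app([3, 4])    # "dog"
read = narrow_language_app(1)      # "dog"

# ... but the output string is all they share. Neither app exposes a
# generalized internal structure the other could read, so neither can use
# the other's knowledge to expand its own breadth.
print(seen, read)
```

The point of the sketch is that matching outputs is not integration: nothing in either app's internals links the visual concept to the linguistic one.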
Other AI researchers contend that if a big enough machine learning (ML) system with enough computing power could be built, it would spontaneously exhibit AGI. When we look deeper into how ML actually works, this would mean having a training set that encompasses all the situations our hypothetical ML system might encounter. As expert systems, which attempted to capture the knowledge of a specific field, clearly demonstrated decades ago, it is simply impossible to create enough cases and example data to overcome a system’s underlying lack of understanding.
The problem with both of these approaches is that they will create, at best, an AI that only appears to be intelligent. They are still reliant on predetermined scripts and millions of training samples. Such an AI still won’t comprehend that words and images represent physical things that exist in a physical universe. It still can’t merge information from multiple senses. And so, while it may be possible to combine language and image processing applications, there is still no way to integrate them in the same seamless, effortless way that a child integrates vision, hearing, and direct interaction with their environment.
What’s needed for AGI success?
To attain true AGI, researchers must shift their focus away from ever-expanding datasets to a more biologically plausible structure containing three essential components of consciousness: an internal mental model of surroundings with the entity at its center; a perception of time that allows it to predict the future outcomes of current actions; and an imagination, so that multiple potential actions can be considered and their outcomes evaluated and selected. In short, it must begin exhibiting the same kind of contextual, common-sense understanding that humans use to experience the world around them.
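The three components above can be sketched as a minimal model-based agent. This is an illustrative toy under simplifying assumptions (a one-dimensional world; all class and method names are hypothetical), not an AGI design: the agent holds an internal model with itself at the center, predicts the future outcome of each action, and "imagines" candidate actions before committing to one.

```python
class TinyAgent:
    def __init__(self, position=0, goal=5):
        # Component 1 -- internal mental model: the agent's own position
        # (the entity at the center) and a goal location in its world.
        self.position = position
        self.goal = goal

    def predict(self, position, action):
        # Component 2 -- perception of time: predict the future state
        # that a given action would produce, before taking it.
        return position + {"left": -1, "right": +1, "stay": 0}[action]

    def imagine_and_act(self):
        # Component 3 -- imagination: simulate each candidate action,
        # evaluate its predicted outcome, and select the best one.
        candidates = ["left", "right", "stay"]
        def score(action):
            # Closer to the goal is better (less negative distance).
            return -abs(self.goal - self.predict(self.position, action))
        best = max(candidates, key=score)
        self.position = self.predict(self.position, best)
        return best

agent = TinyAgent(position=0, goal=5)
actions = [agent.imagine_and_act() for _ in range(5)]
print(actions)          # five "right" moves toward the goal
print(agent.position)   # 5
```

Even in this toy, the action is chosen by evaluating imagined futures against an internal model rather than by looking up a trained response, which is the shift in emphasis the paragraph describes.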
To get there, AI’s computational system must more closely resemble the biological processes found in the human brain, while its algorithms must allow it to build abstract “things” with limitless connections rather than the vast arrays, training sets, and computing power today’s AI requires. Such a unified knowledge base could potentially be integrated with mobile sensory pods containing modules for sight, hearing, motion, and speech. These pods would enable the entire system to experience rapid sensory feedback with each action it takes, which, over time, would result in an end-to-end system that can begin to learn, understand, and ultimately work better with people as it approaches true AGI.
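A unified knowledge base of abstract "things" with open-ended connections can be sketched as a small graph. This is a minimal illustration under assumed names (`Thing`, `KnowledgeBase`, and the relations are all hypothetical): the key property is that modules for different senses attach their observations to the same shared node rather than to private arrays.

```python
class Thing:
    def __init__(self, name):
        self.name = name
        self.links = {}  # relation name -> set of connected Things

    def link(self, relation, other):
        # Connections are open-ended: any relation, any number of targets.
        self.links.setdefault(relation, set()).add(other)

class KnowledgeBase:
    def __init__(self):
        self.things = {}

    def thing(self, name):
        # Create-or-get, so every module refers to the same shared node.
        return self.things.setdefault(name, Thing(name))

kb = KnowledgeBase()

# A hypothetical vision module attaches an appearance to the "dog" node ...
kb.thing("dog").link("looks_like", kb.thing("furry quadruped"))
# ... and a hypothetical hearing module attaches a sound to the SAME node.
kb.thing("dog").link("sounds_like", kb.thing("bark"))

# Both senses now meet in one structure, unlike separate narrow systems.
dog = kb.thing("dog")
print(sorted(dog.links))  # ['looks_like', 'sounds_like']
```

Because every sensory module reads and writes the same graph, feedback from one sense can immediately enrich the concept used by another, which is the integration the paragraph argues today's array-based systems lack.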
Even with such a system, the actual emergence of AGI is likely to be gradual rather than all at once, for two primary reasons. First and perhaps foremost is the fact that developing AGI is clearly a very complex and difficult task that will require significant advances in several different fields, among them computer science, neuroscience, and psychology. While this means years of research and development involving the contributions of numerous scientists and engineers, the good news is that a great deal of research is currently underway. Because so many areas are being worked on in parallel, individual components of AGI will emerge as each is figured out.
Second, because many AGI capabilities are marketable in their own right, the pull of instant gratification may slow AGI’s emergence. A feature is produced that improves the way Alexa understands, or a new vision capability improves self-driving cars, and that individual development is rushed to market because it is commercially viable. If these more specialized, individually marketable AI systems could be built on a common underlying data structure, however, they could begin to interact with each other, building the broader context needed to actually understand and learn. As these systems become more advanced, they will be able to function together to create a more general intelligence.
As these facets are added, AI systems will exhibit more humanlike performance in individual areas and will advance to superhuman performance as the systems are enhanced. But the performance will not be equal in all areas simultaneously. This suggests that at some point we will approach the threshold for AGI, then equal it, then exceed it. At some point thereafter, we’re going to have machines that are obviously superior to human intelligence, and people will begin to agree that, yes, maybe AGI does exist. Ultimately, AGI has to happen because the market demands it.