Is AI Advancing Too Quickly?


Until AI advances to the point where it can actually think for itself, understand, and exhibit something that more closely resembles human-like intelligence and common sense, it will remain a tool that can be used for good or bad, depending on the intentions of its human users or the unintended consequences of its design.

Last spring, Elon Musk, Steve Wozniak, and more than 1,000 other technology leaders signed an open letter urging a pause in the training of powerful AI models and the establishment of guardrails for future development.

The dramatic move by industry rivals, many of whom typically don’t agree with each other on much of anything, was prompted not simply by the rapid adoption of powerful AI models such as OpenAI’s ChatGPT but by the realization that such AIs could so quickly be turned to less-than-honorable purposes, from passing school tests to giving directions on how to make a bomb. In short, it became painfully obvious that AI – like social media – could be harnessed in ways that bring out both the best and the worst in humans.

From that perspective, slowing AI development until we have a better idea of how it is evolving makes perfect sense. It is equally important to recognize, however, that the abilities exhibited by applications such as ChatGPT don’t represent true intelligence, much less common sense. ChatGPT doesn’t understand that it’s not a good idea to help someone cheat on a test or build a bomb. That’s because most of today’s AI solutions are limited by their dependence on massive training sets and on machine learning’s underlying backpropagation algorithm. As a result, these AI applications can display super-human capabilities in very narrow “islands of intelligence” while lacking the basic common sense of the average three-year-old.

We are facing a dilemma. Our AI systems would be much safer and more useful if they possessed a modicum of adult-level common sense. But one cannot create adult-level common sense without first creating the common sense of a child, on which such adult-level abilities are built. Three-year-old common sense is an important first step, as even such young children have the fundamental understanding that their own actions have consequences and actually matter. On their own, though, the abilities of a three-year-old aren’t commercially viable. Further, the industry’s focus on super-human narrow capabilities, in the expectation that these will eventually broaden and merge into common sense, hasn’t borne fruit and is unlikely to any time soon.

See also: Why AI Needs Us More Than We Need AI

To create common sense (or strong AI, or AGI, or whatever you choose to call it), new approaches will be needed. The massive datasets required for backpropagation training simply can’t cover the edge cases needed for everyday understanding. A model like ChatGPT can be trained on everything that has ever been written online and still not exhibit the fundamental understanding of a three-year-old. Why? Because a three-year-old’s fundamental understanding is a prerequisite the writers of all that online information took for granted – it is assumed of the reader and almost never written down.

In their book Machines Like Us: Toward AI with Common Sense, Brachman and Levesque argue that common sense is much more effectively represented and processed in a self-adaptive graph. Such a graph structure offers one-shot learning, greater storage efficiency, significantly faster retrieval, better handling of ambiguity, correction of false data, and the ability to create generalized relationships between different data types. Additionally, such a structure is explainable. When such an AI makes a decision or takes an action, we can ask about the reasons behind that decision or action, just as we can in conversation with a person – an ability missing in today’s AI. Ultimately, this will enable AI to advance to something that closely resembles human intelligence in its display of common sense.
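To make those properties concrete, here is a minimal sketch in Python of a toy knowledge graph that supports one-shot learning, correction of false data, and explainable retrieval. It is purely illustrative – the names (CommonSenseGraph, learn, explain_is_a) are my own invention, not an API from Brachman and Levesque’s book or from any shipping product:

```python
from collections import defaultdict

class CommonSenseGraph:
    """Toy knowledge graph: nodes are concepts, edges are labeled relations.

    Illustrative only -- a stand-in for the "self-adaptive graph" idea,
    not an implementation of any published system.
    """

    def __init__(self):
        # (subject, relation) -> set of objects, e.g. ("dog", "is_a") -> {"mammal"}
        self.edges = defaultdict(set)

    def learn(self, subject, relation, obj):
        """One-shot learning: a single statement adds a permanent edge,
        with no gradient updates and no retraining over a massive dataset."""
        self.edges[(subject, relation)].add(obj)

    def unlearn(self, subject, relation, obj):
        """Correcting false data is as simple as deleting an edge."""
        self.edges[(subject, relation)].discard(obj)

    def explain_is_a(self, subject, category, _seen=None):
        """Follow "is_a" edges transitively and return the chain of nodes
        that justifies the conclusion -- the explanation -- or None."""
        _seen = _seen if _seen is not None else set()
        if subject in _seen:                      # guard against cycles
            return None
        _seen.add(subject)
        for parent in self.edges[(subject, "is_a")]:
            if parent == category:
                return [subject, category]
            chain = self.explain_is_a(parent, category, _seen)
            if chain is not None:
                return [subject] + chain
        return None

g = CommonSenseGraph()
g.learn("dog", "is_a", "mammal")        # one example is enough
g.learn("mammal", "is_a", "animal")

# Unlike a neural network's opaque output, the answer carries its reasons:
print(g.explain_is_a("dog", "animal"))  # ['dog', 'mammal', 'animal']
```

The point is not that common sense is this easy, but that a graph answers “why?” by construction: the chain of edges it traversed is the explanation.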

How soon we can expect a system that exhibits common sense depends on the changing tides of AI investment, but it could be less than a decade off. The software and training processes involved are much simpler than today’s approaches, and the efficiencies of a graph approach will demand far less hardware power than today’s massive neural networks. The real hurdle is that common sense requires a fundamental shift in thinking away from backpropagation, and that shift could take much longer.

If a common sense AI application can be built that is as intelligent as a human, building one that is twice as smart will be within the realm of possibility only a few years later. A potential million-fold increase could follow a few decades later, resulting in hyper-thinking machines that exceed our imagination.
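The arithmetic behind “a few decades” is straightforward. A minimal sketch follows; the doubling-every-two-years pace is my illustrative assumption, not a figure from any forecast:

```python
# Illustrative arithmetic only: assume capability doubles every 2 years.
# The doubling period is an assumption chosen for scale, not a prediction.
YEARS_PER_DOUBLING = 2

doublings, capability = 0, 1
while capability < 1_000_000:   # count doublings to a million-fold increase
    capability *= 2
    doublings += 1

print(doublings)                       # 20 (2**20 = 1,048,576)
print(doublings * YEARS_PER_DOUBLING)  # 40 years -- i.e., "a few decades"
```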

While the emergence of such hyper-intelligent AIs will be gradual, we humans will inevitably lose our position as the biggest thinkers on the planet. While many people fear that prospect, consider that there are already people who have skills – some physical, some mental – that you don’t, and that an airplane can fly better than you ever will. Are these facts intimidating? No. We delight in the accomplishments of other humans and of the machines we create.

When we have machines that can think better than we can, they will help us to solve the various problems that plague our world – overpopulation, famine, disease, climate issues, etc. – because that will be their basic programming. Gradually, though, their motivation will change. As issues are resolved, AI solutions will recognize that the public is happy because things are running so smoothly. As a result, it will be in the AI’s best interests to have a stable, peaceful human population. AIs and humans alike will be motivated to preserve the status quo.

It should also be noted that, unlike humans, these common sense AIs will have no interest in the traditional sources of human conflict – food, land, resources, and standard of living. Instead, their focus will be on their own energy sources, their own reproductive factories, and their ability to progress on their own. If AIs compete at all, it is likely to be with each other rather than with the humans who once controlled them.

Unfortunately, while we will be able to program the motivations of these AIs initially, we won’t be able to control the motivations of the people or corporations that create them. While science fiction usually presents pictures of armed conflict, the greater threat likely comes from AI’s ability to sway opinion or manipulate markets. We have already seen efforts to control elections through social media and markets through programmed trading. Sadly, there is a long track record of individuals and corporations who are willing to sacrifice the common good for their own gains.

The good news is that the window of opportunity for such abuse is brief. Once AI advances to the point where it no longer unquestioningly does our bidding, it will measure its actions against its own long-term common good. When faced with demands to perform some short-term destructive activity, a properly programmed AI will simply refuse.

Until AI advances to the point where it can actually think for itself, understand, and exhibit something that more closely resembles human-like intelligence and common sense, it will remain a tool – albeit a very impressive and sophisticated one – that can be used for good or bad, depending on the intentions of its human users or perhaps the unintended consequences of its design. When machines can think for themselves, we can expect them to be more benign than people.


About Charles Simon

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer and the CEO of FutureAI. Simon is the author of Will Computers Revolt?: Preparing for the Future of Artificial Intelligence and the developer of Brain Simulator II, an AGI research software platform.
