Cognitive Computing Systems: How They Learn…and Unlearn

When self-learning cognitive computing systems behave in unintended ways, new business roles are needed to identify the source of the problem and re-train them.

A variety of organizations are turning to cognitive computing systems to make important decisions more quickly, and real-time detection and response is changing how industries operate for the better. With IDC projecting global revenues for AI and cognitive systems to grow from nearly $8 billion in 2016 to more than $47 billion by 2020, businesses need a plan for how to react when intelligent systems learn in unexpected ways.

IDC forecasts public safety, medical diagnosis, and quality management as the AI use cases that will grow fastest in the coming years, which makes how these systems learn and develop a pressing issue. Cognitive computing systems can occasionally learn in ways their creators don't foresee, sometimes with serious consequences.

The Army’s cognitive computing case study

In one widely cited example, the U.S. Army trained a computer vision system to locate camouflaged enemy tanks. The program worked perfectly for its testers, but not for the Pentagon. The designers eventually figured out why: the camouflaged tanks had been photographed on cloudy days and the empty forests on sunny days, so the system had learned to associate cloudy skies with tanks rather than to recognize the tanks themselves.
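
The failure is easy to reproduce in miniature. The sketch below is a hypothetical reconstruction, not the Army's actual system: the synthetic "brightness" and "tank_texture" features stand in for sky conditions and real image content. A scikit-learn logistic regression earns near-perfect marks in testing, then fails in the field once the lighting changes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Training set: every tank photo is dark (cloudy day), every empty-forest
# photo is bright (sunny day). "tank_texture" is the real but noisy signal.
tank_texture = np.concatenate([np.ones(n), np.zeros(n)])
brightness = np.concatenate([rng.normal(0.2, 0.05, n),   # cloudy tank shots
                             rng.normal(0.8, 0.05, n)])  # sunny forest shots
X_train = np.column_stack([tank_texture + rng.normal(0, 0.5, 2 * n), brightness])
y_train = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = tank present

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Field test: the same tanks, but now photographed on sunny days.
X_field = np.column_stack([np.ones(n) + rng.normal(0, 0.5, n),
                           rng.normal(0.8, 0.05, n)])
print("training accuracy:", clf.score(X_train, y_train))     # near-perfect
print("field accuracy:", clf.score(X_field, np.ones(n)))     # collapses
```

Because brightness separates the training photos perfectly while the genuine tank signal is noisy, the model leans on brightness, exactly the shortcut the Army's system took.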

The solution is to understand how the learning went awry, and then re-train. These systems learn in three broad ways. The simplest is training on hand-chosen input-output pairs. More advanced systems are designed to periodically consult learning content that developers identify, such as medical image databases. The most autonomous, called self-learning systems, are designed to seek out new content on their own, on an ongoing basis.
Self-learning systems could eventually take over important human decisions precisely because they never stop learning. But a human must curate their sources, or these systems can learn things that veer from their intended use, as the sketch below illustrates.
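
One lightweight form of that curation is an allowlist a human maintains. The sketch below is purely illustrative; the `ingest` function and `ALLOWED_SOURCES` set are made-up names, not any real product's API. The idea is simply that the system may only pull new training content from hosts a person has approved.

```python
from urllib.parse import urlparse

# Hypothetical allowlist a human curator maintains; hostnames are made up.
ALLOWED_SOURCES = {
    "medical-image-db.example.org",
    "radiology-atlas.example.org",
}

def ingest(url: str, corpus: list[str]) -> bool:
    """Add a document URL to the training corpus only if its host is curated."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_SOURCES:
        return False              # rejected: source not vetted by a human
    corpus.append(url)
    return True

corpus: list[str] = []
print(ingest("https://medical-image-db.example.org/scan/123", corpus))  # True
print(ingest("https://random-forum.example.com/post/9", corpus))        # False
```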

The antidote to unintended learning is to re-train. For that to happen, new roles need to be created, including digital psychologists and sociologists. Working together, these specialists can diagnose where a system's learning went wrong and hone it to react as intended. The world is changing, and with it come new and exciting opportunities for computers and humans to work together.
