Improving Artificial Intelligence Common Sense Testing


Common sense could bring AI and human learning and processing closer together, resulting in a better understanding of computers and their ability to handle truly complex activities.

Artificial intelligence is technically smart and can learn from data, but does it think like a human? Common sense could bring artificial intelligence and human learning and processing closer together. Assessing common sense in computers would bring a greater understanding of their ability to handle truly complex activities.

A new piece of work from Northwestern could allow researchers to test common sense automatically. The results could have far-reaching consequences.


The power of common sense and artificial intelligence

Machine common sense is still a few years away, but Douglas Downey and his team's work could bring us closer to that goal. The collaboration's system, Generative Data AUGmentation for Commonsense Reasoning (G-DAUGc), generates additional data to test for common sense without requiring additional annotation.

The team, which brought together the Weinberg College of Arts and Sciences' department of statistics, the Allen Institute for AI, and the project's lead student, Yiben Yang, saw improvements of one to eight percent on four popular language-processing benchmarks. The system also made models less oversensitive to certain perturbations. Visitors can see it in action on the website, using a fill-in-the-blank dataset that asks the system to choose the right word.

One common method for testing common sense in language processing is to submit the system to sizeable sets of natural language questions. Developers evaluate how accurately the system answers those questions. Secondarily, developers look at how many hand-authored questions the machine needed to reach that accuracy.
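To make that evaluation concrete, here is a minimal sketch in Python of how accuracy on a multiple-choice commonsense benchmark is typically computed. The dataset format and the score_choice stand-in are illustrative assumptions, not any particular benchmark's actual API:

```python
# Minimal sketch of benchmark evaluation: pick the highest-scoring
# answer choice for each question and measure overall accuracy.

def score_choice(question: str, choice: str) -> float:
    """Stand-in scorer: word overlap between question and choice.
    A real system would use a trained model's confidence here."""
    q_words = set(question.lower().split())
    return float(len(q_words & set(choice.lower().split())))

def evaluate(dataset: list[dict]) -> float:
    """Each item: {"question": str, "choices": list[str], "answer": int}."""
    correct = 0
    for item in dataset:
        scores = [score_choice(item["question"], c) for c in item["choices"]]
        prediction = scores.index(max(scores))  # highest-scoring choice wins
        correct += int(prediction == item["answer"])
    return correct / len(dataset)

# Tiny usage example with two hand-authored questions.
toy_set = [
    {"question": "Where do you store milk to keep the milk fresh?",
     "choices": ["you store milk in a fridge", "you burn it"], "answer": 0},
    {"question": "What do people carry in the rain?",
     "choices": ["an umbrella for the rain", "a campfire"], "answer": 0},
]
print(f"accuracy: {evaluate(toy_set):.2f}")
```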

How does it work?

Datasets contain tens of thousands of questions, which are difficult and time-consuming for humans to write from scratch. Plus, when humans write the questions, they often introduce subtle irregularities that allow the computer to make decisions that look like common sense without actually being so.

The team generates large datasets without these irregularities, allowing training to happen faster and with greater accuracy. The goal is to nudge testing toward training and evaluation that produce true markers of common sense. Read the full paper, "Generative Data Augmentation for Commonsense Reasoning," published in Findings of the Association for Computational Linguistics: EMNLP 2020.
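The paper describes its own generate-and-filter pipeline in detail; the sketch below is only a loose illustration of the general idea, assuming placeholder generate_question and plausibility functions rather than the authors' actual models:

```python
import random

# Hypothetical illustration of generative data augmentation: synthesize
# candidate questions, keep only the ones a filter model finds plausible,
# and add the survivors to the hand-authored training set. The templates,
# generate_question(), and plausibility() are placeholders, not the
# authors' actual generator and filter models.

TEMPLATES = [
    ("You use an {} to stay dry in the rain.", "umbrella"),
    ("Milk stays fresh longest in a {}.", "fridge"),
]

def generate_question() -> tuple[str, str]:
    """Placeholder generator: sample a fill-in-the-blank item.
    A real pipeline would sample from a fine-tuned language model."""
    sentence, answer = random.choice(TEMPLATES)
    return sentence.format("___"), answer

def plausibility(question: str, answer: str) -> float:
    """Placeholder filter: score how plausible the generated item is.
    A real pipeline would use a trained classifier's confidence."""
    return random.random()

def augment(human_written: list[tuple[str, str]],
            n_candidates: int = 10_000,
            threshold: float = 0.5) -> list[tuple[str, str]]:
    """Generate candidates, drop low-confidence ones, and append the
    rest to the small hand-authored seed set."""
    synthetic = []
    for _ in range(n_candidates):
        question, answer = generate_question()
        if plausibility(question, answer) >= threshold:
            synthetic.append((question, answer))
    return human_written + synthetic

seed = [("You open a door with a ___.", "key")]
training_set = augment(seed, n_candidates=100)
print(f"training items: {len(training_set)}")
```

In this framing, the expensive human effort goes into a small seed set of hand-authored questions, while the generator and filter supply the tens of thousands of additional items automatically.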


About Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain, clearly, what it is they do.
