Is It Time to Redefine Artificial Intelligence?

An industry leader takes a compelling, level-headed and innovative look at the definition of artificial intelligence.

Ask those in big data, analytics, and data science what artificial intelligence (AI) means — or what types of applications and systems can be encompassed under the AI umbrella — and you’re likely to hear a different answer from each of them.

On top of that, there’s a massive disconnect between what data scientists and AI experts do day to day and what gets reported in the media, which errs toward the hyperbolic, and sometimes even the anthropomorphic.

I was preparing an article about lesser-known business applications of AI, but struggled to find much beyond what we’ve covered before on RTInsights, and beyond familiar systems like chatbots and the predictive modeling built into marketing automation tools. It turned out that my own definition of AI had shifted over time, to the point where I had lost sight of the fundamentals and of the innovative work that’s on deck.

At the recent Cloudera Now event, attendees heard from Hilary Mason, the founder and CEO of Fast Forward Labs, a machine intelligence research company out of Brooklyn, New York. For anyone whose definition of AI might have shifted away from the reality, it’s fortunate that Mason is opinionated about the topic. She brings a compelling combination of level-headedness and innovative thinking that just might help re-center the AI equation for many.

Pieces of the AI pyramid

Mason says that before you can define AI, you first have to think about all the technologies and skills that make it possible. If you think of AI as the top section of a pyramid, it’s supported by machine learning, data science, and analytics, with big data as the foundation.

[ Related: Blockchain, IoT, AI Will Converge in Healthcare ]

All of these components are based around one simple action: counting.

Eight to 10 years ago, Mason argues, the way we count was revolutionized by commoditized infrastructure for big data and analytics. Suddenly, it became incredibly easy, cheap, and fast to play with data. “You could ask a question of your data and get an answer back before you, as a human being, forgot why you asked that question in the first place,” she says. Naturally, big data and analytics technologies have only continued to improve in the years since.

“We do actually live in a world where it is possible to collect and analyze more data than it ever has been before,” she says. “When you combine that with similar access to computation, and you combine it with our knowledge of algorithms … we’re actually able to do some pretty remarkable things.”

Next comes data science

“If analytics is counting things, data science is counting things cleverly,” Mason says. Now we’re talking about making predictions based on existing data using various modeling techniques. Once data science techniques are established, machine learning comes into play, based on the idea of “counting cleverly, with feedback loops.”
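Mason’s progression can be sketched in a few lines of code. This is a toy illustration, not anything from her talk: counting events is analytics, turning counts into a prediction is data science, and updating that prediction as each new observation arrives is the feedback loop of machine learning.

```python
from collections import Counter

# Analytics: counting things.
events = ["click", "view", "click", "view", "view"]
counts = Counter(events)

# Data science: counting cleverly -- turn raw counts into a prediction.
p_click = counts["click"] / sum(counts.values())  # estimated click rate

# Machine learning: counting cleverly, with feedback loops --
# fold each new observation back into the running estimate.
def update(p, n, outcome):
    """Incorporate one new observation (1 = click, 0 = no click)."""
    return (p * n + outcome) / (n + 1), n + 1

n = sum(counts.values())
for outcome in [1, 0, 1]:
    p_click, n = update(p_click, n, outcome)

print(round(p_click, 3))  # estimate after feedback: 0.5
```

The point of the sketch is only that each layer of the pyramid builds directly on the one below it, exactly as Mason describes.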

[ Related: Why You May Want a Career in Data Science ]

The general fuzziness around AI begins to develop in the space beyond machine learning. Some people define AI as all of the different layers within the pyramid, while others define AI as only what’s enabled by deep learning and similar technologies. To Mason, however, AI is all about one simple principle: getting the right data, to the right person, in the right context, at the right time.

The real AI innovations

Amid these various and conflicting definitions, “AI” is popping up all over the place, particularly around SaaS applications for businesses. Analyzing any particular platform or application was outside the scope of Mason’s presentation, but stakeholders should be aware of the difference between true AI, enabled by everything beneath it on the data pyramid, and the more dubious claims from companies that see AI as an easy marketing tactic.

That said, Mason and Fast Forward Labs see a few areas of genuine AI innovation happening in the next few years. The first is natural language generation, most visibly used by organizations like the Associated Press to automatically write articles from structured data such as financial reports. Mason says everything from celebrity gossip to the weekly emails you might get from your financial advisor about your portfolio’s performance is being generated with these techniques right now.
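At its simplest, generating prose from structured data can be done with templates. The sketch below is a hypothetical toy (the company and figures are invented, and production systems are far more sophisticated), but it shows the basic idea behind an automated earnings blurb:

```python
# Toy template-based natural language generation from structured data,
# in the spirit of automated earnings-report articles. All names and
# numbers here are hypothetical.

def earnings_sentence(record):
    direction = "rose" if record["revenue"] >= record["prior_revenue"] else "fell"
    change = abs(record["revenue"] - record["prior_revenue"]) / record["prior_revenue"]
    return (f"{record['company']} reported revenue of "
            f"${record['revenue']:,}, which {direction} "
            f"{change:.1%} from the prior quarter.")

report = {"company": "Acme Corp", "revenue": 1_250_000, "prior_revenue": 1_000_000}
print(earnings_sentence(report))
# Acme Corp reported revenue of $1,250,000, which rose 25.0% from the prior quarter.
```

Real systems layer statistical and learned models on top of this kind of structure, but the input is the same: clean, structured data at the bottom of the pyramid.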

[ Related: How AI and Cognitive Science Can Beat Addiction Treatment Fraud ]

Summarization and probabilistic programming are two other categories with significant and genuine innovation, the latter of which is all about using Bayesian inference to make better predictions and to use machine learning to “learn more about the world using the data you have access to.”
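The Bayesian idea underneath probabilistic programming can be shown with the classic coin example, a minimal sketch that assumes nothing from Mason’s talk beyond the general concept. Dedicated probabilistic programming tools (PyMC, Stan, and the like) automate this kind of updating for far richer models:

```python
# Minimal Bayesian inference by hand: a Beta-Binomial conjugate update.
# Prior beliefs are revised as data arrives -- "learning more about the
# world using the data you have access to."

def beta_update(alpha, beta, successes, failures):
    """Fold observed counts into a Beta(alpha, beta) prior."""
    return alpha + successes, beta + failures

# Prior belief: the coin is probably fair -- Beta(2, 2).
alpha, beta = 2, 2

# Then we observe 8 heads and 2 tails.
alpha, beta = beta_update(alpha, beta, successes=8, failures=2)

posterior_mean = alpha / (alpha + beta)  # updated belief about P(heads)
print(round(posterior_mean, 3))  # 0.714 -- belief shifted toward "biased"
```

Note how the prior keeps the estimate from jumping all the way to the raw frequency of 0.8; that tempering of predictions by prior knowledge is the “counting cleverly” at work.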

Mason argues that interpretability, the idea of building systems and algorithms so that you can understand how they work, will be fundamentally changed by improved AI. Algorithms will be able to peer into the mysterious “black boxes” of machine learning and see how permuted inputs change the outputs. That will be a massive improvement for anyone who wants to deploy AI systems in regulated environments.
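The permute-and-observe idea resembles what practitioners call permutation importance, and it can be sketched without any knowledge of the model’s internals. The “model” below is a stand-in, and the whole example is an illustrative assumption rather than anything from Mason’s presentation:

```python
import random

# Probe a "black box" by permuting one input feature at a time and
# measuring how much the output shifts. Features the model ignores
# should score near zero; features it relies on should score high.

def black_box(x):
    # Pretend opaque model: leans heavily on feature 0, ignores feature 2.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def sensitivity(model, rows, feature, trials=100, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        row = list(rng.choice(rows))
        base = model(row)
        row[feature] = rng.choice(rows)[feature]  # permute one input
        total += abs(model(row) - base)
    return total / trials

rows = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (2, 0, 1)]
scores = [sensitivity(black_box, rows, f) for f in range(3)]
# Feature 0 dominates; feature 2 scores exactly zero.
```

Because the probe only needs inputs and outputs, it works even when the model’s internals are inaccessible, which is precisely what makes it attractive in regulated environments.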

The key to Mason’s point, and now mine, is that there’s value in resetting our definition of AI and refocusing on the fundamentals that will create impressive changes in the years to come. Instead of putting all our attention on chatbots and the like, Mason offers an appealing alternative: “We really, as a community, need to focus on understanding why our systems do what they do, so we can build systems that actually work the way we want to work.”

About Joel Hans

Joel Hans is the former managing editor of Manufacturing.net. He earned his master's degree from the University of Arizona, and currently lives and writes in Tucson.
