IBM this week announced that it has made available sentiment analytics based on algorithms and a natural language processing engine that can now understand idioms and colloquialisms such as “hot under the collar.”
Now embedded within various Watson services, this capability advances the ability to rely on an artificial intelligence (AI) platform to analyze text to, for example, sense customer frustrations, says Daniel Hernandez, vice president of IBM data and AI.
IBM researchers first showcased this capability last year as part of Project Debater, an initiative in which the IBM Watson platform took part in a series of debates with humans. IBM is now moving to commercialize that capability later this year within offerings such as Watson Discovery, which will be used to analyze documents at scale.
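To see why idiom handling matters, consider a minimal sketch of the difference between word-level sentiment scoring and an idiom-aware pass. The lexicons and scores below are invented for illustration only; Watson's actual models are far more sophisticated than this.

```python
# Hypothetical lexicons, invented for illustration.
WORD_SCORES = {"hot": 0.3, "collar": 0.0, "great": 0.8, "service": 0.1}
IDIOM_SCORES = {"hot under the collar": -0.8}  # the idiom flips the polarity

def naive_sentiment(text):
    """Average per-word scores; blind to idioms."""
    words = text.lower().split()
    scores = [WORD_SCORES.get(w, 0.0) for w in words]
    return sum(scores) / len(scores) if scores else 0.0

def idiom_aware_sentiment(text):
    """Match known idioms first, then fall back to word-level scoring."""
    text_l = text.lower()
    for idiom, score in IDIOM_SCORES.items():
        if idiom in text_l:
            return score
    return naive_sentiment(text)

print(naive_sentiment("hot under the collar"))        # 0.075: misled by "hot"
print(idiom_aware_sentiment("hot under the collar"))  # -0.8: frustration detected
```

The naive scorer reads "hot under the collar" as mildly positive, while the idiom-aware pass correctly registers a frustrated customer.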
Other capabilities being added to the Watson portfolio of services later this year include the ability to generate summaries by pulling text data from a variety of sources. Earlier this year IBM showed how that capability could be used to generate bite-sized insights about hundreds of artists and celebrities who attended the GRAMMY Awards by analyzing more than 18 million articles, blogs, and bios.
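The simplest form of this kind of summarization is extractive: score each sentence by the frequency of its content words and keep the top-scoring ones. The sketch below illustrates that classic heuristic and is not IBM's actual summarization method.

```python
# Frequency-based extractive summarization sketch (illustrative only).
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "was", "it"}

def summarize(text, n_sentences=1):
    """Return the n_sentences highest-scoring sentences, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens if t not in STOPWORDS)

    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)

article = ("The artist released a new album. The album topped the charts. "
           "Critics praised the album for its bold sound.")
print(summarize(article, 1))
```

Frequently repeated words ("album" here) pull their sentences up the ranking, which is why this heuristic surfaces a reasonable one-line gist without any trained model.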
IBM is also adding an Advanced Topic Clustering capability that makes it simpler to organize incoming data around topics. That capability will make it possible for subject matter experts to customize and fine-tune topics to reflect the language of specific businesses or industries.
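One hypothetical way to picture that customization is a seed-keyword scheme: each topic carries a keyword set that a subject matter expert can extend with industry-specific terms. The topic names and keywords below are invented, and this is an illustration of the idea rather than IBM's clustering algorithm.

```python
# Invented seed topics; an SME would tune these for their industry.
TOPICS = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "outage": {"down", "offline", "outage", "unavailable"},
}

def assign_topic(document):
    """Assign the topic whose keyword set overlaps the document the most."""
    words = set(document.lower().split())
    best, overlap = "other", 0
    for topic, keywords in TOPICS.items():
        hits = len(words & keywords)
        if hits > overlap:
            best, overlap = topic, hits
    return best

# Fine-tuning with business-specific language is just extending a keyword set:
TOPICS["billing"].add("chargeback")

print(assign_topic("please refund the duplicate charge on my invoice"))  # billing
print(assign_topic("the service has been offline all morning"))          # outage
```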
Finally, IBM has added a Customizable Classification of Elements in Business Documents capability to make it possible to create AI models that can more easily classify clauses that occur in business documents. Once those models are trained, Watson can then use them to create new classifications.
As with any AI platform, the more data that is surfaced, the more accurate the resulting analytics become. The degree of confidence any organization will have in the analytics provided by Watson will, of course, vary by use case. Platforms such as Watson are designed to augment rather than replace humans, says Hernandez. As such, each organization will need to determine what level of confidence and trust it wants to place in an AI system. However, the more familiar an AI system becomes with any given data set, the more accurate its recommendations become, notes Hernandez.
“The more data the greater the confidence,” says Hernandez.
Most organizations today are already struggling to analyze the massive amounts of data being generated by everything from social media platforms to business contracts. On top of that is a raft of verbal communication that now regularly occurs online. In theory, many of those recorded conversations could be converted into text and analyzed. However, given the sheer volume of data involved, that goal won't be achievable without relying on AI platforms. The ultimate goal, of course, is to make these analytics capabilities so ubiquitously available that end users one day simply take them for granted.
In the meantime, IT teams might want to gear up to store massive amounts of data because it’s now only a matter of time before AI platforms come looking for it.