Imparting Chatbots with Multiple Conversational Styles


Researchers are studying a way to produce a single AI model capable of smoothly switching between and blending different chatbot conversational styles.

In these tough economic times and an increasingly competitive marketplace, it’s survival of the fittest. Workers and job candidates with diverse skill sets stand out and are highly prized. And that does not apply just to humans. Chatbots with multiple skills are also highly desirable. Unfortunately, today’s chatbots are created using continuous intelligence applications and artificial intelligence (AI) models that make them very good at talking to humans in one very specific way.

Specifically, most chatbots are developed to perform a single conversational skill. For instance, the AI model behind one chatbot may make it good at incorporating knowledge into the conversation, another at responding empathetically, and yet another at providing assistance. But few chatbots can do all three.

See also: Credible Chatbots Outperform Humans

In contrast, a human operator or agent interacting with another person can easily and automatically switch between these different conversational styles. For instance, a human speaker would alternate between talking about him/herself, listening to others, consoling them, sharing knowledge or information about something, and more.

New techniques for chatbots

Things are about to change. Mono-conversational-style chatbots could be a thing of the past if new research pans out. In particular, researchers at Facebook AI Research recently conducted a study investigating the possibility of combining the skills of different conversational agents to enhance their overall capabilities.

They present their findings in a paper that was pre-published on arXiv and is set to be presented at the 2020 Annual Conference of the Association for Computational Linguistics (ACL 2020) in July. In the paper, they propose various techniques to combine the skills of different models into one, while also introducing a dataset that can be used to analyze how well individual conversational skills trained in isolation would mesh together in a single agent.

As reported by TechXplore, the team produced AI models, each very good at talking to humans in a certain way. “We had a model that could incorporate knowledge into the conversation, a model that was good at responding empathetically, and a model that was good at being consistent when talking about its persona,” said Eric Smith, one of the researchers who carried out the study, in a conversation with TechXplore. “Our goal in this study was to produce a single model capable of smoothly switching between and blending all three of these kinds of communication.”

The researchers explained (at a high level) how they explored this topic in the abstract of the submitted paper. The abstract noted:

Being engaging, knowledgeable, and empathetic are all desirable general qualities in a conversational agent. Previous work has introduced tasks and datasets that aim to help agents to learn those qualities in isolation and gauge how well they can express them. Rather than being specialized in one quality, a good open-domain conversational agent should be able to seamlessly blend them into one cohesive conversational flow.

In their work, the researchers looked at several ways to combine models trained towards isolated capabilities, ranging from simple model aggregation schemes that require minimal additional training to various forms of multi-task training that encompass several skills at all training stages. They further propose a new dataset, BlendedSkillTalk, to analyze how these capabilities mesh together in a natural conversation. The dataset is publicly available through ParlAI, Facebook’s open-source dialogue research platform, which offers a unified framework for sharing, training, and evaluating dialogue models across many tasks – from open-domain chitchat to visual question answering – and includes a wide set of reference models, from retrieval baselines to Transformers.

Their experiments showed that multi-tasking over several tasks that focus on particular capabilities results in better-blended conversation performance compared to models trained on a single skill.
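To make the idea of multi-tasking over several skills concrete, here is a minimal sketch (not the authors’ code) of one common multi-task training schedule: training batches are drawn from several skill-specific datasets in proportion to their size, so a single model is exposed to all skills throughout training. The dataset contents and the function name are hypothetical placeholders.

```python
import random

def multitask_batches(datasets, num_batches, batch_size=2, seed=0):
    """Yield (task_name, batch) pairs, sampling each task with
    probability proportional to its dataset size -- one simple
    multi-task schedule over several conversational skills."""
    rng = random.Random(seed)
    names = list(datasets)
    weights = [len(datasets[n]) for n in names]  # larger task -> sampled more often
    for _ in range(num_batches):
        task = rng.choices(names, weights=weights)[0]
        batch = rng.sample(datasets[task], k=min(batch_size, len(datasets[task])))
        yield task, batch

# Hypothetical toy "datasets" standing in for the three skills
datasets = {
    "knowledge": ["k1", "k2", "k3", "k4"],
    "empathy":   ["e1", "e2", "e3"],
    "persona":   ["p1", "p2", "p3", "p4", "p5"],
}

schedule = list(multitask_batches(datasets, num_batches=6))
```

In a real system each batch would update one shared model, so skills learned from the separate datasets blend into a single agent rather than living in three specialized ones.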


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
