A leading use of continuous intelligence (CI) and artificial intelligence (AI) is in autonomous systems such as robots and chatbots that interact with people. Such systems must take streaming visual and verbal input, analyze it in real time, and act within seconds or less. Their effectiveness depends on the quality of the algorithms and on processing speed. Researchers have now found another factor that improves performance: robots that admit mistakes foster better conversation among the humans around them.
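The perceive-analyze-act loop described above can be sketched in a few lines. This is a toy illustration, not any real CI framework: the `analyze` and `respond` functions and the one-second deadline are invented for the example.

```python
import time

def analyze(utterance):
    # Toy stand-in for real-time visual/verbal analysis:
    # flag any input that mentions "help".
    return "help" in utterance.lower()

def respond(utterance):
    # Toy stand-in for the system's action.
    return f"Responding to: {utterance!r}"

def run_pipeline(stream, deadline_s=1.0):
    """Process each streamed input within a per-item deadline,
    as a continuous-intelligence system must."""
    actions = []
    for utterance in stream:
        start = time.monotonic()
        if analyze(utterance):
            actions.append(respond(utterance))
        # "act in seconds or less": each item must beat the deadline
        assert time.monotonic() - start < deadline_s
    return actions
```

In a real system, `stream` would be a live feed of video frames and transcribed speech rather than a list of strings, but the shape of the loop is the same.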
That is the finding from a Yale-led study of robots’ effects on human-to-human interactions. The study, which was published this week in the Proceedings of the National Academy of Sciences, showed that humans on teams that included a robot expressing vulnerability communicated more with each other and later reported having a more positive group experience than people teamed with silent robots or with robots that made neutral statements, like reciting the game’s score.
In the abstract of the article, the researchers noted: “Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. We examine how the actions of a social robot can influence human-to-human communication and not just robot-human communication. We find that people in groups with a robot making vulnerable statements:
- converse substantially more with each other
- distribute their conversation somewhat more equally
- perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round.”
Such characteristics are valuable and can lead to better interactions not only between robots and humans but also between the humans in a group. In a business environment, such enhanced interactions would certainly be welcome. For example, the researchers noted that groups without social interaction are less able to learn from each other and work together.
In any group setting, one way to establish and promote better social engagement is through vulnerable expressions. According to the researchers, “vulnerability focuses individuals on others, which encourages an interpersonal connection. In interpersonal interactions, vulnerability is often expressed as self-disclosure or personal stories (which increase solidarity) and humor (which alleviates tension).” Extending such traits to nonhuman agents, like robots, can be accomplished by programming them to exhibit emotions during a collaborative task.
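The study's three conditions (vulnerable, neutral, silent) amount to a simple end-of-round policy. A minimal sketch follows; the utterance pools are invented for illustration and are not the ones used in the study.

```python
import random

# Hypothetical utterance pools modeled on the three study conditions.
VULNERABLE = [
    "Sorry, I made a mistake this round. It's hard, but I'll keep trying.",
    "That one was my fault -- thanks for bearing with me!",
]
NEUTRAL = ["The current score is {score}."]

def end_of_round_statement(condition, score, rng=random):
    """Return what the robot says at the end of a round, by condition."""
    if condition == "vulnerable":
        # Self-disclosure of a mistake, as in the vulnerable condition
        return rng.choice(VULNERABLE)
    if condition == "neutral":
        # Neutral statement, e.g. reciting the game's score
        return rng.choice(NEUTRAL).format(score=score)
    if condition == "silent":
        return None
    raise ValueError(f"unknown condition: {condition}")
```

The study's finding is that the first branch, not the second or third, is the one that changes how the humans in the group talk to one another.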
Other Ways to Improve Engagement with Robots and Chatbots
Other studies have identified additional ways to improve the effectiveness of robots and chatbots when they interact with humans.
One method is to make the entity credible. AI chatbots are four times more effective at selling products than inexperienced workers. How can this be accomplished?
Simpler chatbots will not do the trick. For example, many voice and text chatbots listen or look for keywords to guide the conversation. The customer mentions “account balance” or “pay bill,” and the bot knows which road to go down with its replies. The problem with this approach is that we have all struggled with chatbots that misunderstand or miss the meaning of what we are saying. And they most certainly do not pick up on our sentiment (hint: frustration building during an exchange).
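The keyword-routing approach described above can be sketched in a few lines, which also makes its weakness visible. The route table and canned replies here are made up for illustration.

```python
# Minimal keyword-routing bot: map keywords to canned replies.
ROUTES = {
    "account balance": "Your current balance is available under Accounts.",
    "pay bill": "You can pay a bill from the Payments menu.",
}

FALLBACK = "Sorry, I didn't understand that."

def keyword_reply(utterance):
    """Return the first canned reply whose keyword appears in the
    utterance, or a fallback -- which is exactly where this approach
    breaks down: anything off-script lands in the fallback, and the
    customer's tone is ignored entirely."""
    text = utterance.lower()
    for keyword, reply in ROUTES.items():
        if keyword in text:
            return reply
    return FALLBACK
```

A frustrated message like "this is useless, I just want my money back" matches no keyword, so the bot shrugs, no matter how the exchange is going.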
Continuous intelligence can help alleviate these problems. Chatbots that use natural language processing, speech recognition, and artificial intelligence can be designed to extract deep insights from customer content in real time. In that way, an intelligent bot can interact with the customer just as a live agent would. An even better solution leverages sentiment analysis to determine whether a customer is happy with a product or whether a website visitor is struggling to find information about their account.
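To make the sentiment-analysis idea concrete, here is a deliberately tiny lexicon-based sketch. A production bot would use a trained sentiment model rather than word lists; the lexicons and the escalation threshold below are illustrative assumptions.

```python
# Toy word-list sentiment scorer (a stand-in for a real sentiment model).
NEGATIVE = {"frustrated", "angry", "useless", "stuck", "annoyed"}
POSITIVE = {"great", "thanks", "helpful", "happy"}

def sentiment_score(utterance):
    """Crude per-utterance score: +1 per positive word, -1 per negative."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def should_escalate(history, threshold=-2):
    """Hand off to a live agent when cumulative sentiment across the
    exchange drops too low -- i.e., when frustration is building."""
    return sum(sentiment_score(u) for u in history) <= threshold
```

The key design point is that the decision is made over the whole exchange, not a single message, which is what lets the bot notice frustration building rather than reacting to one keyword.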