The framework provides guidelines on how to develop AI systems that avoid bad first impressions, gain a user’s trust, and improve the user experience.
First interactions with an artificial intelligence (AI) system are critically important, shaping trust and expectations for future use. As we saw with Apple’s Siri launch, once an AI gets a reputation for unreliability, it is hard to entice users back.
To help developers avoid those bad first impressions, Shyam Sundar, a professor of media effects at Penn State University, has published a framework for human-AI interaction that offers guidelines for gaining trust and improving the user experience.
“This is an attempt to systematically look at all the ways AI could be influencing users psychologically, especially in terms of trust,” said Sundar. “Hopefully, the theoretical model advanced in this paper will give researchers a framework, as well as a vocabulary, for studying the social psychological effects of AI.”
In the paper, Sundar identifies two paths for improving user experience: cues and actions. Cues include adding human-like voice or facial features, or informing users about what is happening under the hood, for instance, why a particular TV show was recommended to them.
“When an AI is identified to the user as a machine rather than human, as often happens in modern-day chatbots, it triggers the ‘machine heuristic,’ or a mental shortcut that leads us to automatically apply all the stereotypes we hold about machines,” said Sundar.
Too Much AI Annoys Users
Developers need to identify whether a cue will provoke a positive or negative response, based on what they want the AI to achieve. A user looking for accurate data may see the machine cue as a benefit, but a user seeking support or advice may view the AI as unable to understand their issue.
Actions include all forms of interaction between the human and the AI. For example, a voice assistant may offer information based on the time of day, the user's location in the house, or observed patterns of behavior.
“AIs should actually engage and work with us. Most of the new AI tools—the smart speakers, robots, and chatbots—are highly interactive. In this case, it’s not just visible cues about how they look and what they say, but about how they interact with you,” said Sundar.
“If your smart speaker asks you too many questions, or interacts with you too much, that could be a problem. People want collaboration. But they also want to minimize their costs. If the AI is constantly asking you questions, then the whole point of AI, namely convenience, is gone.”