AI Trust Often Comes Down to Human Relationships


People who are anxious or unsure about human relationships are more likely to view AI with skepticism or mistrust.

Businesses are eager to build AI-driven strategies into their operations, but some worry that public backlash against artificial intelligence could come back to haunt them. New research may help explain why some people trust AI-driven products and services more readily than others.

Researchers are finding that trust in AI-based systems stems largely from how we understand our human relationships. According to an interdisciplinary team from the University of Kansas, your trust in AI is tied to your current relationships and attachment styles.

See also: Credible Chatbots Outperform Humans

People who are already anxious or unsure about their human relationships are more likely to view AI with skepticism or mistrust. Conversely, people who feel secure in their relationships are more likely to trust AI.

Studies confirm a link between real relationships and AI trust

This missing trust is one of the biggest obstacles to widespread AI adoption. As the researchers put it: “Lack of trust is one of the main obstacles standing in the way of taking full advantage of the benefits artificial intelligence (AI) has to offer.” The research also suggests that businesses and organizations can increase trust in AI by reminding people of their secure relationships.

In three separate studies, the researchers demonstrated a direct link between attachment anxiety and AI mistrust. By experimentally heightening either attachment anxiety or attachment security, they were able to produce corresponding decreases and increases in AI acceptance.

These relational findings could ease adoption tensions for businesses and organizations with a serious stake in AI. Rather than focusing solely on cognitive approaches to building trust, companies could use this relational angle to create deeper connections between target audiences and the AI tools they do not yet trust.

It’s important to remember that while this research could point toward gentler adoption strategies and reduced pushback, experts and consumers alike remain deeply divided on whether AI adoption represents the next phase of human development or the death of large sectors of human-driven employment.

Overcoming the trust obstacle will likely require a multifaceted approach to introducing AI into human systems. The next wave of adoption could reveal whether humans are really ready for AI integration. In turn, such adoption could help drive a wide range of AI-based projects, from individual applications to full digital transformations.


About Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain, clearly, what it is they do.
