By wholeheartedly adopting emerging technologies to remedy trust deficits, companies can rebuild and strengthen trust.
Accountability is a crucial issue for companies that create and use emerging technologies to attract and serve customers. We are seeing this play out in the media every day as Facebook and other companies are taken to task for the undesirable impacts of their products and services. For some companies, these impacts are unintentional. For some, they are the result of prioritizing growth goals over everything else. For a small number, they are intentional criminal acts. As people become more aware of the negative impacts, trust in technology is eroding. In fact, after earning high marks for decades, the tech sector has seen trust decline over the last decade, and precipitously in the last year.
According to the 2012 Edelman Trust Barometer, there was a 30-point gap between the public's trust in technology and its trust in business in general (77 percent versus 47 percent). Since then, trust in business has grown while trust in tech has declined. The gap is now less than 10 points (68 percent for tech, 59 percent for business in general). Further, after a pandemic-driven surge in the first half of 2020, tech experienced double-digit declines in trust in China (down 16 points), Canada (down 15), the United States (down 13), and the UK (down 12). Globally, trust in tech has dropped 6 points, leaving it still in the number one position but barely edging out manufacturing and healthcare.
What can tech do to regain lost trust? The simple answer is to safeguard users' interests. This is easier said than done. To help, I have developed the Trust Cycle, a unified, holistic decision-making model that consists of three layers.
As illustrated in Figure 1, The Trust Cycle, the inner layer includes the key processes that go into decision-making: Inputs, Analysis, Decisions, Impact, and Action, encircled by a Learning and Feedback loop. Each of these processes can increase or decrease trust depending on the way it is handled, the people handling it, the circumstances under which it is handled, and the results that are derived. The outer layer includes employees, partners, customers, regulators, consultants, third parties, and others who are responsible for each process and who are impacted by it. Each element of the model plays its own crucial role in establishing trust while at the same time affecting the entire model.
Further, I believe in something that might sound contradictory: while tech can create trust problems, tech can also be used to solve trust problems. In other words, by wholeheartedly adopting emerging technologies to remedy trust deficits, companies can rebuild and strengthen trust. In fact, the key processes included in the cycle can all be improved with emerging technologies, including AI, machine learning, blockchain, and the Internet of Things. Examples of companies I have worked with to power the Trust Cycle with emerging tech include:
- A large global auditing firm, cited by regulators for the poor quality of its audits, implemented an AI- and machine learning-based document-management system to automate its data collection and improve the accuracy and reliability of the data it must validate from documents provided by its clients.
- To better predict when its aircraft engines and parts might need maintenance or replacement, an aerospace manufacturer builds digital twins rather than simulations to study how the engines function. Digital twins are virtual models fed real-time data by sensors on the physical engines, capturing performance under a variety of conditions.
- An automobile insurance company is building an AI-powered, fully automated end-to-end claims processing application that initiates at the time of an accident. Photos of the accident are used to provide customers with instant damage estimates, connect them to body shops, process claims, and provide reimbursement.
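The digital-twin idea in the second example can be sketched in a few lines. This is a toy, hypothetical illustration, not the manufacturer's actual system: a twin mirrors one engine parameter (say, exhaust-gas temperature), learns a baseline from healthy-engine data, and flags readings that drift far from it as candidates for maintenance.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class EngineTwin:
    """Minimal digital-twin sketch (hypothetical): mirrors one engine
    parameter and flags readings that drift from a learned baseline."""
    baseline: list = field(default_factory=list)

    def calibrate(self, readings):
        # Learn the expected operating range from healthy-engine data.
        self.baseline = list(readings)

    def needs_maintenance(self, reading, threshold=3.0):
        # Flag a reading more than `threshold` standard deviations from
        # the baseline mean -- a crude predictive-maintenance rule.
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        return abs(reading - mu) > threshold * sigma

twin = EngineTwin()
twin.calibrate([640, 642, 638, 641, 639, 643, 640])  # healthy temps (degrees C)
print(twin.needs_maintenance(641))  # normal reading -> False
print(twin.needs_maintenance(700))  # anomalous spike -> True
```

A production twin would track many parameters and use far richer models, but the pattern is the same: real-time sensor data compared continuously against a virtual model of expected behavior.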
The emerging technologies used by these companies to increase trust are powerful and have great potential for good. But they are not well understood by the average person. Case in point: by analyzing large amounts of data and experience over long periods of time, AI systems can learn to draw conclusions and make decisions by recognizing patterns that would be too complicated or nuanced for a human to find. But this complexity often means that the reason an AI system reaches a particular conclusion is a mystery to users and sometimes even to the people who created the system.
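The opacity point can be made concrete with a toy sketch (the data and scenario here are invented for illustration): even a tiny linear classifier trained on a handful of records ends up as a set of numeric weights that predict well yet offer no human-readable reason for any single decision. Real AI systems have millions of such parameters.

```python
# Toy perceptron on made-up "approve/deny" records (hypothetical data).
# Each record: (income_score, debt_score), label 1 = approve.
data = [((0.9, 0.1), 1), ((0.8, 0.3), 1), ((0.2, 0.9), 0), ((0.3, 0.7), 0)]

w = [0.0, 0.0]
b = 0.0
for _ in range(100):                 # perceptron-style training updates
    for (x1, x2), y in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred               # adjust weights only when wrong
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

# The model now classifies every record correctly, but its "reasoning"
# is just these numbers -- accurate, yet not an explanation.
print(w, b)
```

The learned weights reproduce the decisions, but asking "why was this record denied?" yields only arithmetic, which is exactly the explainability gap described above.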
Creators of tech aside for the moment, this is an important point because users of tech-based systems find it hard to trust a technology they do not understand. This is especially true in fields such as healthcare, finance, food safety, and law enforcement, where the consequences of a flawed system, or a biased system that reflects historical or social inequities, are more serious than getting a bad movie recommendation from Netflix.
The upshot is that trust in technology is not a given, unlike trust in people, whom we mostly extend trust until and unless we have a reason not to. Companies that use technology need to earn our trust every day by supplementing tech with sound decision-making.