Moving up the human/machine interaction spectrum, from assisted to augmented to autonomous, requires increased trust in real-time systems.
Real-time systems rely on the rapid analysis of data to deliver decision-making insights in sub-second time. Such capabilities have a wide range of applications. Across all of those application areas, however, there are three general ways humans interact with real-time systems.
Specifically, human/machine interaction ranges from assisted to augmented to autonomous. Each has its own use cases.
An assisted model is frequently used for decision support. Here, the real-time system analyzes large volumes of data to inform a decision. A solution might assess the tradeoffs of making one decision over another or quantify uncertainties due to the data’s limitations. Such a system must incorporate and analyze real-time data. Insights are generated based on the data analysis, and inferences are made using artificial intelligence or machine learning models that incorporate business logic. An example: A fraud detection and prevention application might rely on an assisted model. In such an application, the real-time system spots an anomaly and alerts a human, who evaluates the threat and takes appropriate action (if needed).
An augmented model takes a real-time system’s role to a higher level. The same data collection and analysis are done as with an assisted model. What happens next is the differentiator. For example, many cars today have lane departure warning systems. An assisted version generates a beep when you veer out of your lane. An augmented system generates the alert and gently steers you back into your lane. Another application of an augmented model might see a fraud detection system place a hold on a transaction pending human approval. In both cases, the real-time system detects something out of the ordinary, sends an alert, and helps the human get through the situation.
An autonomous model builds on these capabilities and can act as a human would, without any human involvement. A differentiator here is that the system can make decisions and take actions in situations that have not been pre-defined. In both the fraud detection and the automotive examples, augmented systems assess a situation, compare it to known scenarios, and take a programmed action. An autonomous model adjusts on the fly to unknown situations.
The role of trust in real-time systems
Making each step up the human/machine interaction spectrum, from assisted to augmented to autonomous, requires increased trust in the real-time systems.
It is one thing to have a CT-image system highlight suspect features and notify a physician. It’s another to automatically schedule a patient for surgery based on the system’s interpretation of an image. That’s an exaggerated scenario, but it illustrates the need for more transparency and explainability as real-time systems play a greater role in decision-making.
One type of trust concerns whether a business can trust the decisions and actions of its real-time systems. The discussion here usually centers on explainable AI and ethical AI. Explainable AI is getting extra attention these days due to the major business disruptions brought on by the COVID-19 pandemic. Many predictive models are presented and used as simple black boxes. If a model is developed using a certain dataset and the data changes, what are the implications for the insights derived from that data and model? And what if the wrong machine learning models are used to make the decisions? A business needs to address these issues.
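The concern about data changing underneath a black-box model can be made concrete with a simple check. The sketch below is a minimal, hypothetical drift test: the feature names, the mean-shift measure, and the 0.25 threshold are all assumptions chosen for illustration, not an established methodology.

```python
# Minimal sketch of checking for data drift between a model's training
# data and the live data it now sees. Features, threshold, and the
# mean-shift test are illustrative assumptions.

def mean_shift(train_values, live_values):
    # Relative shift of the live mean versus the training mean.
    train_mean = sum(train_values) / len(train_values)
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - train_mean) / (abs(train_mean) or 1.0)

def drifted_features(train_data, live_data, threshold=0.25):
    # Flag any feature whose mean has shifted beyond the threshold --
    # a signal that insights from the original model may no longer hold.
    return [feature for feature in train_data
            if mean_shift(train_data[feature], live_data[feature]) > threshold]

train = {"order_value": [40, 50, 60], "items_per_order": [2, 3, 2]}
live = {"order_value": [90, 110, 100], "items_per_order": [2, 3, 3]}
print(drifted_features(train, live))  # ['order_value']
```

A check like this does not explain the model, but it tells the business when the model’s assumptions about the data no longer hold, which is when trust in its outputs should be re-examined.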
A business also must address trust related to ethical AI. Are the models or data limited in ways that introduce bias? What companies want to avoid is the bad publicity that can come from AI bias. One example got extensive coverage due to the people involved. In 2019, Apple launched its credit card. CNN reported that “tech entrepreneur David Heinemeier Hansson wrote that Apple Card offered him twenty times the credit limit as his wife, although they have shared assets and she has a higher credit score.” Apple co-founder Steve Wozniak seconded his opinion about the card’s bias and discrimination; he and his wife had a similar experience with the card.
Trust and human acceptance
A recent interdisciplinary study from the University of Kansas found that a person’s trust in AI is tied to their current relationships and attachment styles. As researchers are finding out, trusting systems that use AI stems largely from our own understandings of human relationships.
People who are already anxious or unsure about their various human relationships are more likely to view AI with skepticism or mistrust. On the other hand, people who are secure in their various human relationships are more likely to trust AI influences.
A recent RTInsights article noted: “This relational study could help ease tensions around adoption for businesses and organizations with a serious stake in AI. Rather than focusing solely on cognitive ways to boost trust, companies and organizations could use this relational aspect to create better, deeper connections between target audiences and the AI they may not quite trust yet.”
It’s important to remember that while this research could show a way forward with delicate adoption strategies and reduce pushback, experts and general consumers are still largely divided on whether adoption of AI is the next phase of human development or the death of large sectors of human-driven employment.
So, this is just another thing businesses will need to consider when moving from assisted to augmented to autonomous real-time systems.