Humans are Key to Ethical and Responsible AI


From determining why and where to implement AI, to addressing bias and ethical considerations, companies need to be prepared to think very differently when it comes to AI.

Infusing artificial intelligence (AI) into business operations is about more than implementing cool, new technologies. AI is a fundamental capability that companies can use to create better experiences, more intelligent products, and smarter processes. Applying a human lens to AI helps companies determine where and what to automate, and ensures that ethics and bias are addressed from the start, so they can achieve better insights.

Marrying sound business strategy with sound ethical behavior and appropriate governance enables organizations to achieve business objectives without violating ethical standards. As companies develop AI strategies, ethics are critical. It’s not only good business policy; it also helps safeguard the organization against the intense public scrutiny of corporate AI use. Unfortunately, fewer than half of the executives surveyed in a recent Cognizant study said that ethical considerations play a critical or significant role when their company develops and employs AI. However, roughly two-thirds of them said AI is very or extremely important today, and 8 out of 10 believe it will be in three years.


While executives believe AI offers significant benefits–such as lower costs, increased revenues, and faster time to market–there is a disconnect when it comes to implementation. AI adoption is still in its infancy, and many companies lack a strategic focus for integrating AI into the core of the business. There is no one-size-fits-all AI solution. Each use case requires different tools, algorithms, and a unique training process. And each application should be evaluated based on the business problem it is intended to solve and the outcome desired–not just on technology capabilities.

Following the five steps outlined below can help organizations develop an effective AI strategy that adheres to ethical standards while achieving business objectives.

Educate, experiment, evaluate, establish priorities, and explore

Educate your organization on the implications of AI and how it will affect employees’ careers and the way they do their jobs. Use principles of change management to help people understand the big picture and how they fit into it. This human-centric approach will improve the success of AI efforts and should be integrated into the process at the outset. While other technology implementations focus on change management during the adoption phase, AI requires awareness from the start, because it fundamentally changes the role humans play in their organizations and how they do their jobs. Roles are redefined as AI assists humans and enhances their performance. In call centers, for example, AI can effectively resolve basic customer questions, freeing call center representatives to be consultative and advise customers on more complex matters. These new “advice centers” provide a better customer experience, elevate the role of call center representatives, and enhance the organization’s reputation.

Experiment continuously and remember it’s not just about the technology but about the implications of the technology on people–so approach experimentation from a human perspective. Companies need to take a fail-fast, learn-faster approach as they pilot, learn, and scale. It’s important to have the mindset that failure is fine, and there are lessons to be learned when an experiment fails. Even a successful AI pilot won’t always lead to implementation if there are ethical issues, the business benefits aren’t great enough, or scalability isn’t realistic. For example, Microsoft shut down its Tay experiment immediately when the AI began to tweet racist messages based on biased data. While the experiment was deemed a failure, it demonstrated that having a human team focused on identifying unethical behavior enabled immediate corrective action.

Organizations need data that is free from bias, from which AI can learn patterns, and a technology infrastructure capable of processing large volumes of different types of data. AI requires a holistic view of data to understand its inherent biases. Because AI is based on historical training data and patterns, it must be trained to tell good data from bad. That’s where human intervention comes in. Humans provide data and the overall context for and monitoring of good and bad behaviors. This enables the AI to correct itself over time, so it emulates appropriate behavior and avoids bad behavior. It’s important to balance experimentation with a failsafe mechanism–typically human–that can quickly shutter an experiment in the event bias creates any type of ethical violation.
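The human failsafe described above can be sketched in code. The following is a minimal, illustrative Python example, not any specific vendor’s implementation: human reviewers flag problematic model outputs, and the experiment is automatically shuttered once the flag rate crosses a tolerance threshold. All names (ExperimentMonitor, record_review, and the 5% threshold) are hypothetical choices for illustration.

```python
class ExperimentMonitor:
    """Collects human reviews of AI outputs and halts the experiment
    when the rate of flagged (e.g., biased or unethical) outputs
    exceeds a tolerance threshold."""

    def __init__(self, max_flag_rate: float = 0.05, min_samples: int = 20):
        self.max_flag_rate = max_flag_rate  # halt if more than 5% of outputs are flagged
        self.min_samples = min_samples      # don't judge on too few human reviews
        self.reviewed = 0
        self.flagged = 0
        self.halted = False

    def flag_rate(self) -> float:
        return self.flagged / self.reviewed if self.reviewed else 0.0

    def record_review(self, is_flagged: bool) -> None:
        """A human reviewer marks one model output as acceptable or flagged."""
        self.reviewed += 1
        if is_flagged:
            self.flagged += 1
        # Failsafe: shutter the experiment once enough reviews are in
        # and the flag rate exceeds tolerance.
        if self.reviewed >= self.min_samples and self.flag_rate() > self.max_flag_rate:
            self.halted = True


monitor = ExperimentMonitor(max_flag_rate=0.05, min_samples=20)
# Simulate 30 human reviews; outputs 5, 10, and 25 are flagged as biased.
for i in range(30):
    monitor.record_review(is_flagged=(i in (5, 10, 25)))

print(monitor.halted)  # → True: 2 flags in the first 20 reviews (10%) tripped the failsafe
```

The key design choice is that the shutdown decision stays grounded in human judgment: the code only aggregates reviewer flags and enforces the stop, which mirrors the Tay example, where a human team, not the model itself, recognized the unethical behavior and pulled the plug.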

Evaluate a pilot’s results to determine if it makes sense to move to the next phase, or stop if the cost-benefit doesn’t align. Stopping a pilot and moving on to another is common given the short development window for AI. A typical AI experiment can be up and running in four weeks, and a pilot deployed in four to eight weeks, so if a pilot is shut down, another can be quickly deployed. Most companies maintain a roster of pilots to explore, so they are ready to deploy when a prior pilot is shuttered. To prepare for AI’s rapid development cycles, organizations need access to new technologies and techniques, an open cloud environment in which to experiment, and partnerships that provide access to continuously advancing AI technologies.

In addition, government regulations regarding compliance and liability may impact AI efforts, so understanding and keeping abreast of regulations is important. For example, as autonomous vehicles become mainstream, who will bear responsibility in the event of an accident: the car manufacturer or the company that created the AI? New regulatory mandates such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act may also impact AI. However, regulators across the globe are currently working on a framework that will balance ethics, bias, privacy, and creativity.

Establish priorities for AI efforts as the role AI plays increases. Before embarking on AI initiatives, first determine which projects offer the most business value and whether they are technically feasible. For example, a specialty pharmaceutical company wanted its patient services managers to better understand the drug interactions and side effects patients experienced. The managers traditionally spent time taking notes when patients called in with complaints and weren’t focused on identifying signs that a patient might be inclined to stop taking a medication. Applying machine learning, assisted intelligence, and automation provided context to the patient services managers and freed them to focus on understanding the patient experience, empathizing, and making suggestions to help keep patients on necessary drug regimens, thus enhancing business value.

Explore areas where AI efforts will be most effective as you continue to build AI as a core capability. AI can be embedded in every interface across functions such as customer service, manufacturing, production, and R&D, as well as across lines of business, to improve productivity, drive better customer experience, reduce fraud and create smarter products.

When AI may not be the answer

As beneficial as AI is, automation is not always the answer. Digital transformation still needs to be human-centric to be most effective. For example, in the case of credit card fraud, consumers are receptive to calls from fraud departments, because it concerns their financial security. By automating that process, banks miss the opportunity for call center agents to build the rapport that solidifies customer trust. Instead of automating the fraud process, banks can leverage AI to help agents have more meaningful conversations with customers that enhance the experience, build loyalty, and improve retention.

From determining why and where to implement AI, to addressing bias and ethical considerations, companies need to be prepared to think very differently when it comes to AI. No two organizations will approach AI the same, but following the steps outlined here will help optimize business results.

Poornima Ramaswamy

About Poornima Ramaswamy

Poornima Ramaswamy is Vice President, AI and Analytics at Cognizant Digital Business.
