RTInsights is a media partner of AI Summit NY

Considerations and a Blueprint for Responsible AI Practices after a Year of ChatGPT


Responsible AI requires that organizations address process, technology, people, and governance issues, according to speakers at the recent AI Summit.

At the recent AI Summit in New York, topics surrounding Responsible AI – including regulation, fairness, bias, transparency, and data privacy – were a prominent part of the conference agenda. It is clear from the many talks and panel discussions at the conference that there is heightened awareness of ethical concerns surrounding AI, all the more so as AI algorithms find their way into real-time applications. But how best to handle these issues is still very much an open conversation.

Here we highlight some of the key challenges and active topics of discussion surrounding Responsible AI, and then offer a general blueprint for how companies can either begin or strengthen their Responsible AI practices.

Responsible AI Challenges and Considerations

A few roadblocks stand on the path to responsible, ethical AI.

From a technical standpoint, as Dr. Paul Dongha, Group Head of Data & AI Ethics at Lloyds Banking Group, pointed out on one panel, one cannot guarantee “repeatability” with probabilistic models like those used in machine learning and AI. That is, for a given set of inputs (say, a particular income level and credit score in a bank’s lending model), one cannot always guarantee the same output.

Instead, one can do “consequence scanning” and test the models with a range of inputs, he continued. However, this requires enough test data, which is not always possible given limits on the availability of relevant data.
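As a concrete illustration, a consequence scan might look something like the sketch below, which sweeps a hypothetical lending model over a grid of income and credit-score inputs and flags any combination where repeated calls disagree. The model interface (model.predict), the feature ranges, and the repeat count are assumptions for illustration, not anything the panelists prescribed.

```python
# Hypothetical sketch of "consequence scanning": sweep a lending model
# over a grid of inputs and flag combinations where repeated calls
# disagree. The model interface (model.predict), the feature ranges,
# and the repeat count are illustrative assumptions.
import itertools

INCOME_LEVELS = range(20_000, 200_001, 20_000)  # annual income, USD
CREDIT_SCORES = range(300, 851, 50)             # FICO-style scores

def scan_consequences(model, n_repeats=5):
    """Probe every (income, score) combination several times and
    record any case where the model's decisions disagree."""
    inconsistent = []
    for income, score in itertools.product(INCOME_LEVELS, CREDIT_SCORES):
        outcomes = {model.predict(income, score) for _ in range(n_repeats)}
        if len(outcomes) > 1:  # same inputs, different decisions
            inconsistent.append((income, score, outcomes))
    return inconsistent
```

In practice, the flagged cases would feed a human review of the model rather than any automated fix.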

But even if one could perform a range of tests, who decides what counts as “fair”?

Panelists across several different sessions pointed out that fairness and bias are viewed differently across different cultures. Thus, to whom does it fall to make decisions on fairness and bias? There was near-unanimous recognition among speakers and panelists that this responsibility cannot fall solely on individual AI engineers.

Of course, these are challenges faced by companies that want to pursue responsible AI, but what about companies that do not feel motivated to examine the ethical implications of their models?

One option is governmental regulation. Some important considerations around regulation were discussed at a panel on AI policy. On the one hand, Michelle Rosen, General Counsel at Collibra, stated that “regulation can engender trust” in AI among the general public. On the other hand, panelists pointed out that companies would have to contend with differing regulations across many countries. And what if some countries, like China, do not share other countries’ priorities on whether or how to regulate AI?

Among the AI policy panelists, there was near-unanimous support for “soft law” – that is, industry best practices that companies would be motivated to uphold. Such practices would need to be supported by incentives that are yet to be determined, but “soft law” could be adopted as a supplement to some base level of government regulation.

See also: Algorithmic Destruction: The Ultimate Consequences of Improper Data Use

A Blueprint for Responsible AI

What are some best practices for companies that are either just beginning to consider responsible AI practices or want to strengthen their existing procedures?

During a separate talk he gave on responsible generative AI, Dr. Dongha of Lloyds Banking Group outlined three broad areas where this issue should be addressed: “process & technology,” “people & governance,” and “independent validation.”

Under “process & technology” fall the technical aspects of actually implementing responsible machine learning and AI models. During a panel on AI Ethics, Ravit Dotan, AI ethicist and CEO of TechBetter, advised that there needs to be a checkpoint at the beginning of the development process, as well as regular checkpoints as updates are made to the model.
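To make the checkpoint idea concrete, the sketch below shows one minimal way such recurring checks might be encoded. The specific checks and the model-card fields are assumptions for illustration, not criteria Dotan prescribed.

```python
# Illustrative sketch of recurring ethics checkpoints across a model's
# lifecycle. The check names and the model-card fields are assumptions
# for illustration, not criteria the panelists prescribed.
from dataclasses import dataclass, field

@dataclass
class EthicsCheckpoint:
    stage: str                          # e.g., "design" or "v1.2 update"
    findings: dict = field(default_factory=dict)

    def run(self, model_card: dict) -> bool:
        """Record simple yes/no checks; a real review would apply far
        richer criteria and human judgment."""
        self.findings = {
            "intended_use_documented": bool(model_card.get("intended_use")),
            "training_data_reviewed": bool(model_card.get("data_review")),
            "fairness_metrics_defined": bool(model_card.get("fairness_metrics")),
        }
        return all(self.findings.values())

# Run once at the start of development, then again on every model update.
approved = EthicsCheckpoint(stage="design").run(
    {"intended_use": "retail lending triage"}
)
```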

Dr. Dongha advises using synthetic data to test model outcomes before models go live – in cases where privacy concerns rule out using customer data for testing, or where there simply is not enough data available yet. Dotan, however, cautions that this approach must itself be validated by comparing the quality of tests run on synthetic versus real data. Further, special care must be taken to ensure the synthetic data is representative of the target population.
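As one example of how that validation might start, the sketch below compares each numeric feature’s distribution in the real and synthetic datasets using a two-sample Kolmogorov-Smirnov test. The column handling and the 0.05 significance threshold are illustrative assumptions.

```python
# A minimal sketch of one way to check that synthetic test data is
# representative of the real population: compare each numeric feature's
# distribution with a two-sample Kolmogorov-Smirnov test. The column
# handling and the 0.05 threshold are illustrative assumptions.
import pandas as pd
from scipy.stats import ks_2samp

def flag_unrepresentative_features(real: pd.DataFrame,
                                   synthetic: pd.DataFrame,
                                   alpha: float = 0.05):
    """Return numeric columns whose synthetic distribution differs
    significantly from the real one."""
    flagged = []
    for col in real.select_dtypes("number").columns:
        stat, p_value = ks_2samp(real[col].dropna(), synthetic[col].dropna())
        if p_value < alpha:  # distributions likely differ
            flagged.append((col, round(stat, 3), round(p_value, 4)))
    return flagged
```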

Finally, Dotan points out that it is not enough to monitor the data and models themselves; one must also monitor the actual outcomes of the models as they are used in a live, customer-facing setting.
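In practice, such outcome monitoring could be as simple as tracking decision rates per customer segment over a rolling window and alerting when they diverge, as in this hedged sketch. The segment labels, window size, and alert threshold are assumptions, not figures Dotan suggested.

```python
# Hedged sketch of live outcome monitoring: track the approval rate per
# customer segment over a rolling window and raise an alert when the
# rates diverge. The window size and 10-point gap are illustrative
# assumptions.
from collections import defaultdict, deque

class OutcomeMonitor:
    def __init__(self, window: int = 1000, max_gap: float = 0.10):
        self.max_gap = max_gap
        self.decisions = defaultdict(lambda: deque(maxlen=window))

    def record(self, segment: str, approved: bool) -> None:
        self.decisions[segment].append(approved)

    def gap_alert(self) -> bool:
        """True when approval rates across segments spread wider than
        the configured maximum gap."""
        rates = [sum(d) / len(d) for d in self.decisions.values() if d]
        return len(rates) > 1 and max(rates) - min(rates) > self.max_gap
```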

Under “people & governance,” Dr. Dongha stressed the importance of setting up an internal ethics board within the company. Such a board would allow for debate and decision-making on “ethical edge cases” and would “facilitate multiple stakeholder perspectives.” Crucially, it would take the responsibility for making decisions around fairness and ethics off the shoulders of individual AI engineers.

A related challenge is bridging the communication gap between those versed in the technical details of AI models and stakeholders who are less familiar with them. Dotan, from TechBetter, emphasizes that those on the technical side must understand which parts of the AI system stakeholders need to grasp in order to make important decisions around fairness and ethics. In other words, it is less important for stakeholders to understand all of the technical details than to understand what the model can actually do – or not do – and the potential implications of adopting it.

Finally, there is “independent validation,” in which an outside authority assesses your AI models for potential risks. There was not much discussion of this topic at the conference, and it is perhaps a more difficult piece for companies to implement, but it is important to keep this option as part of the conversation.

About Dan Capellupo

Dan Capellupo, PhD, is a data scientist with 14 years of experience working with data, from studying distant black holes to designing predictive algorithms in the financial and IoT sectors.
