RTInsights is a media partner of The AI Summit NY

State of AI in the Financial Services Industry Going into 2024


Generative AI might be the main buzzword in the financial services industry, but it remains a small part of overall AI adoption as the industry looks to implement AI responsibly.

One year after OpenAI launched ChatGPT, generative AI has become the main buzzword in the financial services industry, as in other sectors. However, it remains a small part of the overall AI adoption that the industry is experiencing, according to industry experts.

In addition to generative AI, discussions in the finance track at The AI Summit NY earlier this month focused on how companies can use AI responsibly to reduce costs and improve efficiencies, how to combat bias and discrimination, and trends around regulation, with particular attention given to the EU AI Act (which the EU agreed to move forward the same week).

Practical solutions to combating bias and discrimination in lending decisions

When it comes to lending decisions, how can organizations avoid a repeat of Apple Card’s lending blunder a few years ago, when its credit-limit algorithm was accused of gender bias?

Although the aim is for AI-based decision-making to be robust and repeatable, that cannot be guaranteed, said Dr. Paul Dongha, Group Head of Data & AI Ethics at Lloyds Banking Group, on one panel. Instead, “we have to revert to some techniques that we can use to help with them,” he explained.

At Lloyds, they try to embed ethics throughout the data science lifecycle, he continued.

One example he gave comes at the beginning of the lifecycle: when scoping a project’s requirements, the team tries to predict its potential negative consequences.

Even before that, diversity within product teams, in gender as well as in other areas, including diversity of thought, is important, he noted.
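To make one such technique concrete, a common bias check for lending decisions is a disparate-impact test, which compares approval rates across demographic groups. The sketch below is purely illustrative, using made-up decisions and the common 80% (“four-fifths”) review threshold; it does not describe Lloyds’ actual process.

```python
# Hypothetical disparate-impact check on lending decisions.
# Data, group labels, and the 0.8 threshold are illustrative assumptions.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of applications that were approved."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group's approval rate to the higher one's.

    The 'four-fifths rule' heuristic flags ratios below 0.8 for review.
    """
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy model outputs: True = approved, False = declined.
group_a_decisions = [True, True, False, True, True, False, True, True]
group_b_decisions = [True, False, False, True, False, False, True, False]

ratio = disparate_impact_ratio(group_a_decisions, group_b_decisions)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the 0.8 threshold -- flag the model for review.")
```

A check like this is only one signal; as Dongha noted, no single technique guarantees fair outcomes on its own.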

Fellow panelist Mirka Snyder Caron, Assistant Vice President, Privacy and Compliance at Foresters Financial in Canada, shared that various frameworks can be followed across an AI product lifecycle.

One framework she uses has five checkpoints in the data lifecycle, including training and testing at the deployment stage and managing ongoing oversight of the AI product.

She also emphasized the importance of eventually decommissioning the AI and determining what to do with a particular model or its products (i.e., the data).

How Mastercard uses AI

Like electricity, “we use it [AI] for everything,” said Rashida Richardson, Senior Counsel, Privacy and Data Protection, Artificial Intelligence – Mastercard, during the session, “Adapting your company to fast-moving technology.” 

One example she gave was using AI to assign in real time a risk score to every customer transaction to help detect fraud.
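Mastercard’s production systems are proprietary, but the general pattern she described, training a model on historical transactions and scoring each new one as it arrives, can be sketched roughly as follows. The features, synthetic data, and threshold here are illustrative assumptions, not details from the session.

```python
# Illustrative real-time transaction risk scoring, not Mastercard's system.
from sklearn.ensemble import IsolationForest
import numpy as np

# Synthetic historical transactions with three features:
# [amount, hour_of_day, distance_from_home_km]
rng = np.random.default_rng(seed=0)
history = np.column_stack([
    rng.lognormal(3.5, 1.0, 5000),   # typical purchase amounts
    rng.integers(0, 24, 5000),       # hour of day
    rng.exponential(10.0, 5000),     # distance from home in km
])
model = IsolationForest(random_state=0).fit(history)

def risk_score(amount: float, hour: int, distance_km: float) -> float:
    """Score one incoming transaction; higher means more anomalous."""
    # score_samples returns higher values for normal points, so negate it.
    return -model.score_samples([[amount, hour, distance_km]])[0]

# Score a transaction as it arrives and flag it if it looks unusual.
score = risk_score(amount=4800.0, hour=3, distance_km=900.0)
print(f"risk score: {score:.3f}")
if score > 0.6:  # illustrative threshold
    print("High risk: hold the transaction for review.")
```

In practice, the scoring step would sit inside a low-latency stream processor so that every transaction is scored before authorization completes.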

Other areas Mastercard has expanded into include money laundering detection as well as vulnerability detection. The latter, she explained, addresses how Mastercard can identify risks in software and HR systems and identify solutions and metrics.

Key learnings and avoiding the pitfalls of AI

“It’s necessary to try AI or risk falling behind, but it has to be strategic,” Jessica Peretta, Senior Vice President, Network Behavior Management – Mastercard, noted in the session with Richardson. 

Backing and support from leadership are crucial to success. “We’ve had hundreds and hundreds of use cases internally trying to drive AI and if there was no sponsorship behind it, you build something and it goes nowhere. So you really have to make sure that it’s somebody who can mobilize that to do the unsexy stuff,” she explained.

Starting small is also critical. “AI requires a ton of infrastructure and support and talent. And if you don’t have it, it will be very hard to do that,” she continued.

AI is not a monolith

When addressing its challenges, AI is often discussed as if it were a monolith, but “there are two distinct kinds of ways of thinking about viability and other legal and policy concerns,” said Richardson.

She noted that while they use traditional AI models that are focused on something specific, such as pathway prediction or recommendations, and have very discrete functions, “we also have foundational models, which are more generalized and can do a lot but also contribute a number of risks.”

They all still fit under a certain umbrella, though, and “all of AI needs some type of governance, some type of procedures to ensure that we’re balancing risks and benefits in the right manner, but some specific risks or concerns may require a different type of analysis or even different forms of legal analysis,” Richardson explained.

One example is media networks, where there is a potential risk of IP addresses being leaked when generative AI is used, she noted.

“If you’re only taking privacy or cybersecurity approaches, then those types of risks may not be [addressed], or you may not mitigate [them] as well,” she explained, which is why she sees a need to bring such risks under the same type of governance practices.

Asked by an audience member whether generative AI is a new breed or a new species, Peretta replied: “It’s a new breed, not a new species.”


About Lisa Damast

Lisa Damast is a senior writer and the Head of Marketing at RTInsights.com. She has over 12 years of experience in online media as a reporter and marketing manager. Follow her on Twitter @Lisa_Damast.
