Risks and ethical implications of AI in financial services

April 05, 2024

While the use of artificial intelligence (AI) in financial services offers numerous benefits, such as enhanced productivity and innovation, improved risk management and personalized customer experiences, it also carries certain risks. These include data privacy concerns, intellectual property and copyright issues, questions of quality and reliability, and inherent bias in data and models. It also raises ethical questions for consideration. Below are some core implications to consider when developing generative AI use cases for financial services:

1. Transparency and fairness – AI carries inherent biases because it is built on human-generated data and modeling choices. While AI researchers and developers are aware that these biases exist, it is nearly impossible to root them out completely, so a person must vet all outputs from generative AI systems to ensure accuracy and fairness. For example, AI-based lending decisions can be skewed by biases in training data that reflect adverse socioeconomic trends. Ensuring transparency in how algorithms work builds trust and can limit the effect of inherent biases.
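One common way to vet AI-based lending decisions for the kind of bias described above is a disparate-impact check, which compares approval rates across groups. The sketch below is purely illustrative (the data, group names and the "four-fifths rule" threshold are assumptions, not anything specific to FIS or any particular lender):

```python
# Illustrative sketch: a basic disparate-impact check for AI-assisted
# lending decisions, using the "four-fifths rule" as a hypothetical
# fairness threshold. Data and group labels are made up for the example.

def approval_rate(decisions):
    """Fraction of applications approved in a group (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model outputs for two applicant groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # protected group: 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # reference group: 70% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))   # ~0.43
print(ratio >= 0.8)      # False: below the four-fifths threshold, flag for human review
```

A ratio below 0.8 would not prove discrimination on its own, but it is a simple, transparent signal that a person should review the model's decisions before they are acted on.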

2. Data privacy and security – Major questions remain about intellectual property rights in AI-generated text and images, especially those that have not been vetted by a person, which poses significant risk when creating external or client-facing materials with programs like ChatGPT. Financial institutions will also need to protect the sensitive data used by AI models and build customer trust by securing consent through adequate opt-ins and opt-outs.

3. Regulatory compliance – The novelty of AI brings with it an ambiguity to regulatory compliance and legislation, and it will be incumbent on AI developers and users to align with those standards. Financial institutions must work with regulatory bodies and policymakers to ensure that adequate compliance frameworks are developed, taking into account ethical and legal standards, as well as putting the customers’ best interests at the center of what they do.

4. Market manipulation and fraud – As this technology reaches broader audiences, there is more potential for bad actors to do harm in novel ways. One example is the growing trend of spear-phishing attacks in which fraudsters use AI voice-cloning technology to target vulnerable individuals with information they perceive as credible, such as an AI-generated voice call that sounds like a loved one requesting an emergency money transfer. New and effective security methods are required to detect and prevent these types of attacks.

5. Overreliance on AI and unintended consequences – Without a proper system of checks and balances, AI outputs can introduce unwanted risk because of the technology's imperfect ability to analyze and synthesize outcomes. Because generative AI systems like ChatGPT are trained on real text from across the internet, there is a high probability of inaccurate information being generated and disseminated repeatedly, so automated text needs thorough review by a person before it is used. Acting on AI predictions without verifying either the input data or the output, for example, can create financial and reputational risk.
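The check-and-balance described above can be made concrete as a simple human-in-the-loop gate that refuses to release AI-generated material until a person has signed off. This is a minimal sketch under assumed names (`Draft`, `publish`, the reviewer field), not any specific product's workflow:

```python
# Illustrative sketch: a human-in-the-loop gate so AI-generated text
# is never released without review. All names here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    reviewed_by: Optional[str] = None  # set only when a person signs off

def publish(draft: Draft) -> str:
    """Refuse to release any AI-generated draft that no person has vetted."""
    if draft.reviewed_by is None:
        raise ValueError("AI-generated draft requires human review before release")
    return draft.text

draft = Draft(text="Quarterly outlook summary...")
try:
    publish(draft)  # blocked: no reviewer has signed off yet
except ValueError as e:
    print(e)

draft.reviewed_by = "compliance.analyst"
print(publish(draft))  # released only after sign-off
```

The design choice here is that the unsafe path fails loudly by default; review is a precondition enforced in code rather than a step that can be skipped under time pressure.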

6. Cost of AI ownership – Owning and operating a generative AI system in-house is expensive and requires significant hardware capabilities and on-staff engineers to maintain. Relying on a third-party provider reduces the burden on internal resources, but it limits how closely the AI can be tailored to your needs and introduces the challenges of biases, intellectual property and other risk factors. As a result, third-party risk would need adequate assessment.

While risks need to be considered when developing generative AI, there are also many potential use cases and advantages to be gained from it. Many of these use cases are interchangeable across financial services subindustries, and as the AI market matures, we will undoubtedly find a cross-pollination of ideas from other industries reaching the financial services industry. The extent to which the AI tools can solve real-world problems accurately will ultimately depend on the sophistication of the AI models being developed and the richness and reliability of the data used to train them.

About the Author
Ameet Bhatt, Senior Director International Banking & Payments, FIS

Ameet Bhatt is head of Strategic Initiatives for International Banking & Payments at FIS. His role involves developing and executing innovative strategies that improve revenue and margin, as well as strengthen FIS’ competitive advantage as a leader in banking and payments. He brings a wealth of experience, with two decades of strategy consulting, product strategy and development, project delivery and innovation design experience within the financial services industry.
