Challenges in the adoption of AI in financial institutions

The digital era has brought forth numerous new solutions and opportunities, including artificial intelligence (AI) systems, transforming our perception of traditional services. The profile and needs of financial service customers have also evolved. Today, accessing financial services is no longer limited to a bank's physical offices. Clients increasingly seek to use services via their mobile devices, anytime and anywhere. This growing demand pressures financial institutions to deliver services that are easy, fast, and convenient. To meet these expectations, financial institutions are accelerating and automating both internal processes and customer-facing services through the development of digital solutions.

The rapid advancement of artificial intelligence in recent years has created new opportunities to improve service efficiency across various sectors, including finance. It is no surprise, then, that financial institutions are increasingly exploring the adoption of AI systems. The main benefit is seen in AI's ability to further automate service delivery processes.

What might the future hold for AI in financial institutions? Clients could soon interact with chatbots instead of customer support employees, available 24/7 to answer questions and resolve complaints. To combat fraud, AI will be employed to detect suspicious patterns in transactions, operations, and processes. In customer relationship management, AI will analyze customer behavior to deliver personalized services. When it comes to credit risk management, AI systems could evaluate applicants’ creditworthiness through automated data analysis, enhancing decision accuracy and reducing credit losses. Ultimately, these advancements should make accessing financial services faster and more convenient for customers.

However, alongside the benefits, financial institutions must also address the risks associated with AI and ensure compliance with applicable regulations. The European Union has taken a leading role in AI regulation. On August 1, 2024, the EU’s Artificial Intelligence Regulation (the EU AI Act) entered into force and will apply in stages. It is the first regulation of its scale to focus specifically on artificial intelligence. Financial institutions using AI systems to deliver their services will also need to comply with its requirements.

Risk-based approach to AI systems 

The EU AI Act follows a risk-based approach, categorizing AI systems into four risk levels: minimal, limited, high, and unacceptable. In financial institutions, examples of unacceptable risk systems, which are prohibited, include AI systems that evaluate or classify individuals based on their social behavior or personal characteristics, or that generate a ‘social score’ to make decisions about customers. The use of manipulative techniques that can influence individuals’ behavior and judgment—such as encouraging them to use a financial service—is also prohibited.

For financial institutions, the high-risk category is likely the most relevant in practice. The EU AI Act identifies two specific situations in financial services where the use of AI entails a higher-than-normal risk: the use of AI for creditworthiness assessment and credit decisions, and for risk assessment and pricing in life and health insurance. High-risk systems must comply with stringent security and compliance requirements, including transparency obligations.

Key regulatory requirements for high-risk AI systems

Firstly, before implementing a high-risk AI system, financial institutions must prepare a risk and impact assessment evaluating the potential effects of using the system. This analysis must assess and mitigate the potential risks related to the use of the AI system, with a particular focus on preventing breaches of privacy or discrimination.

Next, it is crucial to ensure that the use of the AI system is clear and understandable. AI systems must be transparent, meaning it should be possible to see how decisions are made and on what data they are based. The logic behind a decision must be clearly explained to the person concerned. In cases of automated data processing, individuals must have the right to request human intervention and to prevent any decision based solely on automated processing from being made about them.

High-risk AI systems must also undergo a conformity assessment process, which in certain cases (e.g., creditworthiness assessment) includes the obligation to conduct an independent audit to confirm that the system complies with the applicable requirements. In addition, financial institutions will need to ensure that staff are adequately trained and informed about the use of AI systems, and capable of providing customers with accurate and clear information.

Exemptions for financial institutions

The EU AI Act provides a number of specific exemptions for financial institutions, meaning they are not obliged to comply with certain of its requirements. These derogations exist because existing financial sector legislation already imposes stringent requirements on the internal systems, processes, and mechanisms of financial institutions (e.g., the requirements under the DORA regulation). The exemptions therefore do not imply that financial institutions may use AI systems more leniently than other market participants; instead, they are intended to prevent regulatory duplication and potential conflicts with sector-specific laws.

Financial institutions should therefore base their use of AI systems first on the specific requirements of the financial sector, implementing the EU AI Act to the extent it applies.

The broader adoption of AI in the financial sector, as in other areas of society, is an anticipated outcome of the digital age. Financial institutions using or planning to use AI systems should familiarise themselves with the requirements applicable to the use of AI as early as possible in order to be ready for the next major technological step.

The article, published in Äripäev on September 25, 2024, was written by Ellex in Estonia experts Anneli Krunks and Merlin Liis-Toomela.

Linked Experts

Anneli Krunks
Senior Associate / Estonia
Merlin Liis-Toomela
Senior Associate / Estonia