
How will artificial intelligence affect financial regulation?

From risk management to investment banking, artificial intelligence is having a transformative effect on the finance industry. But this change – both in financial services and regulation – is fraught with complexities. Balanced solutions are needed to harmonise innovation with financial stability.

The financial sector is inextricably linked to the burgeoning world of digital technology. In this modern era, financial practitioners lean on artificial intelligence (AI) technologies to unravel complex issues, a shift propelled by an exponential rise in big and alternative data sources.

But the aggressive pace of AI expansion in finance is spawning an array of regulatory obstacles and opportunities that cannot be overlooked.

How has the use of AI in finance developed?

Artificial intelligence is not new. The current wave of chatbots powered by generative AI can be traced back to ELIZA, a natural language processing computer program created at the Massachusetts Institute of Technology (MIT) in the 1960s.

Even during the AI winter of the 1980s – an era of funding cuts driven by AI’s apparently limited contribution to real-world applications – there was a proliferation of AI expert systems in finance.

These expert systems transparently encode rules drawn from existing human decision-making rulebooks. For example, the American Express Authorizer’s Assistant encodes the company’s in-house rules for credit authorisation.

The ‘black-box’ deep learning algorithms that have been developed since, and which are currently in vogue, are less transparent. Their internal workings are not visible, and they provide no explanation of the decisions they reach.

AI in the financial sector

Today, we are in an era of AI implementation where algorithms have become a cornerstone of the financial landscape, seeding innovations in risk management (Lin and Hsu, 2017), portfolio construction (Jaeger et al, 2021), investment banking (Investment Banking Council, 2020) and insurance (Society of Actuaries, 2020).

The trajectory towards a fully algorithmic finance industry, where multiple operations like trading, valuation and textual analysis will become automated and highly efficient, appears unstoppable, a viewpoint corroborated by various industry experts (López de Prado, 2019). This raises pressing questions about the future orientation of financial regulations.

As change in the industry continues apace, a reflective approach is paramount in navigating the evolving narrative of AI in finance – a narrative characterised by its revolutionary potential and the looming challenges it presents.

Crafting a harmonious future where finance and technology seamlessly intertwine hinges on a nuanced regulatory framework that promotes innovation while assuring systemic stability.

Debates about the effects of regulation

Financial regulation involves rules and laws that govern financial institutions like banks, brokerages and investment funds. The aim is to maintain the financial system’s integrity, safeguard investor confidence, prevent financial crimes and provide stability to the economy.

The effects of financial regulation on society can be analysed from two perspectives: the referees’ (public good) view and the risk-takers’ (private good) view.

The referees’ view of financial regulation suggests that regulation is essential for maintaining stability in the economy and protecting consumers, providing fairness and efficiency in financial markets.

This perspective emphasises that, if left unregulated, financial markets may produce market failures and financial crises. This is due to systemic risks and imbalances in the information available to the parties involved (information asymmetry).

The risk-takers’ view argues that regulation might not always be beneficial and could have negative implications, including stifling innovation and imposing high compliance costs.

Further, this view is grounded in theories of public choice and regulatory capture, which suggest that regulators can be influenced by the very institutions that they are supposed to regulate, leading to decisions that favour the industry rather than the broader public.

The optimal level of regulation lies somewhere between these two extremes and remains the subject of debate.

Has financial regulation become more complex over time?

Artificial intelligence has been used to solve complex problems in finance. For example, transformer algorithms – the technology underpinning ChatGPT – imbued with financial theory have been used to tackle a complex supervisory problem: reducing false positive detections of fraudulent market manipulation.
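
As a rough sketch of how such a system might work in practice – not the specific approach used in that research – the example below re-scores the text of market-surveillance alerts with a transformer classifier and suppresses those that the model judges very likely to be false positives. The model name, labels, alert descriptions and threshold are all illustrative assumptions.

    # Illustrative sketch: re-scoring surveillance alerts with a transformer classifier
    # to filter likely false positives before they reach a human analyst.
    # The model name is hypothetical; in practice a classifier would be fine-tuned
    # on historical alerts labelled by the outcome of the investigation.
    from transformers import pipeline

    classifier = pipeline(
        "text-classification",
        model="example-org/alert-outcome-classifier",  # hypothetical fine-tuned model
    )

    alerts = [
        "Rapid sequence of small orders placed and cancelled ahead of a large trade.",
        "Single block trade executed at the prevailing market price near the close.",
    ]

    THRESHOLD = 0.90  # assumed confidence above which an alert is set aside

    for alert in alerts:
        result = classifier(alert)[0]  # e.g. {'label': 'FALSE_POSITIVE', 'score': 0.97}
        if result["label"] == "FALSE_POSITIVE" and result["score"] >= THRESHOLD:
            print("Suppress (likely false positive):", alert)
        else:
            print("Escalate for human review:", alert)

The key design choice in a filter like this is the threshold: set too low, genuine manipulation may be screened out; set too high, analysts continue to drown in spurious alerts.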

There is a perception that regulation has also become more complex in recent times. A crude way to investigate this is to track the frequency of regulation enactments for financial institutions (see Figure 1).

Figure 1: Rules on the books faced by financial services firms (regulations by year of introduction and country)

Source: Authors’ calculations
Note: The list is part of a larger research project investigating regulatory complexity using AI. The list here only includes the major pieces of legislation enacted to protect the global financial system.

This upward trend has spawned a debate about whether increased frequency equates to greater regulatory complexity. Increased regulatory complexity motivates the adoption of AI technology to reduce the burden on complying firms.

One way of answering this question is to track investment in firms seeking to aid financial institutions in navigating regulations. This can be used as a proxy for the regulatory strain facing financial firms (see Figure 2).

Figure 2: Market trends in fintech fundraising

Source: PitchBook
Note: The total capital invested each year was: $230,995 million in 2012, $147,302 million in 2013, $227,450 million in 2014, $152,065 million in 2015, $79,737 million in 2016, $90,068 million in 2017, $68,014 million in 2018, $63,689 million in 2019, $22,697 million in 2020, $14,968 million in 2021, $15,167 million in 2022 and $16,313 million in 2023.

The presence of multiple stakeholders in the regulatory ecosystem allows us to visualise its evolution in a number of ways. A look at the investment trends against the backdrop of growing regulations helps to show the market’s evaluation of opportunities raised by AI.

Figure 2 shows two categories of firms: financial technology (fintech) and regulation technology (regtech).

Fintech firms are financial firms that have embraced technological innovations, including AI. Regtech firms are the subset of fintech firms that focus on adopting technology to address regulatory requirements. Growth in regtech investment highlights the market’s interest in adopting technologies such as AI to comply with financial regulation.

Using this split, Figure 2 takes capital investment in these firms as a proxy for the market’s interest in using disruptive technology, both in finance in general and in financial regulation specifically.

According to PitchBook (a leading provider of data on global capital markets), the fintech sector has raised $1.54 trillion in capital investment since 2012 from 50,595 investors over 60,835 deals.

While regtech accounts for a minority of overall fintech investment – exceeding 25% in only one year of the sample – the average deal size for regtech investment is much larger (see Figure 3).

Figure 3: Fintech vs regtech average deal size ($ millions)

Source: PitchBook
Note: The mean average deal size is calculated by dividing the total capital invested that year by the number of deals.
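
As a trivial illustration of that calculation (the figures below are made up for the example, not drawn from the PitchBook data):

    # Illustrative only: average deal size = total capital invested / number of deals.
    # The figures are hypothetical, not PitchBook data.
    total_capital_invested_musd = 10_000   # capital invested in a given year ($ millions)
    number_of_deals = 200                  # deals closed in the same year

    average_deal_size_musd = total_capital_invested_musd / number_of_deals
    print(f"Average deal size: ${average_deal_size_musd:.1f} million")  # $50.0 million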

This suggests that fewer regtech investment opportunities exist, but that they attract strong demand when they arise. It also points to the greater economies of scale required to exploit regtech opportunities compared with general fintech: the complexity of the regulatory landscape demands additional capital investment.

Towards effective regulation using AI

Artificial intelligence possesses immense potential to reshape the process of financial regulation. By employing AI systems, regulatory agencies can enhance their ability to interpret and enforce complex financial regulations.

Financial institutions are intricate entities with many moving parts, making monitoring compliance a daunting task. AI systems, with their capacity to analyse large datasets and identify patterns, could be a game-changer.

For example, machine learning algorithms can assist in identifying irregularities or discrepancies in financial transactions. These could be indicative of non-compliance, fraud or other illicit activities.

By flagging these issues, AI allows for quicker and more efficient regulatory interventions. Further, AI can automate much of the regulatory reporting process, making it more accurate and less labour-intensive.
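
As a minimal sketch of the kind of pattern-spotting described above – not any particular regulator’s system – the example below trains an unsupervised anomaly detector on simulated transaction features and flags the most unusual transactions for review. The features, figures and contamination rate are all illustrative assumptions.

    # Minimal sketch: flagging unusual transactions with an unsupervised anomaly detector.
    # The transaction features are simulated; a real system would use many more signals.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=42)

    # Simulated features: [transaction amount, transactions in past hour, hour of day]
    normal = rng.normal(loc=[500, 3, 13], scale=[200, 2, 4], size=(1_000, 3))
    unusual = rng.normal(loc=[25_000, 40, 3], scale=[5_000, 5, 1], size=(10, 3))
    transactions = np.vstack([normal, unusual])

    # Fit the detector; 'contamination' is an assumed share of anomalous activity.
    detector = IsolationForest(contamination=0.01, random_state=0)
    labels = detector.fit_predict(transactions)   # -1 = flagged as anomalous, 1 = normal

    flagged = np.where(labels == -1)[0]
    print(f"{len(flagged)} transactions flagged for compliance review")

In a sketch like this, flagged transactions are only candidates for human review; they are not determinations of wrongdoing.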

Nevertheless, while AI’s potential to assist in financial regulation is promising, it also presents considerable challenges. Regulatory bodies and financial institutions need to consider these to ensure that AI’s integration into the financial regulatory framework is smooth.

A primary concern is the black-box problem. This refers to the lack of transparency in how AI algorithms make decisions. Given the significance of these decisions in financial regulation, a lack of understanding of AI operations can lead to unpredictable and potentially detrimental outcomes. This opacity could jeopardise accountability, a cornerstone of effective regulation.

Another challenge is the risk of automation bias, where regulators may rely excessively on AI outputs without questioning their validity. This could lead to overlooked errors or misinterpretations, affecting the accuracy of financial regulation.

Indeed, AI systems are not immune to bias, which can be inadvertently introduced through the data used to train them. This could result in biased regulatory decisions, potentially creating a discriminatory financial environment.

AI also raises data privacy and security concerns. As AI in financial regulation would require extensive data collection and analysis, ensuring the protection of sensitive financial information is paramount. AI might also increase the system’s vulnerability to cyber threats.

Figure 4: Taxonomy of responsible AI

Source: Arrieta et al, 2020

These challenges make it necessary to take a careful and balanced approach towards integrating AI into financial regulation. The current regulatory frameworks may need to be rethought and updated to account for these challenges.

One potential solution could be to develop a ‘responsible’ AI framework for financial regulation.

‘Explainable’ AI (XAI) is vital to this framework (see Figure 4). In finance, where explaining models is a primary objective of model risk management guidance such as SR 11-7, the most popular XAI techniques provide removal-based explanations of black-box AI models.

These methods work by simulating the removal of features to quantify each feature’s influence on the model. Individual methods differ in the removal operation used, the aspect of model behaviour investigated and the way the results are summarised.

For example, Shapley Additive Global Explanations (SAGE) removes features by marginalising them out using their conditional distribution, and summarises the model’s loss over the dataset (the model behaviour) using Shapley values (Shapley, 1953). This type of mathematical approach shares some similarities with classical econometric regression explanations.
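
To make the idea concrete, the sketch below computes exact Shapley values for a toy model by brute force, ‘removing’ features by replacing them with their dataset average. This mean-value replacement is a simplification of the conditional-distribution marginalisation used by SAGE, and the model, weights and data are purely hypothetical.

    # Illustrative sketch of a removal-based explanation via Shapley values.
    # Features are 'removed' by replacing them with their dataset mean - a
    # simplification of SAGE's conditional marginalisation, for clarity only.
    from itertools import combinations
    from math import factorial
    import numpy as np

    def shapley_values(model, x, background):
        """Exact Shapley values for one prediction, brute force over all feature subsets."""
        n = len(x)
        baseline = background.mean(axis=0)        # stand-in values for 'removed' features

        def value(subset):
            z = baseline.copy()
            idx = list(subset)
            z[idx] = x[idx]                       # keep features in the subset, remove the rest
            return model(z)

        phi = np.zeros(n)
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for k in range(len(others) + 1):
                for subset in combinations(others, k):
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    phi[i] += weight * (value(subset + (i,)) - value(subset))
        return phi

    def credit_score(z):
        # Hypothetical linear scoring model: income, debt ratio, years of credit history.
        return float(z @ np.array([0.5, -0.3, 0.2]))

    background = np.array([[50.0, 0.4, 5.0],
                           [60.0, 0.3, 8.0],
                           [40.0, 0.6, 2.0]])     # hypothetical historical applicants
    x = np.array([80.0, 0.2, 10.0])               # the applicant whose score is explained

    print(shapley_values(credit_score, x, background))  # each feature's contribution

In this simplified setting (a linear model with mean-value removal), each Shapley value reduces to the feature’s coefficient multiplied by its deviation from the average, which is why the approach feels familiar from classical regression analysis.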

This could help to tackle the black-box problem and increase the trustworthiness of AI outputs. It may also decrease the risk of automation bias, as regulators would have a better understanding of AI’s decision-making process.

Regulators could also invest in training and education to develop a deep understanding of AI technologies. This would allow them to oversee the use of these technologies and to challenge their outputs effectively when necessary.

‘Regulatory sandboxes’ – controlled environments where AI technologies can be tested and refined – can help to address potential biases, data security and privacy concerns. Here, regulators can closely monitor AI’s performance and troubleshoot issues before a wider implementation.

Lastly, global cooperation among financial regulators is crucial to establish common standards for using AI in financial regulation. This could help to mitigate potential cross-border regulatory discrepancies and ensure a harmonised approach to AI’s adoption in the global financial system.

In conclusion, AI holds immense promise for the future of financial regulation. It could transform regulatory processes, making them more efficient and accurate. At the same time, it presents significant challenges that could undermine these benefits. Regulatory bodies and financial institutions must approach AI’s integration with caution.

Where can I find out more?

Who are experts on this question?

  • Marcos López de Prado, Cornell
  • Andrew Urquhart, ICMA
  • Michael Dowling, Dublin City University
  • Mark Cummins, University of Strathclyde
Authors: Barry Quinn, Fearghal Kearney and Abhishek Pramanick
Image by bopav on iStock