FSB on artificial intelligence

The Financial Stability Board has published a paper on artificial intelligence (AI) and machine learning in financial services.


Senior Advisor, EMA Regulatory Centre of Excellence

KPMG in the UK


The paper (PDF 650 KB) follows the same approach as many other recent papers from international and national standard setters on Fintech innovations: (i) these innovations offer potential value to firms, consumers and supervisors; (ii) they also bring risks to firms, consumers and financial stability; and (iii) some regulatory interventions may be required, although it remains unclear what form such interventions might take. Indeed, it is difficult to see how regulation could mitigate some of the risks arising from AI and machine learning.

Existing and potential uses

AI and machine learning have already been adopted in some areas of financial services, including to assess credit quality, price and market insurance contracts, automate client interactions, optimise capital, identify trading opportunities and optimise trading execution (for example by analysing the market impact of trading large positions), and back-test models.

Meanwhile, the RegTech and SupTech applications of AI and machine learning could help to improve firms' regulatory compliance (for example to undertake KYC checks) and increase supervisory effectiveness (including for money laundering and fraud detection, and the identification of suspicious trading patterns).
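To make the SupTech point concrete, here is a minimal, purely illustrative sketch of anomaly detection over trading activity. The feature set, the thresholds and the choice of scikit-learn's IsolationForest are all assumptions for the example; the FSB paper does not prescribe any particular technique.

```python
# Hypothetical sketch: flagging unusual trading activity for review.
# Data, features and model choice are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic per-account features: [daily trade count, average order size].
normal = rng.normal(loc=[50, 1_000], scale=[10, 200], size=(500, 2))
unusual = rng.normal(loc=[400, 9_000], scale=[50, 500], size=(5, 2))
accounts = np.vstack([normal, unusual])

# The isolation forest labels easily-isolated points as anomalies (-1)
# and everything else as inliers (+1).
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(accounts)

flagged = np.where(labels == -1)[0]
print(f"Accounts flagged for review: {flagged}")
```

In practice a supervisor would feed such a model far richer features (order timing, counterparties, venue data) and treat its output only as a triage signal for human investigators.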

Risks

On financial stability, the paper focuses on:

  • third-party dependencies (possibly leading to the emergence of new systemically important players that could fall outside the regulatory perimeter);
  • concentrations that may arise from economies of scale in new technologies; 
  • new and unexpected forms of interconnectedness between financial markets and institutions (for example from the correlations arising from the use by various institutions of previously unrelated data sources);
  • herding behaviour arising from more widespread use of similar machine learning strategies for trading and other activities (see the sketch following this list); and
  • the opaqueness of AI and machine learning methods and models.
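The herding point lends itself to a simple illustration. The sketch below (entirely synthetic; none of the numbers come from the FSB paper) shows that when many firms trade on the same model signal, their order flows become highly correlated, whereas independent signals do not:

```python
# Hypothetical illustration of herding: many firms trading one shared
# model signal versus each firm trading its own independent signal.
import numpy as np

rng = np.random.default_rng(seed=1)
n_days, n_firms = 250, 20

shared_signal = rng.normal(size=n_days)                 # common model output
noise = rng.normal(scale=0.2, size=(n_days, n_firms))   # firm-specific tweaks

herded = shared_signal[:, None] + noise                 # everyone follows the model
independent = rng.normal(size=(n_days, n_firms)) + noise

def avg_pairwise_corr(flows):
    """Mean off-diagonal correlation between firms' daily order flows."""
    c = np.corrcoef(flows.T)
    return c[~np.eye(n_firms, dtype=bool)].mean()

print(f"avg correlation, shared model:       {avg_pairwise_corr(herded):.2f}")
print(f"avg correlation, independent models: {avg_pairwise_corr(independent):.2f}")
```

Running this shows near-perfect correlation in the shared-model case and near-zero correlation otherwise, which is the mechanism by which similar strategies can amplify market moves.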

Risks to individual firms include:

  • deficiencies in the governance of the use of AI and machine learning; 
  • a failure to understand fully the applications being used and the risk of creating 'black boxes' in decision-making (whether in credit scoring, insurance underwriting, trading or investment); 
  • a lack of clarity about responsibilities between regulated firms and third-party providers when something goes wrong; and 
  • the opportunities for insiders or cybercriminals to manipulate market prices by exploiting advanced optimisation techniques and predictable patterns in the behaviour of automated trading strategies.

Although not the main focus of the paper, risks to consumers also come across strongly:

  • data privacy and data protection issues arising from the increased use of personal data for credit scoring and insurance underwriting;
  • some data sources could introduce bias into credit and insurance decisions, reintroducing outcomes correlated with race or gender even when those characteristics are not themselves included in the data sets. Even where innovative insurance pricing models are based on large data sets and numerous variables, the algorithms can embed biases that lead to undesirable discrimination (see the sketch following this list);
  • the use of machine learning could result in a lack of transparency to consumers - it becomes more difficult to provide consumers (or supervisors) with an explanation of how a credit or insurance decision was reached; 
  • without adequate testing and 'training' of tools with unbiased and accurate data and feedback mechanisms, applications may not perform as intended; and
  • greater use of individual pricing decisions could undermine the risk pooling function of insurance.
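The proxy-bias risk flagged above can be demonstrated in a few lines of code. Everything in the sketch is synthetic and the feature names are invented; the point is simply that excluding a protected attribute from the training data does not prevent a correlated proxy from reconstructing it:

```python
# Hypothetical sketch of proxy bias in credit scoring: the model never
# sees the protected attribute, yet its decisions still split by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=2)
n = 5_000

group = rng.integers(0, 2, size=n)                    # protected attribute
income = rng.normal(loc=50, scale=10, size=n)
proxy = 2.0 * group + rng.normal(scale=0.5, size=n)   # e.g. a postcode-like feature

# Historical approvals were biased against group 1.
approved = (income + rng.normal(scale=5, size=n) - 8 * group) > 48

# Train WITHOUT the protected attribute...
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# ...yet predicted approval rates still differ sharply by group,
# because the proxy feature carries the group information.
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

This is why the paper's concerns about bias, explainability and testing belong together: a model can look neutral on its inputs while remaining discriminatory in its outputs.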

Implications for firms

Supervisors can be expected to look closely at how well firms control and mitigate the risks arising from increased use of AI and machine learning, not least in terms of governance, understanding of the applications in use, and relationships with third-party providers.

Firms are also likely to be required to demonstrate that they understand and can manage effectively the risks that greater use of AI and machine learning might pose for some groups of consumers.
