
Are you ready to place your trust in artificial intelligence?


If the risk team is not knocking on the boardroom door to discuss AI right now, either about the risks of using AI or about using AI to manage risk, they soon will be. KPMG’s Shamus Rae and Paul Holland discuss.


There’s never been so much data at our fingertips… and arguably there’s never been greater internal and external pressure to analyse that data to manage compliance and risk.

In this context, artificial intelligence (AI) is an opportunity managers cannot ignore, offering companies the ability to process vast quantities of data at lower cost. The possible applications of AI in managing business risk are vast: anti-money laundering, know-your-customer checks, combating fraud and managing compliance, for instance.

Many organisations are already deploying AI to deliver highly complex as well as more repetitive, standardised processes. The main benefits are cost savings, speed and scalability. AI engines can process multiple data sets concurrently and consistently, and natural language processing enables AI systems to work across different wordings and languages.

Unlike a human operator, AI is unbiased (if trained well), and machine learning means systems only get better over time, improving their performance in response to experience and feedback.

So, if the risk team is not knocking on the boardroom door to discuss AI right now, either about the risks of using AI or about using AI to manage risk, they soon will be.

Who do you trust?

AI technology intrinsically involves trust. While AI systems eliminate risks associated with human error, they bring new risks around how far to ‘trust’ the machine. Risk and audit functions will require evidence that processes are functioning effectively, but the fact that AI handles large data volumes and self-learns makes thorough checking a challenge. Furthermore, if a cognitive system delivers 97% accuracy in its decision-making, is this enough for the organisation? Who should make that call? And how do you know whether that 97% is actually being achieved?
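To make that last question concrete, here is a minimal sketch of how a risk function might evidence an accuracy claim: route a sample of the machine’s decisions to independent human review and track the agreement rate over a rolling window. All names, thresholds and the review workflow below are illustrative assumptions, not a prescribed method.

```python
import random
from collections import deque

# Illustrative sketch only: sample AI decisions for human review and
# track rolling agreement against a board-agreed accuracy target.
ACCURACY_TARGET = 0.97   # the 97% figure discussed above
SAMPLE_RATE = 0.05       # assumed share of decisions sent for review
WINDOW = 1000            # rolling window of completed reviews

reviews = deque(maxlen=WINDOW)   # True = reviewer agreed with the machine

def maybe_sample_for_review(decision, review_queue):
    """Randomly route a share of AI decisions to independent human review."""
    if random.random() < SAMPLE_RATE:
        review_queue.append(decision)

def record_review(reviewer_agreed):
    """Log the outcome of one completed human review."""
    reviews.append(bool(reviewer_agreed))

def rolling_accuracy():
    """Observed agreement rate over the most recent reviewed decisions."""
    return sum(reviews) / len(reviews) if reviews else None

def below_target():
    """Alert when the evidence no longer supports the accuracy claim."""
    acc = rolling_accuracy()
    return acc is not None and acc < ACCURACY_TARGET
```

The design choice here is that the 97% is never taken on trust: it is continuously re-measured against an independent human sample, which is exactly the kind of evidence a risk or audit function can stand behind.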

It is important to build in continuous monitoring and feedback loops to identify and evaluate inaccuracies. Boards have to decide how comfortable they are with computers checking other computers – and where and when to introduce the human element. This will depend on context, but it adds further complexity to risk, particularly around human intervention. Potential risk scenarios include the machine identifying an error and a human operator incorrectly overriding it. AI needs to work with human intelligence, as there will also be times when the human override is needed, for instance to uphold moral and ethical requirements. Other trust-related risks arise around prioritising and decision-making: as organisations automate more, they also need to retain the human expertise to implement technology that delivers their strategic goals and priorities.
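One practical safeguard against the override scenario above is to make every human intervention itself auditable. The sketch below records what the machine decided, what the operator did instead and why, so that overrides (including incorrect ones) can be reviewed in their own right. The field names and the simple append-only log format are assumptions for this example.

```python
import datetime
import json

# Illustrative sketch: an append-only log of human overrides, so that
# interventions can themselves be audited. Field names are assumptions.
def log_override(case_id, machine_decision, human_action, reason,
                 operator_id, log_path="overrides.jsonl"):
    """Record one override event with enough context for later review."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,
        "machine_decision": machine_decision,  # what the AI flagged or chose
        "human_action": human_action,          # what the operator did instead
        "reason": reason,                      # operator's justification
        "operator_id": operator_id,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Example: an operator dismisses a machine-raised fraud flag.
log_override("case-1042", "flagged_as_fraud", "flag_dismissed",
             "customer verified by phone", "op-017")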

No paying, no gain

Even in organisations that deploy AI operationally, most risk, audit and compliance functions are still at an early stage of considering the costs and benefits. In addition to implementation, AI requires significant investment in training the system and in adjusting roles and responsibilities.

Because AI adds complexity to risk management, it is likely that, at least initially, risk functions will require more resources – including expertise in auditing AI and investment in technology – while the business as a whole will need fewer people as more functions are automated. As risk and compliance functions start to apply AI to more processes, they may then be able to reduce staff allocation again.

Be prepared

As well as scenario planning around how an AI-enabled version of their business will look in the future, enterprises considering piloting AI should introduce rigorous data collection strategies to ensure they have high-quality, complete, unbiased data. They need to apply a content strategy to existing data – incomplete or biased data will delay and potentially compromise AI implementations.
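As a concrete starting point, a pre-pilot data review can be as simple as measuring completeness and label balance before any model is trained. The sketch below assumes tabular data in a pandas DataFrame; the 95% completeness threshold and column names are illustrative assumptions, not recommendations.

```python
import pandas as pd

# Illustrative sketch: basic completeness and balance checks on training
# data before an AI pilot. Thresholds and column names are assumptions.
def data_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Summarise gaps and skew that could delay or bias an AI pilot."""
    completeness = 1 - df.isna().mean()          # non-missing share per column
    label_shares = df[label_col].value_counts(normalize=True)
    return {
        "rows": len(df),
        "incomplete_columns": completeness[completeness < 0.95].index.tolist(),
        "worst_completeness": float(completeness.min()),
        "label_shares": label_shares.to_dict(),  # heavy skew hints at bias
    }
```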

While the management boards of large organisations see massive opportunities in automating standardised tasks, reducing manpower and developing technology assets to sell to customers, there is a danger that introducing controls and safeguards is not top of mind.

For businesses piloting AI, now is the time to think about the risks, data quality and governance, rather than waiting until systems are rolled out and processes and behaviours established. Unlike other technologies, where it often pays to follow close behind, when it comes to AI, organisations need to take action early or risk being left behind.
