Learning to love the robots | KPMG | UK

Learning to love the robots

The thought of artificial intelligence taking people’s jobs can be unnerving. But if we learn to trust new technologies, we can begin to recognise the freedom they offer. Read the interview with our KPMG experts in The Times.

Joshua Brown occupies an unfortunate place in history. In May 2016, Brown was travelling on a highway in north Florida when his car’s autonomous driving system failed to spot a large white truck and headed straight into its path. He died instantly, becoming the first known death involving an autonomous vehicle.

The accident made headlines around the world and prompted calls for a ban on self-driving vehicles. Never mind that Brown’s death was the first involving a Tesla car in 130 million miles of driving, against an average rate of one death per 94 million miles in the US, and one every 60 million miles globally when driven by humans.

The story sums up the problem faced by every organisation considering the opportunities presented by AI-powered automation. It will change our lives, but how do we get over the trust barrier?

Especially since it’s not only customers who are worried; many employees are unnerved by the possibility of automation taking their jobs.

For Shamus Rae, head of digital disruption at KPMG, one crucial challenge is to ensure people better understand what lies ahead. “AI-driven automation and robotics is a broad field,” he says. “Some of this technology, such as process automation, has been in place for some time. This is where work inputting and managing data that used to be done manually is done by a machine.

“Now we’re seeing the next stage: AI automation capable of dealing with unstructured data – from social media to human voices – in areas such as customer service. Here, for example, a chatbot might deal with simpler enquiries initially, and then rapidly take on more complex tasks. Beyond that, over time, we’re going to see the Internet of Things power a whole new wave of automation, of which self-driving cars are just a part.”

The way to build trust is to show people what can be achieved, says Stephen Moir, executive director of resources at Edinburgh City Council, which is trialling automation across a range of services. “This isn’t about robots coming to get you,” he says. “It’s about serving our customers better and giving our staff work that is much more rewarding.”

Moir’s approach is to build a case for change through small-scale, proof-of-concept projects that demonstrate the rewards of automation. This has begun with the manual processing work previously done by humans. Not only can technology do this work more quickly and accurately, it also frees up staff for more valuable tasks.

“When you talk to the people who used to do this work, they’re delighted because they now have an opportunity to spend their time helping clients,” Moir says. “We are more efficient, but we’re also able to allocate much-needed resources to more complex work.”

This approach, built on consensus and communication, is the constructive way forward, says Katie Clinton, head of internal audit and risk compliance services at KPMG. “New technologies always challenge our traditional assumptions and risk models,” she says. “The imperative is to find new ways to manage that risk so you can start to build trust.”

Part of the challenge is to learn to walk before you run, says Rae, not least in the organisation’s ability to deliver. “One risk that organisations face is the pace of change,” he says. “There’s a race to implement new tools before organisations have understood the risks, and this problem is intensified because many businesses don’t yet have the skills or competency to manage this new type of risk.”

The threat of cyberattack is an obvious danger, particularly given the sensitive intellectual property tied up in the learning process that powers artificial intelligence. Another risk is relying on a technology that does not provide black-and-white answers – only decisions based on probability, which depend on the data available to the system.

All organisations considering automation must confront a series of questions.

Could AI-driven automation undermine what are currently stable processes? Are automated systems vulnerable to attack? Does the organisation have sufficient expertise to manage automated systems and the outputs they produce?

“We need an assurance approach that builds trust over time,” says Clinton. Where risk is managed carefully and communicated successfully, what seems frightening today will eventually be taken for granted.

“People’s willingness and confidence to engage, including sharing their data, which is what artificial intelligence needs, will grow if you get the right results,” she says.
