Cognitive chaos

The more capabilities you automate, the more can go wrong when quality or security fails. How do you maintain control? Olivia Solon discusses the issue with Paul Taylor, Partner and Head of Cyber Security at KPMG in the UK.

Key points
  • Cognitive software to help keep tabs on suspicious staff and external threats
  • AI must have in-built safety mechanisms for when things go wrong
  • AI to drive demand for human data scientists, not just robots

Humans can be fooled, but thankfully our brains can’t be hacked... yet. However, cognitive software that mimics human activities can be. While these systems can free people from repetitive, data-heavy tasks, maintaining oversight of them requires a specialised approach to cyber security.

As with humans, keeping tabs on artificial intelligence (AI) requires a focus on behaviour. It’s critical for security professionals to understand how such systems are supposed to behave. Which patterns are typical of ‘business as usual’? And which might indicate something fishy?

A self-aware robot?
Networks built with AI ought to be better at resisting cyber attack through self-monitoring and diagnosis, explains Paul Taylor, Partner and Head of Cyber Security at KPMG in the UK.

“The same way a human being might feel they are running a temperature, a network AI should have the equivalent,” he explains.

Taylor cites the example of the Typhoon fighter jet, which is aerodynamically unstable in subsonic flight and relies on a complex control system of three computers that continually adjusts its aerodynamic profile to keep it in the sky.

“It has three times the system requirements and the computers each check each other’s homework. We’ll see that in AI networks,” he says.
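
The arrangement Taylor describes is classic triple-modular redundancy: three independent channels compute the same output, and a majority vote masks a faulty one. A minimal sketch of the voting logic, with illustrative values rather than anything drawn from a real flight-control system:

```python
from collections import Counter

def majority_vote(outputs):
    """Triple-modular-redundancy vote: return the value at least two of
    the three redundant channels agree on, plus the index of any channel
    that dissented (a candidate for fault isolation)."""
    counts = Counter(outputs)
    value, agreement = counts.most_common(1)[0]
    if agreement < 2:
        raise RuntimeError("No two channels agree: fail safe")
    dissenters = [i for i, v in enumerate(outputs) if v != value]
    return value, dissenters

# Channel 2 has drifted; the vote masks the fault and flags the channel.
value, dissenters = majority_vote([0.82, 0.82, 0.97])
print(value, dissenters)  # 0.82 [2]
```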

When it comes to spotting malware, researchers at the Massachusetts Institute of Technology have developed an artificial intelligence system called AI2 that does just this. The software scans billions of log lines each day to build a picture of what normal behaviour on the network looks like and to flag anything suspicious. After a week of training, the AI correctly identified 85% of attacks and produced five times fewer false positives than the industry standard1.
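
The underlying approach – learn a statistical baseline from historical logs, then score new events by how far they deviate – can be sketched in a few lines. This is a toy z-score illustration, not the AI2 pipeline itself, and the per-hour event counts are invented for the example:

```python
import statistics

def build_baseline(history):
    """Learn 'business as usual' from historical per-hour event counts."""
    return statistics.mean(history), statistics.stdev(history)

def anomaly_score(value, baseline):
    """Distance from the baseline in standard deviations (a z-score).
    High scores would go to a human analyst, whose verdicts retrain
    the model in a system like AI2."""
    mean, stdev = baseline
    return abs(value - mean) / stdev if stdev else 0.0

baseline = build_baseline([102, 98, 110, 95, 105, 99, 101])  # normal traffic
print(anomaly_score(104, baseline))  # low: business as usual
print(anomaly_score(480, baseline))  # high: flag for human review
```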

Monitoring employees through AI 
The same approach could be applied to networks of driverless vehicles. Many companies and individuals baulk at the idea of relinquishing control of the wheel, fearing that a cyber attack could result in ‘killer robot’ headlines and a liability nightmare. Taylor, however, is more measured, saying that just as cars have safety brakes, so too should the systems that oversee them.

“It’s easy to get apocalyptic about being targeted by a cyber attack,” he says, “but I’d expect there to be a system in place that notices the system is behaving in a way that’s not safe, or is unexpected, and would automatically shut it down.”
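
In code, such a safety brake amounts to a watchdog: an independent check that halts the system the moment its behaviour leaves a defined safe envelope. The envelope values and the shutdown hook below are invented for illustration:

```python
SAFE_SPEED_MPS = (0.0, 30.0)        # illustrative safe envelope
SAFE_STEERING_DEG = (-35.0, 35.0)

def shutdown(reason):
    """Placeholder for whatever 'stop safely' means on the platform."""
    print(f"Emergency stop: {reason}")

def watchdog(speed_mps, steering_deg):
    """Independent monitor: halt if any reading leaves its envelope."""
    if not SAFE_SPEED_MPS[0] <= speed_mps <= SAFE_SPEED_MPS[1]:
        shutdown(f"speed {speed_mps} m/s out of range")
    elif not SAFE_STEERING_DEG[0] <= steering_deg <= SAFE_STEERING_DEG[1]:
        shutdown(f"steering {steering_deg} degrees out of range")

watchdog(12.0, 4.0)   # within the envelope: no action
watchdog(55.0, 4.0)   # out of the envelope: triggers the emergency stop
```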

All organisations face attacks, but smart ones can stop an attack from spreading across their infrastructure. That means detecting the intrusion and responding as quickly as possible.

“You need responsive, agile systems that get back up and running fast and prevent lateral transfer of attacks,” he says.
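
Containment can be sketched as: on detecting a compromised host, cut its access to everything except a quarantine segment so the attack cannot move laterally. The Firewall class and its methods here are hypothetical stand-ins for whatever network-control API an organisation actually runs:

```python
class Firewall:
    """Hypothetical stand-in for a real network-control API."""
    def block(self, src, dst):
        print(f"BLOCK {src} -> {dst}")
    def allow(self, src, dst):
        print(f"ALLOW {src} -> {dst}")

def quarantine_host(firewall, host_ip):
    """Contain a suspected compromise: cut all lateral traffic from the
    host, leaving only a path to the forensics/quarantine segment."""
    firewall.block(src=host_ip, dst="any")           # stop lateral transfer
    firewall.allow(src=host_ip, dst="10.99.0.0/16")  # forensics network only
    return f"{host_ip} isolated pending investigation"

print(quarantine_host(Firewall(), "192.168.4.27"))
```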

In addition to monitoring automated systems, AI can also be used to monitor humans within organisations. People might not get infected with malware, but they can be vulnerable to manipulation or, when it comes to corporate espionage, corruption.

Taylor explains: “Are you sending yourself multi-gigabyte files to your home email address? Are you looking up the file system in a different way than expected? An ideal cognitive computer could spot these sorts of things and ask: ‘Why is Paul coming in a lot earlier, staying later and sending gigabytes of company data?’”
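
The checks Taylor describes could be expressed as a handful of behavioural rules scored together. The thresholds and event fields below are invented for illustration; a real system would learn per-user baselines rather than hard-code them:

```python
def insider_risk_score(event):
    """Score one activity record against simple behavioural rules.
    `event` is an assumed dict: login hour, megabytes emailed to
    external addresses, and whether file-system browsing deviated
    from the user's norm."""
    score = 0
    if event["login_hour"] < 6 or event["login_hour"] > 22:
        score += 1   # unusually early or late
    if event["mb_emailed_externally"] > 500:
        score += 3   # multi-gigabyte transfers weigh heavily
    if event["unusual_file_access"]:
        score += 2   # browsing the file system in an unexpected way
    return score

event = {"login_hour": 5, "mb_emailed_externally": 4096,
         "unusual_file_access": True}
if insider_risk_score(event) >= 4:
    print("Flag for review: early login plus bulk external transfer")
```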

Data science still requires a human touch
Although the machines are coming for cyber security jobs, they’re not going to replace humans altogether.

“Companies will still need a small number of people with some very sophisticated tools looking under the surface of these cognitive computers,” he says. This means there’ll be great demand for data scientists and AI experts in the future.

In addition to technical skills, they’ll need to be able to translate jargon into relatively simple terms that the rest of the business can understand. “It’s no longer just speaking to the technical community – you need to speak to the boardroom,” he explains.

“Where people go wrong is to think it’s the IT department’s problem. Information security is a company problem.”

1. Massachusetts Institute of Technology
