AI: Bias in – bias out

AI systems are brilliant at analysing vast quantities of data and delivering answers. But there’s a problem: they’re very bad at recognising how structural factors or inherent biases in that data might skew their decisions.


In September 2016, an artificial intelligence think-tank ran the first ever beauty contest judged by an AI system. Thousands of photos submitted by the public were fed into a deep learning network, which judged the entries on facial symmetry, wrinkles and other criteria. The results were not good.

Of 44 “winners”, almost all were white. Only one had dark skin. The backlash to the experiment was – quite rightly – severe, with headlines calling the AI racist. 

Nor was this an isolated case. Newly developed AI systems have repeatedly demonstrated serious bias, from voice recognition that struggles with women’s voices to chatbots that quickly learn, and then use, offensive, discriminatory language.

There’s clearly a problem with bias. In that sense, AI mirrors the real world.

Society and business clearly haven’t got their own systems right yet – and some of the biases already embedded in data and processes are going to feed through into new cognitive systems.

This presents a huge challenge for companies as they seek to integrate emerging automation technologies. It’s urgent, too: according to KPMG’s 2017 HR Transformation Survey, 29 percent of companies plan to integrate cognitive systems into their HR departments within the next year.
 

The tipping point’s here

We know that people are more likely to recruit in their own image. Many biases are unconscious and form at an early age. Prejudice is also difficult to root out of business processes.

For example, one organisation recently adopted blind CVs, yet found the change made no difference to diversity: the unconscious bias crept in at the face-to-face interview. Businesses therefore need to break down their ways of working and ask: where are the points at which bias could enter? How could this be done differently? Then, as processes are automated, they can ensure discrimination isn’t hard-wired in.
 

Does your data scrub up?

With artificial intelligence, this is an issue of training sets. Take a recent example: an advertising system that targeted ads for higher-paying executive jobs towards men. If your historical data is biased – and it most likely is – then using past benchmarks will lead to the same problems.
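To make that concrete, here is a minimal sketch of the kind of first-pass audit that exposes skew in historical records before they are reused as a benchmark. The log, field layout and figures are invented purely for illustration:

```python
from collections import Counter

# Toy historical ad-serving log: (group, was_shown_executive_ad).
# Records and layout are illustrative only, not real data.
log = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", False), ("women", True), ("women", False),
]

shown = Counter(group for group, hit in log if hit)  # positive outcomes per group
total = Counter(group for group, _ in log)           # records per group

for group in total:
    rate = shown[group] / total[group]
    print(f"{group}: {rate:.0%} shown the executive-job ad")

# A wide gap between groups is a warning sign: a model trained to match
# this log will learn to reproduce the same skew.
```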

Data around a company’s gender pay gap is relatively easy to examine. Other questions are more complicated. Your internal figures might tell you that two percent of your workforce has a disability, but the true figure is probably higher: many employees simply don’t want to disclose it. So companies need to work to ensure that these training sets provide a true picture.
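One simple way to surface that gap is to compare the disclosed rate in internal data against an external prevalence estimate for the same population. All figures below are hypothetical, including the benchmark:

```python
# Hypothetical figures: internal HR data vs. an external prevalence estimate.
workforce = 10_000
disclosed_with_disability = 200      # two percent disclosed internally
external_prevalence = 0.18           # assumed working-age benchmark, illustrative

disclosed_rate = disclosed_with_disability / workforce
expected_count = external_prevalence * workforce

print(f"Disclosed: {disclosed_rate:.0%} ({disclosed_with_disability} people)")
print(f"External benchmark suggests roughly {expected_count:.0f} people")

# If disclosure falls far below the benchmark, the training set is not
# giving a true picture, and any model built on it inherits the blind spot.
if disclosed_rate < 0.5 * external_prevalence:
    print("Flag: likely under-disclosure - treat this field with caution")
```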

Collecting the right data

Collecting the right data is in part a question of trust. In the future, some companies will likely establish ethical codes to outline their use of such information. There are legal issues too. Organisations that are more transparent, explaining what they're doing and why they're doing it, are less likely to face legal challenges.

There are steps that companies embracing cognitive systems can already take. Academics at Cornell University and elsewhere are working on new statistical methods to interrogate data sets for bias before they’re introduced to a deep learning network. And teams adopting cognitive systems can build a diversity workflow from the outset, so that fairness is central from day one.
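The details of those methods vary, but one widely used screen that can be run before training is the “four-fifths” disparate impact ratio, which compares each group’s positive-outcome rate in the labels with that of the best-off group. A minimal sketch, with invented group names and labels:

```python
def disparate_impact_ratios(labels_by_group):
    """Ratio of each group's positive-label rate to the best-off group's.

    labels_by_group maps group name -> list of binary outcome labels.
    Under the common four-fifths rule, ratios below 0.8 flag possible bias.
    """
    rates = {g: sum(labels) / len(labels) for g, labels in labels_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative training labels: 1 = positive outcome (e.g. shortlisted).
ratios = disparate_impact_ratios({
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% positive
})
for group, ratio in ratios.items():
    verdict = "OK" if ratio >= 0.8 else "review before training"
    print(f"{group}: impact ratio {ratio:.2f} - {verdict}")
```

The four-fifths threshold is borrowed from US employment-selection guidance; passing it is a screening signal, not proof of fairness.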

As ever, this is a question of leadership. Industries are not going to change overnight: bias is a societal issue. But I love the video Redraw the Balance, in which a primary school teacher asks children to draw a doctor, a firefighter and a pilot. Almost all the kids draw them as men – and then a real doctor, firefighter and pilot walk in, all of them women. The kids are gobsmacked.

Right now, the AI is those children. It’s being trained on biased stereotypes. If we expect unbiased machines, we humans have work to do in teaching them.

To read more in-depth insights into artificial intelligence and its advantages, download our full report: Advantage AI.
