Cognitive technologies will pivot the organisation – make sure it’s a pivot in the right direction.
However hard they try, people cannot keep unconscious bias out of their decisions, and discrimination in employment and business practices persists as a result. Computers face no such inner limitation. That has enormous potential when it comes to the challenge of stamping out discrimination across your organisation, in areas such as HR, recruitment, operations and customer service. We look at how technology can help enterprises change their ways.
We’ve all seen the headlines about artificial intelligence (AI) producing racist or otherwise discriminatory outputs, but it’s not the technology that is biased – it’s the data that it relies on.
Cognitive systems are trained on historical data sets laced with our subjective judgements, so of course they inherit the failings embedded in that data. Now that we’re aware of this tendency, we need to create rigorous testing techniques and new standards for assessing algorithms for bias – particularly as cognitive systems are used for applications as diverse as policing, banking and recruitment.
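One widely used test of this kind is the “four-fifths rule” from US employment practice: the selection rate for any group should be at least 80 percent of the rate for the most-favoured group. A minimal sketch of such a check, with illustrative group labels and data, might look like this:

```python
# Sketch of a disparate-impact check on an algorithm's screening decisions.
# The four-fifths rule: flag a group if its selection rate falls below
# 80% of the highest group's rate. Groups and data are illustrative.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, chosen = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + int(was_selected)
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return the groups whose selection rate breaches the threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)
print(disparate_impact(decisions))  # {'B': 0.35} – below 0.8 * 0.60
```

A real assessment would use statistical significance tests as well as raw rates, but even a simple check like this makes bias in a screening algorithm measurable rather than anecdotal.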
But artificial intelligence also offers tremendous promise in helping humans to address their own unconscious biases. We already know that when a reviewer cannot see the name and sex on a resumé and judges achievements alone, they make different selections.
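The blind-screening idea is simple enough to sketch in code: strip identifying fields from a candidate record before a reviewer (human or algorithmic) sees it. The field names below are illustrative assumptions, not a real HR schema:

```python
# Sketch of blind screening: redact identifying fields from a candidate
# record so reviewers judge achievements alone. Field names are illustrative.

IDENTIFYING_FIELDS = {"name", "sex", "age", "photo_url"}

def blind(candidate):
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "sex": "F",
    "achievements": ["Led migration project", "Top sales region 2017"],
    "years_experience": 6,
}
print(blind(candidate))
# {'achievements': ['Led migration project', 'Top sales region 2017'], 'years_experience': 6}
```

In practice redaction is harder than this: identifying information also leaks through free-text fields, school names and dates, which is one reason rigorous testing of the end-to-end process still matters.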
Take Unilever, which has adopted a selection approach in which candidates play a series of games and an algorithm assesses their performance against a predetermined personality profile. That way, no one is asking candidates whether they have the experience; the algorithm is assessing whether they actually have the skills.
There are tech startups already working on using AI to do the initial job interview and others working on facial recognition software to detect body language and emotion cues to help screen candidates. In the future, such AIs will make a judgement, based on a job description, about whether a candidate meets the required personality profile. If we can program systems like that – and rigorously test them to ensure the results are bias-free – then candidate shortlists are likely to be more objective and diverse as a result.
Of course, that presents a variety of challenges for the 60 percent of HR departments that are planning to adopt cognitive automation in the next five years, according to KPMG’s 2017 HR Transformation Survey.
One of these challenges is identifying the kind of talent we want our AI assistant to find. Again, data can help. By looking at existing employee data, it’s already possible to identify promising qualities in a job candidate; at KPMG, we now have an analytics capability that does that in near-enough real time.
The results of this approach can be fascinating. For one client, we were able to identify predictors of upper-quartile performance in a sales job nine months ahead, based on new starters’ first-month data. The predictors were often unexpected and subtle: one was how new starters chose to network – who they sought advice from. In another role, the indicators will differ. But it shows that if you can get at the information, you can uncover some very interesting insights.
But there can be a flipside. One bank did similar work on upper-quartile performance: it crunched the numbers, settled on six or seven factors it believed had a causal relationship with performance, and recruited against that model. Then regulation changed the range of products that part of the bank sold, and after about two years of persevering with the model, sales began to fall. Only on reassessing the model did the bank realise that the regulatory change had altered the indicators of performance.
Therein lies a lesson: a model built on yesterday’s conditions quietly goes stale when those conditions change. There is a related risk, too: left unchecked, early cognitive systems will tend to steer companies towards a monoculture, selecting ever more of the same profile. AI systems need to be rigorously assessed and retrained to keep the algorithm up to date.
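The bank’s experience suggests a simple safeguard: monitor the model’s recent success rate against the rate observed when it was validated, and flag when performance degrades. A minimal sketch, with illustrative window size and tolerance:

```python
# Sketch of a drift monitor: compare a model's recent success rate against
# the rate measured at validation time, and flag when it degrades.
# Window size and tolerance are illustrative assumptions.

def drift_alert(outcomes, baseline_rate, window=90, tolerance=0.10):
    """outcomes: chronological list of 1 (success) / 0 (failure).
    Alert if the recent success rate falls more than `tolerance`
    below the baseline measured at validation time."""
    recent = outcomes[-window:]
    if not recent:
        return False
    recent_rate = sum(recent) / len(recent)
    return recent_rate < baseline_rate - tolerance

# Model validated at a 70% success rate; the recent cohort succeeds ~50% of the time.
history = [1] * 70 + [0] * 30 + [1] * 45 + [0] * 45
print(drift_alert(history, baseline_rate=0.70))  # True – time to reassess and retrain
```

Had the bank tracked something like this, the two years of declining sales would have shown up as an alert long before the model was finally reassessed.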
Innovation comes from the boundaries of things: the interplay between domains of knowledge, of different cultures and mindsets. It’s one department seeking to collaborate with another, with unexpected results. Many recent reports suggest that organisations with more diverse boards perform better in the long term for exactly this reason.
Companies introducing AI systems will need to think hard not just about what automation means for efficiency, but also about what it means for their company culture and values.
The companies leading this field are creating automation “centres of excellence” to build out best practice. That should continue: as AI develops, we’re going to need rigorous assessment and reassessment of algorithms.
So, while AI certainly introduces new challenges when it comes to diversity, there are tremendous opportunities. Implementing cognitive technology is not just a technological question: it’s a cultural one.
Before you start transforming your company, ask yourself: “How do we want to use automation for the benefit of customers, employees and even for society? What kind of company do we want to be?”
To read more in-depth insights into artificial intelligence and its advantages, download our full report: Advantage AI.
© 2018 KPMG LLP, a Delaware limited liability partnership and the U.S. member firm of the KPMG network of independent member firms affiliated with KPMG International Cooperative (“KPMG International”), a Swiss entity. All rights reserved.

KPMG International Cooperative (“KPMG International”) is a Swiss entity. Member firms of the KPMG network of independent firms are affiliated with KPMG International. KPMG International provides no client services. No member firm has any authority to obligate or bind KPMG International or any other member firm vis-à-vis third parties, nor does KPMG International have any such authority to obligate or bind any member firm.