Artificial intelligence can help remove unconscious bias when recruiting to fill tech positions. But can it be a double-edged sword for tech leaders when it comes to promoting diversity & inclusion?
The use of artificial intelligence (AI) is growing rapidly, infiltrating areas of business that have traditionally required humans to undertake what are often low-level tasks. With this comes the potential for artificial intelligence to help improve diversity & inclusion, using algorithms to arrive at decisions based on objective facts or statistics rather than on subjectivity or bias.
One obvious area is in the recruitment space, where AI has the potential to help organisations hire the best candidates, regardless of their background, or to improve representation of particular groups.
‘Just 26% of those working in artificial intelligence are women’
“When used appropriately and kept in check by diverse teams that can spot biases creeping in, AI can lead to much more objective decision-making around new hires,” says Jill Stelfox, executive chair and CEO of software firm Panzura.
“When a human scans a resume, they’re far more likely to give disproportionate weighting to things like gender, age, ethnicity, or where a candidate is from. AI has the potential to solve this problem, instead selecting candidates based on experience, qualifications, and other more objective factors.”
This can also help ensure people are paid fairly, suggests Marie Angselius-Schönbeck, chief of impact and corporate communications officer at conversational AI firm Artificial Solutions.
“If systems were taught to ignore data relating to gender, race or sexual orientation, instead looking neutrally at the education, skills and experience of a candidate and comparing that to existing employees across all demographics, it may help to reduce the diversity pay gap for new employees,” she says.
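The "blind screening" idea Angselius-Schönbeck describes can be illustrated with a minimal sketch. Everything here is hypothetical: the field names, the scoring weights, and the `redact`/`score` functions are invented for illustration, and a real screening system would use far more sophisticated models (and would still need auditing, since sensitive attributes can leak through proxies such as school names or postcodes).

```python
# Hypothetical sketch: strip sensitive attributes before scoring a candidate,
# so the score depends only on experience, skills and education.
SENSITIVE_FIELDS = {"gender", "ethnicity", "age", "sexual_orientation"}

def redact(candidate: dict) -> dict:
    """Return a copy of the candidate record with sensitive fields removed."""
    return {k: v for k, v in candidate.items() if k not in SENSITIVE_FIELDS}

def score(candidate: dict) -> float:
    """Toy scoring rule using only objective fields (illustrative weights)."""
    c = redact(candidate)
    return (
        2.0 * c.get("years_experience", 0)
        + 1.5 * len(c.get("skills", []))
        + 1.0 * c.get("education_level", 0)  # e.g. 1=diploma, 2=degree, 3=masters
    )

applicant = {
    "name": "A. Candidate",
    "gender": "female",
    "years_experience": 6,
    "skills": ["python", "sql"],
    "education_level": 2,
}
print(score(applicant))  # 2*6 + 1.5*2 + 1*2 = 17.0
```

Because the sensitive fields are removed before scoring, two otherwise-identical candidates of different genders receive the same score, which is the pay-fairness property the quote points at.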
What is diversity in artificial intelligence?
AI can also be used to promote diversity in the workplace more generally. “It can be an important tool for ensuring that your workforce processes include a diversity and inclusion lens, as it lays the foundation for skill-based workforce decisions that ultimately level the playing field for everyone,” says Jeroen Van Hautte, co-founder and chief technology officer at AI-based strategic workforce planning firm TechWolf.
“If workforce decisions are being made purely based on the skills you have, you cannot be excluded, either consciously or unconsciously, from recruitment, promotions or other career opportunities.”
This could lead to AI systems suggesting someone might be a good fit for a particular role, he adds, moving internal promotions away from gut feel or a reliance on who people know.
AI can also warn businesses when they are at risk of making poor decisions.
“For instance, AI-powered tools can be used to identify biased language in business meetings, or to ensure that goal-setting is fair and equitable, based on the performance of previous employees inhabiting a given role,” suggests Stelfox. “Bias is insidious, and can impact all aspects of management and productivity, and AI can help to address this and ensure that no employee is treated unfairly.”
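At its simplest, the biased-language flagging Stelfox mentions can be sketched as a phrase lookup. This is a toy illustration only: the `BIASED_PHRASES` list and its suggestions are invented for this example, and commercial tools rely on much richer language models rather than a fixed wordlist.

```python
# Hypothetical sketch: flag potentially biased phrases in a meeting transcript
# and suggest more neutral alternatives. The phrase list is illustrative only.
BIASED_PHRASES = {
    "culture fit": "consider 'values alignment' with concrete criteria",
    "young and energetic": "age-related wording; describe the work instead",
    "manpower": "prefer 'workforce' or 'staffing'",
}

def flag_biased_language(text: str) -> list[tuple[str, str]]:
    """Return (phrase, suggestion) pairs for each flagged phrase found."""
    lowered = text.lower()
    return [(p, tip) for p, tip in BIASED_PHRASES.items() if p in lowered]

transcript = "We need a young and energetic hire who is a strong culture fit."
for phrase, tip in flag_biased_language(transcript):
    print(f"flagged: {phrase!r} -> {tip}")
```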
How does AI enable diversity & inclusion?
Over time, this can start to change thinking and company culture, believes Angselius-Schönbeck. “Leveraging the nudge theory, a conversational artificial intelligence assistant could help businesses promote diversity and inclusion by asking questions and offering prompts that encourage users to consider whether they have accounted for different groups or scenarios as part of their decision-making process,” she says. “Accumulative nudges support the development of new habits. And in this instance, it might help prevent unconscious bias from creeping into decisions.”
Yet while AI has the potential to help organisations in different ways, its use is also fraught with risk. One issue is the lack of diversity in the AI industry itself. The World Economic Forum’s Global Gender Gap Report 2020 revealed that just 26 per cent of those working in the sector are women, while the BCS Insights 2021 Report suggests just 15 per cent of people in the tech sector as a whole are from ethnic minorities, and only 10 per cent have a disability.
Angselius-Schönbeck believes a lack of diversity in the AI space can have three negative effects on its design and use. “Firstly, the experience of AI is then not representative,” she says. “For example, a conversational AI trained solely by men will not be reflective of women’s conversational styles. Secondly, it risks the introduction of biases that might negatively impact certain groups. And finally, ultimately it means that the AI is flawed. Whether built for consumer or enterprise use, a lack of diversity in AI means that it does not perform as well as comparable solutions which prioritise a more inclusive design process.”
Can AI be used to promote diversity in the workplace?
Aniela Unguresan, founder of EDGE Certified Foundation – the assessment methodology and business certification standard for gender equality – believes that while AI can help identify and reduce the impact of human biases, “it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas”.
“There is a solution,” says Unguresan. “Organisations can hire diverse people to devise correct processes, which are overseen by a chief diversity officer who checks software that is in development for bias and creates applications and processes that remove bias.”
Guidelines and frameworks are also needed to ensure that AI is developed for diverse communities, says Anton Nazarkin, global development director at AI-powered facial recognition provider VisionLabs. “Creators of AI technology can undertake discussions with working groups from diverse communities, allowing the impact of AI technology to be truly understood,” he says. “Organisations can also use tools to ensure that data sets meet a minimum level of diversity. Only through a combination of these approaches can we begin to remove the bias from AI.”
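The dataset check Nazarkin describes, verifying that a data set meets a minimum level of diversity, can be sketched in a few lines. The function name, the threshold, and the group labels are all assumptions made for illustration; real auditing tools measure far more than raw group shares.

```python
from collections import Counter

# Hypothetical sketch: check that each group's share of a training set meets
# a minimum threshold before the data is used to train a model.
def meets_diversity_threshold(labels: list[str], min_share: float) -> dict[str, bool]:
    """Map each group to True if its share of the dataset is at least min_share."""
    counts = Counter(labels)
    total = len(labels)
    return {group: counts[group] / total >= min_share for group in counts}

sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
print(meets_diversity_threshold(sample, min_share=0.10))
# group_c falls below the 10% floor and would trigger a data-collection review
```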
There’s also a risk that AI designed to solve a particular problem is never updated, and ends up overcorrecting in the other direction.
“AI will operate based on the rules that it is trained upon,” points out Angselius-Schönbeck. “For example, if a company wants to improve the diversity of its talent, then an algorithm can be trained to prioritise certain genders or ethnic groups. However, unless it is defined to support a company with a specific quota target, such an implementation may still not be appropriate. As the AI self-learns to ‘un-prioritise’ certain candidates, it may never re-learn to appraise them ‘fairly’ when there is no longer a need to prioritise certain groups.”
A double-edged sword
For now, it seems as if the use of artificial intelligence to encourage diversity will remain something of a double-edged sword. “We should be cautious about encouraging more widespread AI adoption unless there is both explainability and accountability,” claims Jesse Shanahan, chief technology officer at digital fitness start-up Another Round. “There is already a common problem of data and models being manipulated to justify a particular outcome. Ethical AI should play more of a supportive role, but never replace the human decision-maker.”
More on diversity & inclusion
Why diversity matters when recruiting cybersecurity staff – Putting diversity at the heart of your cybersecurity team helps you spot issues and problems that might not have occurred to you
Why you need more women on your data science team – Companies that embrace diversity, especially by embedding women in their data science teams, will outperform competitors, research says