Most HR managers concur that AI is helpful for some recruiting tasks, especially in the very early stages of hiring. However, there have also been discussions about expanding the scope of AI in recruiting. A recent example is the introduction of AI into live interviews with candidates. “Artificial intelligence may be watching your face’s every move, assessing the honesty of your answers, as well as your emotions in general,” remarked Minda Zetlin from Inc.com. “It may also try to determine whether your personality is a good fit for the job.”
With AI’s potential seemingly limitless, the question becomes: even if employers could automate the entire job recruitment process, should it be done?
AI empathy is a poor substitute for human empathy
Many people seeking work are in an emotionally vulnerable position. They don’t know how an employer’s decision may affect their present and future – and that of their dependents. Machines are not effective at respecting and expressing empathy for job candidates in this situation. In a tight labour market, where finding the right human talent is a battle, empathy is an important skill. Smart employers should not automate it away; they should retain a human face at the late interview stage of hiring.
Emotional intelligence is a barrier for machines
Emotional intelligence remains a major barrier between humans and machines. We are already seeing machines take over jobs that require traditional intelligence, and this will likely only continue into the future. For example, it’s easy to see how a future machine could automate the tasks of a construction worker, a pilot or even a doctor. But the ability to employ emotional intelligence – empathy, compassion, persuasion – in the workplace will be a highly valuable skill far into the future. This is particularly true for the hiring process.
AI doesn’t (yet) understand bias
Eliminating bias is critical to the job recruitment industry – not just ethically but legally as well. The Harvard Business Review gives several recommendations for reducing bias in the hiring process, including using blind resume reviews, reworking job descriptions and setting diversity goals. Notably, all of these recommendations require a human touch to implement. AI, remarkable as it may be, consistently fails to account for the societal biases embedded in its algorithms. In fact, algorithms have proven alarmingly discriminatory and prejudiced in several cases.
Dr Latanya Sweeney, a Professor of Government and Technology at Harvard University, found that a Google search of a name “racially associated” with the black community was much more likely to be accompanied by an advertisement suggesting the person had a criminal record. It’s easy to see why this is a problem in hiring: if a hiring manager performs a Google search on a potential employee and sees reason to believe the person has a criminal record, they are much less likely to hire that person.
Can AI eliminate hiring bias?
Amazon recently had to disband its AI recruiting tool after evidence emerged of a worrying gender bias. Essentially, Amazon’s machine learning system taught itself that men were preferable to women as candidates.
In most cases, algorithms aren’t consciously designed this way. Regardless, it is alarming to see an automated system fall into extreme biases typically associated with humans. AI tools are designed to identify patterns. If they aren’t taught to exclude specific patterns – for example, gender and racial biases – they will absorb biased information with neither neutrality nor fairness.
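The mechanism is simple to demonstrate. Here is a minimal sketch (with invented, illustrative numbers – not real hiring data) of a naive model that learns only from historical frequencies, and so simply replays whatever skew its history contains:

```python
# Minimal sketch with hypothetical data: a naive "model" that learns hiring
# patterns purely from historical frequencies, with no fairness constraint.
from collections import Counter

# Historical decisions skewed toward one group (illustrative, not real data)
history = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 20
    + [("female", "hired")] * 30 + [("female", "rejected")] * 70
)

def hire_rate(group):
    # Fraction of past candidates in this group who were hired
    decisions = Counter(outcome for g, outcome in history if g == group)
    return decisions["hired"] / sum(decisions.values())

def predict(group, threshold=0.5):
    # The "model" simply replays the pattern it absorbed from history
    return "hired" if hire_rate(group) >= threshold else "rejected"

print(predict("male"))    # -> hired   (80% historical hire rate)
print(predict("female"))  # -> rejected (30% historical hire rate)
```

A real recruiting model is far more complex, but the principle is the same: without an explicit constraint excluding the biased pattern, the skew in the training data becomes the model’s rule.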
Organisations can aspire to minimise or eliminate recruitment bias – but thanks to human intervention, not robots. Humans are naturally prone to biases but are more capable than machines of recognising bias, remaining cognisant of the problem and adjusting accordingly. Automating this portion of the recruitment process with AI-driven tools would, as technology stands today, introduce greater bias into the workplace. Humans therefore remain the best bet for avoiding the pitfalls of bias now and in the immediate future.
Humans are more flexible than AI
Learning algorithms are only as strong as the data they use, so AI is liable to make mistakes when sifting through data. And, lacking human emotional capacity, AI cannot understand and adapt based on feedback, and has only a limited concept of consequences.
If a machine makes hiring decisions based on a faulty dataset, it has limited means of correcting itself. In fact, in many cases mistakes rooted in the data are reinforced, because the machine has no way to understand and improve itself. Compare this to humans, who through trial and error become better employees – or face the risk of job termination.
“Digital disruption means ethics is playing catch-up”
Furthermore, businesses have much more control over their employees’ ability to learn and adapt over time. Professional development programs allow employees to become more qualified – not through more data in one area but through exposure to many aspects of work. Through such programs, humans hold an advantage over machines: the ability to adapt and grow professionally. Human flexibility is a special advantage that will be difficult for AI to match, and an attribute important to making the most of recruiting opportunities.
Technology is enabling new capabilities in the job recruitment industry. However, prematurely using it as a cure-all would be harmful. As AI and other technologies develop, finding the most effective balance between human and machine is critical – and there may be no better way of determining the appropriate balance than a human.
Written by Ross Plotkin, director of operations at Job Nexus