Is AI ethical?
How much can we trust AI?
Will AI take over the world?
We’ve all done a Google search on AI at some point in our lives, and the three questions above are ones we’ve probably asked ourselves - and for good reason. Artificial Intelligence is revolutionising how we operate on a day-to-day basis, from the products we buy through to the systems we use. AI is still considered new, and new also means unknown, so we are well within our rights to question its ethical implications.
Within talent management and talent acquisition in particular, AI has been a huge help. From identifying job descriptions that need to be edited for gender-neutral language to mass screening of CVs for key skills, AI can dramatically speed up and streamline the hiring process. But where does this leave us in five years' time? Will AI platforms and technology replace the humble Recruitment and HR function, or will they act as true partners (much as they do now)?
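To make the first of those examples concrete, here is a minimal sketch of what a gendered-language check on a job advert might look like. The word lists and the sample advert are hypothetical and purely illustrative; production tools rely on much larger, validated lexicons and more sophisticated models.

```python
# Illustrative only: a toy check for gender-coded wording in a job advert.
# The word lists below are invented examples, not a validated lexicon.
MASCULINE_CODED = {"competitive", "dominant", "rockstar", "ninja", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}

def flag_gendered_language(job_description: str) -> dict:
    """Return any gender-coded words found in a job description."""
    words = {w.strip(".,;:!?()").lower() for w in job_description.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

if __name__ == "__main__":
    advert = "We need a competitive, aggressive rockstar to dominate the market."
    print(flag_gendered_language(advert))
    # {'masculine_coded': ['aggressive', 'competitive', 'rockstar'], 'feminine_coded': []}
```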
The importance of transparency, fairness, and accountability in recruitment
When using any system that utilises AI within the world of recruitment, it’s important to ensure a high level of accuracy and fairness at each stage of the hiring process. From screening CVs through to onboarding, there have to be checkpoints to confirm that no errors have crept in and that no candidates have slipped through the cracks because of the AI.
The recruitment landscape - regardless of industry - is riddled with biases. Whether they are more conscious biases, such as those around gender or age, or unconscious biases picked up throughout our lives that are harder to identify, candidates deserve a fair process that gives them the best possible chance of success during an interview. When AI tools filter out or eliminate candidates before they’ve had any contact with a human, you run the risk of a recruitment process that is neither transparent nor fair.
Humans program AI, and humans have biases
Although AI stands for artificial intelligence, it still had to start somewhere: a human wrote the initial program and code to jumpstart it. Say, for example, the information fed to an AI CV-screening tool was supplied by a woman in her early 20s; over time, the results would differ from those produced if it had been supplied by a man in his late 50s. This is a simple example, but it’s important to acknowledge that whoever builds and uses an AI tool also influences it. This is known as Machine Learning bias.
“Machine Learning bias, also known as algorithm bias or Artificial Intelligence bias, refers to the tendency of algorithms to reflect human biases. It is a phenomenon that arises when an algorithm delivers systematically biased results as a consequence of erroneous assumptions of the Machine Learning process.” - UNESCO
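As a rough illustration of that definition, the sketch below trains a simple classifier on synthetic “historical hiring” data in which one group was systematically favoured, and shows the model reproducing that skew. Every number, group label and threshold here is invented purely for demonstration.

```python
# Illustrative only: synthetic data showing how a model trained on biased
# historical hiring decisions reproduces that bias. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, size=n)   # protected attribute (e.g. gender)
skill = rng.normal(0, 1, size=n)     # identically distributed in both groups

# Historical decisions: skill mattered, but group 1 was systematically favoured.
hired = (skill + 1.0 * group + rng.normal(0, 0.5, size=n) > 0.8).astype(int)

# Train a screening model that (carelessly) includes the protected attribute.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# The model now recommends group 1 candidates at a much higher rate,
# even though skill was drawn from the same distribution for both groups.
preds = model.predict(X)
for g in (0, 1):
    print(f"Recommended rate for group {g}: {preds[group == g].mean():.0%}")
```

Note that simply dropping the protected attribute from the features is not a complete fix, because other features can act as proxies for it - which is exactly why ongoing monitoring matters.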
What is the solution?
Within recruitment, the key to navigating the ethical landscape of AI and Talent Management is to closely monitor the AI tools you’re already using. If you’re implementing a new AI-powered system or methodology that will significantly affect how candidates are screened and managed, make sure you have oversight of it and understand the negative implications that can come with it.
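One practical way to keep that oversight is to routinely compare selection rates across candidate groups at every AI-driven stage. The sketch below applies the widely cited “four-fifths” rule of thumb, flagging any group whose selection rate falls below 80% of the best-performing group’s; the counts shown are hypothetical.

```python
# Illustrative only: a simple adverse-impact check on screening outcomes.
# Counts below are hypothetical; plug in your own per-stage numbers.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (passed_screen, total_applicants)."""
    return {group: passed / total for group, (passed, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) < 0.8 for group, rate in rates.items()}

if __name__ == "__main__":
    screened = {
        "group_a": (120, 400),  # 30% pass rate
        "group_b": (60, 300),   # 20% pass rate -> flagged (0.20 / 0.30 < 0.8)
    }
    print(four_fifths_check(screened))  # {'group_a': False, 'group_b': True}
```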
Not every tool carries the same risk. AI-based scheduling systems, for example, are incredibly useful - they streamline your workflow and make scheduling more straightforward - but they don’t directly affect a candidate's ability to get a role within your organisation.
However, job-writing tools, screening platforms and even AI feedback tools can have a direct impact - so understanding them inside and out, and keeping bias front of mind, is crucial. We need to see AI as a form of empowerment within recruitment, rather than a replacement for what we already have.