AI's revolutionary impact in recruitment

What does hiring look like when we take the human out of the loop? Sue Weekes investigates best practice with the rise of AI in the recruitment process

The cliché “There are more questions than answers” has often been applied over the years to how we feel when trying to get to grips with a new technological development.

With artificial intelligence (AI), with its ability to learn at high speed, now in our midst, we are likely to continue looking for answers for some time yet as we figure out what it all means in our daily lives today and in the future.

Developments over the past couple of years, and certainly the rise of generative AI, mean the recruitment industry is at a critical juncture in its use of AI. Keith Rosser, director of group risk and of Reed Screening, and chair of the Better Hiring Institute, sums up the challenge perfectly, saying “AI needs a good parent” to help with its development, especially if the industry is to embed its use into recruitment processes fairly, responsibly, ethically and transparently.

AI in hiring was the focus of discussion at a recent meeting of the All-Party Parliamentary Group (APPG) on Modernising Employment, which brought together industry experts, employers and representatives from academia and the legal world. The aim is to compile a set of best practice guidance, which will be published later this year.

Rosser, a member of the APPG, explains that these will be distinct from the ‘Responsible AI in Recruitment’ guidance, recently published by the Department for Science, Innovation and Technology, which focuses on good practice for the procurement and deployment of AI systems for HR and recruitment. “The best practice for AI in hiring we are developing will cover use and abuse by both employers and jobseekers, and will be industry-grounded and less about procurement,” he says.

The APPG keynote was given by Lord Chris Holmes, sponsor of the Artificial Intelligence (Regulation) Bill, who set the tone perfectly by saying the “stakes couldn’t be higher, nor the opportunities greater”.

“We had a really good turnout of more than 120 people, which is important as we need a broad church to create principles like these rather than it just being from a pro-business or legal standpoint,” says Rosser.

Expert speakers giving evidence and who are instrumental in creating the principles include Dr Huw Fearnall-Williams, lecturer in organisation, work and technology, Lancaster University; Russell White, recruitment specialist and director of Future Work; Tamara Quinn, intellectual property and data privacy partner at legal firm Osborne Clarke; and Estelle McCartney, chief growth officer of task-based psychometric assessment provider Arctic Shores. Around 100 hiring organisations also took part, many of which will be providing feedback on the best practice guide.

Speakers covered the uses and abuses of AI in hiring by both employers and jobseekers. For example, jobseekers could use AI to enhance their CVs or apply for jobs automatically, but they could also use deepfakes. Employers can use AI to be more efficient but need to ensure there is no algorithmic bias. As expected, transparency, regulation, the risk of dehumanising recruitment, guarding against bias and keeping up with the sheer pace of AI’s development were all topics of discussion and are likely to feature in the recommendations.

White, from Future Work, believes there is no doubt that AI will have a revolutionary impact on hiring in terms of speed, efficiency, communication and employee performance, and is confident it will learn who is most suitable for a role. However, he stresses that we are at the start of this journey. “There are still a number of hurdles to overcome with its use,” he says. “It can be prone to bias as a result of what it’s ‘taught’, so consequently it will need to be regularly audited and checked. AI brings automation across all aspects of the hiring process, and this can be dehumanising.”

Appropriate use

Moving forward, he says, both government and companies have to set policies that ensure applicants for roles have certain rights and are aware their application is being considered by AI: “It should be very clear from the outset that an application is being assessed by AI. Equally, if an applicant is rejected by AI, they should be given the reasons why and should have the right to appeal.”

Fearnall-Williams says the hope is that AI technologies (particularly machine learning or deep learning algorithms) are used appropriately and ethically to improve the hiring process and experience for recruiters, employers and candidates. Asked about his fears, he says they relate to how the hiring process could become “reorganised around” AI technologies: “Problems can arise as they are anthropomorphised or understood as ‘objective’ and ‘unbiased’ computer systems.”

AI brings automation, across all aspects of the hiring process, and this can be dehumanising

The problem with anthropomorphising AI technologies is that it creates the fallacy that the machine is performing the same role as the human recruiter when it is doing an altogether different job, he points out. “There is no like-for-like replacement since these technologies are searching for probabilistic statistical patterns in the data, compared to a human recruiter who is socialised and embedded in the world.”

AI certainly brings into question the role human beings will have in the recruitment process in the future. One of the areas that struck Rosser as potentially game-changing for the industry is the rise of tech start-ups developing tools that find out what kind of jobs a candidate is interested in, mass-apply on their behalf and tweak their CV each time to tailor it to the role.

“If you think about the level of support that would offer some work seekers, it’s revolutionary. Individual people could end up with a bot that acts like a personal recruitment agency on their behalf,” says Rosser. “Used in the right way, it could be a leveller for people who maybe struggle with applying for jobs, aren’t as computer literate, or are a time-poor mum or dad who haven’t got time to apply for jobs.”

Of course, this potentially would lead to a huge increase in job applications for hirers – so do they then turn to AI to sift through candidates? “And does that mean you have AI bots on behalf of the candidate talking to AI bots on the hiring side?” says Rosser, questioning at what point a human enters the loop. “Is the human in the loop just at the interview stage? Because when it comes to, for example, mass application, I feel we would struggle to have humans assessing the initial job applications.”

Emergent biases

Indeed, where AI solves a problem, it can also create another set of dilemmas. In a similar way, it is simultaneously credited with helping to eliminate bias as well as being accused of creating it. Fearnall-Williams says treating such technologies as “cold, calculating, rational and objective machines” that can reduce or remove human biases in the hiring process requires serious scrutiny. “This is down to how machine learning algorithms can learn and infer patterns from historical data and even develop ‘emergent biases’ unbeknownst to the original designers and programmers.”

For example, machine learning algorithms programmed to objectively rank candidates can amplify pre-existing biases. He adds that it can learn statistical correlations between groups that are dominant in certain sectors, such as men in STEM, and then develop a preference for male candidates and even actively filter out women applicants.

“Emergent biases are more troublesome since they can be more difficult to spot: they are ‘algorithmic biases’ we may not be aware of, arising unexpectedly from how the machine learns and what it focuses on in its probabilistic pattern-recognition processes.

“For instance, you could ask a machine learning model to analyse video recordings of successful candidates and, in this dataset, all the successful candidates coughed during the interview. The machine learning algorithm may determine that coughing makes the candidate more suitable for the position. It could find similar spurious correlations, such as blinking. A human interviewer would disregard this and not see it as relevant to the candidate’s performance during the interview, as it is a natural part of human-to-human interaction.”
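Fearnall-Williams’ coughing example can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration – the data, feature names and scoring function are all invented – but it shows how a purely statistical learner, given a skewed history, can rank a meaningless feature above a genuinely relevant one.

```python
# Hypothetical sketch of an "emergent bias": a naive scorer trained on
# invented historical hiring data latches onto a spurious feature
# ("coughed") that happened to co-occur with every past success.

def feature_outcome_correlation(records, feature):
    """Crude association score: the fraction of hired candidates showing
    the feature, minus the fraction of rejected candidates showing it."""
    hired = [r for r in records if r["hired"]]
    rejected = [r for r in records if not r["hired"]]
    p_hired = sum(r[feature] for r in hired) / len(hired)
    p_rejected = sum(r[feature] for r in rejected) / len(rejected)
    return p_hired - p_rejected

# Invented training set: every successful candidate happened to cough.
history = [
    {"relevant_degree": True,  "coughed": True,  "hired": True},
    {"relevant_degree": False, "coughed": True,  "hired": True},
    {"relevant_degree": True,  "coughed": False, "hired": False},
    {"relevant_degree": True,  "coughed": False, "hired": False},
]

for feat in ("relevant_degree", "coughed"):
    print(feat, feature_outcome_correlation(history, feat))
# The meaningless "coughed" feature shows a stronger association with
# being hired than the genuinely relevant one, exactly the kind of
# pattern a human reviewer would dismiss but a statistical learner
# may quietly act on.
```

The fix Rosser and Fearnall-Williams describe – frequent auditing and retraining – amounts to repeatedly running checks like this against the model’s actual behaviour rather than trusting its apparent objectivity.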

Rosser fears that unless we train AI properly, people who already find it difficult to get a job or an interview – due to prejudices and preconceptions or other diversity & inclusion issues – will have an even tougher time. He welcomes the approach put forward by Fearnall-Williams that uses the principle of ‘red teaming’ (a form of cybersecurity testing), which involves frequent testing and checking of outcomes and retraining the AI if necessary. “How we address this is critical, or the worry is it will just amplify the problems we have now on a bigger scale,” he says.

In general, Rosser would like to see more reporting around the use of AI by employers and recruiters much like organisations have to do with annual accounts and environmental and modern slavery statements. “Such as producing a short summary of how it is using AI and checking it every quarter – so along the lines of: ‘We’ve processed 10,000 applicants and we found it to do A, B and C, or we found there to be no bias or whatever’,” he says. “There needs to be an obligation on companies to act in a fair way.”

It is also worth noting that the Artificial Intelligence (Regulation) Bill (at the Committee Stage in the House of Lords at the time of writing) does call for an AI regulator and for companies of a certain size to appoint an AI officer, in the same way they have data protection officers.


Also check out…

Responsible AI in Recruitment, published by the Department for Science, Innovation and Technology, which specifically focuses on procurement and deployment of technologies used in the hiring process, such as sourcing, screening, interview and selection.

https://www.gov.uk/government/publications/responsible-ai-in-recruitment-guide

The European Parliament has passed the landmark EU AI Act, which Recruiter reported on in the January-February issue. The agreed text is expected to be finally adopted in April 2024. For the latest updates, see the link below.

https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai



Employer attitudes

Another area that needs to be taken into account is employer attitudes to the use of AI. As already stated, some candidates may abuse the use of AI but for others it will be a genuine leveller. Research carried out by Arctic Shores found that seven out of 10 candidates will use ChatGPT in an application or assessment within 12 months and that neurodiverse candidates, those from lower socio-economic households and black and mixed heritage students were all heavier users of generative AI.

Employers and recruiters need to have a position on the use of AI in hiring and for agencies it certainly needs to be an issue discussed at board level

“I hope we are all supported, coached and empowered to enjoy those benefits and the opportunity AI brings to level the playing field,” says Arctic Shores’ McCartney. “But I fear that companies will brand candidates using GenAI as cheats, and candidates will see companies using AI as irresponsible and uncaring – and the trust, confidence and relationship between candidates and companies will be damaged.”

Arctic Shores has published ‘The ultimate guide to managing candidates’ use of GenAI’, which provides context, advice and templates to help employers define and communicate their position on candidates’ use of GenAI. It’s curated from best practices in the TA Disruptors community and insights from the likes of Siemens, HelloFresh, The Institute of Student Employers, the Government Skills and Curriculum Unit, UCAS, and many more.

Rosser believes more employers and agencies need to put guidance in place on how jobseekers should use AI. There’s no shortage of media reports on companies that would consider banning its use, and in March this year the Telegraph reported that the Big Four accountancy firms had banned graduates from using AI tools to write job applications.

“Employers and recruiters need to have a position on the use of AI in hiring and for agencies it certainly needs to be an issue discussed at board level. For example, do employers/recruiters ask candidates to declare whether they’ve used it and set out what they can and can’t do?” says Rosser, who also warns that younger members of the workforce see the use of AI as part of a modern company and banning it might impact employer brand.

Whatever the decisions of individual companies, Rosser believes how we select candidates will change in the era of AI – even if it isn’t yet clear precisely how. “If we assume that the candidate and AI have made sure the qualifications and skills for the job are met, do we need to start thinking more about assessing people for soft skills, ability to fit into the organisation and more human things at the interview stage?” he says. “Either way, I think we need to rethink how we choose candidates in the future.”

  • At the time of writing, the AI in Hiring guidance from the APPG on Modernising Employment is at draft stage and will be published later this year.

What does AI think of itself when it comes to hiring?

Russell White of Future Work posed to AI (in this case ChatGPT) the same question Recruiter asked him: “What are your biggest hopes but also fears for the use of AI in hiring?” These are the responses…

HOPES

Fairness and bias reduction: AI has the potential to reduce bias in hiring processes by focusing solely on relevant qualifications and skills rather than extraneous factors like race, gender or age.

Efficiency: AI-driven hiring processes can streamline the recruitment process, making it faster and more efficient for both employers and jobseekers.

Improved matches: AI algorithms can match candidates with roles that align closely with their skills, experience and career goals, leading to better job satisfaction and performance.

Data-driven insights: AI can analyse vast amounts of data to provide valuable insights into hiring trends, candidate behaviours and organisational needs, helping companies make more informed decisions.

Diversity and inclusion: By removing human biases from the hiring process, AI has the potential to promote diversity & inclusion by ensuring that all qualified candidates are given fair consideration.

FEARS

Bias amplification: If AI algorithms are trained on biased data or developed without careful oversight, they can perpetuate or even amplify existing biases in the hiring process.

Lack of transparency: AI-powered hiring systems may lack transparency, making it difficult for candidates to understand why they were or were not selected for a particular role.

Loss of human touch: Over-reliance on AI in hiring could lead to a loss of human connection and empathy, which are important aspects of the recruitment process.

Privacy concerns: AI systems that collect and analyse large amounts of personal data raise concerns about privacy and data security, especially if that data is misused or mishandled.

Job displacement: There are concerns that AI-driven automation in hiring could lead to job displacement for human recruiters and create barriers for certain groups of jobseekers who may not have access to or be comfortable with AI-driven application processes.


Image credit | Getty
