Special Report - AI: Rise of the machines in recruitment
As the relentless rise of artificial intelligence and machine technology marches on, recruiters need to be on board with it, yet wary of the challenges
While we are still very much at the dawn of the use of artificial intelligence (AI) in recruitment, both application developers and recruiters are already starting to realise the huge potential the technology offers. And it isn’t just about using AI to more intelligently match candidates to jobs, it is even being used to tackle issues such as wellbeing and mental health (see Case study 2).
The application of AI, in which machines simulate human intelligence and actions, is also laced with controversy. Replacing a recruiter with a machine will naturally raise concerns in the profession but there are also ethical and data privacy concerns. Recruiters need to be aware that the area will increasingly be bound by regulation with the European Commission’s recently released draft legislation including fines for non-compliance.
Amid the discussion around its use, though, one thing is certain: it is impossible to stop its march, and those recruiters who understand both its potential and the warnings that come with it will be the ones best placed to realise its benefits.
It is perhaps in talent assessment and screening that AI can deliver the most obvious benefits, whether that be sifting through a high volume of applicants or predicting the performance and behaviours of shortlisted candidates.
Guy Thornton is founder of Picked, a set of analytics and predictive hiring tools for recruiters that make use of AI. He has seen more widespread acceptance of the technology since his product launched in 2018. “Originally, there was a healthy scepticism about the role of AI and predictive hiring in recruitment. There was a real feeling of ‘How can computers do my job better than me?’, but this is perfectly understandable,” he says. “However, now the technology is becoming increasingly used, recruiters have seen the benefits in enhancing what they do, not replacing what they do. This has led to an increasingly positive sentiment and growing uptake.”
Originally, there was a healthy scepticism about the role of AI and predictive hiring in recruitment”
Guy Thornton, Founder of Picked
He says recruiters need to move away from the notion that AI and machine learning mean “clicking a button” and letting the machine do all the hiring for you. “Rather, it can be used to increase speed and efficacy of the hiring process,” he says. “This could be by improving sourcing, screening, assessment and more.”
Picked’s clients include search and media company Reverse Media Group, which uses data and AI in all of its business units and had wanted a more scientific approach to recruitment. Greg Burgess, CEO, says it had tried other matching and automated tools for talent acquisition but they hadn’t lived up to their promise. He says the data that Picked’s tools provide is extremely granular and gives a strong indication of the capabilities of applicants, as well as whether they will fit into the team and have the right value set for the company.
Case study 1
How AI helped Hermes tackle its Christmas recruitment challenges
Initially, the pandemic led to a wave of extra applications as the economy locked down, but as 2020 progressed the competition intensified and drove the cost-per-hire (CPH) up. In the run-up to the peak season before Christmas, the demand for couriers rises exponentially, leading to increased competition, so Hermes knew it would need a fresh approach to recruit all the self-employed couriers it needed to fulfil its busy Christmas delivery schedule.
This new approach came in the form of Creed Communications’ Programmatic Performance model. It was developed by the full-service recruitment marketing agency to enable clients to advertise on lots of different channels, buying inventory on the right channel at the right time for the best price.
Hermes became the first client to benefit from the platform, which uses advanced machine-learning programmatic job board software to drive performance. Hermes used it to deliver job applications where they were most needed at a cost-effective cost-per-action (CPA).
Creed Communications explains that commercial agreements with different channels shifted the balance of power between employer and advertiser, so rather than a guaranteed upfront investment irrespective of performance, money would be invested in whichever channels were most effective. The software’s AI learns over time to continually improve performance, ensuring that Hermes kept on delivering during the most demanding Christmas peak it had ever encountered.
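Creed Communications has not published how Programmatic Performance works internally, but the behaviour described here, shifting spend towards whichever channels are currently delivering the cheapest results while still testing the others, can be sketched as a simple epsilon-greedy allocator. Everything below (the channel names, prices and exploration rate) is a hypothetical illustration, not the agency’s actual software.

```python
# Illustrative sketch only: an epsilon-greedy allocator that steers job-ad spend
# towards the channels with the lowest observed cost-per-action (CPA).
# Channel names and figures are made up for the example.
import random

CHANNELS = ["job_board_a", "job_board_b", "aggregator", "social"]

class SpendAllocator:
    def __init__(self, channels, epsilon=0.1):
        self.epsilon = epsilon                    # share of spend reserved for exploration
        self.spend = {c: 0.0 for c in channels}   # money spent per channel so far
        self.actions = {c: 0 for c in channels}   # completed applications per channel

    def observed_cpa(self, channel):
        # Cost per action to date; untested channels look maximally attractive
        if self.actions[channel] == 0:
            return 0.0
        return self.spend[channel] / self.actions[channel]

    def pick_channel(self):
        # Mostly exploit the cheapest channel, occasionally explore the rest
        if random.random() < self.epsilon:
            return random.choice(CHANNELS)
        return min(CHANNELS, key=self.observed_cpa)

    def record(self, channel, cost, applications):
        self.spend[channel] += cost
        self.actions[channel] += applications

if __name__ == "__main__":
    random.seed(42)
    true_cpa = {"job_board_a": 9.0, "job_board_b": 6.5, "aggregator": 5.0, "social": 12.0}
    allocator = SpendAllocator(CHANNELS)
    for _ in range(500):                          # 500 rounds of £100 spend each
        channel = allocator.pick_channel()
        applications = max(1, round(100 / true_cpa[channel] + random.uniform(-2, 2)))
        allocator.record(channel, 100.0, applications)
    for c in CHANNELS:
        print(c, round(allocator.observed_cpa(c), 2), allocator.actions[c])
```

Run over enough rounds, the allocator concentrates spend on the channel with the lowest real CPA, while the exploration share keeps checking whether the others have become cheaper.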
The metrics proved to be impressive. The approach delivered a 52% lower CPA than comparable channels, a 45% increase in hires compared with 2019 and a 25% reduction in CPH. Hermes, supported by Creed Communications, won the Best Use of AI Award at this year’s The FIRM Awards, sponsored by Recruiter.
Jonny Heyhoe, client partner at Creed Communications, believes Programmatic Performance is a great platform for showcasing the best of AI in recruitment. “The way it automates the bidding process with numerous channels, learns and improves performance is a step-change from manual set-ups and analysis on each channel,” he says. “It improves performance and return whilst giving our digital team more control and insight than ever before.”
“This has vastly improved our hiring over the past year and has now allowed us to proudly say that talent acquisition is not just aligned with the rest of our business, it’s actually one of our leading areas,” says Reverse Media Group’s Burgess.
There has been an explosion in AI tools in the past three years and, as Barb Hyman, CEO of Predictive Hire, explains, it is important for recruiters to understand that “not all AI is equal”; she calls for a lot more education in the market. “Video interviews which use AI have become popular, for example, but they can lead to much more biased outcomes for candidates than text-based AI interviews, which are blind,” she says.
Indeed, one of AI’s boasts is that it can remove human bias, but how can recruiters trust that the algorithms deliver on such claims? Hyman reckons recruiters need to take it upon themselves “to get educated” about the technology they are using. “On our end, we try to be as transparent as we can about our technology, rather than have it operate in a black box,” she says. “We also publish all our research so it’s peer-reviewed, and recently released a framework called FAIR (Fairness in Recruiting), which aims to set a global standard for ethical AI in recruitment so that recruiters can get educated on the topic.”
Elin Öberg Mårtenzon, CEO of Tengai, which has a suite of AI-based solutions that seek to remove unconscious prejudice, including Tengai Robot and Tengai Digital Interview, agrees that the use of technology must be prompted by “a deep understanding” of how technology works to serve its purpose. “By understanding the components of the technology that recruiters use and by critically examining the outcome, recruiters can also better understand whether the technology they use is keeping its promise to search out, handle and process great talent,” she says. “Technology used wisely and in the right stage of the funnel can definitely deliver on its aims.”
Regulating the use of AI
In April, the European Commission published its draft regulations for artificial intelligence. Its aim is to preserve the safety and rights of individuals and organisations, as well as help to foster innovation. Although the UK has left the European Union, there are clearly implications for those UK companies that are both developing and likely to use AI-based applications in the recruitment sector.
The international legal practice Osborne Clarke explains that the legislation envisages a full regulatory framework that will include new EU and national bodies with strong enforcement powers and heavy fines for non-compliance. It adds that the proposed legislation is shaped around the level of risk created by different applications of AI. Three levels are identified under the headings of:
- Prohibited AI systems
- High-risk AI systems
- Codes of conduct and transparency for all other AI systems
The legal practice anticipates that the draft provisions will be subject to extensive lobbying and does not expect them to become law before 2023 at the earliest. That said, it is important that recruiters become aware of how the legislation could impact their practices sooner rather than later.
John Buyers, Osborne Clarke’s head of AI and machine learning, answers some of the key questions for recruiters.
What are the main areas of concern for recruiters in the framework?
The main concern in the draft EU framework is the classification as ‘High Risk’ of AI used to ‘select individuals for recruitment; for filtering applications or evaluating candidates’ (see Annex III to the Reg, Section 4[a]), and also of AI used to make decisions on ‘task allocation and for monitoring or evaluating performance’ (see Annex III, Section 4[b]).
High Risk AI is subject to a raft of mandatory requirements too comprehensive to list here, but in short will require considerable levels of investment in appropriate tools and people to ensure compliance, including ensuring, for example, appropriate demographic representation in datasets used by AI systems, and avoiding bias on an ongoing basis.
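The draft’s detailed requirements are not reproduced here, but the kind of ongoing bias monitoring Buyers refers to often starts with something as simple as comparing selection rates across demographic groups. The sketch below applies the familiar ‘four-fifths’ rule of thumb; the groups, numbers and threshold are illustrative assumptions, not anything mandated by the draft text.

```python
# Minimal sketch of an ongoing bias check: compare each demographic group's
# selection rate against the best-performing group (the "four-fifths" rule of
# thumb). Groups, figures and the 0.8 threshold are illustrative only.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected) pairs -> {group: selection rate}"""
    applied, selected = Counter(), Counter()
    for group, was_selected in records:
        applied[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applied[g] for g in applied}

def adverse_impact(records, threshold=0.8):
    rates = selection_rates(records)
    best = max(rates.values())
    ratios = {g: round(rate / best, 2) for g, rate in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < threshold]
    return ratios, flagged

if __name__ == "__main__":
    sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60 +
              [("group_b", True)] * 25 + [("group_b", False)] * 75)
    ratios, flagged = adverse_impact(sample)
    print(ratios)   # {'group_a': 1.0, 'group_b': 0.62}
    print(flagged)  # ['group_b'] falls below 80% of the top group's rate
```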
Which applications of AI in recruitment could most lead to non-compliance/unethical behaviour?
Automated (biometric) facial recognition systems used to detect autonomic responses in AI interview contexts – particularly to filter out unconscious responses to determine whether or not the candidate is telling the truth. These are typically used more in the US. These systems are questionably ethical, functionally variable and arguably unlawful in GDPR-governed countries, without specific user consent (which would seem to be very difficult to obtain on a lawful basis in that context given the circumstances of an interview).
Automated filtering of CVs, especially using deep neural networks (which are opaque ‘black boxes’). These systems can create real bias and discrimination issues, particularly where they make ‘false correlations’ (as was shown when one automated system equated membership of golf clubs with success).
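The golf-club anecdote is easy to reproduce in miniature: if a proxy feature happens to correlate with past hiring decisions, a model trained on that history will weight it, whether or not it has anything to do with ability. A short sketch on synthetic data, assuming NumPy and scikit-learn are available; every feature and figure here is invented for illustration.

```python
# Illustrative only: synthetic data showing how a CV-screening model can latch
# onto an irrelevant proxy feature ("golf club member") simply because it
# correlated with past hiring decisions in the training history.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                                      # latent ability, not visible on the CV
hired = (skill + 0.5 * rng.normal(size=n) > 1).astype(int)      # past decisions largely driven by skill
golf = (rng.random(n) < 0.1 + 0.4 * (skill > 1)).astype(float)  # proxy that happens to track skill
cv_length = rng.normal(size=n)                                  # irrelevant noise feature

X = np.column_stack([golf, cv_length])   # the screening model only sees surface CV features
model = LogisticRegression().fit(X, hired)
print(dict(zip(["golf_member", "cv_length"], model.coef_[0].round(2))))
# golf_member receives a strongly positive weight even though it never caused
# anyone to be a good hire - exactly the 'false correlation' Buyers describes.
```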
In your experience so far, what are recruiters’ main concerns when using AI in the recruitment process?
Currently the industry is, rightly, very focused on personal data and GDPR. There is little or no awareness of the risks – whether legal or ethical – in the use of AI.
How can they ensure they behave ethically and compliantly?
- Only use AI in demonstrable and verifiable cases where it is really needed – not as a “nice-to-have”.
- Run proper GDPR Data Protection Impact Assessments (DPIAs) to ensure this is the case (and AI risk assessments on the AI side).
- Make it clear to candidates precisely what technology is being used.
- Understand the pitfalls of machine learning and the ‘black box paradigm’ (for example, you don’t necessarily understand how or why such a system reaches the decisions it does).
- Invest in independent ethical and legal advice, and not exclusively from software providers who often have a vested interest in selling such systems.
Mårtenzon identifies three main ways in which AI can benefit recruiters: collecting objective data for making more informed decisions; creating unique and customised experiences and ensuring applicants get fair and equal treatment; and increasing efficiency throughout the funnel by relieving recruiters of repetitive tasks.
Predictive Hire’s Hyman believes its tools are of most benefit “at the top of the funnel” when organisations have thousands of candidates applying for positions. Its Phai machine learning tool allows recruiters to literally interview everyone who has applied for a position via text chat and claims to do so in a fair and equal way to see who is the best fit, “no matter what age, gender, sexuality, experience, ethnicity”.
Over the past year, the technology adoption in both recruiters and candidates has skyrocketed”
Elin Öberg Mårtenzon, CEO of Tengai
“And we can send every one of them personalised feedback. We can understand immediately what soft skills a person has, something you cannot tell from a CV,” says Hyman.
Users include grocery retailer Iceland, which faced huge recruitment challenges in the pandemic. The tools helped it to recruit 5,500 new team members in a month even though it had zero capacity for recruitment. It was receiving 50,000 applications a month, and Predictive Hire’s mobile-first solution asked candidates five customised questions and gave each one personalised feedback. The cost per hire was reportedly £5.
In the short time since Tengai launched, Mårtenzon has seen a shift in both perceptions and use of its platform, partly fuelled by the pandemic. Initially, the company met resistance for a number of reasons, but mainly because recruiters weren’t technically mature enough to introduce technology at the interview stage and were also afraid that their candidates weren’t technically mature enough.
Key recruitment AI terms explained
Programmatic advertising: this is where computer algorithms decide where and when to buy and place job advertising and for how much. It makes use of big data to target job ads and real-time bidding to place them.
Machine learning: this is a branch of AI in which algorithms learn by repeatedly accessing data, rather than by being explicitly programmed. In recruitment, such algorithms can be trained to perform repetitive tasks (see the sketch after this box).
Facial recognition software: a system which can match a human face to a digital or video image, which could be stored in a database. When combined with AI, the database can be searched and matches instantly made.
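As a concrete illustration of the machine learning definition above, the toy perceptron below adjusts its weights over repeated passes through a handful of labelled examples instead of following hand-written rules. The features, labels and learning rate are entirely made up for the example.

```python
# A minimal sketch of "learning by repeatedly accessing data": a perceptron
# nudges its weights over many passes through labelled examples, rather than
# being given explicit if/then rules. All data here is invented.
examples = [
    # (years_experience, skills_matched) -> shortlisted in the past?
    ((1.0, 2.0), 0), ((2.0, 6.0), 1), ((5.0, 5.0), 1),
    ((0.5, 1.0), 0), ((4.0, 7.0), 1), ((3.0, 2.0), 0),
]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):                              # repeated passes over the same data
    for (x1, x2), label in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - prediction               # nudge weights towards the correct answer
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

print(weights, bias)  # learned purely from the examples, with no hand-written rules
```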
“Over the past year, we can see that the technology adoption in both recruiters and candidates has skyrocketed and people are now very used to conducting several stages of the funnel with the help of technology,” she says. “That said, the need for our product has increased since our initial launch and is now not only focusing on use of technology but around the purpose it serves: for instance to mitigate bias, increase efficiency, create data-driven processes, secure overall candidate experience and more.
“Since technology development is exponential, it is only natural that the use of technology follows that pattern.”
Editor’s comment
Along with hybrid working, and the most appropriate way of handling online abuse, artificial intelligence is one of those hotly debated subjects about which many clever people have opinions – but little in-depth knowledge of how to work with it in practice.
Recruiter’s technology journalist Sue Weekes sheds light on AI and its potential benefits in recruitment and beyond in our Special Report, featuring conversations with passionate practitioners of this science as well as two exciting case studies. Involving recruiter Hydro Energy Group, Creed Communications and delivery firm Hermes, these thought-provoking case studies will definitely stimulate your strategic thinking about how AI might work for you and your firm.
We also explore the complex regulatory picture, with the European Commission having published draft regulations around AI this past spring.
This is a ‘must read’ in this issue of Recruiter.
DeeDee Doke
Editor Recruiter/recruiter.co.uk