Critical issues in digital hiring

An end-to-end digital hiring experience raises issues of exclusion and dangers from AI. Sue Weekes investigates.

The Digital Right to Work scheme has continued to embed itself in the recruiting landscape since it came into effect last October. Giving evidence in favour of the Data Protection and Digital Information Bill, which is currently under scrutiny by House of Commons committees, Keith Rosser, Reed’s director of group risk and the director of Reed Screening, presented a case study to demonstrate its progress. In a sample of 70,000 people who had obtained a job since the scheme came in, 58,000 had chosen to go down the digital route, which represents 83%.

“That is a really good uptake, and they were also able to complete the process in three-and-a-half minutes,” says Rosser. “That meant they could get jobs flexibly, remotely, regionally, whereas before, they’d have had to physically take their documents and attend in person.”


Rosser contends that the Bill matters to recruiting and employment for a number of reasons. Firstly, it will help to put safeguards in place around the Digital Right to Work scheme and place more responsibility on the identity service providers (IDSPs) that complete the checks, including how they behave in areas such as tackling fraud. There are now around 41 IDSPs certified by the UK government, and up to 60 in total.

Secondly, it provides “a legislative hook”, says Rosser, to allow the UK government to share other types of data via digital identity for employers. It isn’t yet clear what this will mean, but he suggests benefits could include sharing HM Revenue & Customs payroll data that proves where someone has worked for the last 20 years, rather than relying on information from the employee. This would also give an immediate, “hard stamp” of when the person was first and last paid.

“It could also be that government holds information on qualifications or allows a link into the driving licence system to prove a person’s credentials and address,” he explains. “As we know there are still painful paper-based parts of the hiring process, and these are the sorts of opportunities that the Bill could help to create.”

He also highlights the importance of more information and data sharing for promoting the “regionality” of jobs today and in the future. “Sharing digital information like this could mean that somebody living in Fife in Scotland could start working for a company in London without having to meet that company, and this would help spread jobs into the regions.”

This is one of several areas of discussion around the Bill that also link directly to the work of the All-Party Parliamentary Group (APPG) on Modernising Employment, which wants to make UK hiring the fastest in the world (‘New parliamentary group launches to improve UK standards in recruitment’, 30 May 2023, recruiter.co.uk).

Digital exclusion?

While Rosser believes the Bill could deliver a range of benefits, he also acknowledges why there is pushback in some quarters. There has always been a danger that an end-to-end digital hiring experience would leave some people behind. While the UK does not suffer the levels of digital exclusion seen in some parts of the world, lack of access to the online world is still an issue for some people. For example, according to the Digital Poverty Alliance, more than a quarter (26%) of young people do not have access to a laptop or similar device.

Rosser points out that the scheme is already leaving people behind, and not just because of digital exclusion. To take the digital route, individuals need an in-date British or Irish passport or visa. “This means one in five people can’t take the digital route. For these reasons, digital hiring cannot be mandatory and there must always be a way for people to get jobs non-digitally,” says Rosser. “And then we have to work to ensure the system is as fair and inclusive as possible.”


There is also a fear that digital hiring would fast-track artificial intelligence (AI) into hiring in a more biased way, to the point where companies use robots to sort, sift and select people. In reality, AI began its march into the recruitment space long ago, and Rosser’s response is that the only way of dealing with it is to hardwire rules into the digital hiring process at the outset. “We need to find some way of regulating AI in recruiting to ensure we remove any bias,” he says.

He points to the warnings made by the TUC at its AI conference in April, which voiced concerns about AI-powered technologies making “high-risk, life-changing” decisions about workers’ lives, including “line-managing, hiring and firing staff”. The TUC says employers must disclose to workers how AI is being used in the workplace to make decisions about them, and that every worker should be entitled to a human review of decisions made by AI systems, including on job applications, so they can challenge decisions that are unfair or discriminatory.

There is little doubt that digital hiring is the direction of travel in the world of recruitment, but there are still critical issues to address, and the industry must confront them sooner rather than later. Given the unknown threats ahead, the recruitment industry must ensure it remains part of the discussion.


Deepfakes and algorithmic bias

Another area touched upon in the Commons was deepfakes and how to prevent parties from creating fake digital identities that pass the IDSP test. Deepfake candidates, where a criminal manipulates a video, image or sound file to effectively create a person that doesn’t exist, are seen as a rising threat in recruitment around the world. In the entertainment world, deepfakes of stars such as Tom Cruise have circulated on social media, demonstrating how convincing they can be.

In the US, the FBI Internet Crime Complaint Centre has warned of an increase in complaints reporting the use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions.

A public service announcement from the Bureau said that the remote work or work-from-home positions reported include information technology and computer programming, database and software-related job functions. The FBI wasn’t explicit about the aims of the scammers but, significantly, some of the reported positions included access to customer PII, financial data, corporate IT databases and/or proprietary information.

Complainants report the use of voice spoofing, or potentially voice deepfakes, during online interviews with potential applicants. The FBI added that in these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely co-ordinate with the audio of the person speaking. “At times, actions such as coughing, sneezing or other auditory actions are not aligned with what is presented visually,” it stated.

Russ Cohn, general manager of IDVerse, formerly OCR Labs, a certified IDSP, explains that deepfake technology is developing quickly. “It won’t be long until we have a GPTSight or similar that can do for visual content what ChatGPT has done for text. Already it is very easy to create a realistic fake person in minutes using online tools readily available on the internet,” he says.

He explains that deepfakes are created using AI with a combination of machine learning and neural networks. “Two competing algorithms – one that generates fake content and one that assesses how realistic it is – work together to create a fake person who looks entirely real.”
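The two-algorithm set-up Cohn describes is what machine-learning practitioners call a generative adversarial network (GAN). As a rough illustration only, the sketch below trains a toy generator and discriminator against each other in PyTorch on made-up numeric data; the network sizes, data and training settings are illustrative assumptions, not any IDSP’s or fraudster’s actual model.

# Minimal sketch of the two-algorithm (GAN) idea described above:
# a generator produces fake samples, a discriminator scores how real they
# look, and the two are trained against each other. Toy 2-D data stands in
# for images; everything here is illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM, DATA_DIM = 8, 2  # illustrative sizes, not a real face model

generator = nn.Sequential(          # maps random noise -> fake sample
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)
discriminator = nn.Sequential(      # maps sample -> probability it is real
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, DATA_DIM) * 0.5 + 2.0   # stand-in "real" data
    fake = generator(torch.randn(64, LATENT_DIM))

    # 1. Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

As training progresses, the generator’s output becomes harder for the discriminator to distinguish from the real data, which is the same dynamic that lets deepfake tools produce increasingly convincing faces and voices.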

Cohn estimates that there is a 400% year-on-year increase in the use of deepfakes to create fake identities, and says what are called “ghost fraud” deepfakes are a real threat to recruitment. “This type of fraud happens when a fraudster steals the identity of a recently deceased person and uses deepfake technology to convince the authorities that the person is alive and well.”

So will digital hiring increase the risk of deepfakes? Cohn says that digital identities are intended to be more secure, not less. “It’s actually quite easy to make a fake ‘analogue’ identity. Cash and VHS tapes are other analogue items that are also counterfeited. That’s why governments, companies and the world economy have been transitioning to digital, whether that’s mobile wallets, movie streaming or digital identity.”

Cohn says that, like any technology, it can be used for good or bad, and adds: “For the human resources management field, the modern remote nature of workforces means that more secure digital ID verification is needed to detect synthetic identities and job histories.”

He adds that, in addition to deepfakes, another threat is algorithmic bias. “Decision-making systems are programmed by people, and those programs can contain bias,” he says. “For instance, some systems, especially older IDVs [identity verification systems] created more than five years ago, before the era of generative AI, do not work well for people with darker skin tones. Generative AI-based technologies can be used to de-bias algorithms.

“Ensuring that biometric and document verification technologies work for users of all skin colours, ages and accessibility needs is essential.”
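One simple way to check for the kind of bias Cohn describes is to compare verification outcomes across demographic groups. The sketch below is purely illustrative, using invented outcome data rather than any real IDSP’s results; it computes per-group pass rates and the ratio against the best-performing group as a rough fairness check.

# Illustrative bias check: compare identity-verification pass rates across
# demographic groups. The data here is invented for demonstration only.
from collections import defaultdict

# (group, passed_verification) pairs - a hypothetical audit log
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, passes = defaultdict(int), defaultdict(int)
for group, passed in results:
    totals[group] += 1
    passes[group] += passed

pass_rates = {g: passes[g] / totals[g] for g in totals}
best = max(pass_rates.values())

for group, rate in pass_rates.items():
    # A ratio well below 1.0 suggests the system under-serves this group.
    print(f"{group}: pass rate {rate:.2f}, ratio vs best group {rate / best:.2f}")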

When choosing an IDSP, it is essential to ask about the platform’s certification and verification. For example, Bixelab, based in Australia, and iBeta from the US are the two internationally recognised labs that certify biometrics. Also, look for IDSPs that are accredited and meet the UK Trusted Digital Identity Framework requirements.

See also Recruiter March-April 2023, p18-22.

