How AI Is Deciding Who Gets Hired
Algorithms are becoming more powerful in society, not less.
We live in an increasingly algorithmic world, and that world has a dark side. One of its darker aspects is AI bias at scale. In 2022, we can safely say that job screening is practically automated.
Bloomberg Quicktake recently covered this very well in a video I recommend watching or listening to. How A.I. is impacting human resources is really significant.
Think about it: the job hunt has changed as artificial intelligence increasingly scores resumes, runs interviews and decides who gets access to opportunity. The Great Resignation and the sheer number of job openings also mean more algorithms at work, and more room for bias in the algorithms we have created to sort an abundance of candidates against a talent shortage in many domains.
The bevy of new artificial intelligence (AI) tools introduced by industry vendors over the past few years has been widely embraced by HR practitioners seeking to use machine-learning algorithms to bring new efficiencies to recruiting, employee engagement, shared services, learning and development, and other areas of HR. Whether you get a job, or even reach the interview stage, likely has more to do with A.I. than with human choice.
The Black Box Problem of Algorithmic Software at Scale
We know that such tools can be deeply biased. There are serious concerns about a lack of transparency in the way many AI vendors’ tools work, namely that too many still function as “black boxes” without an easily understood explanation of their inner workings, and that machine-learning algorithms can perpetuate or even exacerbate unconscious bias in hiring decisions.
The adoption of A.I. in HR has also been nearly universal in recent years as the ratio of job openings to applying candidates has skyrocketed. With higher wages in 2022, this is even more true today. With many early retirements in 2021, we must also be aware that the U.S. labor force is rapidly aging, which could lower our productivity as a society. Leaders who recognize this are implementing A.I. at a faster rate, before these systems are completely safe.
It’s hard to fathom just how ubiquitous algorithms are becoming in the recruitment process.
AI is used to screen application materials such as a resume or a brief on an applicant’s key competencies.
AI-based resume screening tools can highlight to recruiters which candidates they should reach out to first.
They can match a candidate’s skills and competencies to open requisitions and job requirements.
They have brought significant efficiencies to HR departments by drastically cutting the time spent processing pools of applications.
AI is also used to decide who gets hired. These tools have brought fundamental changes to the management of applications and, in principle, a degree of fairness by giving each resume the attention and time it deserves.
As we all know, recruiting professionals often cannot afford to give sufficient time and attention to each resume they receive.
As such, job applicants who understand this system can game the algorithms for an easier path to the interview stage. Then there are newer AI systems that use facial recognition in the interview process to judge a candidate further.
Creating a World With Less Free Will
Scientifically speaking, no artificial intelligence system can get out of an infinite loop on its own; only human beings can recognize the loop and use their free will to step out of it. Yet as a society implementing algorithms everywhere, we are taking some human free will out of the equation in our professional lives, in how we consume information and even in how we connect with each other. That is possibly a very dangerous thing for people’s mental health and well-being.
The implementation of A.I. can improve productivity, spur automation and help society function better, but it can also damage how people navigate an algorithmic world, isolating and dividing us according to rules and biases that do not belong to a world based on merit, equality or the free will of the individual.
In a world of A.I. at this stage, you are just a persona to be monetized and a candidate to be shortlisted, if you get that far. One of the functions of applicant tracking systems (ATS) is to shortlist candidates based on their resume credentials.
When you submit your application, your resume is entered into an ATS that matches your experience against the required qualifications in the job description. Leveraging AI, an ATS can match candidates based on their skills. This is a step forward in expediting the hiring process and making it more efficient, as many resumes do not reflect the skills HR is looking for.
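To make this concrete, here is a minimal sketch of the kind of skill matching an ATS might perform. It is a toy illustration under my own assumptions (a fixed skill vocabulary and simple keyword overlap), not any vendor's actual algorithm; real systems typically rely on far more sophisticated NLP models.

```python
# Toy sketch of ATS-style skill matching. Illustrative only, not a real
# vendor algorithm: real systems use much richer NLP, not keyword overlap.

def extract_skills(text, skill_vocabulary):
    """Return the skills from a fixed vocabulary that appear in the text."""
    text_lower = text.lower()
    return {skill for skill in skill_vocabulary if skill in text_lower}

def match_score(resume_text, job_description, skill_vocabulary):
    """Score a resume as the fraction of the job's required skills it mentions."""
    resume_skills = extract_skills(resume_text, skill_vocabulary)
    required_skills = extract_skills(job_description, skill_vocabulary)
    if not required_skills:
        return 0.0
    return len(resume_skills & required_skills) / len(required_skills)

# Hypothetical usage: rank candidates by overlap with the job description.
skills = {"python", "sql", "machine learning", "communication"}
job = "Looking for a data analyst with Python, SQL and communication skills."
resumes = {
    "candidate_a": "Experienced analyst skilled in Python and SQL.",
    "candidate_b": "Marketing specialist with strong communication skills.",
}
ranked = sorted(resumes, key=lambda c: match_score(resumes[c], job, skills), reverse=True)
print(ranked)
```

Even a crude scorer like this decides which resumes a recruiter sees first, which is why small wording choices on a resume can matter so much.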
Our HR A.I. Is Still Biased
But efficiency is not always humane, and researchers are pointing out that these systems carry significant AI bias. How skilled applicants are at gaming the algorithms can introduce further bias. When you are competing with potentially thousands of people for the same job or position, the smallest detail can disqualify you. Typically it is an A.I. that ends your chances, not a human being.
There is also a gap in the legislation around the use of facial recognition in the interview process. Proper consent is not always obtained as the world is automated at scale, and the use of facial recognition in HR is one such example.
In January 2020, groundbreaking Illinois legislation regulating the use of AI in video job interviews took effect. The law requires companies to notify candidates that the technology will be used to analyze their video interviews, explain to candidates how the AI works and obtain candidates’ consent to be evaluated by AI before any interview takes place.
The Hiring Process Is Full of Bias
Algorithmic bias likely augments human bias in ways that create a genuinely unfair system.
One of the biggest concerns of regulators and legislators is that AI-driven recruiting tools can perpetuate bias in hiring processes. Experts say decisions made by human recruiters about applicants have long been fraught with their own unconscious bias, and, because the datasets used to train AI systems are based on human decisions, the resulting algorithms could be just as likely to encourage discriminatory choices or disparate impact unless this issue is mitigated.
The HR industry and its vendors are working to improve the situation, and some vendors have increased efforts to validate and audit their algorithms to protect against such bias. But in the Great Reshuffle of work, not all is well in how algorithms have reshaped the human resources process; many candidates will simply fall through the cracks.
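One concrete way such audits can work is by comparing selection rates across demographic groups, for example with the “four-fifths rule” used in U.S. employment-selection guidance. Below is a minimal sketch of that check with made-up numbers; it is only an illustration of the idea, not any vendor's actual audit process.

```python
# Sketch of a disparate-impact check using the four-fifths (80%) rule.
# Each group's selection rate is compared to the highest group's rate;
# a ratio under 0.8 is a red flag. The numbers are invented for illustration.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening outcomes from an AI resume screener.
outcomes = {"group_a": (120, 400), "group_b": (45, 300)}
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this only measures outcomes after the fact; it does not explain why the model favored one group, which is exactly the black-box problem described above.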
If you enjoyed this article, you might enjoy my newsletter Sublink, where I discuss breaking news at the intersection of business and technology.
Finally, if you enjoy my content, please consider giving me a tip, as this is my full-time gig and I cannot continue to write without extensive community support.
I am seeking a patronage model in which the majority of my articles are kept free while the community supports my work directly. On top of that, I’m building out advanced features such as a Discord community.
If you believe in me, please consider supporting this work; it is my full-time gig and I have no other job.