Machine Learning/AI

Breaking Down the AI Firewall in Recruitment

Standing in front of a club is a big, intimidating bouncer: ominous, unwavering and imposing. This bouncer is the only thing between you and the door. Looking at the building behind them, you try to guess the magic words that will get you in, but no luck. You are turned away, and you will never know why. This is how AI in recruiting works, and it needs to change, fast.

Whilst AI has the power to handle the mundane and exhausting task of screening resumes, it has been given far too much control over who gets hired and who does not. This was most likely done to save costs and increase efficiency, but the AI may also reject qualified candidates. Of course, human recruiters can do that too, which perhaps explains why 88% of executives surveyed by Professor Joe Fuller of Harvard Business School say they know the AI does this, and are presumably okay with it.

A second often-cited reason for using AI in recruitment is to reduce bias. The theory sounds good—an AI or machine learning model is just a statistical model, and therefore shouldn’t be screening people out based on, say, gender or skin colour. But in practice this rather depends on the training data; if the workforce of your company is predominantly white and male, and you train the AI on data based on who is already successful at your company, you effectively encode that bias into the system.

Amazon’s recruitment AI, for example, discriminated against women: something that became such a problem that the company ultimately scrapped the project altogether. Yet other companies are out there developing, training and selling AIs that make decisions on their own. Given how widespread AI is in recruitment (some estimates suggest as many as 75% of companies use it), the issue will most likely affect every single job seeker at some point in their working career.

Can an AI make decisions?

When we talk about AI making decisions, I’d argue that they are really coming to a result. To my mind, a decision implies that someone or something is weighing all the information and reaching a conclusion. That conclusion can be derived logically or illogically, emotionally or rationally, which is why, I’d argue, humans reach decisions and not results. And, as Daniel Kahneman suggests in “Thinking, Fast and Slow”, how we reach a decision is not entirely black and white.

We should say here that it isn’t yet known whether human thought is actually computational, but there are grounds to think it might not be. Sir Roger Penrose speculated in his book “The Emperor's New Mind” that human consciousness is non-algorithmic, and thus cannot be modelled by a conventional Turing machine.

By contrast, a result is a given outcome of a set of parameters and operators, much like a maths equation. In the case of AI, it is the constraints and parameters set by humans that guide the model to a result, though how it gets there depends on its training. As we train AI on datasets, the model can come to weight some variables more heavily than others, as in the case of Amazon’s AI favouring men over women.

A further challenge is that the field of auditing AI models is still in its infancy, and as a result AI models are often essentially black boxes: we don't know how they arrive at their results. In Star Trek, Kirk, Bones and others have said time and time again to Spock, “Curse your Vulcan logic”, but at least Spock could explain his reasoning. Currently, AIs are coming to results that are both unexpected and, in some cases, unexplainable. These results could become a major issue if companies start to face discrimination complaints on the back of them. So at the very least, those of us who work in recruitment need to find a way to utilise AI to increase transparency and aid the decision-making process.

Harnessing AI for recruitment

Imagine that the bouncer who opened this piece had a job that was not to stop people from getting in but rather to help people get in. In this scenario we take away any ability the bouncer has to make a decision, and instead ask them to collect information from each person at the door and pass it on to a human decision maker. So what would that look like in a recruiting process? Instead of a webpage where you upload your resume and cover letter, then re-enter your name, address, degree, job experience and so on, wasting your time regurgitating your CV, the process would ideally look something like this:

  • Step 1: Upload your CV.
  • Step 2: Receive a prompt to have an optional chat with an AI, which will first verify your contact information and then ask some clarifying questions based on what is in your CV and what the hiring manager is looking for.
  • Step 3: Chat with AI for 5-10 minutes to enhance your application.
  • Step 4: Receive a transcript of the chat, and have that transcript also sent to the recruiter.

Often when we are making our resumes we are playing a guessing game as to what the recruiter, hiring manager or AI are looking for, but this aims to change that.

The questions asked would be based on criteria specific to the job in question and would be used to enhance the applicant's application. The extra information captured would then be used by the recruiter or hiring manager to make the decision, with the clarifying information probably weighted more heavily, as the AI would be asking about the candidate's experience as it directly relates to the role.

This information could be easily highlighted and summarised by the AI for ease of reading, but with the full transcript available should the recruiter want to see more. Already we can see how this use of AI is pulling the best out of candidates that it might have otherwise rejected.

Now let’s get a bit more radical with that AI. Imagine we took all that information we learned from Amazon’s biased AI and others, and used that as a check on the human decision. Real-time screening of the decisions that the hiring managers and recruiters are making.

We know humans have biases, both conscious and unconscious. Humans can fall for all sorts of fallacies and biases when screening resumes: as much as we try to avoid and reduce them, they can never go away entirely, but maybe we can take a system that inadvertently developed its own biases and use it to flag our own. Now we are taking AI from a firewall, bouncer or gatekeeper to something that is actively trying to open the door for talent that would otherwise be missed because a resume just didn't have all the right keywords, or because of some learned bias in the AI or human.
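One concrete way such a check could work is the "four-fifths rule" used in US adverse-impact analysis: flag when the selection rate for one group falls below 80% of the rate for the most-selected group. The sketch below is an illustration of that rule applied to human screening decisions; the group labels and data are invented, and a real monitoring system would need far more care (sample sizes, legal review, and so on).

```python
# Sketch of a bias check on human screening decisions, using the
# four-fifths rule: flag groups whose selection rate is below 80% of
# the highest group's rate. Data and group labels are illustrative.

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, advanced_to_next_stage) pairs."""
    totals: dict[str, int] = {}
    advanced: dict[str, int] = {}
    for group, passed in decisions:
        totals[group] = totals.get(group, 0) + 1
        advanced[group] = advanced.get(group, 0) + int(passed)
    return {g: advanced[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions: list[tuple[str, bool]],
                          threshold: float = 0.8) -> list[str]:
    """Groups selected at under `threshold` times the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

# Group A advances 8 of 10 applicants; group B only 4 of 10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 4 + [("B", False)] * 6
print(flag_disparate_impact(decisions))
# Group B is flagged: its rate (0.4) is only 50% of group A's (0.8).
```

Run on a rolling window of recent decisions, a check like this could nudge a recruiter to revisit a pile of rejections before the damage is done.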

Yes, that might mean we are now spending more time reading resumes and sorting through piles of applications, but that is the job. And by doing it, we are also increasing the number of quality candidates we attract, widening our recruitment funnel, and hopefully hiring the best candidate. At the same time, an AI can still help with that workload: it can still sift and sort resumes, it just should not have the power to turn someone down. That power should remain firmly in the hands of someone who can justify or explain why they made that decision.

For those concerned about the amount of time spent sifting through CVs, we can still use pass/fail questions, which can be prompted by the AI, to reject some applicants, such as "Are you legally entitled to work in this country?"
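These knockout questions are mechanical enough that automating them is uncontroversial; a minimal sketch (questions and names hypothetical) might look like this:

```python
# Hypothetical knockout-question filter: the AI asks fixed yes/no
# questions, and only a "no" on a required question auto-rejects.
# Everything else still goes to a human.

KNOCKOUTS = [
    "Are you legally entitled to work in this country?",
    "Can you work from our office at least two days a week?",
]

def passes_knockouts(answers: dict[str, bool]) -> bool:
    """An applicant proceeds only if every knockout question is a 'yes'."""
    return all(answers.get(question, False) for question in KNOCKOUTS)

print(passes_knockouts({q: True for q in KNOCKOUTS}))  # True
print(passes_knockouts({KNOCKOUTS[0]: True}))          # False: one unanswered
```

The key design point is that the filter is transparent: a rejection here can always be explained ("you answered no to X"), unlike a black-box score.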

Using AI to search through past applicants

So we’ve looked at the ways AI can improve the applicant’s CV, but that is only half of what an AI tool could do to make hiring more effective. We have massive pools of past applicants who are potentially open to future opportunities and for whom, subject to their data protection preferences, we have contact details. Given the questions to ask candidates, the AI could also screen our databases of applicants to see who has previously answered these, or similar, questions, or has this experience listed on their CV. A human recruiter could then reach out to promising-looking applicants to see if they are interested in the role.

This could be particularly useful when the pipeline is weak and it has therefore been difficult to find talent. The AI system could be tasked with searching the database for older CVs that include experience in the relevant area, but perhaps not to a high enough level. After review by a recruiter, we could reach out to these applicants to see if they are interested in the role, and then get their agreement to a quick chat with the AI to update their profile; if someone applied to us, say, two years ago, they may since have upskilled in a relevant area, but that will obviously be missing from their now two-year-old CV.
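A sketch of that database search, with entirely hypothetical data: each stored CV is scored against the role's keywords, and partial matches (some relevant experience, but not a full match) are surfaced for a recruiter to review and possibly re-contact. A real system would use semantic search rather than keywords, but the shape of the idea is the same.

```python
# Hypothetical re-screening of past applicants: score stored CVs
# against a role's keywords and surface partial matches for a
# recruiter to review. All names and data are invented.

def score(cv_text: str, keywords: list[str]) -> float:
    """Fraction of the role's keywords found in the CV text."""
    cv_lower = cv_text.lower()
    return sum(k.lower() in cv_lower for k in keywords) / len(keywords)

def partial_matches(database: dict[str, str], keywords: list[str],
                    low: float = 0.3, high: float = 1.0) -> list[str]:
    """Applicants with some relevant experience, but not a full match."""
    return [name for name, cv in database.items()
            if low <= score(cv, keywords) < high]

past_applicants = {
    "alice": "Java developer, some Python scripting, CI/CD pipelines.",
    "carol": "Python, Django, PostgreSQL, CI/CD.",
    "bob": "Retail management and customer service.",
}
role_keywords = ["python", "django", "ci/cd"]
print(partial_matches(past_applicants, role_keywords))
# Only alice: bob has no overlap, and carol is a full match who would
# likely be contacted directly rather than asked to update her profile.
```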

Now standing at the club door is not some ominous individual holding the keys to a great evening, but someone who wants to give everyone a chance to get inside by showing off their best dance moves (skills and experiences). They are also finding the people who tried to come in last time but weren't so lucky.

An AI that helps open the door does sound like it will increase the workload for a recruitment team, and possibly even be a step backwards for efficiency… unless you are a company like Container Solutions, which believes humans should review all applications. By refocusing on how AI can help in recruitment, what we are trying to achieve is an increase in transparency, quality of hire and accountability, and a hiring process that is fair and equitable. We are also taking proactive steps to reduce discrimination and bias in hiring, which is critical for the success of companies (as well as preventing future lawsuits). It's time to start looking at AI as a better way of getting candidates through the process, not rejecting them from the start.
