How Well Do You Know Your AI? Candidate Screening and the Data Science of Interpretability

In the late 1990s, the emergence of Applicant Tracking Systems (ATS) changed the recruiting landscape. Over time, as job boards simplified the application process to a single button click, such systems became a necessity for companies to filter through thousands of applications and narrow an overwhelming list down to a qualified pool of candidates. Today, there are far more automated tools in the HR tech stack than a simple ATS. With developments in artificial intelligence (AI), mainly via the popularization of machine learning, many tools have emerged promising to infuse inferential or predictive analytics into the recruiting process, often making bold claims about their accuracy, their precision, and, of course, their time and cost savings.
    Rafael Guerra · 4 min read

    But for many job seekers, the rapid adoption of an ever-growing stack of AI-powered tools has been frustrating. Without the right keywords on their profile, or in some cases because of a particular aspect of their background (or the lack of one), applicants may be disqualified from consideration without a human being ever reviewing them. The argument often presented, particularly by those building the tools, is that many AI models do a better job than humans at removing bias and staying grounded in objectivity. That is always the goal, but understanding how a tool actually produces its outputs is critical to ensuring its approach meets the goals of your organization. That understanding, however, is not always a given.

    AI can be a powerful tool in driving organizational efficiency while minimizing human biases; however, it is only as good as the humans who wrote it, and understanding the strengths and weaknesses of each tool will help you make more informed decisions for your organizational needs.

    What does this mean for the people who interpret these models?

    Model interpretability describes the degree to which someone can reasonably understand the results of a model and, more generally, how the model arrived at those results. Here's the catch, though: some of the most accurate and computationally expensive models are not easily interpretable. Often dubbed 'black box' models, they do tend to perform better than simpler models, but at the expense of a clear mechanism for understanding them.

    Understanding the inner workings of a model may not be such a big deal for certain kinds of problems, such as image classification: we can clearly see in the end whether an image of a dog was classified correctly. But things get complicated quickly when we deal with subjective human data, such as whether someone is a good fit for a job. There have already been many cases where well-intentioned AI tools ended up introducing more biases than they reduced, at great human cost. To address this in HR, starting in 2023, companies operating in the states of New York and California that provide artificial intelligence services for hiring decisions must be audited for bias. It remains to be seen what kinds of standards will be put in place, but it's a safe bet to assume interpretability will be a key component.

    How can interpretable models reduce bias in hiring?

    Interpretable models can reduce bias in hiring because of their superior ability to communicate whether a potentially biased variable was relevant to the analysis, which can prompt the company deploying the model to examine whether that variable should stay in the model or be pre-processed or cleaned first. Some variables are obvious: we shouldn't include race or ethnicity in a hiring decision. Other variables are trickier. ZIP code, for instance, can be highly correlated with race in parts of the country, and if a model does not remove ZIP code, it may still implicitly carry that potentially biased information into its predictions.
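
    As a minimal sketch of what such a pre-processing check might look like, the snippet below measures how strongly ZIP code is associated with race in an applicant dataset. The file name and column names (applicants.csv, zip_code, race) are hypothetical placeholders, not part of any specific product.

    ```python
    # A hedged sketch of a proxy-variable check: how strongly does ZIP code
    # predict a protected attribute such as race in the applicant data?
    # (File and column names are hypothetical.)
    import pandas as pd
    from scipy.stats import chi2_contingency

    applicants = pd.read_csv("applicants.csv")  # assumed columns: zip_code, race, ...

    # Cross-tabulate the candidate proxy variable against the protected attribute
    table = pd.crosstab(applicants["zip_code"], applicants["race"])

    # Cramer's V: 0 = no association, 1 = perfect association
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    cramers_v = (chi2 / (n * (min(table.shape) - 1))) ** 0.5

    print(f"Association between ZIP code and race (Cramer's V): {cramers_v:.2f}")
    # A high value suggests ZIP code is acting as a proxy and should be
    # re-examined, transformed, or dropped before training the hiring model.
    ```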

    To be fair, black box models are not impossible to interpret. But they require considerably more work and expertise, and even then there will still be differences between 'global' and 'local' interpretations, a topic too big to address here. In some popular models, such as random forests, you may be able to easily see the 'information gain' of certain variables, in other words, how helpful they were in determining a result, but you won't easily know which direction a variable swung (e.g., did it make someone more likely to be a good fit for the job, or less likely?).
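
    The sketch below illustrates that limitation on synthetic data with hypothetical feature names: a random forest reports how useful each feature was, but the importances carry no sign.

    ```python
    # A minimal sketch (synthetic data, hypothetical feature names) of why a
    # random forest's feature importances show magnitude but not direction.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))  # columns: years_experience, skills_score, zip_index
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    for name, importance in zip(["years_experience", "skills_score", "zip_index"],
                                forest.feature_importances_):
        print(f"{name}: importance={importance:.2f}")
    # The importances rank how useful each feature was, but nothing here says
    # whether a higher value pushed a candidate toward or away from "good fit".
    ```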

    Interpretable models such as regressions, on the other hand, may lack the 'buzz' of more sophisticated models, but they will tell you the magnitude and direction of each variable's effect on your outcome. With standard diagnostics, they can also tell you whether ZIP code and race are collinear. They are not perfect models by any means, but if your company values transparency, and if you have a team of both technical and non-technical people who need to be able to verify both the process and the output of a hiring decision, they may be an excellent choice.
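
    As a rough illustration, again with synthetic data and hypothetical feature names, a logistic regression exposes both the size and the sign of each variable's effect, and a variance inflation factor (VIF) check is one standard way to flag collinear inputs.

    ```python
    # A minimal sketch: regression coefficients give magnitude and direction,
    # and VIFs flag collinearity. (Synthetic data, hypothetical names.)
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    years_experience = rng.normal(size=500)
    skills_score = rng.normal(size=500)
    good_fit = (0.8 * years_experience + 0.4 * skills_score
                + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X = sm.add_constant(np.column_stack([years_experience, skills_score]))
    model = sm.Logit(good_fit, X).fit(disp=0)
    print(model.params)  # sign = direction of effect, size = magnitude

    # VIFs well above ~5-10 would suggest collinear predictors
    for i, name in zip([1, 2], ["years_experience", "skills_score"]):
        print(f"{name}: VIF={variance_inflation_factor(X, i):.2f}")
    ```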

    Zooming out

    It is possible to use a sophisticated model with strong safeguards around the data collection pipeline and end up with a minimally biased result, just as it is possible to use a simpler model and still retain bias due to poor data quality or a lack of active debugging.

    Having an interpretable system is an important foundation for building a robust and minimally biased recruiting system, but just as important as the model choice is having a team committed to actively engaging with, debugging, and interpreting the model. So when considering model interpretability, focus not only on the kind of model but on the 'big picture' approach to its interpretability, all the way from data collection to the deployment of the model itself.

    The infusion of data science into recruiting has saved companies time and money, and it will continue to do so, shaking up the recruiting landscape much as Applicant Tracking Systems did decades ago. There is much to be excited about, but with regulation coming, it is more important than ever to be mindful of responsible, equitable practices in AI-powered recruiting, and there is no better place to start than by answering the fundamental question: how well do you know your AI?

    If you're interested in learning more about how to reduce bias in hiring, read our white paper on the impact of skills-based hiring and digital credentials in the workplace.

    This article was written by Rafael Guerra, Credly's Data Scientist, and Bailey Showalter, Credly's Vice President of Talent Solutions.
