This need for trust and transparency extends to recruiting and hiring practices. Organizations today must show customers, applicants, and other stakeholders that the tools and practices they use to hire the right people for the job are equitable and free from biases that might exclude qualified candidates.
An important way to accomplish this is by ensuring that the algorithms used in ATS (Applicant Tracking Systems) and other recruiting technology are powered by transparent AI (Artificial Intelligence).
AI transparency ensures that ethical, moral, legal, cultural, sustainable, and socioeconomic factors are taken into consideration during the development and ongoing use of AI systems. Recent years have produced several high-profile examples of ethical failures torpedoing well-intentioned AI projects, including facial recognition technology that is biased against certain ethnicities and credit-lending algorithms that discriminate against women.
Transparent AI ensures that ethics and the company’s values are integrated with business processes—including hiring. Ensuring that employees, applicants, customers, and partners trust AI decision making throughout the hiring process is not only the ethical thing to do, it also increases confidence and public perception of your brand—leading to better business outcomes overall.
Unfortunately, traditional recruiting and sourcing practices are rife with inherent biases, from favoring candidates who are able to game recruiters’ keyword searches to overlooking women, people of color, and other historically underrepresented groups. As AI continues to revolutionize the hiring process, HR professionals and hiring managers need to ensure these biases are not baked into automated decision making. They must be able to understand and explain how and why an algorithm arrived at its decision.
You’d think that moving decisions from humans to machines would eliminate unfair hiring practices (and it’s true that you can reduce bias in the hiring process with intelligent, intentionally designed technology), but keep in mind that AI is only as good as the data you feed into it. Unconscious bias can seep into AI decision making through the developers and data scientists who build and train these models, however unintentionally.
As AI continues to take over more decision making throughout the recruiting and hiring process, organizations need to be aware of the technology that powers their HR tech tools, from both an ethics and compliance perspective.
When assessing the AI-powered tools you incorporate into your organization’s recruiting strategy, remember these three critical factors.
New legislation being introduced in New York, California, and other jurisdictions around the world will require audits of AI-powered HR technology, making it imperative that employers be aware of, and focus on, the transparency of their AI-powered hiring tools.
This shift in regulation may cause some HR professionals to feel uneasy, which is understandable. You can feel confident and comfortable, however, when you’re equipped with the right knowledge and understand the questions to ask HR tech vendors to ensure your tool is compliant. Your partner should be open to explaining how their technology’s algorithm works and how matches are made, in other words, the explainability of the tool.
Explainable AI is the concept of understanding and explaining exactly how your AI system produces outputs. For HR and hiring managers, this means being able to pinpoint the factors that led to one person being hired over another.
What’s important in this equation is to understand what the algorithm’s matches are based on. To be truly unbiased and compliant with new regulations, these matches should be based on verifiable skills, credentials, or certifications. They shouldn’t be based on names, past work experiences, profile pictures, or unverified additional data from the web. Being able to identify the deciding factors and context surrounding AI decision making is important for transparency and trust in the system.
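To make the idea of explainable, skills-based matching concrete, here is a minimal sketch, not any vendor’s actual algorithm: a hypothetical match score built only from verifiable skills, where every factor behind the score is itemized so a recruiter can see exactly why a candidate matched a role.

```python
# Hypothetical sketch of an explainable, skills-based match score.
# The score is derived only from verifiable skills, and the factors
# behind it are returned alongside the number itself.

def explain_match(candidate_skills: set[str], required_skills: set[str]) -> dict:
    """Score a candidate against a role and itemize the reasons."""
    matched = sorted(candidate_skills & required_skills)
    missing = sorted(required_skills - candidate_skills)
    score = len(matched) / len(required_skills) if required_skills else 0.0
    return {
        "score": round(score, 2),
        "matched_skills": matched,   # the verifiable factors behind the score
        "missing_skills": missing,   # what would raise the score
    }

result = explain_match(
    candidate_skills={"python", "sql", "aws"},
    required_skills={"python", "sql", "docker", "aws"},
)
print(result)
```

Because the output carries its own justification, a hiring manager can answer "why was this person matched?" without reverse-engineering the model.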
To meet AI transparency standards, companies running machine learning recruiting tools should not only inform users whenever their data is being used in algorithmic decision making, but should also get their consent up front. After gaining consent, all user data in machine learning-based algorithms needs to be protected and anonymized. If an AI model can operate without PII (Personally Identifiable Information), it’s best to remove it, ensuring that decisions are not biased by PII data points such as gender, race, or ZIP code.
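One simple way to enforce that principle is to strip PII fields from candidate records before they ever reach a matching model. The sketch below is illustrative, and the field names are assumptions, but it shows the shape of the safeguard:

```python
# Hypothetical sketch: remove PII fields from a candidate record before
# it reaches a matching model, so protected attributes cannot drive
# decisions. The field names here are illustrative assumptions.

PII_FIELDS = {"name", "gender", "date_of_birth", "zip_code", "photo_url"}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with all PII fields removed."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "zip_code": "10001",
    "skills": ["python", "sql"],
    "certifications": ["AWS Solutions Architect"],
}
print(anonymize(candidate))
```

Only the skills and certifications, the verifiable data points the matches should be based on, survive the filter.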
If you partner with a company for AI-powered hiring tools, it’s critical that they build data integrity checks into the tool’s regular code development and review cycles. In addition, the company should regularly engage independent experts to run penetration tests and vulnerability scans of the code and operating environments. Being ISO 27001 (Information Security) certified is an additional measure that proves a company is adhering to the highest security standards in the industry.
Machine Learning and AI-powered hiring tools are revolutionizing recruitment and applicant sourcing—making it easier and faster to find the right employees for open positions. However, unintentional biases can make their way into AI systems and lead to unintended outcomes. When you’re responsible for the output of your AI-powered tools, it’s important to know what goes into them.