Blog & News | backgroundchecks.com

Can Artificial Intelligence Take Bias Out of the Hiring Equation?

Written by Michael Klazema | Jan 30, 2020 5:00:00 AM

One of the trendiest technologies today, artificial intelligence seems to be everywhere. Tech giants say that it powers web search, helps recommend media that you'll enjoy, finds better road directions and airfare, and has the potential to do even more. AI is also in use as a tool to help employers determine which applicants are best suited to move on to an in-person interview. With many startups pouring cash and development time into AI models for hiring, proponents say that these efforts will lead to the first neutral, bias-free hiring process.

Traditional hiring is fraught with bias, from preferences about an applicant's educational background to judgments about his or her criminal past following employee background checks. Such biases could mean missing out on some of the best candidates. Allowing AI to make evaluations and judgments about the many parts of an applicant's profile could enable these candidates to float to the top.  

Concerns over these new tools arose quickly. Some early attempts to legislate controls over the use of AI in hiring emerged at both the state and federal level. Employers in Illinois must receive an applicant's consent if they plan to use a video interview with corresponding AI facial analysis. Other proposed legislation would oversee the control of biometric data gathered by employers and, in some cases, encourage the adoption of AI systems to reduce bias in hiring.

Some companies, such as Uber and Lyft, already use AI-powered employee background checks to pull criminal history data from multiple sources into one report.  

Pitfalls remain. Artificial intelligence, a misnomer for a process more accurately described as machine learning, is still a very young technology. Not only is there a great deal of uncertainty about how to create systems that produce reliable results, but hiring AIs could also introduce even more bias into hiring.

Critics point to well-reported examples of AI implementations that have exhibited ingrained or accidental biases against women and minorities. Machine learning is only as good as the information that it uses to learn, and it isn't uncommon for algorithms to produce unintended or even unexplainable results. Some applicants may also be uncomfortable rolling the dice with an AI that they do not understand, and they could seek employment elsewhere.

While employers should not discount the potential benefits of using technology to streamline hiring, they should also be wary of over-reliance on black-box technology. It won't be the algorithm held responsible for discriminatory actions, and it likely won't be the developer, either. As AI continues to develop, it is crucial to look beyond the buzzwords and evaluate the real value that these tools may bring to the table when it's time to invest in talent acquisition.