By: Leah Maurer, Member Candidate, J.D. Candidate, May 2020, St. Thomas University School of Law.

Artificial intelligence (“AI”) is being used increasingly by human resource departments and in employment decisions. AI can be involved from the start of the process, finding job applicants and sorting through resumes, through its most recent use: conducting job interviews and screening applicants through video and voice analysis. While the objective is to eliminate or reduce bias, companies need to be aware of the legal implications that might arise in doing so.

An AI system can help reduce employment discrimination claims. However, employers need to ensure the algorithms used in AI draw on a complete and comprehensive data set; otherwise, the result could reproduce the very biases it was intended to keep out. Algorithms used in AI function off of the information inputted into them, so our own subconscious bias can end up incorporated into the algorithm if it is trained on an incomplete data set. Common errors and oversights in gathering data and developing the AI algorithm could have costly and detrimental legal implications.

One up-and-coming tool companies are utilizing is video analysis, which uses a video recording to screen applicants and rate them before a recruiter ever sees them. The AI algorithm evaluates the applicant’s performance based on body language, enthusiasm, eye contact, and how well they match up with the requirements of the job. Voice-recognition software and video analysis raise particular issues with disability and ethnicity/national origin compliance. People with a disability or a non-native accent could consequently score lower than an interviewee without such a condition. Therefore, employers that use AI technology in conducting interviews or screenings need to make sure they follow the employment laws that protect these categories and ensure that protected groups are not negatively impacted by the technology.

Another tool employers are using to sort through job applicants is automated searching of the entire web, including social media accounts, analyzing content users have posted going back many years. Companies must ensure they are complying with the Fair Credit Reporting Act (FCRA) when utilizing such a strategy. The FCRA can apply to employment decisions and allows employees to challenge the accuracy of information collected about them. Compliance also includes obtaining the consumer’s consent, sharing the results with the consumer, and giving them an opportunity to respond.

AI algorithms can be helpful in complying with employment laws and avoiding disparate impact claims; however, employers need to use caution when deploying them. The use of AI algorithms in employment decisions is bound to continue, as it increases efficiency and productivity in recruiting and hiring. Companies need to be aware of the legal implications that could come along with using AI algorithms and protect themselves accordingly.