Anticipate increased scrutiny of your use of AI in talent acquisition from various stakeholders, including regulators, legislators, candidates, and employees. 

Although few cases so far have examined the potential for bias in machine-driven recruiting, the topic is gaining traction, prompting organizations to take action.

History in the Making

One of the first cases in this area, Mobley v. Workday, Inc., is currently underway. Anyone using or considering AI-driven algorithms in their recruiting process should follow it closely.

The lawsuit, brought by Derek Mobley, an African-American over the age of 40 with anxiety and depression, alleges that despite applying for over 100 positions since 2019 using Workday’s platform, he was rejected for each one. Mobley claims that Workday’s talent acquisition suite, which incorporates AI capabilities, exhibits bias based on race, age, and disability.

Workday contends that its software is not biased.

In January, a U.S. District Court judge dismissed the case, stating that she did not find sufficient evidence to support Mobley’s claim that Workday was acting as an employment agency, which is necessary for Title VII to apply. Last week, Mobley filed an amended complaint in an effort to address these shortcomings.

The revised complaint argues that employers effectively delegate their hiring decision-making authority to Workday, whose platform analyzes job application data to identify patterns that companies can consider.

The complaint read, “Because there are no guardrails to regulate Workday’s conduct, the algorithmic decision-making tools it utilizes to screen out applicants provide a ready mechanism for discrimination.”

In this case, Mobley stated that he was turned down for positions despite meeting or surpassing their requirements, sometimes within hours of submitting his application. 

New Technology, New Laws, More Litigation

Mobley’s case is not the first to draw attention to AI’s impact on talent acquisition. In 2023, the online tutoring company iTutorGroup settled an EEOC complaint alleging that it violated the Age Discrimination in Employment Act (ADEA) by programming its software to reject candidates based on age (55 for women, 60 for men). According to the complaint, the software screened out more than 200 applicants.

The problem came to light when a candidate’s application was rejected immediately after she included her real birth date. When she submitted an identical application the next day with a younger birth date, she was offered an interview.

In addition to paying a total of $365,000 to rejected applicants, iTutorGroup agreed to conduct training on the ADEA, Title VII, and other federal laws, review and revise its anti-discrimination policies, and establish a complaint process for candidates and employees.

HR professionals and attorneys anticipate an increase in similar cases, given the expanding use of AI in talent acquisition and the growing interest among agencies and legislatures in regulations that protect job seekers and their privacy. For instance, New York City enacted the first law regulating the use of AI in hiring, which took effect in July 2023. The law mandates annual audits of AI-based recruiting tools to verify that they are not producing discriminatory outcomes.

According to SHRM, many businesses are integrating AI into their HR practices. Approximately one-third of HR professionals who utilize AI for recruiting use it for tasks such as reviewing or screening resumes, automating searches, or communicating with candidates. Other surveys suggest that up to 80% of American employers incorporate AI at some stage of their hiring process.

The Forecast Calls for Fog

The EEOC has been monitoring the increasing trend of AI-based recruiting, cautioning employers that they can be held accountable if their recruiting software is found to discriminate against certain job seekers. However, this is a relatively new area, and only a few lawsuits regarding this issue have been filed thus far.

So far, regulations have typically targeted employers rather than the technology itself, favoring mandated transparency over direct regulation of algorithms. That means employers must shoulder most of the effort, along with increased compliance risk.

Under New York City’s law, any business that hires city residents must calculate and disclose “adverse impact ratios” showing how AI affects its hiring and employment decisions. Individual candidates can also opt out of AI-driven processes, and employers are required to honor those requests.
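For illustration only (not legal or compliance guidance): an adverse impact ratio is typically computed as one group’s selection rate divided by the selection rate of the most-selected group, so a ratio well below 1.0 flags a potential disparity. A minimal sketch with hypothetical numbers:

```python
def impact_ratios(selected, applied):
    """Compute adverse impact ratios per applicant group.

    selected: dict mapping group -> number of applicants selected
    applied:  dict mapping group -> number of applicants in that group
    Returns each group's selection rate divided by the highest
    group's selection rate (the most-selected group scores 1.0).
    """
    rates = {group: selected[group] / applied[group] for group in applied}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical applicant pool:
ratios = impact_ratios(
    selected={"group_a": 48, "group_b": 24},
    applied={"group_a": 100, "group_b": 80},
)
# group_a selects at 0.48, group_b at 0.30,
# so group_b's impact ratio is 0.30 / 0.48 = 0.625
```

The group names and counts here are invented; what a real audit must measure and disclose is defined by the law and its implementing rules, not by this sketch.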

This area has been gaining momentum. Illinois’ Artificial Intelligence Video Interview Act mandates that companies disclose the workings of their AI, inform applicants about its use, obtain their consent, and implement specific measures to safeguard privacy. Maryland has a comparable law, and similar measures are under consideration in California, New Jersey, Vermont, and Washington, D.C.

Employers face challenges in complying with these laws because several fundamental questions remain unanswered, and attorneys are wary of advising clients while so many aspects of AI remain undefined. Although the EEOC issued guidance last year on AI and discrimination, labor and employment attorneys are not fully satisfied with the clarity it provides.

“The EEOC guidance is quite broad and includes software and other types of tools that employers have been using for decades,” one attorney told Bloomberg. Another attorney mentioned that even basic sorting or filtering of candidates during the vetting process could fall under the EEOC’s definition of bias originating from technology.

However, some believe that the EEOC is not doing much more than addressing issues it has been concerned about for years.

“That’s just plain old unlawful,” one individual commented on the iTutorGroup case. “A, you don’t need a computer to do that. B, you don’t need a computer or artificial intelligence to tell you that that’s not okay.”

The takeaway: scrutiny of AI in talent acquisition is rising, early cases like Mobley v. Workday, Inc. and the iTutorGroup settlement show where the legal risk lies, and regulations such as New York City’s audit mandate are only the beginning. Employers integrating AI into their hiring processes should monitor legal developments closely and be prepared to demonstrate transparency in how their tools work.


