The concept of “responsible AI” is central to a growing resistance against the unregulated use of automation in the US hiring industry.

After unsuccessfully applying to almost 100 jobs through the human resources platform Workday, Derek Mobley noticed a peculiar trend. 

“I would receive all these rejection emails at 2 or 3 in the morning,” he told Thomson Reuters Foundation. “I suspected it was automated.”

Mobley, a 49-year-old Black man with a finance degree from Morehouse College in Georgia, had previously worked as a commercial loan officer and held other finance-related positions.

He applied for mid-level positions across industries such as energy and insurance, but despite applying through the Workday platform, he received no interviews or callbacks. To make ends meet, he often resorted to gig work and warehouse shifts.

Mobley suspects that he faced discrimination from Workday’s AI algorithms. 

In February, he initiated a class action lawsuit against Workday Inc., claiming that the repeated rejections he and others faced indicated the use of an algorithm that discriminates against Black individuals, those with disabilities, or those over 40 years old.

Workday told the Thomson Reuters Foundation that Mobley’s lawsuit lacked factual allegations, and the company emphasized its commitment to “responsible AI.”

The debate over the definition of “responsible AI” is central to a growing resistance against the widespread use of automation in the US hiring industry.

Mobley’s legal case, currently progressing through California’s judicial system, is just one part of a larger conflict surrounding automation in work environments.

In the United States, both state and federal officials are struggling to determine how to regulate the use of AI in hiring practices and prevent algorithmic bias.

Recent surveys indicate that approximately 85% of major American employers, including nearly all Fortune 500 companies, utilize some type of automated tool or AI to assess and rank job applicants.

These tools include resume-screening software that automatically scans and scores applicants’ resumes, assessment tools that gauge an applicant’s fit for a position through online tests, and facial- or emotion-recognition technology that analyzes video interviews.

In May, the Equal Employment Opportunity Commission (EEOC), the federal agency responsible for enforcing workplace civil rights laws, issued new guidelines to assist employers in preventing discrimination when employing automated hiring procedures.

In August, the EEOC settled its first case involving automation, fining iTutorGroup $365,000 for using software to automatically reject applicants aged 40 and above. The company, which offers English-language tutoring to students in China, denied any wrongdoing as part of the settlement.

City and state officials are also participating in the discussion.

A new law governing AI use in hiring was enacted in New York City in July, and legislators in several states, including California, Vermont, and New Jersey, are advancing similar bills.

“At present, the landscape resembles the Wild West,” remarked Matt Scherer, an attorney at the Center for Democracy and Technology (CDT), a nonprofit organization focused on advocating for civil rights in the digital era. “However, this is expected to evolve.”

Algorithmic Blackballing

Concerns about bias stem from how AI works: it emulates human intelligence using algorithms, data, and computational models. Such systems depend on “training data,” and if that data contains bias, often reflecting historical patterns, an AI system can learn and perpetuate it.

In 2018, Amazon discontinued an AI resume-screening tool that had begun to automatically penalize applicants whose CVs included the term “women’s,” as in “women’s chess club captain.”

The tool had been trained on a decade of resumes, most of them submitted by men, reflecting the industry’s male-dominated workforce.

Brad Hoylman-Sigal, a New York state senator, is concerned about this type of discrimination. In August, he proposed a bill that would mandate audits of hiring tools and prohibit certain data-collection practices, such as the use of emotion-recognition software.

“Numerous studies have shown that many of these tools excessively intrude on employees’ privacy and exhibit bias against women, individuals with disabilities, and people of color,” he stated.

Ifeoma Ajunwa, who heads the AI and the Law program at Emory University, notes that job seekers frequently lack the option to opt out of automated hiring procedures.

She has cautioned against the risk of “algorithmic blackballing,” in which hiring systems repeatedly reject an applicant based on undisclosed criteria.

She also urged the Federal Trade Commission (FTC) to intervene and prohibit specific types of automated hiring tools.

In April, the FTC and three other federal agencies, including the EEOC, issued a statement indicating their scrutiny of potential discrimination originating from datasets used to train AI systems and the opaque “black box” models that complicate efforts to ensure they are unbiased.

Some proponents of AI recognize the potential for bias but argue that it can be mitigated. 

Frida Polli, co-founder and former CEO of pymetrics, a company that develops AI-driven assessment tools, suggested that developers can adjust the factors considered by an automated system, a capability not available with the human brain.

CDT’s Scherer is skeptical.

“The industry claims these tools can enhance diversity, but I believe there’s a significant contradiction,” he stated. “In essence, you’re simply automating the process of human bias in hiring.”

Taming The Tech

Lawmakers like California state assembly member Rebecca Bauer-Kahan are concerned about this issue. She proposed a bill this year that would permit job applicants to opt out of automated hiring platforms and mandate these platforms to undergo fairness audits.

Her proposed legislation, AB331, would have also simplified the process for private individuals to file lawsuits against hiring platforms if they believe there is bias.

That last provision proved the biggest sticking point. Business and technology groups, including those represented by California’s Chamber of Commerce, objected to the private right of action, among other concerns.

The bill did not advance out of the assembly, but Bauer-Kahan intends to reintroduce a revised version in the upcoming session in December.

“The federal government isn’t very active currently,” she remarked. “States will likely need to take the lead.”

New York City is at the forefront. In July, it became the first place in the country to enact a law specifically aimed at regulating algorithms in hiring.

According to the law, individuals can request notification if they are evaluated by automated tools. Additionally, hiring software that uses AI to select or reject candidates must undergo audits to detect racist or sexist biases.

However, numerous digital privacy advocates argue that the law’s scope is limited. It only covers AI hiring tools that significantly aid or replace human efforts, and it fails to address biases that could impact disabled applicants.

Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU), expressed significant concern about regulatory efforts that he views as weakened or diluted.

“Some proposals would require harmed applicants and employees to prove that an algorithm directly caused discrimination, when many algorithmic hiring tools’ real harm is their strong influence on human decision-making,” he said.

“Other proposals would give employers a second bite at the apple to come up with non-discriminatory hiring practices, rather than giving applicants harmed by discriminatory technology their day in court.”

As regulations strive to adjust to the widespread use and complexity of AI in recruitment, Mobley hopes his lawsuit will at least shed light on the extent of algorithmic bias.

“I know I’m not the only one,” he said. “There are a lot of people out there, applying for jobs they were probably qualified for … but (who) are being unfairly discriminated against.”
