Navigating AI Bias in Hiring: Strategies for Fair and Inclusive Recruitment

Bias and discrimination can sneak into your hiring process in various ways. Here are some key areas to watch out for, along with tips on how to reduce their negative impact.

AI hiring tools can automate nearly every aspect of the recruitment and hiring process, significantly easing the workload on HR teams.

However, they can also magnify inherent biases against protected groups that may be underrepresented in the company. Hiring bias can shrink the pool of potential candidates and expose companies to the risk of non-compliance with government regulations.

Amazon was an early adopter of AI to enhance its hiring practices. Despite deliberately filtering out information about protected groups, the company discontinued the program in 2018 after discovering bias in its hiring recommendations.

What is AI bias, and how can it impact hiring and recruitment?

AI bias refers to the potential for AI and data analytics tools to perpetuate or exacerbate bias. This can manifest in various ways, such as favoring one gender, race, religion, or sexual orientation over others. Another form of bias may involve discrepancies in job offers that widen income disparities among different groups. The predominant source of bias in AI stems from the historical data used to train the algorithms. Even when teams attempt to disregard specific differences, AI can unintentionally learn biases embedded in the data.

“Every system that involves human beings is biased in some way because, as a species, we are inherently biased,” Donncha Carroll, a partner at Axiom Consulting Partners who leads its data science center of excellence, stated. “We make decisions based on what we have seen, what we have observed, and how we perceive the world to be. Algorithms are trained on data, and if human judgment is captured in the historical record, there is a real risk that bias will be enshrined in the algorithm itself.”

Bias can also enter the equation through the algorithm’s design and how individuals interpret the outputs of AI. It can stem from incorporating data points like income that correlate with biased results or from a lack of sufficient data on successful individuals from underrepresented groups.

Examples of AI bias in hiring

Bias can be introduced at various stages of the hiring process, including sourcing, screening, selection, and making offers.

In the sourcing stage, AI might help hiring teams decide where and how to post job announcements. Some sites may attract more diverse candidates than others, potentially biasing the pool of potential applicants. 

AI could also suggest language for job postings that might unintentionally appeal more to certain groups. For example, if a sourcing tool notices that recruiters contact candidates from certain platforms more frequently, it might prioritize posting more ads on those sites, which could amplify existing biases. Additionally, teams might unknowingly use coded language, like “ambitious” or “confident leader,” which tends to resonate more with privileged groups.

In the screening phase, tools like resume analytics or chat applications might use certain criteria to filter out candidates, which could indirectly affect protected classes. For example, they might consider factors like employment gaps, which the algorithm might associate with lower productivity or success. Other tools might analyze candidates’ performance in a video interview, which could inadvertently favor certain cultural groups. 

This could happen if, for instance, the algorithm favors candidates who excel in sports more commonly played by specific genders or races, or who participate in extracurricular activities more often associated with wealthier individuals.

In the selection phase, AI algorithms may favor certain candidates over others based on biased metrics. 

After a candidate is selected, the firm needs to make a hiring offer. AI algorithms can analyze a candidate’s previous job roles to make an offer that the candidate is likely to accept. These tools can inadvertently exacerbate existing disparities in starting and career salaries across gender, racial, and other differences.

Government and legal concerns

Government organizations such as the U.S. Equal Employment Opportunity Commission (EEOC) have typically targeted deliberate bias under civil rights laws. Recent EEOC data indicates significantly higher jobless rates for Black, Asian, and Hispanic workers than for white workers. Additionally, notably more job seekers aged 55 and over are long-term unemployed compared with younger individuals.


Last year, the EEOC and the U.S. Department of Justice released new guidance on reducing AI-based hiring bias. They recommended that companies teach AI to ignore irrelevant variables in job performance and be aware that issues can still arise despite precautions. These steps are crucial as using AI in hiring without proper oversight can lead to legal consequences.

EEOC chair Charlotte Burrows has suggested that companies actively challenge, audit, and question AI hiring software to validate that it works as intended.
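
One practical way to act on that advice is a periodic adverse-impact audit of the tool’s own decisions. The sketch below is illustrative only: it assumes you can export screening outcomes alongside self-reported demographic data, and it uses the informal four-fifths rule (an impact ratio below 0.8) as the flag threshold.

```python
# Minimal audit sketch: compare selection rates across groups in a tool's
# screening decisions; flag groups below the 0.8 "four-fifths" threshold.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) tuples, e.g. ("group_a", True)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical exported decisions from a screening tool
decisions = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
          + [("group_b", True)] * 20 + [("group_b", False)] * 80

rates = selection_rates(decisions)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

Running an audit like this on a recurring schedule, rather than once at purchase, is what turns “challenge and question the software” into a repeatable practice.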

Four ways to mitigate AI bias in recruiting

Here are some key ways organizations can reduce AI bias in their recruiting and hiring practices.

Incorporate human oversight

Pratik Agrawal, a partner in the analytics practice of Kearney, a global strategy and management consulting firm, suggests that companies should include humans in the decision-making process rather than relying solely on automated AI systems. For instance, hiring teams could develop a process to balance the different types of resumes that the AI system processes. It’s also crucial to manually review the system’s recommendations. Agrawal advises, “Constantly introduce the AI engine to more varied categories of resumes to ensure bias does not get introduced due to lack of oversight of the data being fed.”
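
As a rough illustration of what “humans in the loop” can mean in practice, the sketch below routes borderline scores and a random sample of automated rejections to a manual review queue, so reviewers regularly see what the engine is filtering out. The thresholds and function names are hypothetical, not taken from any specific tool.

```python
# Illustrative human-in-the-loop gate: send borderline scores and a random
# share of auto-rejections to manual review instead of trusting the AI alone.
import random

REVIEW_SAMPLE_RATE = 0.10       # audit 10% of auto-rejections at random
BORDERLINE_BAND = (0.45, 0.55)  # scores near the cutoff always get a human look

def route(candidate_id, ai_score, cutoff=0.5):
    """Return 'advance', 'human_review', or 'reject' for one candidate."""
    if BORDERLINE_BAND[0] <= ai_score <= BORDERLINE_BAND[1]:
        return "human_review"
    if ai_score >= cutoff:
        return "advance"
    # Randomly sample auto-rejections so reviewers can spot systematic misses.
    if random.random() < REVIEW_SAMPLE_RATE:
        return "human_review"
    return "reject"

for cid, score in [("c1", 0.82), ("c2", 0.48), ("c3", 0.21)]:
    print(cid, route(cid, score))
```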

Exclude biased data elements

Identifying data elements that introduce inherent bias is crucial. “This is a complex task that requires careful consideration when selecting the data and features to include in the AI model,” Carroll advised. When contemplating adding a data point, consider whether it tends to be more or less prominent in a protected class for reasons unrelated to performance. Just because chess players excel as programmers doesn’t mean that non-chess players couldn’t possess equally valuable programming talent.
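
One way to operationalize that advice is a simple feature screen: before a data point goes into the model, check whether it tracks protected-group membership more strongly than it tracks performance. The column names and toy data below are assumptions for illustration, not a production-grade fairness test.

```python
# Sketch of a feature screen: flag candidate features that correlate with
# group membership more strongly than with job performance.
import pandas as pd

df = pd.DataFrame({
    "group":          [0, 0, 0, 1, 1, 1, 0, 1],  # protected-class indicator
    "performance":    [3.2, 4.1, 3.8, 4.0, 3.5, 4.2, 3.9, 3.7],
    "employment_gap": [0, 0, 1, 1, 1, 1, 0, 1],  # candidate features under review
    "chess_club":     [1, 1, 0, 0, 0, 1, 1, 0],
})

for feature in ["employment_gap", "chess_club"]:
    corr_group = abs(df[feature].corr(df["group"]))
    corr_perf = abs(df[feature].corr(df["performance"]))
    if corr_group > corr_perf:
        print(f"{feature}: tracks group ({corr_group:.2f}) more than performance "
              f"({corr_perf:.2f}) -> consider excluding")
```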

Prioritize representation of protected groups

Ensure that protected groups, which may currently be underrepresented in the workforce, are adequately represented in the AI model. “This will help prevent the AI model from encoding patterns that institutionalize past practices,” Carroll advised. He cited an example where a client assumed that having a degree was crucial for success. However, after removing this assumption, the client found that non-degree holders not only performed better but also had higher retention rates.
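
A minimal sketch of one common remedy, assuming you have labeled historical data: oversample the underrepresented group so it carries comparable weight when the model is trained. The data and column names here are hypothetical.

```python
# Rebalancing sketch: oversample underrepresented groups before model training.
import pandas as pd

train = pd.DataFrame({
    "group": ["a"] * 90 + ["b"] * 10,   # group b is underrepresented
    "hired": [1, 0] * 45 + [1, 0] * 5,
})

target = train["group"].value_counts().max()
balanced = pd.concat(
    [g.sample(target, replace=True, random_state=0)
     for _, g in train.groupby("group")],
    ignore_index=True,
)
print(balanced["group"].value_counts())  # each group now has 90 rows
```

Resampling is only one option; reweighting records or collecting more data from underrepresented groups can achieve the same goal with fewer duplicated rows.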

Define success metrics

Define success criteria for each role, focusing on objective outcomes such as increased output or reduced rework, which are less susceptible to human bias. Using a multivariate approach can help eliminate bias that may be present in traditional measures like performance ratings. This approach can provide a clearer understanding of the traits to seek in new hires.
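
As a sketch of what a multivariate success metric might look like (the metrics and values below are assumptions), each objective outcome is standardized and averaged into a composite score, rather than relying on a single subjective rating.

```python
# Multivariate success score: combine several objective outcomes instead of
# depending on one performance rating that may carry human bias.
import pandas as pd

employees = pd.DataFrame({
    "output_per_week":  [42, 55, 38, 61],
    "rework_rate":      [0.08, 0.03, 0.12, 0.05],  # lower is better
    "retention_months": [14, 30, 9, 26],
})

# z-score each metric, flipping the sign where lower values are better
z = (employees - employees.mean()) / employees.std()
z["rework_rate"] *= -1
employees["success_score"] = z.mean(axis=1)
print(employees.sort_values("success_score", ascending=False))
```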

Conclusion

AI hiring tools can streamline recruitment processes but may also amplify biases. Amazon’s AI-based hiring program was discontinued due to biases. AI bias can stem from historical data, algorithm design, and interpretation of outputs. Examples include biased job postings and resume screening. Government agencies like the EEOC are concerned about AI bias and recommend human oversight and excluding biased data elements. Prioritizing the representation of protected groups and defining success metrics can also mitigate bias.

Source: https://www.techtarget.com/searchhrsoftware/tip/AI-hiring-bias-Everything-you-need-to-know
