Organizations are increasingly turning to artificial intelligence and machine learning to help screen potential employees and more efficiently manage their human capital — including contingent workers. And while many expect AI systems will help eliminate potential bias that humans may inadvertently insert into the process, that is not always the case; in fact, AI-managed systems can do the opposite and create bigger bias problems.

Hence, it is important for contingent workforce managers to stay informed and pay close attention to their systems to avoid potential glitches that may insert unintended biases into their human capital decisions.

Potential Risk

SIA Research Analyst Francesca Profeta addressed the topic of AI bias in her 2021 report, “Diversity, Equity & Inclusion in the Contingent Workforce.” Without proper oversight, algorithms can reflect biases just as much as their fallible human counterparts, according to Profeta.

“One of the so-called advantages of artificial intelligence in recruitment is that machine learning can be used to eliminate human bias,” the report states. “But with so many players entering this market, all claiming to offer the latest cutting-edge AI machine learning technology, the question remains, how much reliance can be placed on machine learning to eliminate unconscious bias?”

SIA Executive Director of Global Research John Nurthen also discusses AI bias in his “Staffing Trends 2022” report, which highlights seven trends that will impact the staffing industry this year. As automation continues to grow across talent acquisition technology, the chances of another algorithm scandal, perhaps one with serious legal ramifications, increase, according to the report. Regulation is now generally considered necessary both to encourage AI and to manage the associated risks. For employers, these developments will likely escalate both the risk and cost of an AI failure.

“Increased deployment of AI in HR-related contexts is inevitable, but with it comes increased risk,” Nurthen wrote. “So, the trend highlighted here isn’t that we’ll be seeing more use of AI; it’s that we’ll see more AI going wrong.”

“Workforce managers should be aware of all of the places in their recruiting programs where AI is being used,” says Susannah Shattuck, head of product at Credo AI. “Anywhere that AI is being used to filter résumés or directly evaluate candidates during the recruitment process, workforce managers should be wary of bias — especially if the system has been trained on ‘historical data’ that is reflective of past and current workforce inequity.”

Credo AI, which released its products last year, provides AI governance and risk management technology that identifies unacceptable bias across the machine learning development lifecycle and enables organizations to build and deploy AI with high ethical standards. Credo AI’s intelligent SaaS enables enterprises to measure, monitor and manage AI-introduced risks.

“While there is great potential, there is also tremendous risk if these technologies aren’t developed and deployed responsibly, risk that often stems from organizations lacking understanding of their limitations,” Shattuck says. “These risks can vary on a spectrum from potential bias and discrimination to privacy concerns.”

The top-of-mind issue for Credo AI is bias — systematically prioritizing one group over another — which could lead to unfair outcomes for the applicants.

Bias in AI-based Recruitment Tools

Still, bias, in the general sense, isn’t necessarily a bad thing when it comes to AI. “It is a feature, not a bug,” Shattuck explains.

Algorithms and AI systems are powerful pattern-recognition tools: they can evaluate vast amounts of data far more quickly than a human ever could, while identifying subtle patterns that a human might miss. When an AI system is trained to recognize a specific pattern (for example, patterns in the CVs and résumés of candidates who went on to pass the interview process), it is deliberately biased toward that pattern by its human developers.
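To make that mechanism concrete, here is a minimal, hypothetical sketch of such a screener: a bag-of-words model fit on a few invented résumé snippets labeled by whether the candidate passed the interview in the past. The data and the scikit-learn pipeline are illustrative assumptions, not any vendor’s actual system.

    # A toy illustration (not any vendor's actual product) of how a screener
    # "learns" patterns from historical hiring outcomes.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented résumé snippets and historical interview outcomes (1 = passed).
    resumes = [
        "python data analysis lacrosse club captain",
        "java backend services hiking club",
        "customer service retail scheduling",
        "python machine learning robotics team",
    ]
    passed_interview = [1, 1, 0, 1]

    # Fit a simple bag-of-words classifier on the historical outcomes.
    screener = make_pipeline(CountVectorizer(), LogisticRegression())
    screener.fit(resumes, passed_interview)

    # The model now scores new résumés by their resemblance to past "successes,"
    # including any demographic proxies hidden in the historical data.
    print(screener.predict_proba(["python lacrosse data"])[0][1])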

Bad bias, in the context of AI, is unintentional bias that gets baked into the system during development, often through its training data, Shattuck explains. Using the CV reviewer system as an example, if the algorithms that make up this system are only ever trained on résumés from white men, they will become unintentionally biased against résumés from other groups, with the result that those groups are declined interviews at much higher rates than white male candidates.
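The kind of disparity Shattuck describes can be checked with a basic audit of a screening system’s decisions. The sketch below, with invented group labels and outcomes, computes selection rates by group and applies the common four-fifths heuristic; a real audit would use the system’s actual outputs and legally relevant group definitions.

    # A minimal, hypothetical audit of group-level selection rates using the
    # "four-fifths rule" heuristic; the decisions below are invented.
    from collections import defaultdict

    # (group, advanced_to_interview) pairs from a hypothetical screener.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    counts = defaultdict(lambda: {"advanced": 0, "total": 0})
    for group, advanced in decisions:
        counts[group]["total"] += 1
        counts[group]["advanced"] += int(advanced)

    # Selection rate per group, compared against the highest-rate group.
    rates = {group: c["advanced"] / c["total"] for group, c in counts.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "potential disparate impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, ratio to best {ratio:.2f} -> {flag}")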

If AI developers do not remove unintentional bias at each step of the development process, the resulting algorithms and AI systems may exhibit some of the same harmful biases that humans would.

“We live in a biased world with biased historical data; if we train our AI on this data without thinking critically about how to ensure these systems are fair, we run the risk of hard-coding human prejudice into our technology,” says Shattuck.

Part two of this article will explore actions being undertaken to provide oversight in this arena, both from companies themselves and from the US government.
