One of the main reasons organizations are increasingly turning to artificial intelligence tools in recruitment is their potential to advance diversity, equity and inclusion by removing human bias from the initial vetting process.

However, there are many questions and concerns about AI's actual effectiveness in this area. In fact, in a paper published in Philosophy & Technology, two professors from the University of Cambridge's Centre for Gender Studies argued that AI tools "may entrench cultures of inequality and discrimination by failing to address systemic problems within organizations."

Such concerns could erode trust in these tools. Contingent workforce program managers should understand how these tools can unfairly screen out qualified applicants.

Francesca Profeta, a research analyst at SIA, describes how the use of these tools needs to evolve.

“Despite technological advancements and progression made within AI, a ‘one-size-fits-all’ recruitment process is unlikely to be successful when considering candidates from a diverse talent pool,” Profeta says. “Regardless of the sophistication of said technology, the outcome is only as good as the data on which the algorithm is being trained.”

Profeta points to World Economic Forum data showing that about 78% of global professionals with AI skills are male. "For the solution to be completely unbiased, it must incorporate diverse perspectives to mitigate these forces." She adds that AI tools might favor certain groups based solely on sex, gender or age, and that other groups may be negatively affected by their use as well. "We are losing sight of physical and non-visual disabilities," she says. "Would AI make reasonable adjustments for candidates during the interview process or provide reassurance or advice as a great recruiter would?"

AI should be a supportive tool in the process, but people still need to be involved, says Ben Schiller, senior marketing manager at ConverzAI, an AI-based candidate engagement platform. “Organizations should maintain human decision-making in their recruitment processes, while AI technologies exist to support those decisions with factual and current data.”

The Contingent Connection

CW program managers whose programs or staffing providers engage AI in their processes should “learn where AI is being implemented and how it interacts with candidates or operates on candidate data,” Schiller advises. “Adopt AI technologies that are built to prioritize candidate experience and that do not have any biases.”

To address some of these concerns, governments and organizations around the world are developing regulatory frameworks and codes aimed at ensuring recruitment processes are fair and nondiscriminatory.

Contingent workforce program managers who plan to implement AI tools into their systems should keep up to date on existing and developing regulatory frameworks.

For example, the World Employment Confederation announced last month that its members have agreed to a set of principles to guide the deployment of AI in the recruitment and employment industry. The “Code of Ethical Principles in the Use of Artificial Intelligence” is a living set of principles that will be adapted as AI evolves.

“Fairness, nondiscrimination, diversity, inclusiveness and privacy — principles that WEC members also abide in their overall practice of HR services — are also principles to be followed to guarantee ethical use of AI in recruitment and employment,” the WEC states. “As for the principles enshrined in WEC’s overall Code of Conduct, WEC members have a duty to apply those ethical principles in their use of AI.”

John W. Healy, VP and chair of the taskforce on digitalization at the WEC, says, “As there is a variety of governance frameworks related to AI-based systems, we began by seeking the collective guidance of our membership, both amongst the global corporate members of the World Employment Confederation as well as the individual National Federations within our membership.

“From there, we extended the conversation to include the position of a wide array of our commercial partners (many of whom operate within the HR technology and education technology sectors) as well as the many national and international policy-making organizations who also are exploring the role of AI in the process of connecting individuals with work,” Healy adds.

Meanwhile, governments around the world are also developing new laws to address the use of AI in employment.

Some of these new laws and proposals focus on candidate screening tools, with guidance deeming them high risk and requiring a conformity assessment before they can be used.

The next issue of CWS 3.0 will discuss regulations being developed around the world, including in the US, EU and China.
