Tools for trustworthy artificial intelligence can play a vital role in managing the known risks of using AI in recruitment and hiring, and in building trust, according to a guide published by the UK government's Department for Science, Innovation and Technology (DSIT).

The Responsible AI in Recruitment guide identifies potential ethical risks of using AI in hiring — including perpetuating existing biases, digital exclusion, and discriminatory job advertising and targeting — and outlines how assurance mechanisms and global technical standards can enable organizations to evaluate the performance of AI systems, manage risks and ensure compliance with statutory and regulatory requirements. 

The guide is aimed at organizations seeking to procure and deploy AI systems in their recruitment processes. Written for a non-technical audience, the guide assumes a minimal understanding of AI and data-driven technologies and is appropriate for organizations with or without a comprehensive AI strategy. 

According to the guide, all stages in the recruitment process, including sourcing, screening, interviewing and selection, carry a risk of unfair bias or discrimination against applicants. 

“Additionally, inherent to these technologies is a risk of digital exclusion for applicants who may not be proficient in, or have access to, technology due to age, disability, socio-economic status or religion,” the guidance states. 

The guidance outlines a range of considerations that all organizations seeking to procure and deploy AI in recruitment should weigh. Alongside these, it outlines mechanisms that may be used to address the concerns, actions or risks those considerations surface.

“As AI becomes increasingly prevalent in the HR and recruitment sector, it is essential that the procurement, deployment and use of AI adheres to the UK government’s AI regulatory principles, outlined in ‘A pro-innovation approach to AI regulation,’” according to the guidance. 

These principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. 

APSCo, which represents the professional recruitment sector globally, worked closely with DSIT to develop the guidance. Tania Bowers, global public policy director at APSCo, said in a press release that AI has significant potential in the staffing sector, adding, “As with any new tools, there are inherent risks that must be mitigated against, and clear guidance such as this has a crucial role to play.”

Bowers noted the timeliness of the guidance, given the passage of the European Union’s Artificial Intelligence Act earlier this month. “While there will be an implementation period, this new regulation will impact any recruiters with operations in or who provide services into the EU,” she said.
