As organizations worldwide look to implement AI tools in their recruitment processes, contingent workforce program managers should stay up to date on enacted and pending AI regulations.

A 2022 report by SIA, “Using AI: Risks and Challenges,” highlighted that globally, national and local governments have begun to adopt strategies and issue guidelines for the ethical use of AI, noting that these will soon be followed by legislation, with the EU and China leading the way.

Up-to-date information on AI regulation and policy worldwide will enable CW program managers to identify potential areas of risk that might arise when using AI-enabled technology.

Here are some recent regulatory and policy actions on AI and AI-driven discrimination and bias taken by governments and organizations worldwide.

North America

United States. Earlier this year, the US Equal Employment Opportunity Commission turned its attention to AI tools. The draft Strategic Enforcement Plan covering fiscal years 2023-2027 states that the EEOC will focus on recruitment and hiring practices and policies that discriminate against racial, ethnic and religious groups; older workers; women; pregnant workers and those with pregnancy-related medical conditions; LGBTQI+ individuals; and people with disabilities.

These include the “use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.”

Furthermore, the plan states that the agency will also focus on “screening tools or requirements that disproportionately impact workers based on their protected status, including those facilitated by artificial intelligence or other automated systems, pre-employment tests and background checks.”

The EEOC has also issued guidance noting how the use of AI in the recruitment process could lead to discrimination.

In 2022, the White House also published the “Blueprint for an AI Bill of Rights,” a framework with a set of principles to help guide the use of automated systems and to protect the public. It includes guidance on algorithmic discrimination protections.

At the federal level, the Algorithmic Accountability Act of 2022 was introduced in both houses of Congress in February of that year. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed legislation would direct the Federal Trade Commission to create regulations requiring “covered entities,” including businesses meeting certain criteria, to perform impact assessments when using automated decision-making processes, specifically including those derived from AI or machine learning.

Elsewhere, in a notable recent development, New York City’s Local Law 144 requires employers and employment agencies that use automated employment decision tools (AEDTs) within the city to subject those tools to independent bias audits and to notify candidates and employees at least 10 business days before an AEDT is used. The city’s Department of Consumer and Worker Protection finalized its implementing rules earlier this month, with enforcement scheduled to begin July 5.
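
The centerpiece of a Local Law 144 bias audit is the “impact ratio”: the selection (or scoring) rate for each demographic category divided by the rate of the most favored category. As a rough illustration of that arithmetic only, here is a minimal Python sketch; the sample data and category names are hypothetical, and an actual audit must be performed by an independent auditor across the categories the DCWP rules specify.

```python
# Minimal sketch of the "impact ratio" calculation behind NYC Local Law 144
# bias audits: each category's selection rate divided by the selection rate
# of the most-selected category. Sample data and category names are
# hypothetical; a real audit must be conducted by an independent auditor.
from collections import defaultdict

# Hypothetical AEDT outcomes: (demographic category, selected by the tool?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for category, was_selected in outcomes:
    total[category] += 1
    selected[category] += int(was_selected)

# Selection rate per category, then the impact ratio versus the highest rate.
rates = {c: selected[c] / total[c] for c in total}
highest = max(rates.values())
for category in sorted(rates):
    ratio = rates[category] / highest
    # Ratios well below 1.0 -- e.g., under the EEOC's informal 0.8
    # "four-fifths" benchmark -- flag potential adverse impact.
    print(f"{category}: selection rate {rates[category]:.2f}, impact ratio {ratio:.2f}")
```

In this toy data set, group_b’s impact ratio of roughly 0.33 falls far below the informal four-fifths (0.8) benchmark long used by the EEOC, which is the kind of disparity such an audit is meant to surface.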

Canada. In June 2022, the government of Canada proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act. The proposed act would set the foundation for the responsible design, development and deployment of AI systems that impact the lives of Canadians. Under AIDA, businesses would be held responsible for the AI activities under their control. They would be required to implement new governance mechanisms and policies that consider and address the risks of their AI systems and give users enough information to make informed decisions.

The bill is currently at second reading in the House of Commons.

Europe

European Union. The AI Act, a proposed European law that would apply across all 27 member states, assigns applications of AI to four risk categories.

  1. Systems posing unacceptable levels of risk are prohibited from being made available on the EU market. These include systems that deploy subliminal techniques or exploit the vulnerabilities of specific groups, systems used for social scoring, and real-time biometric identification systems used by law enforcement in public places.
  2. High-risk systems would be permitted only if they meet strict requirements, including risk management, data governance, documentation, human oversight and conformity assessment. Notably for CW programs, the proposal expressly classifies AI used in employment and worker management, such as tools that screen or filter job applications and evaluate candidates, as high-risk.
  3. Systems with limited risk face transparency requirements. These are systems that interact with humans; detect humans or categorize people based on biometric data; or produce manipulated content, including chatbots and tools used to produce deepfakes. Users must be informed that they are interacting with an AI system, that an AI system will be used to infer their characteristics or emotions, or that the content they are viewing has been generated using AI.
  4. Systems with minimal risk, including spam filters and AI-enabled video games, comprise the majority of the systems currently on the market and would remain unregulated.

United Kingdom. In the UK, the government in late March published a white paper on AI in which it takes the view that “rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators.”

“Existing regulators will be expected to implement the framework underpinned by five values-focused cross-sectoral principles: 1. Safety, security and robustness; 2. Appropriate transparency and explainability; 3. Fairness; 4. Accountability and governance; and 5. Contestability and redress.”

The government adds that “without regulatory oversight, AI technologies could pose risks to our privacy and human dignity, potentially harming our fundamental liberties.”

John Buyers, head of AI at the law firm Osborne Clarke, commented on the white paper, telling CNBC that the move to delegate responsibility for supervising the technology among regulators risks creating a “complicated regulatory patchwork full of holes.”

“The risk with the current approach is that a problematic AI system will need to present itself in the right format to trigger a regulator’s jurisdiction, and moreover the regulator in question will need to have the right enforcement powers in place to take decisive and effective action to remedy the harm caused and generate a sufficient deterrent effect to incentivize compliance in the industry,” Buyers told CNBC via email.

Asia Pacific

While there is no overarching AI legislation across the Asia Pacific region yet, countries with large staffing markets such as Japan and China have discussed their approaches to AI regulation.

Japan. In Japan, the Ministry of Economy, Trade and Industry issued its Governance Guidelines for Implementation of AI Principles in 2021 and released an updated Ver. 1.1 in 2022. The paper states, “While the discussion on AI governance is developing in Japan and around the world, it is not easy to design actual AI governance.”

It adds, “Legally binding horizontal requirements for AI systems are deemed unnecessary at the moment. Even if discussions on legally binding horizontal requirements are held in the future, risk assessment should be implemented in consideration of not only risks but also potential benefits.”

China. Meanwhile, the China Academy of Information and Communications Technology issued the White Paper on AI Governance, which lays out ethical standards for using AI, including that algorithms should protect individual rights. The paper proposed that “AI should treat all users equally and in a non-discriminatory fashion and that all processes involved in AI design should also be nondiscriminatory.”

It adds, “AI must be trained using unbiased data sets representing different population groups, which entails considering potentially vulnerable persons and groups, such as workers, persons with disabilities, children and others at risk of exclusion.”
