The chair of the US Equal Employment Opportunity Commission, jointly with officials from three other federal agencies, on Monday released a statement outlining a commitment to enforce their respective laws and regulations to protect the public from bias in automated systems and artificial intelligence.

It serves as a warning to industries — including those in the workforce ecosystem — to proceed with caution when using AI.

In addition to EEOC chair Charlotte Burrows, the statement was signed by Rohit Chopra, director of the Consumer Financial Protection Bureau; Kristen Clarke, assistant attorney general for the Justice Department’s Civil Rights Division; and Lina Khan, chair of the Federal Trade Commission. All four agencies — which are responsible for enforcing civil rights, nondiscrimination, fair competition, consumer protection and other legal protections — have previously expressed concerns about potentially harmful uses of automated systems and resolved to “vigorously enforce” their collective authorities and to monitor the development and use of automated systems.

The group uses the term “automated systems” broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.

“We have come together to make clear that the use of advanced technologies, including artificial intelligence, must be consistent with federal laws,” said Burrows. “America’s workplace civil rights laws reflect our most cherished values of justice, fairness and opportunity, and the EEOC has a solemn responsibility to vigorously enforce them in this new context. We will continue to raise awareness on this topic; to help educate employers, vendors and workers; and, where necessary, to use our enforcement authorities to ensure AI does not become a high-tech pathway to discrimination.”

While these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination, according to the group. Potential discrimination in automated systems may come from different sources, including problems with:

Data and data sets. Automated system outcomes can be skewed by unrepresentative or imbalanced data sets, data sets that incorporate historical bias or data sets that contain other types of errors. Automated systems also can rely on data that correlates with protected classes, which can lead to discriminatory outcomes. A brief illustrative sketch of this problem follows this list.

Model opacity and access. Many automated systems are “black boxes” whose internal workings are not clear to most people and, in some cases, even to the developer of the tool. This lack of transparency often makes it all the more difficult for developers, businesses and individuals to know whether an automated system is fair.

Design and use. Developers do not always understand or account for the contexts in which private or public entities will use their automated systems. Developers may design a system on the basis of flawed assumptions about its users, relevant context or the underlying practices or procedures it may replace.
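To make the first problem in the list concrete, here is a minimal, purely hypothetical sketch in Python. It is not drawn from the agencies’ statement, and the group labels, hire rates and scoring rule are all illustrative assumptions: a screening tool built only from historically biased hiring records simply reproduces that bias when it scores new candidates.

```python
# Hypothetical sketch: a data set that encodes historical bias leads an
# automated screening tool to reproduce that bias. All names, rates and the
# "model" are illustrative assumptions, not any real vendor's system.
from collections import defaultdict
import random

random.seed(0)

# Hypothetical historical hiring records: "group" stands in for an attribute
# (e.g., a zip code) that correlates with a protected class, and past
# decisions were biased against group "B".
historical_records = (
    [{"group": "A", "hired": random.random() < 0.60} for _ in range(500)]
    + [{"group": "B", "hired": random.random() < 0.30} for _ in range(500)]
)

# A naive "automated system": score each group by its historical hire rate
# and recommend candidates only from groups scoring at or above average.
outcomes_by_group = defaultdict(list)
for record in historical_records:
    outcomes_by_group[record["group"]].append(record["hired"])

group_score = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
overall_rate = sum(r["hired"] for r in historical_records) / len(historical_records)

def recommend(candidate_group: str) -> bool:
    # The tool simply carries the historical pattern forward.
    return group_score[candidate_group] >= overall_rate

# Otherwise identical new candidates from the two groups get different outcomes.
print({g: recommend(g) for g in ("A", "B")})  # e.g. {'A': True, 'B': False}
```

Run as written, the sketch recommends candidates from group “A” and rejects otherwise identical candidates from group “B” solely because of the pattern baked into the historical data.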

“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” said FTC Chair Khan. “Technological advances can deliver critical innovation — but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

More information on AI regulations can be found in the recent CWS 3.0 article, “AI regulation is not one size fits all.”
