The US Equal Employment Opportunity Commission on May 18, 2023, released a technical assistance document explaining how key established aspects of Title VII of the Civil Rights Act apply to an employer’s use of automated systems, including those that incorporate artificial intelligence.

The document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” focuses on preventing discrimination against job seekers and workers.

Employers increasingly use automated systems, including those with AI, to help them with a wide range of employment matters, such as selecting new employees, monitoring performance and determining pay or promotions, according to the EEOC. Without proper safeguards, the use of such systems risks violating existing civil rights laws.

Many employers routinely monitor their more traditional decision-making procedures to determine whether those procedures cause disproportionately large negative effects on the basis of race, color, religion, sex or national origin under Title VII. They may have questions, however, about whether and how to monitor the newer algorithmic decision-making tools.

“As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality,” said EEOC Chair Charlotte A. Burrows. “This new technical assistance document will aid employers and tech developers as they design and adopt new technologies.”

The document also includes a Q&A section that addresses concerns employers and tech developers may have about how Title VII applies to the use of automated systems in employment decisions and assists employers in evaluating whether such systems may have an adverse or disparate impact on a basis prohibited by Title VII.

Burrows encourages employers to conduct ongoing self-analyses to ensure that their use of technology does not inadvertently result in discrimination, Forbes reported. Further, employers should recognize that they remain responsible for the AI tools they use, even if those tools are developed or administered by an external vendor. By using the EEOC’s technical assistance document, employers and vendors can better understand how civil rights laws apply to automated systems used in employment.

According to a JD Supra blog post by law firm Polsinelli, important takeaways from EEOC’s guidance on adverse impact include:

  • Employers may be responsible for the effect of third-party software. EEOC’s guidance signals the agency will look to hold employers responsible for adverse impact even if the AI/ML (machine learning) tool in question is third-party software the employer did not develop. The guidance states that this responsibility can arise either from the employer’s own administration of the software or from a vendor’s administration as an agent of the employer.
  • Employers rely on vendor assurances at their own risk. Although EEOC encourages employers to “at a minimum” ask their AI/ML software vendors about steps taken to assess adverse impact, EEOC’s position is that reliance on the vendor’s assurances is not necessarily a shield from liability. Employers still face liability “if the vendor is incorrect about its own assessment.”
  • Self-audits are advisable. Given the inability to rely solely on a vendor’s assurances, employers are best served by periodically auditing how the AI/ML tools they use affect different groups (a minimal sketch of one such check follows this list). To conduct such an audit, employers need access to the AI/ML tool’s underlying data, which is best ensured at the time the tool is implemented.
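
The EEOC’s guidance discusses the longstanding “four-fifths rule” as one rule of thumb for spotting a substantially different selection rate between groups, while cautioning that it is not dispositive. As a rough illustration of the kind of check a self-audit might involve, here is a minimal sketch in Python; the group labels, counts and hard-coded 0.8 threshold are illustrative assumptions, not figures from the EEOC document.

    from collections import Counter

    def selection_rates(records):
        """Compute per-group selection rates from (group, selected) pairs."""
        applicants = Counter()
        selected = Counter()
        for group, was_selected in records:
            applicants[group] += 1
            if was_selected:
                selected[group] += 1
        return {g: selected[g] / applicants[g] for g in applicants}

    def four_fifths_check(rates, threshold=0.8):
        """Compare each group's selection rate to the highest group's rate.

        The four-fifths rule is a rule of thumb, not a legal bright line.
        """
        top = max(rates.values())
        return {g: (r / top, r / top < threshold) for g, r in rates.items()}

    # Hypothetical audit data: (demographic group, whether the tool selected the candidate)
    records = ([("A", True)] * 48 + [("A", False)] * 52
               + [("B", True)] * 30 + [("B", False)] * 70)

    rates = selection_rates(records)  # group A: 0.48, group B: 0.30
    for group, (ratio, flagged) in four_fifths_check(rates).items():
        print(f"group {group}: rate {rates[group]:.0%}, ratio to top {ratio:.2f}, flagged={flagged}")

In this made-up example, group B’s selection rate is 0.30/0.48, or roughly 63%, of group A’s, so the check flags it. A flagged ratio is a prompt for closer statistical and legal review, not a finding of discrimination by itself.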

“I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,” the EEOC’s Burrows said. “This technical assistance resource is another step in helping employers and vendors understand how civil rights laws apply to automated systems used in employment.”
