Public and private sector employers alike are contending with the legal implications of the algorithmic decision-making tools that have emerged in the human resources marketplace, including tools used to screen candidates for recruitment, to manage hiring processes, to evaluate performance, and to make promotion, dismissal and layoff decisions.

The U.S. Equal Employment Opportunity Commission (EEOC) recently issued guidance regarding the use of software, algorithms and artificial intelligence (AI) in employers’ “selection procedures,” in other words, employment decisions related to hiring, promotion and firing. (The guidance explicitly does not address other employment practices that may implicate AI.) The guidance, titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” is intended to help employers using AI tools ensure that the use of such tools does not disproportionately impact individuals based on protected characteristics such as race, color, religion, sex or national origin.

The EEOC enforces federal equal employment opportunity laws and aims to clarify the application of these laws to the use of software and automated systems in employment decisions. The EEOC’s guidance addresses the potential adverse impacts that AI tools may have on protected classifications under Title VII of the Civil Rights Act of 1964 and provides clarity regarding existing requirements under the law. The guidance emphasizes that employers must be mindful of the potential for AI tools to reinforce biases or perpetuate discrimination, even if unintentional, and provides recommendations on how to mitigate these risks.

The guidance provides definitions for key terms like “software,” “algorithm” and “AI” in the workplace context. It explains that software and applications incorporating algorithmic decision-making tools are used in different stages of the employment process. Examples include resume-screening software, employee monitoring software, virtual assistants, video interviewing software and testing software. Employers need to be aware of the potential disparate impact of these tools and whether they are job-related and consistent with business necessity.

Under Title VII, employers can be held responsible for their use of algorithmic decision-making tools, even when those tools are developed or administered by third-party entities. Employers should evaluate whether the tools result in lower selection rates for individuals with protected characteristics and consider whether alternatives exist with less disparate impact. The guidance discusses the four-fifths rule, a rule of thumb under which a selection rate for one group that is less than four-fifths (80 percent) of the rate for the most-selected group may indicate adverse impact, but notes that satisfying this rule does not guarantee compliance with Title VII.
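For illustration only, the arithmetic behind the four-fifths rule can be sketched in a few lines of Python. The group names and counts below are hypothetical, and the check is merely a screening heuristic; as the guidance notes, satisfying it does not by itself establish compliance with Title VII.

```python
# Illustrative sketch of a four-fifths (80%) rule check.
# Group names and counts are hypothetical; this is not a substitute
# for a full adverse-impact or legal analysis under Title VII.

def selection_rates(applicants, selected):
    """Return the selection rate (selected / applicants) for each group."""
    return {group: selected[group] / applicants[group] for group in applicants}

def four_fifths_ratios(applicants, selected):
    """Compare each group's selection rate to the highest group's rate.

    Returns a dict of impact ratios; ratios below 0.8 may suggest
    adverse impact and warrant closer review.
    """
    rates = selection_rates(applicants, selected)
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical example: 80 of 200 applicants in Group A are selected (40%),
# versus 24 of 100 in Group B (24%). Group B's impact ratio is
# 0.24 / 0.40 = 0.6, which falls below the 0.8 threshold.
applicants = {"Group A": 200, "Group B": 100}
selected = {"Group A": 80, "Group B": 24}

for group, ratio in four_fifths_ratios(applicants, selected).items():
    flag = "below 4/5 threshold; review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```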

Key points covered in the guidance include:

  1. Disparate Impact Analysis: Employers are encouraged to conduct a thorough and ongoing evaluation of the potential disparate impact on protected classes of AI tools used in employment selection. This involves assessing whether the algorithms used in the selection process disproportionately exclude individuals from certain groups based on protected characteristics. The guidance provides that an employer may be liable for discriminatory determinations generated by tools or software developed or administered by a third party if the employer relies on those tools in its selection decisions.
  2. Validation and Testing: The EEOC recommends that employers regularly validate and test the AI tools used in employment selection procedures to ensure their reliability and accuracy. Employers should assess whether the tools are properly designed, implemented and maintained to minimize potential bias, and should engage with outside vendors that provide AI tools on these issues as needed.
  3. Data Management: Employers must ensure that the data used to train AI tools is comprehensive, relevant and representative of the diverse applicant pool. Adequate data management practices should be established to maintain data accuracy, privacy and security.
  4. Compliance with the Law: The EEOC reminds employers that the use of AI tools should not violate any federal laws or regulations. Employers must remain diligent in their efforts to comply with anti-discrimination laws and take corrective action if any disparities are identified.

Employers must familiarize themselves with this guidance to ensure that their employment selection procedures comply with federal regulations and promote equal opportunities for all applicants. By proactively addressing the potential biases and risks associated with AI tools, employers can foster fair and inclusive hiring practices.

The above information is only an overview. We encourage you to consult with your attorneys to ensure your organization is using software, algorithms, artificial intelligence and algorithmic decision-making tools in employment selection in compliance with Title VII and this guidance from the EEOC.

For further information, please contact Spencer Wilson or Anastasia Bondarchuk.