
EEOC, DOJ Warn Companies: Do Not Use AI to Discriminate


Employers must review their artificial intelligence tools to ensure they are not violating the Americans with Disabilities Act (ADA), according to new guidance released by the Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ).

Earlier this month, the EEOC and DOJ each released a technical assistance document warning of the possibility of disability bias when companies use software tools like AI to make employment decisions.

"New technologies should not become new ways to discriminate," EEOC Chair Charlotte Burrows said in a press release. "If employers are aware of the ways AI and other technologies can discriminate against persons with disabilities, they can take steps to prevent it."

The EEOC's guidance, "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees," focused on preventing bias against applicants and employees with disabilities.

The document outlines issues that employers should consider to ensure their software tools do not create disadvantages for workers or applicants with disabilities. It focuses on three primary concerns under the ADA:

  • Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools.
  • Without proper safeguards, workers with disabilities may be screened out from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation.
  • If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams.

"As a nation, we can come together to create workplaces where all employees are treated fairly," Burrows added. "This new technical assistance document will help ensure that persons with disabilities are included in the employment opportunities of the future."

The DOJ's guidance document, "Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring," explains how algorithms and AI can result in disability discrimination in the hiring process. The document:

  • Offers examples of the types of technological tools that employers are using.
  • Clarifies that employers must consider how their tools could affect people with different disabilities.
  • Explains employers' obligations under the ADA when using algorithmic decision-making tools, including when an employer must provide a reasonable accommodation.
  • Provides information for employees on what to do if they believe they have experienced discrimination.

"This guidance will help the public understand how an employer's use of such tools may violate the Americans with Disabilities Act, so that people with disabilities know their rights and employers can take action to avoid discrimination," Kristen Clarke, assistant attorney general for the DOJ's civil rights division, said in a press release.

In October 2021, Burrows announced that the EEOC was launching an initiative to ensure AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces.

AI Can Also Discriminate Based on Age, Sex and Race

Lauren Daming, an attorney with Greensfelder Law Firm in St. Louis, said disability bias is just one potential concern associated with companies using AI. These technologies can also discriminate based on race and sex.

"There's a lot of evidence that facial recognition technologies often are less reliable when it comes to evaluating or validating women or people of color," Daming noted.

AI technologies can also discriminate based on age: In May, the EEOC sued three integrated companies providing English-language tutoring services to students in China under the "iTutorGroup" brand, alleging that they programmed their online software to automatically reject more than 200 older applicants.

In 2020, iTutorGroup programmed its application software to automatically reject female applicants age 55 or older and male applicants age 60 or older, the EEOC claims. Such conduct would violate the Age Discrimination in Employment Act, which protects applicants and employees from age bias.

What Companies That Use AI Should Know

Craig Leen, an attorney at law firm K&L Gates in Washington, D.C., said AI can be an effective tool: When used correctly, it can help eliminate unconscious bias by human decision-makers and increase equal employment opportunity.

This can be accomplished through self-auditing, regularly checking for bias or adverse impact, and ensuring that candidates with disabilities have an effective way to request reasonable accommodations and be considered based on their skills and qualifications.

"AI can also be very beneficial because it can eliminate the possibility of implicit bias," Leen said. "There is unconscious bias in human decision-making, so AI can be a tool to help HR managers make good employment decisions."

To alleviate potential hiring concerns associated with AI, employers should consider why they're using AI in the first place, Daming said.

"Whatever you're assessing needs to be aligned with the requirements of the position and needs to be measured by the technology—not just inferred," she explained. "Before adopting any of these tools, the company needs to understand how they work, what they're measuring, and how they may affect different employees."

Daming said employers should also be transparent with applicants and employees about the use of automated technologies and how they work. This gives applicants notice that they may need to request an accommodation.

"The guidance does not go so far as requiring notice and consent, which is more of a privacy principal than a discrimination concept," Daming explained. "But that just shines a light on how these technologies involve many unique concerns springing from individual privacy rights and protected characteristics."

