Groundbreaking AI Discrimination Settlement: Three Key Lessons for Human Resources
Posted: August 21, 2023
The Equal Employment Opportunity Commission (EEOC) and iTutorGroup, Inc. have reached a settlement resolving the agency's first artificial intelligence (AI) discrimination lawsuit.
The EEOC's complaint in EEOC v. iTutorGroup, Inc. alleged the company's hiring software automatically rejected older applicants in violation of the Age Discrimination in Employment Act (ADEA). Specifically, the lawsuit claimed the software rejected "female applicants age[d] 55 or older and male applicants age[d] 60 or older," effectively screening out more than 200 applicants.
The problem came to light when, the suit alleged, one applicant submitted two otherwise identical applications. The first listed the woman's real date of birth; on the second, she entered a more recent one. She was contacted for an interview only after submitting the application showing the younger age.
The suit sought back pay and damages for more than 200 applicants who were "denied jobs because of their age."
The company agreed to pay $365,000 to be distributed to a group of applicants rejected on the basis of age. It also agreed to comprehensive injunctive relief. Among other things, the company:
- Is enjoined from screening applicants based on age
- Is enjoined from requesting dates of birth before a job offer is made
- Must provide four hour-long training sessions, conducted by EEOC-approved third parties, to all supervisory and management-level employees, focusing on the ADEA, Title VII of the Civil Rights Act of 1964 (Title VII), and other federal equal employment opportunity laws
- Must post a notice about employees' rights
- Must review and revise anti-discrimination policies
- Must incorporate the updated policies into the employee handbook
- Must implement a complaint process for employees and applicants, and
- Must submit to EEOC monitoring for the duration of the agreement.
Why is this settlement such a big deal?
1. Trailblazing settlement
At the risk of sounding obvious: This is the first-ever AI discrimination settlement. Clearly, AI has been on the EEOC's radar. The agency launched its Artificial Intelligence and Algorithmic Fairness Initiative in January. And in May, it issued a new resource that outlines important considerations when incorporating AI tools into employment decisions.
This settlement shows the agency is prioritizing AI-related discrimination and is committed to directing its enforcement resources to AI compliance.
"This case is an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative," the EEOC said in a press release announcing the settlement. "Workers facing discrimination from an employer's use of technology can count on the EEOC to seek remedies."
2. Steep AI learning curve
As the technology matures, many companies have recognized AI's potential and rushed to adopt it. But as this case shows, there is a learning curve to using AI both effectively and in compliance with employment laws. And it is not the only case. Workday, a popular human capital management (HCM) platform, is facing a similar lawsuit alleging its AI screening system discriminates against Black applicants.
AI discrimination settlement: three takeaways specifically for human resources (HR)
First things first: This is not about the general use of AI at work, though every company should consider adopting an AI policy.
Here, we are focusing on key lessons specifically for HR professionals who use (or plan to use) AI at work:
- Talk openly about AI. Open communication is essential. The use of AI at work is exploding, but it is still new technology, and employees are navigating the learning curve. HR often has its own questions about AI even as the department rolls out AI tools to streamline and automate specific tasks. Employees, and HR in particular, must be trained on the legal risks of AI use.
- Conduct audits. When navigating the learning curve, it is helpful to look at relevant legislation. New York City passed one of the first laws regulating the use of AI in employment decisions, and that law requires companies to arrange independent bias audits conducted by a third party. Even if you are not in NYC, the legislation provides insight into what is considered a "best practice" in an evolving area of law. Want even more help? Check out the city's final rule, which provides additional guidance for complying with the law.
- Stay tuned to the EEOC for more information. This area of employment law is in flux — and it likely will be for a while as we navigate this new technology. On the federal level, the EEOC is the best source to stay up to date on evolving AI guidance. And even if you find yourself stumbling a bit, you can show good-faith efforts by being aware of and trying to implement EEOC guidance.
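The bias audits described above center on a simple calculation: each group's selection rate and its "impact ratio" relative to the most-selected group. A minimal sketch of that calculation, using hypothetical screening data and a plain four-fifths rule of thumb (the function name and numbers are illustrative, not taken from the NYC rule's text):

```python
from collections import Counter

def impact_ratios(outcomes):
    """For each group, return (selection rate, ratio of that rate
    to the highest group's rate), as in an adverse-impact analysis.

    outcomes: list of (group, selected) pairs; selected is a bool.
    """
    totals = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, was_selected in outcomes if was_selected)
    rates = {group: selected[group] / totals[group] for group in totals}
    top_rate = max(rates.values())
    return {group: (rate, rate / top_rate) for group, rate in rates.items()}

# Hypothetical screening results: (age band, advanced to interview?)
results = (
    [("under_40", True)] * 30 + [("under_40", False)] * 70 +
    [("40_plus", True)] * 9 + [("40_plus", False)] * 91
)

for group, (rate, ratio) in impact_ratios(results).items():
    # Four-fifths rule of thumb: an impact ratio below 0.8 warrants review.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In this made-up data, applicants 40 and over advance at 9% versus 30% for younger applicants, an impact ratio of 0.30 that would flag the tool for closer review.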
Posted In: Discrimination, Illegal; Equal Employment Opportunity Commission (EEOC); Human Resources, General; Title VII of the Civil Rights Act of 1964 (Title VII)
Want to know more? Read the full article by Carol Warner at HR Morning