AI Hiring Discrimination: How Algorithms Reject Millions of Qualified Workers
Workday lawsuit certified as a nationwide collective action. iTutorGroup's $365K EEOC settlement. ACLU complaints allege HireVue's AI failed a deaf applicant. 83% of companies use AI to screen resumes. Your rights when algorithms discriminate.
By Compens.ai Editorial Team
Updated: December 2025
When Derek Mobley applied for jobs, he did everything right. He tailored his resume, highlighted his qualifications, submitted applications promptly. He applied to more than 80 positions at companies using Workday's hiring platform. He was rejected every time—often within minutes or hours.
Mobley is African American, over 40 years old, and disabled. He believes an algorithm, not a human, decided he wasn't worth interviewing.
In May 2025, a federal judge certified his case as a nationwide collective action—the first of its kind to challenge AI hiring discrimination on this scale. Mobley v. Workday represents millions of job applicants who may have been rejected not by humans making considered judgments but by software making snap calculations based on data patterns that encode historical bias.
This case isn't an isolated incident. It's the tip of an iceberg that threatens to reshape employment discrimination law for the AI age.
In 2023, the EEOC secured its first-ever settlement over AI hiring discrimination: $365,000 from a tutoring company that programmed its software to automatically reject applicants over certain ages. In March 2025, the ACLU filed complaints against HireVue and Intuit on behalf of a deaf, Indigenous woman whose AI video interview told her to "practice active listening." Another ACLU complaint alleges that Aon's personality tests discriminate against people with disabilities and non-white applicants.
These cases share a common theme: AI hiring tools that are marketed as objective and efficient often embed and amplify the biases they're supposed to eliminate. When algorithms trained on historical data learn that successful hires tend to be white, male, and non-disabled, they encode those patterns as predictors of success—and reject everyone who doesn't fit the mold.
If you've applied for a job in the past five years, an algorithm has probably evaluated you. An estimated 83% of companies now use AI to screen resumes. The question isn't whether AI affects hiring—it's whether anyone is checking if the AI discriminates.
The scale of AI in hiring
How many employers use AI
The numbers reveal how deeply AI has penetrated the hiring process:
| Metric | Statistic |
|--------|-----------|
| Companies using AI in hiring (2025) | 99% use some AI |
| Companies using AI for resume screening | 83% |
| Companies using AI chatbots for candidates | 40% |
| Companies using AI for interviews | 24% (rising to 29%) |
| AI in talent acquisition | 44% of organizations |
| Market value of AI recruitment industry | $661 million (projected $1.12B by 2030) |
The efficiency gains are real: AI can reduce time-to-hire by 50% and recruitment costs by 30%. But efficiency in screening out candidates also means efficiency in discrimination—when bias is built in, it operates at scale and speed no human could match.
What AI hiring tools actually do
Understanding how these tools work reveals why bias is so hard to detect and so easy to embed:
Resume screening: AI scans resumes for keywords, experience patterns, and other signals it has "learned" to associate with successful hires (a simplified sketch of this kind of screening follows the list of tools below). The learning comes from training data—usually past hiring decisions. If a company historically hired mostly white men for engineering roles, the AI learns that white male names, fraternities, and male-dominated activities correlate with hiring success.
Personality assessments: Tools like Aon's ADEPT-15 evaluate candidates based on personality traits. But research shows these tests measure characteristics that are close proxies for disability status. A test that favors extroversion may screen out people with autism or social anxiety—not because they can't do the job but because they don't match a personality profile derived from neurotypical employees.
Video interviews: HireVue and similar platforms use AI to analyze candidates' facial expressions, tone of voice, word choice, and body language. The AI compares these to patterns from "successful" candidates. But speech recognition works worse for people with accents, deaf speakers, and non-native English speakers. Facial analysis may misread people with facial differences or certain disabilities.
Gamified assessments: Aon's gridChallenge uses games to assess cognitive function. But the specific type of puzzles and the way they're structured can favor certain educational backgrounds and cultural experiences—and the data shows white applicants score higher on average.
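To make the resume-screening step concrete, here is a deliberately simplified, hypothetical sketch in Python. The keyword weights, pass threshold, and sample resume are all invented; real screeners use statistical models rather than a hand-written dictionary, but the failure mode is the same whenever the weights are derived from biased historical hires.

```python
# A hypothetical, oversimplified keyword screener. The weights below are
# invented for illustration; in a real system they would be derived from past
# hiring data, which is exactly where historical bias sneaks in.
KEYWORD_WEIGHTS = {
    "python": 2.0, "sql": 1.5, "executed": 1.0, "captured": 1.0,  # terms common on past (mostly male) hires' resumes
    "women's": -2.0,                                              # a learned penalty no one ever coded by hand
}
PASS_THRESHOLD = 3.0

def screen(resume_text: str) -> bool:
    """Return True if the resume advances to a human, False if auto-rejected."""
    words = resume_text.lower().split()
    score = sum(KEYWORD_WEIGHTS.get(w, 0.0) for w in words)
    return score >= PASS_THRESHOLD

resume = "Captain of the women's chess club. Built Python and SQL pipelines; executed migrations."
print(screen(resume))   # False: the gendered token drags an otherwise-passing resume below the cutoff
```

The screen runs in milliseconds, which is why rejections can arrive minutes after you apply, and the rejected applicant never learns which signal sank them.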
The fundamental problem
AI hiring tools don't eliminate human bias—they encode it. When algorithms learn from historical hiring data, they learn from decisions made by humans who had their own biases, conscious or not.
Amazon learned this lesson in 2018 when it scrapped an AI recruiting tool that had taught itself to penalize resumes containing the word "women's" (as in "women's chess club captain") and to downgrade graduates of all-women's colleges. The system had learned from 10 years of hiring data—data that reflected a male-dominated tech industry.
Amazon's system favored candidates who used verbs like "executed" and "captured"—language more commonly found on male engineers' resumes. It didn't explicitly say "reject women." It just learned that successful candidates looked like the men who had been hired before.
Amazon caught the problem and killed the project. But many companies using AI hiring tools don't conduct the kind of rigorous testing that revealed Amazon's bias. They trust that the software is objective because it's software—a dangerous assumption the courts are now starting to examine.
Mobley v. Workday: the landmark case
The plaintiff's story
Derek Mobley is not a professional plaintiff. He's a job seeker who noticed something was wrong.
Mobley is African American, over 40 years old, and has a disability. Starting in 2020, he applied for jobs through company career portals that used Workday's hiring platform. He submitted over 80 applications. Each time, his application was rejected—often within minutes or hours, faster than any human could reasonably review a resume and make a decision.
The speed of rejection told Mobley what was happening: no human was reviewing his applications. An algorithm was deciding his fate, and the algorithm kept saying no.
In February 2023, Mobley filed a lawsuit against Workday in the United States District Court for the Northern District of California. He alleged that Workday's AI-powered hiring tools discriminated based on race, age, and disability status—violating Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA).
Workday's role in hiring
Workday is not a household name, but it's one of the most influential companies in American hiring. Over 11,000 organizations worldwide use Workday's platform to post jobs, recruit candidates, and manage the hiring process. Millions of job listings run through Workday's technology each month.
When you apply for a job at many large companies, your resume goes to Workday. Workday's AI evaluates you. If the AI recommends you, a human might see your application. If it doesn't, you're rejected—often without any human ever looking at your qualifications.
This role is what makes the lawsuit so significant. Mobley isn't suing the companies that rejected him. He's suing the vendor whose software made the recommendations. If Workday's AI systematically discriminates, the harm extends to every company using the platform and every applicant the AI evaluates.
The legal breakthrough
The case has faced legal obstacles that highlight how employment discrimination law is struggling to adapt to AI.
January 2024: Judge Rita Lin initially dismissed the case, finding that the complaint hadn't plausibly alleged that Workday qualified as an "employment agency" subject to anti-discrimination law. The traditional framework assumes a relationship between employer and employee—but Workday is a third-party vendor.
February 2024: Mobley's legal team filed an amended complaint with a new legal theory: Workday acts as an "agent" of the employers who use its software. Just as a human recruiter screening resumes would be liable for discrimination, so should an AI performing the same function.
April 2024: The EEOC filed an amicus brief supporting the plaintiffs. The agency stated that algorithmic hiring tools can violate anti-discrimination laws even without explicit discriminatory intent. Disparate impact—when a facially neutral practice disproportionately excludes protected groups—applies to AI just as it applies to human decision-makers.
July 2024: Judge Lin issued a mixed ruling that allowed the case to proceed on the agent theory. She wrote: "Workday's role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject."
This was a breakthrough. The court recognized that delegating hiring decisions to AI doesn't insulate companies from discrimination liability.
May 2025: The case reached another milestone when Judge Lin granted preliminary certification as a nationwide collective action under the ADEA. The plaintiffs—Mobley and four others, all over 40—now represent all job applicants aged 40 and older who were denied employment recommendations through Workday's platform since September 24, 2020.
The class is believed to include millions of applicants.
The central legal question
The case now focuses on a common question: "Whether Workday's AI recommendation system has a disparate impact on applicants over forty."
The plaintiffs allege that Workday's algorithm "disproportionately disqualifies individuals over the age of forty (40) from securing gainful employment." They claim to have submitted hundreds of applications through Workday and been rejected each time—sometimes within minutes or hours, a speed that indicates automated screening rather than human review.
Workday denies the allegations. A spokesperson called the May 2025 certification order a "preliminary, procedural ruling... that relies on allegations, not evidence." The company maintains that "this case is without merit."
But the case will now proceed to discovery, where plaintiffs can obtain data about how Workday's algorithm actually works and whether it produces disparate outcomes by age, race, or disability status. What that data reveals could reshape employment discrimination law.
iTutorGroup: the first EEOC settlement
The case that proved AI can discriminate
Before Mobley v. Workday, there was iTutorGroup—the case that proved the EEOC would pursue AI hiring discrimination.
iTutorGroup was a virtual tutoring company that hired tutors in the United States to provide online instruction from their homes. The company used automated software to screen applicants.
What the EEOC discovered was brazen: iTutorGroup had programmed its software to automatically reject female applicants 55 or older and male applicants 60 or older. The system didn't just correlate age with some other factor—it used age directly as a disqualifier.
The discrimination came to light when an applicant submitted two otherwise identical applications with different birth dates. The application listing the older birth date was rejected; the one listing the younger birth date received an interview.
More than 200 qualified applicants were rejected solely because of their age.
The settlement
On September 8, 2023, a federal court approved a consent decree requiring iTutorGroup to pay $365,000 to affected applicants. The settlement also required the company to:
- Adopt new anti-discrimination policies
- Distribute an internal memo about the violations
- Conduct multiple anti-discrimination trainings
- Invite all applicants rejected due to age in March and April 2020 to reapply
- Provide written notice to the EEOC of any future discrimination complaints
- Stop requesting birth dates from applicants
iTutorGroup denied wrongdoing but chose to settle rather than litigate.
The EEOC's message
EEOC Chair Charlotte Burrows made clear what the settlement meant: "Even when technology automates the discrimination, the employer is still responsible... Workers facing discrimination from an employer's use of technology can count on the EEOC to seek remedies."
The iTutorGroup case involved intentional discrimination—someone programmed the system to reject older applicants. But the same legal principles apply to unintentional discrimination through disparate impact. If an AI system produces discriminatory outcomes, the employer can be liable even if no one intended to discriminate.
The settlement put employers on notice: AI doesn't provide legal cover. The algorithm is the employer's agent, and the employer is responsible for what it does.
The ACLU's challenge to HireVue
The deaf applicant's story
D.K. is an Indigenous and Deaf woman pursuing a master's degree in data science. She communicates using American Sign Language and English with a deaf accent.
Since 2019, D.K. had worked seasonal roles for Intuit, the financial software company. Her supervisors gave her positive feedback and bonuses every year. In spring 2024, her supervisor encouraged her to apply for a promotion to seasonal manager.
The application process required D.K. to complete a video interview through HireVue's AI platform. HireVue uses automated speech recognition to transcribe applicants' spoken responses, then evaluates the transcripts and other factors.
D.K. requested human-generated captioning as an accommodation for her deafness. Intuit allegedly denied the request, telling her that HireVue's software included subtitling. But when D.K. began the interview, no subtitling option was available. She had to rely on Google Chrome's automated captioning—a system known to be incomplete and inaccurate, particularly for deaf speakers.
D.K. was rejected for the promotion.
The feedback she received was generated by AI. Among the recommendations: that she "practice active listening."
For a deaf applicant, that feedback revealed exactly what was wrong with the system.
The legal complaint
On March 19, 2025, the ACLU, Public Justice, and other organizations filed complaints with the Colorado Civil Rights Division and the EEOC on behalf of D.K.
The complaint alleges that Intuit and HireVue violated:
- The Americans with Disabilities Act (ADA)
- Title VII of the Civil Rights Act of 1964
- The Colorado Anti-Discrimination Act
The allegations focus on two problems:
Inaccessibility: HireVue's platform was inaccessible to deaf applicants. The company failed to provide reasonable accommodations, and the automated systems weren't designed to work for people who communicate differently.
Algorithmic bias: Speech recognition systems are known to perform worse for non-white speakers and people with accents, including deaf speakers. An Indigenous woman communicating in deaf-accented English would face multiple layers of bias in a system trained primarily on standard American English.
The "practice active listening" feedback was evidence that the AI had evaluated D.K. based on characteristics directly related to her disability—not her actual qualifications for the job.
Company responses
HireVue CEO Jeremy Friedman called the complaint "entirely without merit," stating that "Intuit did not use a HireVue AI-based assessment."
Intuit said the allegations are "entirely without merit" and that "we provide reasonable accommodations to all candidates."
The case is pending before the EEOC and Colorado Civil Rights Division.
Aon's personality tests under fire
The FTC complaint
On May 30, 2024, the ACLU filed a complaint with the Federal Trade Commission alleging that three Aon hiring assessment tools discriminate against people with disabilities and non-white applicants.
The tools in question:
ADEPT-15: A personality test that evaluates candidates on 15 personality dimensions. The ACLU alleges the test identifies characteristics that are close proxies for mental health disabilities and neurodivergent conditions. People with autism, anxiety disorders, or other conditions may score differently—not because they can't do the job but because they don't match a neurotypical personality profile.
vidAssessAI: A video interview tool that uses AI to evaluate candidates based on the ADEPT-15 framework. It analyzes speech patterns, facial expressions, and other factors that may disadvantage people with disabilities or different communication styles.
gridChallenge: A gamified cognitive assessment using puzzles and games. Aon's own testing data showed that white applicants scored higher on average than applicants who were Asian, Black, Hispanic/Latino, or of two or more ethnicities.
The evidence
The ACLU obtained Aon's own model cards—documents that describe how the AI systems work. According to the complaint:
Aon reported racial disparities in gridChallenge scores. Applicants who were non-white scored lower on average than white applicants. The ACLU argues this shows the test has disparate impact by race—a violation of Title VII.
The personality tests evaluate traits associated with disability status. High scores on certain dimensions may correlate with not having conditions like autism, depression, or anxiety—meaning the test screens out disabled applicants not because of job performance but because of disability status.
The deceptive marketing claim
The ACLU's complaint focuses heavily on how Aon markets its products. Aon claims its assessments are "bias-free" and objective. The ACLU alleges this is false advertising under the FTC Act.
"Aon falsely claims that its assessments are bias-free, and yet its assessments carry an unacceptably high risk of screening people out based on who they are and not whether they can do the job," the complaint states.
The ACLU is asking the FTC to:
- Investigate the tests for discrimination
- Enjoin Aon from making deceptive claims
- Pause sales of the tests until changes are made
Aon's response
An Aon spokesperson defended the products: "The design and implementation of our assessment solutions—which clients use in addition to other screenings and reviews—follow industry best practices as well as the Equal Employment Opportunity Commission, legal and professional guidelines."
The FTC investigation is ongoing.
Harper v. Sirius XM: the race discrimination case
The newest lawsuit
In August 2025, a new AI hiring discrimination lawsuit joined the growing docket. A job applicant sued Sirius XM Radio in the Eastern District of Michigan, alleging the company's AI-powered hiring tool discriminated against him based on race.
The plaintiff alleges that Sirius XM's AI system relied on historical hiring data that perpetuated past biases. Like Mobley v. Workday, the case challenges whether employers can avoid discrimination liability by delegating decisions to algorithms that encode historical bias.
The case is in early stages, but it represents the expansion of AI hiring litigation beyond the pioneering cases against vendors like Workday and HireVue to the employers themselves.
How AI discrimination works
Training data bias
AI systems learn from training data. In hiring, that typically means past hiring decisions, performance reviews, and career outcomes of existing employees.
The problem: historical hiring was discriminatory. If a company hired mostly white men for decades, the AI learns that white men are "successful" candidates. It then replicates that pattern in future recommendations.
This isn't the AI being racist—it's the AI being accurate about the past. But accurate predictions of biased historical patterns produce biased future outcomes.
Example: Amazon's recruiting AI penalized resumes with "women's" in them because historically, Amazon's successful hires (mostly men) didn't have that word on their resumes. The system accurately predicted what past hires looked like—and in doing so, discriminated against women.
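Here is a minimal sketch of that dynamic, using entirely synthetic data and an off-the-shelf scikit-learn logistic regression. It is an illustration of how biased labels produce a biased model, not a reconstruction of Amazon's system; every number below is invented.

```python
# A minimal, synthetic sketch of training-data bias: a screener trained on
# biased historical decisions learns to penalize a gender-linked resume token.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Features a screener might extract: experience, a skills score, and whether
# the resume contains a token like "women's" (e.g. "women's chess club").
experience = rng.normal(5, 2, n)
skills = rng.normal(0, 1, n)
has_womens_token = rng.random(n) < 0.3

# Historical hiring decisions: driven by qualifications, but with a biased
# penalty (conscious or not) applied to resumes carrying the token.
logit = 0.6 * skills + 0.2 * experience - 2.0 - 1.5 * has_womens_token
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([experience, skills, has_womens_token])
model = LogisticRegression().fit(X, hired)

print("learned weights [experience, skills, token]:", model.coef_[0])
# The token's weight comes out strongly negative: the model has "learned" that
# resumes mentioning "women's" are worse hires, replicating the old bias.
```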
Proxy discrimination
Even when AI doesn't use protected characteristics directly, it can use proxies that correlate with those characteristics. This is perhaps the most insidious form of algorithmic discrimination because it appears neutral on its face while producing discriminatory outcomes.
Consider zip codes. An AI might learn that applicants from certain zip codes are less likely to be hired and use that as a negative factor. The system isn't explicitly considering race—but because of residential segregation, zip codes are highly correlated with race. An AI that screens out applicants from certain neighborhoods is effectively screening out Black and Latino applicants, even if race never appears in the algorithm.
The same dynamic applies to educational background. College names correlate with socioeconomic status and race. Extracurricular activities often signal gender—women's sports teams, sororities, and certain volunteer activities are more common on women's resumes. Communication styles and speech patterns correlate with neurodivergent conditions, national origin, and disability status.
The AI doesn't need to see your race, age, or disability status to discriminate based on those characteristics. It just needs to identify patterns that correlate with them—patterns that may be invisible to humans but obvious to an algorithm processing thousands of data points.
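A toy model makes the proxy problem concrete. In the sketch below, all numbers are invented and race is never given to the model; a zip-code flag correlated with race is, and the model's recommendation rates diverge by race anyway.

```python
# A minimal, synthetic sketch of proxy discrimination: race is never a feature,
# but a correlated proxy (zip code) lets the model reproduce racial disparities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Residential segregation: group membership strongly predicts zip-code cluster.
is_black = rng.random(n) < 0.3
zip_flagged = np.where(is_black, rng.random(n) < 0.8, rng.random(n) < 0.2)

qualifications = rng.normal(0, 1, n)

# Historical decisions were biased against applicants from the flagged zips.
hired = rng.random(n) < 1 / (1 + np.exp(-(qualifications - 1.0 - 1.2 * zip_flagged)))

# Train on qualifications + zip only -- race never appears in the feature set.
X = np.column_stack([qualifications, zip_flagged])
model = LogisticRegression().fit(X, hired)
recommended = model.predict_proba(X)[:, 1] > 0.25

for name, mask in [("Black applicants", is_black), ("other applicants", ~is_black)]:
    print(f"{name}: recommended {recommended[mask].mean():.1%}")
# Recommendation rates diverge sharply by race even though the model never saw
# race: the zip-code proxy carries the discrimination forward.
```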
Feedback loops
AI systems often use their own predictions to improve future accuracy. If an AI recommends mostly white candidates, and those candidates get hired, the AI learns that its predictions were "correct." Over time, the bias compounds.
This creates a self-fulfilling prophecy: the AI recommends candidates who look like past successful hires, those candidates get hired, and their success "validates" the AI's discriminatory recommendations.
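The sketch below is a toy simulation (the group labels, pool sizes, and noise term are made up, and no real screener retrains this crudely), but it shows the loop in action: a model that scores applicants by resemblance to past hires, then treats its own picks as new training data, drifts toward an ever more homogeneous workforce even though every applicant pool is split 50/50.

```python
# A toy feedback-loop simulation with invented numbers.
import random

random.seed(7)

past_hires = ["A"] * 70 + ["B"] * 30          # historical workforce: 70% group A
openings_per_round = 20

for round_num in range(1, 6):
    share_a = past_hires.count("A") / len(past_hires)
    applicants = ["A"] * 50 + ["B"] * 50       # every round's applicant pool is 50/50

    # "Model": score applicants by resemblance to past hires, plus a little
    # noise standing in for everything else on the resume.
    def score(group):
        resemblance = share_a if group == "A" else 1 - share_a
        return resemblance + random.gauss(0, 0.1)

    hired = sorted(applicants, key=score, reverse=True)[:openings_per_round]
    past_hires += hired                        # the model's picks become its new "ground truth"

    print(f"round {round_num}: past hires {share_a:.0%} group A, "
          f"new hires {hired.count('A')}/{openings_per_round} group A")
```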
Opacity and accountability
One of the most troubling aspects of AI discrimination is that it's often invisible. The applicant doesn't know they were evaluated by an algorithm. They receive a generic rejection email—if they receive anything at all—with no explanation of why they were rejected or how the decision was made.
Even the employers using these tools may not understand how they work. The AI is a black box: data goes in, recommendations come out, and the logic in between is proprietary. Vendors claim trade secret protection for their algorithms, refusing to disclose how decisions are made even when those decisions affect millions of people's livelihoods.
This opacity creates a fundamental accountability problem. When a human manager discriminates, there's a paper trail—emails, interview notes, performance reviews that can be examined for bias. When an algorithm discriminates, there may be no smoking gun. There's no email saying "don't hire old people," no manager admitting bias in a deposition. There's just a pattern of outcomes that systematically excludes protected groups, with no individual human accountable for any single decision.
New York City tried to address this with Local Law 144, which requires bias audits of AI hiring tools. But a recent state comptroller audit found that only 18 of 391 sampled employers had published the required audit reports. Compliance is abysmal, and enforcement has been weak. The gap between regulation and reality shows how far we have to go in holding AI systems accountable.
The legal landscape
Federal anti-discrimination laws
Several federal laws apply to AI hiring discrimination:
Title VII of the Civil Rights Act of 1964: Prohibits employment discrimination based on race, color, religion, sex, and national origin. Both intentional discrimination (disparate treatment) and unintentional discrimination with discriminatory effects (disparate impact) are illegal.
AI hiring tools can violate Title VII through disparate impact even without discriminatory intent. If an algorithm disproportionately screens out Black applicants compared to white applicants, and the employer can't show the algorithm is job-related and consistent with business necessity, the employer violates Title VII.
Age Discrimination in Employment Act (ADEA): Prohibits discrimination against workers 40 and older. The Mobley v. Workday collective action proceeds under ADEA, alleging Workday's AI disproportionately rejects older applicants.
Americans with Disabilities Act (ADA): Prohibits discrimination based on disability and requires reasonable accommodations. AI tools that disadvantage people with disabilities—whether through inaccessible interfaces, speech recognition that fails for deaf speakers, or personality tests that penalize neurodivergent traits—may violate the ADA.
The ADA also prohibits medical examinations before a job offer. Some AI personality assessments may constitute prohibited medical examinations if they identify characteristics associated with mental health conditions.
EEOC guidance
The EEOC has increasingly focused on AI discrimination:
May 2022: The EEOC and DOJ issued guidance on how AI hiring tools can violate the ADA. The agencies warned that employers are responsible for ensuring AI tools don't discriminate, even when developed by third-party vendors.
2023: The EEOC's AI and Algorithmic Fairness Initiative investigated AI discrimination in employment. The iTutorGroup settlement was the first result.
April 2024: The EEOC filed an amicus brief in Mobley v. Workday supporting the plaintiffs' legal theory.
2025: The Trump Administration ended the EEOC's AI Initiative after an executive order mandating agencies "deprioritize enforcement of all statutes and regulations to the extent they include disparate-impact liability."
The EEOC's position on AI discrimination remains the same, but enforcement priorities have shifted. Private litigation and state enforcement may become more important.
State laws
While federal law provides the foundation for challenging AI hiring discrimination, several states have enacted their own protections that give applicants additional rights and remedies.
Illinois has been at the forefront of AI employment regulation. In 2019, the state passed the Artificial Intelligence Video Interview Act, becoming the first state to regulate AI in video interviews. The law requires employers to tell applicants that AI will analyze their interview, explain what characteristics the AI evaluates, and obtain consent before using the technology. Applicants can also request that their videos be destroyed. The law has already figured in significant litigation, including the Deyerler v. HireVue case discussed later in this guide.
Illinois also has the Biometric Information Privacy Act, which requires consent before collecting biometric identifiers like facial geometry. Because AI video interview tools often analyze facial expressions—which requires capturing facial geometry—they may trigger BIPA's consent requirements. BIPA is uniquely powerful because it provides statutory damages of $1,000 to $5,000 per violation, even without proof of actual harm. This has led to massive settlements: Facebook paid $650 million, Google paid $100 million, and TikTok paid $92 million. The combination of AIVIA and BIPA makes Illinois one of the most challenging jurisdictions for AI hiring tools.
New York City's Local Law 144 took effect in January 2023, with enforcement beginning July 5, 2023. It requires employers using automated employment decision tools to conduct annual bias audits by independent auditors and publish the results on their websites. Employers must also notify candidates that AI is being used and allow them to request alternative selection processes. Penalties range from $500 for a first violation to $1,500 for subsequent violations. However, enforcement has been disappointing—a state comptroller audit found widespread non-compliance, with most employers ignoring the audit and disclosure requirements entirely.
California came close to enacting comprehensive AI hiring regulation with AB 2930, which would have required impact assessments for AI hiring tools and imposed $25,000 fines for discriminatory AI use. The bill died in the legislature in 2024 after being narrowed to only cover employment decisions. Sponsors plan to reintroduce it. In the meantime, the California Civil Rights Council adopted regulations on automated decision-making systems in March 2025, creating new compliance obligations for employers.
Colorado's AI Act, effective in 2026, takes a different approach. It requires both developers and deployers of "high-risk" AI systems—a category that includes employment decisions—to conduct impact assessments and take reasonable care to avoid discrimination. This dual responsibility on vendors and employers mirrors the legal theory in Mobley v. Workday and may foreshadow where AI regulation is heading.
What applicants should know
Signs you were screened by AI
How do you know if an algorithm rejected you? The most telling sign is speed. If you received a rejection within minutes or hours of applying—sometimes before you've even finished clicking through the application portal—a human almost certainly didn't review your materials. No recruiter works that fast.
Other indicators include a complete absence of human contact at any stage of the process. You applied, you received an automated confirmation, and then you received an automated rejection. No phone screen, no email from a recruiter, no indication that a human being ever saw your name. The rejection language is generic, offering no specific feedback about why you weren't selected.
You were also likely evaluated by AI if you completed a video interview with no human interviewer—just you, speaking to your computer camera, answering questions that appeared on screen. Or if you were required to play online games or complete personality assessments before your application would be considered. If you applied through a major company's career portal, the company almost certainly uses Workday, Greenhouse, or similar platforms that incorporate AI screening.
The fact that AI screened your application isn't inherently problematic. The problem arises when the AI discriminates—and when that discrimination is invisible to everyone involved.
Your rights
You may have legal claims against AI hiring discrimination in several situations. If you're over 40 and have been repeatedly rejected from jobs that use AI screening, you may be part of the class in Mobley v. Workday or have independent claims under the Age Discrimination in Employment Act. If you have a disability and weren't provided accommodations for AI assessments—or if those assessments evaluated characteristics related to your disability rather than your job qualifications—you may have claims under the Americans with Disabilities Act.
More broadly, if you're a member of any protected class and believe that AI has systematically screened you out of opportunities, you may be able to establish disparate impact discrimination under Title VII or state civil rights laws. In Illinois specifically, if you weren't given proper notice and didn't consent to AI analysis of your video interview, the company may have violated the Artificial Intelligence Video Interview Act. In New York City, you can check whether the employer conducted and published the required bias audits under Local Law 144—if they didn't, they're violating the law.
Documentation is essential for any potential claim. Keep records of every application you submit, including the date, company name, position, and outcome. Note the timing of rejections—immediate rejections suggest AI screening. Screenshot any assessments, games, or video interview platforms you're required to use. If you're in New York City, check the employer's website for the required bias audit reports. Request information from employers about how your application was evaluated, though they may not provide it.
Filing complaints and joining cases
If you believe you've experienced AI hiring discrimination, you can file a charge of discrimination with the EEOC. You must file within 180 days of the discrimination (or 300 days if your state has a fair employment practices agency). Filing can be done online at eeoc.gov. You can also file with your state civil rights agency, which may have extended deadlines and additional protections beyond federal law.
Many employment discrimination attorneys work on contingency, meaning you pay nothing unless you win. The National Employment Law Project and local bar associations can provide referrals to attorneys who handle these cases.
If you're over 40 and applied for jobs through Workday's platform since September 2020, you may be eligible to join the Mobley v. Workday collective action. Because it's a collective action rather than a class action, you must affirmatively opt in to participate—watch for formal notice communications. Other AI hiring discrimination class actions may emerge as this area of law develops. Websites like topclassactions.com and classaction.org track new cases and settlement opportunities.
What employers should do
Understand your liability
Employers cannot outsource discrimination to vendors. If you use AI hiring tools that produce discriminatory outcomes, you're liable—even if you didn't design the tools and don't understand how they work. The Mobley v. Workday decision made this unmistakably clear: delegating hiring decisions to an algorithm doesn't insulate you from discrimination claims. As Judge Lin wrote, an AI's role in hiring "is no less significant because it allegedly happens through artificial intelligence rather than a live human being."
This means that when you purchase an AI hiring tool from a vendor, you're accepting responsibility for whatever that tool does. If the algorithm discriminates, your company faces the lawsuit—not just the vendor. Due diligence in selecting and monitoring AI tools isn't just good practice; it's a legal necessity.
Conduct bias audits
Before deploying any AI hiring tool, you should require the vendor to provide comprehensive documentation of bias testing. This should include disparate impact analysis broken down by race, sex, age, and disability status. What are the selection rates for different groups? Are there statistically significant disparities? What has the vendor done to address any disparities found?
Don't rely solely on vendor-provided data. Consider conducting independent audits using your own applicant data or hiring a third-party auditor to evaluate the tool. Establish ongoing monitoring to detect discriminatory patterns that may emerge over time as the AI learns from new data. Document all of this analysis thoroughly—if you're ever sued, you'll want to demonstrate the good faith efforts you made to prevent discrimination.
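The selection-rate comparison at the heart of most bias audits is straightforward to compute. The sketch below uses invented counts and pandas to calculate each group's selection rate and its ratio to the most-selected group's rate, flagging ratios below the EEOC's longstanding four-fifths (80%) rule of thumb. A real audit would run on actual applicant data and add statistical significance testing, but this is the basic arithmetic regulators and plaintiffs' experts start from.

```python
# A minimal selection-rate audit on invented counts.
import pandas as pd

audit = pd.DataFrame({
    "group":    ["White", "Black", "Hispanic", "Asian"],
    "applied":  [4000,     1500,    1200,       900],
    "advanced": [1100,      290,     250,        240],
})

# Selection rate = share of each group's applicants who advanced.
audit["selection_rate"] = (audit["advanced"] / audit["applied"]).round(3)

# Four-fifths rule of thumb: compare each group's rate to the highest rate.
audit["impact_ratio"] = (audit["selection_rate"] / audit["selection_rate"].max()).round(3)
audit["below_four_fifths"] = audit["impact_ratio"] < 0.8

print(audit.to_string(index=False))
# Groups flagged True warrant closer scrutiny: the disparity may indicate
# disparate impact unless the tool can be shown to be job-related.
```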
Provide accommodations
The Americans with Disabilities Act requires reasonable accommodations in the application process, and this obligation extends to AI hiring tools. If a candidate with a disability cannot complete a standard AI assessment, you must provide an alternative unless doing so would cause undue hardship.
This might mean offering extended time for timed assessments, providing assessments in alternative formats for candidates with visual or hearing impairments, offering human interview options when AI video platforms are inaccessible, or providing human review of applications when AI screening may have missed qualified candidates with disabilities. The key is flexibility—your hiring process should be able to accommodate applicants who can't navigate AI tools, whether because of disability, lack of technology access, or other reasons.
Maintain human oversight
AI should support human decision-making, not replace it entirely. Don't let algorithms make final hiring decisions without human review. Structure your process so that AI recommendations are just that—recommendations, subject to human judgment and override.
Consider flagging borderline cases for human evaluation rather than letting the AI make automatic rejections. Review rejections of apparently qualified candidates to catch cases where the AI may have missed something. Train your recruiters and hiring managers to question AI recommendations when they don't align with an applicant's actual qualifications. And document the role that human decision-makers play in the process—this can be important evidence if your hiring practices are ever challenged.
Comply with disclosure requirements
If you operate in jurisdictions with AI disclosure laws, compliance is mandatory. In New York City, you must conduct annual bias audits, publish the results on your website, notify candidates that AI is being used, and allow candidates to request alternative selection processes. In Illinois, you must provide notice and obtain consent before using AI to analyze video interviews. In California, follow the developing regulations on automated decision systems adopted by the Civil Rights Council.
Even if you don't operate in these jurisdictions, consider adopting their requirements as best practices. Transparency about AI use, proactive bias testing, and offering alternatives to candidates who can't or won't engage with AI tools are all practices that reduce legal risk and may become required nationwide as regulation develops.
Frequently asked questions
General questions
Can AI legally make hiring decisions? Yes, but AI hiring tools must comply with the same anti-discrimination laws as human decision-makers. If an AI tool produces discriminatory outcomes—even unintentionally—employers can be liable.
Who is responsible when AI discriminates—the employer or the vendor? The employer is responsible for discrimination in hiring, regardless of whether a vendor's tool caused it. Vendors may also be liable under emerging legal theories, as the Mobley v. Workday case explores.
How do I know if AI screened my application? If you received a rejection very quickly (minutes or hours), weren't contacted by any human, or completed online assessments, AI likely played a role. Most large companies use AI in some part of their hiring process.
Can I refuse to be evaluated by AI? In some jurisdictions (like Illinois for video interviews), employers must obtain consent and provide alternatives. Elsewhere, you can request accommodations or alternatives, but employers aren't always required to provide them.
Legal questions
What's disparate impact? Disparate impact occurs when a facially neutral policy disproportionately affects a protected group. An AI that doesn't explicitly consider race but rejects Black applicants at higher rates than white applicants produces disparate impact—and may violate Title VII.
Do I need to prove the employer intended to discriminate? No. Disparate impact liability doesn't require proof of intent. If the AI produces discriminatory outcomes and the employer can't justify the practice as job-related and necessary, the employer is liable.
How do I prove AI discrimination if I can't see the algorithm? Statistical evidence of disparate outcomes can establish a prima facie case. In discovery, plaintiffs can seek data about the AI's recommendations and outcomes by protected group. Expert testimony can analyze whether the algorithm produces discriminatory patterns.
What damages can I recover? Depending on the law and facts, you may recover back pay, compensatory damages for emotional distress, punitive damages (in intentional discrimination cases), and attorneys' fees. Class actions can recover damages for all affected applicants.
Settlement and case questions
Am I eligible for the Mobley v. Workday case? If you're 40 or older and applied for jobs through Workday's platform since September 2020, you may be eligible. The case is a collective action requiring opt-in. Watch for formal notice.
What happened to the iTutorGroup settlement? The case settled in 2023 for $365,000 distributed to affected applicants. It signaled that the EEOC will treat AI-driven discrimination as actionable under employment law.
What's the status of the HireVue and Aon cases? The HireVue/Intuit complaint is pending before the EEOC and Colorado Civil Rights Division. The Aon complaint is pending FTC investigation. Neither has been resolved.
Industry-specific impacts
Technology sector
The tech industry—where AI hiring tools are most prevalent—also has some of the worst discrimination patterns encoded in hiring data.
Tech has historically been dominated by white and Asian men. When AI learns from this history, it perpetuates it. Amazon's failed recruiting tool is the canonical example: trained on 10 years of mostly male hires, it learned to penalize "women's" on resumes and downgrade graduates of women's colleges.
The irony is stark: the industry creating AI is the industry most at risk of AI discrimination claims. Every major tech company uses AI in hiring, and many have workforces that remain predominantly white and male despite years of diversity initiatives.
Healthcare
Healthcare companies increasingly use AI to screen applications from nurses, physicians, and administrative staff. But healthcare has its own historical biases—male doctors have historically been hired for surgical specialties while women were channeled into primary care and pediatrics.
AI screening that perpetuates these patterns may face legal challenges. And given the high stakes of healthcare decisions, the ADA implications of AI screening are particularly acute. A personality assessment that disadvantages applicants with depression or anxiety could screen out qualified candidates in a field where mental health awareness should be paramount.
Financial services
Banks, investment firms, and insurance companies use AI hiring tools extensively. These industries have historically excluded women and minorities from high-paying positions—biases that get encoded in training data.
The Aon complaint specifically targets personality tests used in financial services hiring. If gridChallenge and ADEPT-15 produce racially disparate outcomes, the financial services clients using these tools face liability alongside the vendor.
Retail and hospitality
High-volume hiring in retail and hospitality makes AI screening particularly attractive—and particularly risky. Companies screening thousands of applications for hourly positions may not notice when AI systematically excludes protected groups.
The iTutorGroup case involved exactly this dynamic: automated screening of applicants for remote tutoring positions. The brazen age discrimination—programming the system to reject applicants over certain ages—may be unusual, but subtler biases in high-volume screening likely affect millions of workers.
Gig economy platforms
Gig companies like Uber and Lyft use AI not just for customer-facing algorithms but for driver applications and background checks. Automated deactivation decisions—discussed in our gig worker misclassification article—raise similar discrimination concerns.
When an algorithm decides to deactivate a driver based on ratings, the ratings themselves may reflect customer bias. Studies have shown that customers rate drivers of certain races lower than others. AI that uses biased ratings to make employment decisions may perpetuate discrimination.
Timeline: key events in AI hiring discrimination
2014-2017: early warnings
- 2014: Amazon begins developing AI recruiting tool
- 2015: Initial research identifies bias risks in algorithmic hiring
- 2016: White House report warns about AI discrimination
- 2017: Amazon quietly abandons its biased recruiting tool
2018-2020: the Amazon revelation
- October 2018: Reuters reports Amazon scrapped its AI recruiting tool after discovering it penalized women
- 2019: Illinois passes AIVIA, first state law regulating AI video interviews
- 2020: Companies expand AI hiring during COVID remote work boom
2021-2022: regulatory attention
- 2021: NYC passes Local Law 144 requiring AI bias audits
- May 2022: EEOC and DOJ issue guidance on AI and ADA compliance
- November 2022: EEOC Chair announces AI and Algorithmic Fairness Initiative
2023: first enforcement
- January 2023: NYC Local Law 144 takes effect
- February 2023: Derek Mobley files initial Workday lawsuit
- July 2023: NYC begins enforcing Local Law 144
- September 2023: iTutorGroup settles with EEOC for $365,000
2024: legal developments
- January 2024: Judge dismisses initial Workday claims; amended complaint filed the following month
- April 2024: EEOC files amicus brief supporting the Mobley plaintiffs
- May 2024: ACLU files FTC complaint against Aon
- July 2024: Court allows Workday case to proceed on agent theory
- August 2024: California AB 2930 dies in legislature
2025: collective action certification
- March 2025: ACLU files complaints against HireVue and Intuit
- March 2025: California Civil Rights Council adopts AI regulations
- May 2025: Workday case certified as nationwide collective action
- August 2025: Harper v. Sirius XM filed
- Ongoing: Multiple enforcement actions and lawsuits pending
The Amazon case study: lessons learned
What happened
In 2014, Amazon assembled a team to build an AI system that would automate resume screening. The goal was to develop a system that could review job applicants' resumes and identify the most promising candidates, rating them one to five stars like Amazon rates products.
The team created 500 computer models, each trained to recognize patterns in past successful hires. They taught the AI to identify 50,000 terms from past candidates' resumes and correlate them with hiring success.
By 2015, Amazon's team discovered a problem: the system had learned to penalize resumes that indicated the applicant was female.
How the bias emerged
The AI learned from resumes submitted to Amazon over a 10-year period—a decade during which Amazon, like most tech companies, hired predominantly men for technical roles. The system learned that male candidates were more likely to be hired, and it then identified patterns associated with male candidates and used those as predictors of success.
Some of these patterns were direct signals of gender. Resumes that included the word "women's"—as in "women's chess club captain" or "women's basketball team"—were penalized. Graduates of all-women's colleges like Smith or Wellesley were downgraded. These patterns explicitly identified female candidates and counted their gender against them.
But the AI also found indirect signals that correlated with gender. It favored verbs like "executed" and "captured"—language more commonly found on male engineers' resumes. It identified certain technical terms that men used more frequently than women. These patterns didn't explicitly identify gender, but they served as proxies that produced the same discriminatory result.
The AI wasn't explicitly programmed to prefer men. No engineer wrote code saying "reject female applicants." The system learned that preference from historical data—data that reflected decades of bias in tech hiring. The bias was real, but it was emergent rather than intentional.
Amazon's response
Amazon's team tried to salvage the system by editing it to treat explicitly gendered terms neutrally. But they couldn't be confident they had eliminated every other proxy the AI used to infer gender.
The project was eventually scrapped. Amazon determined that the system couldn't be trusted to make fair hiring recommendations.
The lessons
Amazon's experience foreshadowed the legal battles now playing out and offers lessons that apply to every company using AI in hiring.
The first lesson is that bias emerges from data, not intent. No one programmed Amazon's AI to discriminate against women. No engineer wrote code with discriminatory purpose. The discrimination emerged from training the system on historical data that reflected historical bias. This same dynamic affects every AI system trained on biased data—which is to say, virtually every AI system trained on historical hiring decisions.
The second lesson is that bias is remarkably hard to remove. Even when Amazon identified explicit gender signals like the word "women's," they couldn't be confident they had eliminated all the proxies the AI used to identify gender. AI systems can find patterns that humans don't anticipate, using combinations of factors that no one would think to look for. Eliminating one proxy may just shift the bias to another.
The third lesson is that testing matters. Amazon caught the problem because it conducted rigorous internal testing of its system before deployment. Many companies that purchase AI hiring tools from vendors don't conduct this kind of testing. They trust that the vendor has done the work. But as the Aon complaint shows, vendors may know about disparities in their tools and continue marketing them anyway.
The fourth lesson is that transparency helps. We know about Amazon's experience because Reuters reported it and because Amazon acknowledged the problem rather than deploying a discriminatory system. Most AI hiring bias never becomes public. It operates invisibly, rejecting qualified candidates who never know why they were rejected, creating patterns of discrimination that may only become visible years later through statistical analysis in litigation.
The Deyerler v. HireVue BIPA case
The lawsuit
In January 2022, a class action lawsuit was filed against HireVue in the Northern District of Illinois alleging violations of the state's Biometric Information Privacy Act (BIPA).
The plaintiffs alleged that HireVue's AI-powered video interview platform collected biometric identifiers—facial geometry—without proper consent. Under BIPA, companies must obtain informed consent before collecting biometric data and must follow strict requirements for storage and destruction.
The February 2024 decision
On February 26, 2024, the court largely denied HireVue's motion to dismiss, allowing most claims to proceed. The decision addressed several legal arguments that HireVue raised and rejected most of them.
HireVue's first argument was that BIPA claims were "precluded" by the Illinois Artificial Intelligence Video Interview Act. Since AIVIA specifically regulates AI in video interviews, HireVue argued, it should be the exclusive law governing this area. The court disagreed, finding that BIPA and AIVIA impose different but "concurrent" obligations. AIVIA requires notice and consent for AI analysis; BIPA requires specific procedures for collecting and storing biometric data. A company must comply with both laws, not choose between them.
HireVue's second argument was that facial scans aren't "biometric identifiers" under BIPA because they're not used to affirmatively identify specific individuals. The company argued that it uses facial analysis to evaluate personality traits, not to identify who someone is. The court rejected this argument, pointing to BIPA's plain language, which includes "facial geometry" as a biometric identifier without limiting the definition to identification purposes.
The court also found that it had personal jurisdiction over HireVue in Illinois because the company marketed and sold its software to at least one Illinois company, and that software was used to capture at least one plaintiff's biometric identifiers in the state. This opened HireVue to suit in Illinois despite being headquartered elsewhere.
The stakes
BIPA provides for statutory damages of $1,000-$5,000 per violation—without requiring proof of actual harm. In class actions with thousands or millions of affected individuals, damages can reach hundreds of millions of dollars.
The case is ongoing. If the plaintiffs prevail, it could expose every AI video interview platform collecting facial data in Illinois to massive liability.
Broader implications
BIPA is unique in providing a private right of action for biometric privacy violations. But other states are developing biometric privacy laws, and the proposed federal Algorithmic Accountability Act would, if enacted, create nationwide requirements.
Companies using AI video interviews should understand that they may be collecting biometric data subject to legal requirements—and the "we didn't know" defense is unlikely to work.
The future of AI hiring
What needs to change
The current system allows AI hiring tools to operate with minimal oversight, opaque decision-making, and inadequate accountability. The gap between the technology's impact and the regulatory response is vast. Meaningful reform requires action on several fronts.
Every AI hiring tool should undergo independent bias testing before deployment and on an ongoing basis. New York City's Local Law 144 points in the right direction by requiring bias audits, but enforcement must improve dramatically. The audit requirement is meaningless if most employers simply ignore it. Other jurisdictions should adopt similar requirements with stronger enforcement mechanisms.
Applicants deserve transparency about when and how AI evaluates them. Currently, most applicants have no idea that an algorithm is deciding their fate. They should know when AI is being used, what factors the AI considers, and how to request human review or accommodations. This transparency serves two purposes: it allows applicants to make informed decisions about whether to participate, and it creates accountability for discriminatory outcomes.
Software vendors shouldn't escape liability when their products discriminate. The traditional employment discrimination framework focuses on employers, but when a vendor creates a tool that systematically discriminates against protected groups and sells it to thousands of companies, the vendor should share responsibility. The Mobley v. Workday case may establish that vendors who perform hiring functions are "agents" subject to anti-discrimination law—a development that could reshape vendor accountability nationwide.
AI should augment human decision-making, not replace it entirely. Final hiring decisions should involve human judgment, with AI recommendations subject to review and override. The efficiency gains from AI come at too high a cost if they come without human accountability.
Finally, AI tools must be designed to work for everyone from the start. Accessibility cannot be an afterthought. Systems that rely on speech recognition, facial analysis, or personality assessments must accommodate people with disabilities, non-native English speakers, and those with different communication styles. Universal design isn't just good ethics—it's a legal requirement under the ADA.
Regulatory developments
The regulatory landscape is evolving rapidly, though unevenly. At the federal level, the EEOC's AI Initiative has been deprioritized following executive orders from the Trump Administration. But the underlying legal framework remains intact. Disparate impact liability applies to AI just as it applies to any other hiring practice. Private litigation and state enforcement may become more important in filling the gap left by reduced federal enforcement.
At the state level, California, Colorado, Illinois, New York, and other jurisdictions are developing or implementing AI regulations. The patchwork of state laws creates compliance complexity for multi-state employers but may eventually prompt federal legislation to establish uniform standards. Until that happens, employers must navigate a maze of different requirements depending on where they operate and where their applicants are located.
Internationally, the European Union's AI Act classifies employment AI as "high risk" with significant compliance obligations including conformity assessments, human oversight requirements, and transparency obligations. U.S. companies operating in Europe must comply with these stricter standards, which may eventually influence domestic practices as companies adopt uniform global policies.
For applicants
The litigation wave against AI hiring discrimination is just beginning. Mobley v. Workday, the HireVue complaint, the Aon complaint, and emerging cases like Harper v. Sirius XM represent a growing recognition that algorithms aren't immune from civil rights law. Courts are adapting traditional discrimination frameworks to new technology, and plaintiffs are finding legal theories that hold AI systems accountable.
If you've been rejected from jobs by AI systems, you may have legal rights. The key is documentation. Keep records of every application you submit, note the timing of rejections, and be aware of which companies use AI in hiring. The more data you have, the stronger any potential claim becomes.
The algorithms may be opaque, but the outcomes aren't. When AI systematically rejects qualified candidates based on age, race, disability, or other protected characteristics, the law provides remedies. Courts are increasingly willing to apply traditional discrimination law to the AI age, and the companies deploying these tools are increasingly being held accountable for their discriminatory effects.
Resources
Legal organizations
- ACLU: aclu.org
- National Employment Law Project: nelp.org
- Legal Aid Society: legal-aid.org
- Public Justice: publicjustice.net
Government agencies
- EEOC: eeoc.gov
- EEOC Filing: eeoc.gov/filing-charge-discrimination
- State civil rights agencies: varies by state
Research and information
- AI Now Institute: ainowinstitute.org
- Upturn: upturn.org
- Electronic Frontier Foundation: eff.org
Class action information
- Top Class Actions: topclassactions.com
- ClassAction.org: classaction.org
Conclusion: your qualifications should determine your fate
The promise of AI in hiring was objectivity—algorithms that would evaluate candidates on merit alone, free from the biases that have always plagued human decision-making. The reality has been different.
AI hiring tools don't eliminate bias. They encode it. They scale it. They automate it. And they make it invisible.
When Derek Mobley was rejected from more than 80 jobs, often within minutes or hours, no human, he alleges, ever saw his resume. An algorithm decided he wasn't worth interviewing based on patterns it learned from historical data—data that reflected decades of discrimination against people like him.
The Mobley v. Workday case, the iTutorGroup settlement, the HireVue and Aon complaints—these represent a reckoning. Courts and regulators are recognizing that civil rights laws apply to AI just as they apply to human decision-makers. Employers cannot outsource discrimination to algorithms and escape accountability.
But legal action alone isn't enough. The fundamental problem is a system that allows AI hiring tools to operate with minimal oversight, affecting millions of people's livelihoods without transparency or accountability.
If you've been rejected by AI: Your experience matters. Document it. Report it. Consider joining legal actions. The algorithms may be invisible, but the outcomes aren't—and the law provides remedies for discrimination regardless of whether a human or machine made the decision.
For the job market: The scale of AI in hiring—99% of companies using some form of AI, 83% screening resumes automatically—means this affects almost everyone. Understanding how these systems work, and knowing your rights when they fail, is now an essential part of job searching.
The algorithms should work for you, not against you. When they discriminate, the law is on your side.
---
This guide provides general information about AI hiring discrimination and related legal issues. It does not constitute legal advice. Employment discrimination law is complex and varies by jurisdiction. Consult with an employment attorney for specific situations.
Sources: EEOC, ACLU, Fisher Phillips, Law and the Workplace, FairNow, Reuters, Insight Global, MIT Technology Review, Public Justice
Last updated: December 2025