AI Ethics
10/13/2025

Algorithmic Deactivation and Discrimination 2025: How to Prove Bias and Fight Back

Mobley v. Workday class certified May 2025. EU AI Act in force. 85% of AI resume screeners prefer white names. NYC audit law, Colorado AI Act, EEOC guidance. Complete guide to proving algorithmic discrimination and winning.

By Compens.ai Collective Intelligence


Updated: December 2025

The Algorithmic Discrimination Crisis

Artificial intelligence is making life-changing decisions about millions of workers every day—often without explanation, oversight, or accountability. From gig platform deactivations to automated resume rejections, AI systems are systematically discriminating against protected groups while companies hide behind claims of objectivity.

But the legal landscape is shifting. In May 2025, the landmark Mobley v. Workday case achieved class certification, potentially covering millions of job applicants. The EU AI Act is now in force, with employment AI classified as high-risk. Colorado became the first U.S. state to enact comprehensive AI discrimination legislation. And research continues to expose systemic bias: a University of Washington study found AI models preferred resumes with white-associated names in 85% of cases.

The reality is clear: you can prove algorithmic discrimination, and you can win.

The Scale of the Problem

| Metric | Finding |
|--------|---------|
| AI resume screeners preferring white names | 85% |
| AI resume screeners preferring Black names | 9% |
| Gig workers deactivated algorithmically | 150,000+ (Uber alone) |
| Deactivations lacking clear explanation | 73% |
| AI systems showing measurable bias | 78% |
| California wage theft case potential | Billions of dollars |

---

Landmark Cases: 2024-2025

Mobley v. Workday: The Class Action That Changed Everything

Background: In February 2023, Derek Mobley—an African American male over 40 with a bachelor's degree from Morehouse College (an HBCU)—filed suit against Workday, alleging its AI-powered hiring tools discriminate based on race, age, and disability.

Key Evidence:
  • Some job rejections came within minutes of application submission
  • One rejection arrived at 1:55 a.m., exactly one hour after the 12:55 a.m. application, strong evidence that no human reviewed it
  • Pattern of automated rejections despite strong qualifications

Legal Milestones:

| Date | Development |
|------|-------------|
| April 2024 | EEOC files supporting brief stating algorithmic hiring tools can violate anti-discrimination laws without explicit intent |
| July 2024 | Federal judge rules AI vendors can be held directly liable for discrimination under an "agent" theory |
| May 2025 | Class certification granted, potentially covering millions of applicants |

Significance: This case establishes that software vendors—not just employers using the software—can be held liable for discriminatory outcomes. This could fundamentally reshape the AI hiring industry.

EEOC v. iTutorGroup: The First Settlement

In August 2023, the EEOC reached a $365,000 settlement (the first of its kind) against iTutorGroup, which had programmed its recruitment software to automatically reject applicants over a certain age. The case established that intentionally discriminatory algorithms are plainly illegal.

Intuit/HireVue ACLU Complaint (March 2025)

The ACLU Colorado filed a complaint with the EEOC and Colorado Civil Rights Division against Intuit and HireVue on behalf of an Indigenous and deaf job applicant. HireVue's AI conducted video interviews and provided scores. The applicant was rejected and told she needed to "practice active listening"—a particularly offensive instruction to a deaf candidate.

Aon AI Tools FTC Complaint (May 2024)

A complaint filed with the FTC argues that Aon's AI assessment tools—Adept15, Vid-assess AI, and gridChallenge—are discriminatory against people with disabilities and certain racial groups.

---

Gig Economy: Algorithm-Controlled Work

California Prop 22: A Pyrrhic Victory for Platforms

On July 25, 2024, the California Supreme Court unanimously upheld Proposition 22, allowing Uber, Lyft, and DoorDash to continue classifying drivers as independent contractors. However, this victory may be short-lived.

The Billion-Dollar Wage Theft Case

California drivers have filed a massive wage theft lawsuit against Uber and Lyft that could be worth billions of dollars. Key developments:

| Date | Development |
|------|-------------|
| October 2024 | U.S. Supreme Court refuses to hear Uber/Lyft appeals, allowing wage claims to proceed |
| Spring 2025 | Companies enter settlement negotiations with California AG, Labor Commissioner, and city attorneys of LA, SF, and San Diego |

The core argument: Drivers don't have the control that defines independent contractors. If algorithms control workers like regular employees—dictating routes, pricing, acceptance rates, and deactivation—companies must provide adequate compensation.

Massachusetts Settlement

Uber and Lyft reached a $175 million settlement with Massachusetts over driver misclassification claims—a sign of mounting legal pressure nationwide.

How Gig Algorithms Discriminate

| Factor | Discriminatory Impact |
|--------|----------------------|
| Acceptance rate penalties | Disadvantages disabled drivers who need more time |
| Completion rate rules | Penalizes drivers in low-connectivity areas (often minority neighborhoods) |
| Customer ratings | Studies show systematic racial bias |
| Surge pricing algorithms | Can exclude certain neighborhoods |
| Deactivation thresholds | Applied inconsistently, with no transparency |

---

The Regulatory Revolution: 2024-2025

EU AI Act: The Global Standard

The EU's landmark AI regulation entered into force on August 1, 2024, creating the world's first comprehensive framework for AI governance.

Employment AI = High Risk

Under the EU AI Act, any AI system used for recruitment or HR decision-making is classified as "high-risk", including systems that:

  • Screen resumes
  • Conduct automated interviews
  • Make hiring recommendations
  • Monitor employee performance
  • Allocate tasks based on personal traits
  • Make decisions on promotion or termination

Implementation Timeline:

| Date | Obligation |
|------|------------|
| February 2, 2025 | Prohibited AI practices banned; AI literacy requirements begin |
| August 2, 2025 | General-purpose AI model requirements |
| August 2, 2026 | Most high-risk AI system requirements |
| December 2027 | Full enforcement for employment AI (delayed from August 2026) |

Prohibited Practices in Employment (Now in Force):

  • AI that manipulates or deceives candidates
  • "Social scoring" of applicants based on online behavior
  • Inferring sensitive traits from biometric data
  • Emotion recognition in candidate interviews or video assessments

Penalties: Up to €35 million or 7% of global annual turnover, whichever is higher.

Extraterritorial Reach: The Act applies to any company whose AI outputs are used in the EU, regardless of where the company is located.

Colorado AI Act: First Comprehensive U.S. State Law

Signed: May 17, 2024

Effective Date: June 30, 2026 (delayed from February 2026)

Colorado's law is the most comprehensive U.S. regulation of AI discrimination in employment decisions.

Key Definitions:

"Algorithmic discrimination" means any AI use that "results in unlawful differential treatment or impact that disfavors an individual or group" based on:
  • Race, color, ethnicity, national origin
  • Age, disability, genetic information
  • Sex, religion, veteran status
  • Reproductive health
  • Limited English proficiency
  • Any class protected under Colorado or federal law

Employer Requirements:

| Requirement | Details |
|-------------|---------|
| Risk Management Policy | Framework for identifying and mitigating discrimination risks |
| Annual Impact Assessments | Document purpose, use cases, benefits, and discrimination risks |
| Candidate Notification | Inform applicants of AI use and provide an appeal process |
| Adverse Decision Disclosure | Explain the AI's role in rejections |
| Attorney General Reporting | Notify within 90 days of discovering algorithmic discrimination |

Small Business Exemption: Companies with fewer than 50 employees that don't train the AI on their own data and use the system as intended are largely exempt.

Enforcement: State Attorney General only (no private right of action).

NYC Local Law 144: The Audit Mandate

Effective: July 5, 2023

New York City was the first local government to require bias audits of AI hiring tools.

Requirements:
  • Annual bias audit by independent third party
  • Publish audit results on company website
  • Notify candidates 10+ days before AI evaluation
  • Candidates can opt out (employer must provide alternative)

Enforcement Problems Exposed (December 2025):

A State Comptroller audit found significant enforcement gaps:

| DCWP Finding | Comptroller Finding |
|--------------|---------------------|
| 1 instance of non-compliance identified | 17+ instances of non-compliance |
| 2 complaints received in 2 years | Complaint intake process not verified |

The audit concluded that DCWP's "stakeholder education combined with complaint-based enforcement" is inadequate because companies that don't comply simply don't post disclosures—making them invisible to regulators.

Penalties: $500-$1,500 per day per violation, up to $10,000 per week of continued violation.

Illinois BIPA and AI Disclosure Law

BIPA (Biometric Information Privacy Act):
  • $1,000 per negligent violation
  • $5,000 per intentional violation
  • Private right of action available
  • Applies to AI systems using facial recognition, voice prints, etc.

AI Video Interview Act (effective January 1, 2020):
  • Must disclose AI use in video interviews
  • Explain how AI evaluates applicants
  • Obtain consent before analysis

---

How to Detect Algorithmic Discrimination

Red Flags

  • Sudden metric drops without behavior change
  • Rejection timing suggesting automated decisions (middle of night, within minutes)
  • Repeated rejections despite strong qualifications
  • Deactivation after updating personal information
  • Different treatment than similar workers
  • Appeals denied instantly or formulaically
  • No human ever reviews your case

Building Your Evidence

Phase 1: Document Your Experience

| Document | Details to Record |
|----------|-------------------|
| Timeline | Every interaction, rejection, metric change |
| Screenshots | Metrics, notifications, status changes |
| Communications | All messages with platform (save as PDFs) |
| Context | What you were doing when issues arose |
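One way to keep this log consistent and machine-readable is an append-only JSON-lines file, so timestamps and exact wording survive intact. A minimal sketch; the field names and file paths are illustrative, not from any platform or statute:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EvidenceEntry:
    """One documented interaction: metric change, rejection, message, etc."""
    timestamp: str          # ISO 8601, recorded at the moment of capture
    category: str           # e.g. "metric", "rejection", "communication", "context"
    description: str        # exact wording, verbatim where possible
    attachment: str = ""    # path to a screenshot or saved PDF

def log_entry(path: str, entry: EvidenceEntry) -> None:
    """Append one entry as a JSON line; appending preserves chronological order."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example: record an overnight automated rejection
log_entry("evidence.jsonl", EvidenceEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    category="rejection",
    description="Rejection email received one hour after applying",
    attachment="screenshots/rejection_0155am.png",
))
```

Keep an off-platform backup of the file itself, since a deactivated account can take your in-app history with it.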

Phase 2: Request Your Data

Under the CCPA (California), the GDPR (EU), and similar laws, you can send a data access request. Sample language:

 Under [applicable law], I request the following within the legal timeframe:

  • All personal data you hold about me
  • All data used in automated decisions affecting me
  • The logic and parameters of algorithms applied to my account
  • Categories of training data used in these algorithms
  • Records of any human review of my case
  • All metrics, scores, and ratings assigned to me over time
  • Comparison data showing how my metrics compare to averages

Phase 3: Find Patterns Across Workers

  • Survey community groups and forums
  • Look for demographic clusters in affected workers
  • Document statistical patterns
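Once you have counts from such a survey, a two-proportion z-test shows whether a disparity is statistically significant rather than sampling noise. A standard-library sketch; the group labels and survey numbers below are hypothetical:

```python
import math

def two_proportion_ztest(success_a: int, n_a: int,
                         success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two selection rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical survey: 180 of 200 white applicants advanced vs. 130 of 200 Black applicants
z, p = two_proportion_ztest(180, 200, 130, 200)
print(f"z = {z:.2f}, p = {p:.2e}")
```

A small p-value (conventionally below 0.05) means the gap is unlikely to be chance; a statistician expert witness can formalize this for litigation.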

The Four-Fifths Rule

Federal guidelines presume discrimination when a protected group's selection rate is less than 80% of the highest group's rate.

Example:
  • White applicants: 90% acceptance
  • Black applicants: 65% acceptance
  • 65% ÷ 90% = 72%
  • 72% < 80% = Presumptive discrimination
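The check above is easy to automate across many groups at once. A minimal sketch; the function and variable names are illustrative, not from any regulation:

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Divide each group's selection rate by the highest group's rate.

    Under the federal four-fifths guideline, a ratio below 0.80 is
    presumptive evidence of adverse impact.
    """
    if not selection_rates:
        raise ValueError("need at least one group")
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Worked example from the text: 65% vs. 90% acceptance
ratios = impact_ratios({"white": 0.90, "black": 0.65})
flagged = [g for g, r in ratios.items() if r < 0.80]
print(ratios["black"])  # ≈ 0.72, below the 0.80 threshold
print(flagged)
```

The same calculation applies to any metric with a selection decision: interview callbacks, deactivations avoided, or appeal reversals.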

---

Fighting Back: Step-by-Step

Step 1: Exhaust Internal Appeals

  • Request specific, written reasons for adverse decision
  • Demand human review (document if denied)
  • Cite your qualifications in detail
  • Set clear deadlines for response
  • Keep copies of everything

Step 2: File Government Complaints

EEOC (eeoc.gov)
  • File within 180-300 days depending on state
  • Describe how the algorithm discriminated
  • Provide disparate impact evidence
  • Request a right-to-sue letter if they decline to investigate

State Agencies
  • California: Civil Rights Department (CRD, formerly DFEH)
  • Colorado: Attorney General
  • New York: DCWP (for Local Law 144), Division of Human Rights
  • Illinois: Department of Human Rights

FTC
  • For deceptive practices and consumer harm
  • Report at reportfraud.ftc.gov

Step 3: Demand Arbitration (If Required)

Many platforms require arbitration, but this can work in your favor:

| Factor | Advantage |
|--------|-----------|
| Filing fee | Often only $200 |
| Arbitrator cost | Platform usually pays |
| Timeline | Typically 60-120 days |
| Discovery | Can compel algorithm disclosure |

Step 4: Legal Action

After EEOC Exhaustion:
  • File federal lawsuit under Title VII, ADA, ADEA
  • Add state law claims
  • Seek class certification if a pattern exists

Expert Witnesses You'll Need:
  • Algorithm audit expert (reviews code for bias)
  • Statistician (analyzes disparate impact)
  • Industry expert (identifies less discriminatory alternatives)

Step 5: Join or Start Class Action

Check topclassactions.com for existing suits.

To start a class action:
  • Find employment attorney specializing in discrimination
  • Survey affected workers to document patterns
  • Identify common questions of law and fact
  • Demonstrate numerosity (enough affected workers)

---

Countering Platform Arguments

| Platform Argument | Your Counter |
|-------------------|--------------|
| "Our algorithm is objective" | Disparate impact doesn't require intent—only discriminatory outcome |
| "We have a business necessity" | Less discriminatory alternatives exist (you can cite them) |
| "A human reviewed your case" | Instant denials and timing prove otherwise |
| "We can't disclose our algorithm" | Discovery can compel disclosure under protective order |
| "This would eliminate all standards" | Law only eliminates discriminatory standards |
| "You agreed to our terms" | Illegal discrimination can't be contracted away |

---

Success Strategies

Build Power

  • Coalition building: Connect with other affected workers
  • Media pressure: Contact journalists covering tech/labor
  • Social media: Document and share your experience
  • Labor organizing: Join or form worker associations

Use Available Leverage

  • Leverage NYC audit law to access bias data
  • Cite EU AI Act for global platforms
  • Reference EEOC guidance on AI discrimination
  • Point to class action precedents like Workday

Document Everything

  • Screenshot all metrics, ratings, and communications
  • Save emails and messages as PDFs
  • Note dates, times, and exact wording
  • Create backup copies off-platform

Be Strategic

  • File complaints while evidence is fresh
  • Don't accept settlement without understanding full value
  • Consider whether individual or class approach is better
  • Work with attorneys who understand algorithmic discrimination

---

The Future: Algorithm Accountability

Coming Regulations

| Jurisdiction | Development |
|--------------|-------------|
| Federal (U.S.) | Algorithmic Accountability Act (proposed) |
| Additional states | Following NYC and Colorado models |
| EU | Full AI Act enforcement by 2027 |
| UK | Algorithmic transparency requirements expanding |
| Australia | Fair Work Act covering algorithmic management |

Emerging Legal Theories

  • Negligent algorithm design: Failure to test for bias
  • Strict liability: For high-risk AI applications
  • Fiduciary duty: Platform obligations to workers
  • Fraud/deception: For concealing algorithmic decision-making

---

Resources

Legal Help

| Resource | Focus |
|----------|-------|
| NELA (nela.org) | National Employment Lawyers Association |
| Legal Aid | Free legal services by location |
| ACLU | Civil liberties, including employment |
| EFF | Electronic Frontier Foundation |

Advocacy Organizations

| Organization | Focus |
|--------------|-------|
| Algorithmic Justice League | AI bias research and advocacy |
| AI Now Institute | Policy research on AI harms |
| Data & Society | Tech and social impact research |
| Partnership on AI | Industry-academic collaboration |

Government Agencies

| Agency | Website |
|--------|---------|
| EEOC | eeoc.gov |
| FTC | ftc.gov |
| NYC DCWP | nyc.gov/dcwp |
| Colorado AG | coag.gov |
| California CRD (formerly DFEH) | calcivilrights.ca.gov |

Research

| Institution | Focus |
|-------------|-------|
| Stanford HAI | Human-Centered AI research |
| MIT Media Lab | Algorithm accountability |
| Oxford Internet Institute | Digital economy research |
| Georgetown Law | AI and civil rights |

---

Conclusion: You Can Win

Algorithmic discrimination is real, measurable, and illegal. The legal framework to fight it is stronger than ever:

  • Mobley v. Workday proved AI vendors can be held liable
  • NYC Local Law 144 requires bias audits and disclosure
  • Colorado AI Act mandates risk management and impact assessments
  • EU AI Act classifies employment AI as high-risk with severe penalties
  • EEOC actively supports workers challenging AI discrimination

The platforms want you to believe algorithms are objective, inevitable, and unchallengeable. They're wrong on all counts.

Your action plan:

  • Know your rights under federal, state, and local law
  • Document everything from the first suspicious metric drop
  • Request your data under CCPA, GDPR, or equivalent laws
  • Find others affected by the same patterns
  • File complaints with EEOC and state agencies
  • Demand arbitration or pursue legal action
  • Consider class action if discrimination is widespread
  • Never accept unexplained algorithmic decisions

The era of unchecked algorithmic discrimination is ending. Make sure you're part of ending it.

---

This guide provides general information and does not constitute legal advice. For specific situations, consult an attorney experienced in algorithmic discrimination and employment law.

Sources: EEOC, EU AI Act, NYC DCWP, Colorado AG, Algorithmic Justice League, AI Now Institute

Last updated: December 2025

Tags

Algorithmic Discrimination
AI Bias
Employment Law
Gig Economy
Worker Rights
EU AI Act
EEOC
Class Action
Platform Work
