
AI Bias & Discrimination: Get Compensation for Algorithmic Harm

From biased hiring algorithms to discriminatory lending and facial recognition systems, learn how to fight back against AI discrimination. GDPR Article 22 rights, NYC Law 144 protections, class action settlements, and individual lawsuits for algorithmic bias.

  • $228M: Class action potential against HireVue AI video interviews (bias claims)
  • 83%: Share of companies using AI in hiring (2024), many with unaudited bias
  • 47%: Facial recognition error rate for dark-skinned women, vs. 1% for white men
  • €20M: Maximum GDPR fine, plus individual damages for automated-decision discrimination


AI Bias & Discrimination: The $228 Million Algorithmic Injustice Crisis

Artificial intelligence has quietly revolutionized hiring, lending, housing, insurance, and criminal justice—but the revolution has a dark side. Study after study reveals that AI systems, far from being neutral mathematical tools, systematically discriminate against protected groups. Amazon scrapped its AI recruiting tool after discovering it penalized resumes containing the word "women's" (as in "women's chess club"). HireVue's AI video interview system faced a $228 million class action for allegedly discriminating against disabled candidates. Facial recognition systems misidentify Black women at rates up to 47 times higher than white men, leading to wrongful arrests. And lending algorithms deny mortgages to qualified minority applicants at rates 40-80% higher than white applicants with identical credit profiles.

These aren't glitches—they're features baked into systems trained on biased historical data (Amazon's AI learned from resumes submitted to a tech company over the prior decade, the overwhelming majority from men) or built on flawed proxies for "creditworthiness" or "job fit" that correlate with protected characteristics (zip code proxies for race, career gaps for gendered caregiving, college prestige for socioeconomic class). The result: systemic discrimination at scale, affecting millions of people who never get a human review, never see a rejection reason, and never learn that their race or gender was the deciding factor.

But the law is catching up. GDPR Article 22 gives Europeans the right to challenge automated decisions and demand human review. NYC Local Law 144 (2023) requires bias audits for all AI hiring tools used on NYC residents. Illinois' AI Video Interview Act mandates disclosure and consent for AI analysis of video interviews. California's CCPA grants access to the personal data (including AI scores) that companies use to make decisions. And federal civil rights laws (Title VII for employment, Fair Housing Act, Equal Credit Opportunity Act) apply to AI discrimination just as they do to human discrimination—companies can't hide behind "the algorithm did it."

Compensation comes from three sources: (1) Class action settlements for systemic bias ($200-$5,000 per person typical, with Facebook paying $14.25M to resolve DOJ hiring-discrimination claims and HireVue facing $228M in claims); (2) Individual lawsuits for severe harm ($10,000-$50,000 for job loss, wrongful arrest, or financial denial with strong evidence of disparate impact); (3) GDPR Article 82 claims in Europe (€2,000-€20,000 for discrimination-based emotional distress, higher if there is financial harm). This guide shows you how to identify AI discrimination, gather evidence, and pursue every avenue for compensation.

Major AI Discrimination Cases & Settlements

Facebook $14.25 Million DOJ Settlement (2021) & Discriminatory Ad Targeting

DOJ's October 2021 settlement resolved claims that Facebook unlawfully favored temporary visa holders over qualified U.S. workers: a $4.75M civil penalty plus up to $9.5M to compensate workers denied opportunities ($14.25M total). Facebook/Meta has separately faced enforcement over letting advertisers target job and housing ads by age, gender, and race (e.g., nursing jobs shown only to women, housing ads excluding families with children); under a 2022 DOJ Fair Housing Act settlement, Meta agreed to stop demographic targeting for housing ads and to overhaul its ad-delivery system.

Amazon AI Recruiting Tool (2018): Gender Bias

Amazon scrapped its AI resume screening tool after discovering it penalized resumes containing "women's" (e.g., "women's chess club captain") and downranked graduates of two all-women's colleges. Trained on 10 years of resumes submitted to Amazon (overwhelmingly male), the AI learned male = good candidate. No settlement (Amazon killed tool before lawsuit), but widely cited in Title VII cases as proof AI replicates historical bias.

HireVue AI Video Interviews (Ongoing): $228M Class Action Potential

HireVue's AI analyzes video interviews—facial expressions, tone, word choice, speech patterns—to score candidates. Electronic Privacy Information Center (EPIC) filed FTC complaint alleging disability discrimination (penalizes autistic candidates, facial paralysis, speech impediments) and lack of transparency. Potential class action could involve 100M+ candidates subjected to HireVue AI since 2015. Estimated damages: $228M ($2-$5 per person for privacy violation, $500-$5,000 for denied opportunities).

Clearview AI Facial Recognition ($50M+ Settlements)

Clearview AI scraped 3 billion photos from social media to build a facial recognition database sold to police. Lawsuits in Illinois (BIPA), California (CCPA), and Vermont allege privacy violations and disparate impact (higher error rates for minorities leading to wrongful arrests). Settlements: an Illinois BIPA class settlement valued at roughly $50M, and a 2022 ACLU consent decree barring Clearview from selling its database to most private companies. Individual wrongful-arrest victims have sued for $100K-$500K.

Upstart Lending Algorithm: Fair-Lending Scrutiny

Upstart uses AI to approve loans based on 1,600+ variables (education, employment history, application behavior). After advocacy groups alleged the algorithm effectively used proxies for race (for example, charging graduates of historically Black colleges worse rates than similarly situated white applicants), Upstart agreed to independent fair-lending monitoring for disparate impact, and the CFPB terminated its no-action letter in 2022. Ongoing private lawsuits seek $50M-$100M in class damages.

COMPAS Recidivism Algorithm (Criminal Justice)

COMPAS AI predicts recidivism risk for parole/sentencing decisions. ProPublica investigation found it falsely flagged Black defendants as "high risk" at twice the rate of white defendants (45% vs 23% false positive rate). Wisconsin Supreme Court upheld use (Loomis v. Wisconsin), but mandated warnings about accuracy limitations. No individual compensation, but several states (California, Alaska) have banned or restricted algorithmic risk assessments.

How Much Compensation Can I Get for AI Discrimination?

AI Hiring Discrimination

  • Class action: $200-$2,000 per person (technical violations like no notice), $500-$5,000 if discrimination proven.
  • Individual lawsuit: $10,000-$50,000 for job denial with strong disparate impact evidence (expert analysis showing algorithm penalizes protected class, proof you were qualified, lost wages calculation).
  • GDPR (EU): €2,000-€20,000 for emotional distress from discriminatory rejection + right to human review.
  • Title VII (US employment): Back pay (wages you would have earned) + front pay (future lost earnings if you can't find comparable job) + emotional distress damages. Settlements often $50,000-$200,000 for proven discrimination.

AI Lending/Credit Discrimination

  • Class action: $500-$3,000 per person (difference in interest rates × loan amount = actual damages).
  • Individual lawsuit: $5,000-$30,000 actual damages (higher interest paid over loan term) + statutory damages $10,000-$20,000 + punitive damages up to $500,000 for willful discrimination.
  • ECOA allows attorney's fees, so no upfront cost. Many consumer attorneys take cases on contingency.

AI Housing Discrimination

  • Class action: $1,000-$5,000 per person (limited availability of claims—most pursue individual).
  • Individual lawsuit: $10,000-$50,000 emotional distress (humiliation, housing insecurity) + actual damages (cost of alternative housing, moving expenses) + punitive damages.
  • Fair Housing Act: Allows unlimited compensatory and punitive damages. Jury verdicts can exceed $100,000 for egregious discrimination.

Facial Recognition Wrongful Arrest

  • Individual lawsuit: $100,000-$1,000,000 (false arrest, false imprisonment, emotional distress, lost wages, reputational harm). Robert Williams (Detroit) sued over his wrongful arrest; Detroit settled in 2024 for $300,000 plus changes to its facial recognition policies. Porcha Woodruff (Detroit, arrested while 8 months pregnant): lawsuit ongoing, seeking $5M+.
  • Municipal liability: Police departments using facial recognition without bias testing face Section 1983 civil rights claims. Cities may settle for $500K-$2M to avoid trial.

AI Insurance Discrimination

  • Class action: $200-$2,000 per person (overpaid premiums refunded).
  • Individual lawsuit: $5,000-$25,000 if you can prove comparable risk but a higher premium based on a proxy for a protected class (e.g., zip code as a race proxy, or gender in health insurance pricing).

How to Prove AI Bias: Evidence You Need

AI discrimination is hard to prove because algorithms are "black boxes." But there are five evidence types that work:

1. Disparate Impact Statistics

If you can show the AI disproportionately harms your protected class, you don't need to prove intent. Example: expert analysis reveals a lender's AI denies Black applicants at twice the rate of white applicants with the same credit score and income. This alone can win a lawsuit. Cost: $5,000-$20,000 for expert statistical analysis, but many civil rights attorneys cover it upfront.
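A disparate-impact showing often starts with the EEOC's "four-fifths rule": if a protected group's selection rate is below 80% of the most-favored group's rate, that is treated as prima facie evidence of adverse impact. A minimal sketch with hypothetical numbers (a real analysis would use the company's actual applicant-flow data obtained in discovery):

```python
# Hypothetical applicant-flow numbers; real cases use data from discovery.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """EEOC four-fifths rule: a ratio below 0.8 suggests adverse impact."""
    return rate_protected / rate_reference

# Example: an AI screener advances 60 of 400 Black applicants
# vs. 150 of 500 white applicants.
rate_black = selection_rate(60, 400)   # 0.15
rate_white = selection_rate(150, 500)  # 0.30
ratio = impact_ratio(rate_black, rate_white)

print(f"Impact ratio: {ratio:.2f}")  # well below the 0.8 threshold
```

The four-fifths rule is a screening heuristic; litigation experts typically supplement it with statistical-significance tests on larger samples.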

2. Lack of Bias Audit

If a company is subject to NYC Law 144 (or similar future laws) and didn't conduct the required bias audit, that's powerful evidence it was reckless about discrimination. The same goes if an audit revealed bias but the company deployed the AI anyway.

3. Discriminatory Proxies

Show the AI uses variables that correlate with protected characteristics: Zip code (race), college prestige (class/race), career gaps (gender caregiving), speech patterns (disability), age of Facebook profile (age). ECOA requires lenders to disclose "principal reasons" for denial—request this and look for proxies.

4. Comparator Evidence

Find someone with similar qualifications but a different protected characteristic who got hired or approved. Example: you and a white colleague applied for the same job with the same qualifications; he got an interview (the AI ranked him 8/10), you didn't (the AI ranked you 3/10). This suggests the AI penalized your race or gender.

5. Company Admissions

Amazon admitted its recruiting AI was biased (Reuters report 2018). HireVue admitted AI scored disabled candidates lower (EPIC FTC complaint). Meta admitted racial ad targeting (DOJ settlement). If company has admitted bias or settled prior claims, cite that as proof they knew about the problem.

How to File an AI Discrimination Claim

Step 1: Identify That You Were Subject to an AI Decision

Look for clues: instant rejection (no human reviewed your resume in three seconds), a generic rejection reason ("not qualified"), a company that touts its AI hiring efficiency, or a job posting that mentions an "AI-powered applicant tracking system." Exercise your GDPR/CCPA rights to request the data collected, your AI scores, and the decision logic. Companies must respond within 30-45 days.

Step 2: Request Your Data (GDPR Article 15 / CCPA)

Send written request: "Pursuant to [GDPR Article 15 / CCPA Section 1798.110], I request access to all personal data you collected about me, including AI-generated scores, risk assessments, rankings, and the logic of automated decision-making." Include: your name, dates you applied, position/loan/apartment applied for, identity verification. Keep copy of request.

Step 3: Document the Harm

Calculate damages: Lost wages (salary of job you didn't get × months unemployed), higher interest paid (difference in loan rates × loan amount × years), emotional distress (therapy costs, journal entries documenting anxiety/depression), out-of-pocket costs (credit repair, legal fees). Strong documentation is worth $5,000-$20,000 in settlements.
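The damages arithmetic above can be laid out as a simple worksheet. All figures below are made up for illustration; a real claim would substitute your actual salary, loan terms, and receipts (and use a full amortization schedule for the interest component):

```python
# Hypothetical damages worksheet -- every number here is illustrative.
def lost_wages(monthly_salary: float, months_unemployed: int) -> float:
    return monthly_salary * months_unemployed

def extra_interest(loan_amount: float, rate_received: float,
                   rate_comparable: float, years: int) -> float:
    # Simplified: ignores amortization, which a real claim would account for.
    return loan_amount * (rate_received - rate_comparable) * years

wages    = lost_wages(6_000, 4)                       # $24,000 in lost salary
interest = extra_interest(250_000, 0.072, 0.065, 30)  # ~$52,500 in extra interest
therapy  = 120 * 10                                   # $1,200 in therapy receipts

total = wages + interest + therapy
print(f"Documented damages estimate: ${total:,.0f}")  # ~$77,700
```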

Step 4: File an Administrative Charge (US Employment/Housing)

Before suing for Title VII or Fair Housing Act discrimination, you must first file with the appropriate agency: EEOC (employment): eeoc.gov/filing-charge-discrimination; HUD (housing): hud.gov/program_offices/fair_housing_equal_opp/online-complaint. Deadlines: 180-300 days (EEOC) or 1 year (HUD) from the discrimination. The agency investigates for 6-12 months, then issues a "right to sue" letter. Credit claims (ECOA) don't require an administrative charge, but a CFPB complaint (consumerfinance.gov/complaint) creates a record.

Step 5: Search for a Class Action or File an Individual Lawsuit

Google "[Company Name] AI bias class action" or check classaction.org. If class action exists, join by filing claim form (easy, no attorney needed). If no class action, consult civil rights attorney. Most work on contingency (33-40% of recovery, no upfront fee). Strong cases (clear disparate impact, documented harm >$10,000, large company with deep pockets) attract top attorneys.

Step 6: Consider a Regulatory Complaint

NYC Law 144: Report to NYC Department of Consumer and Worker Protection.

FTC (unfair/deceptive AI practices): reportfraud.ftc.gov.

EU: File complaint with national Data Protection Authority (e.g., ICO in UK, CNIL in France).

Regulatory fines pressure companies to settle private lawsuits quickly.


FAQ: AI Bias & Discrimination Claims

  • How can I tell if AI was used to reject me?
  • Can I sue even if the company didn't intend to discriminate?
  • What if I can't afford an attorney?
  • How long do I have to file a claim?
  • What's the realistic settlement range for AI discrimination?
  • Can my employer retaliate if I complain about AI bias?
  • Do I have to prove I would have been hired or approved without AI bias?
  • What if the company claims its AI is a proprietary "trade secret"?

Your AI Discrimination Action Plan

Follow these steps to identify AI bias, gather evidence, and pursue compensation

Step 1: Identify the AI Decision

Look for instant rejections, generic reasons, large company using applicant tracking systems. Request your data via GDPR Article 15 (EU) or CCPA (California). Ask: "Was AI used? What data did it analyze? What was my score?"

Step 2: Gather Evidence of Disparate Impact

Compare your qualifications to people who were hired/approved (same education, experience, but different race/gender). Research company: have they settled AI bias claims before? Did they conduct required bias audits? Look for news articles, EEOC complaints, FTC investigations.

Step 3: Document Your Damages

Calculate lost wages (job salary × months), higher interest paid (loan rate difference × amount × years), emotional distress costs (therapy receipts, medical records for anxiety/depression), time spent (hours applying elsewhere, credit repair). Detailed logs increase settlement value by $5,000-$15,000.

Step 4: File an Administrative Charge (US)

Employment: EEOC charge within 180-300 days. Housing: HUD complaint within 1 year. Credit: CFPB complaint within 2-5 years. Preserve right to sue. Agency may find cause and negotiate settlement, saving you cost of lawsuit.

Step 5: Search for a Class Action

Google "[Company Name] AI discrimination class action" or check classaction.org, topclassactions.com. If class action exists, file claim form to join (no attorney needed, takes 15 minutes). Monitor settlement websites for payout timelines (typically 12-24 months).

Step 6: Consult a Civil Rights Attorney

If damages >$10,000 or evidence is strong, consult attorney for individual lawsuit. Most work on contingency (no upfront cost). Prepare: timeline of events, rejection letters, comparable candidates who were hired, GDPR/CCPA data responses, financial loss calculations. Strong preparation increases attorney interest and settlement leverage.

Step 7: Consider Regulatory Complaints

File complaints with FTC (unfair AI practices), NYC DCWP (Law 144 violations), state attorney general (consumer protection), EU Data Protection Authority (GDPR violations). Regulatory investigations pressure companies to settle private claims quickly (avoid prolonged litigation + regulatory fines).