
Platform Algorithm Harm: Your Rights Against Harmful AI Systems

Algorithmic systems on social media and tech platforms can cause serious harm through discriminatory decisions, addictive design, mental health impacts, and amplification of harmful content. Know your rights under EU and UK digital regulations.

2,100+ active lawsuits against social media platforms
€1.2B: the largest GDPR fine for algorithmic violations
40+ states have filed lawsuits against Meta and TikTok
First bellwether trial begins November 2025


Historic multidistrict litigation consolidating hundreds of lawsuits against Meta (Facebook, Instagram), TikTok, YouTube, and Snapchat. Plaintiffs allege the platforms knowingly designed addictive algorithms that harm adolescent mental health, causing anxiety, depression, eating disorders, self-harm, and suicide. More than 40 U.S. states filed lawsuits against Meta in October 2023 over addictive algorithms targeting youth, and 14 state attorneys general sued TikTok separately in October 2024. New York City became the first major city to sue (February 2024), alleging platforms 'purposefully manipulate and addict children and teens.' The consolidated docket grew from 620 cases in November 2024 to 2,172 by October 2025. Internal documents (the Facebook Files and whistleblower Frances Haugen's testimony) revealed that the platforms knew their algorithms harmed teens but prioritized engagement over safety.

Coordinated enforcement actions by more than 40 states against Meta (October 2023) and by 14 state attorneys general against TikTok (October 2024) over algorithmic harm to youth mental health. The states allege the platforms violated consumer protection laws, deceptive trade practices acts, and child safety statutes by designing addictive algorithms targeting children. Claims include: algorithms amplify harmful content (eating disorders, self-harm, suicide); engagement-maximizing features create addiction; platforms misled the public about safety measures; and inadequate age verification allows children under 13 to use the platforms and be subjected to algorithmic profiling.

The European Commission opened formal DSA proceedings against multiple platforms for algorithmic violations in 2024. The TikTok investigation (February 2024) examines addictive algorithm design, particularly as it affects children, and the amplification of harmful content; TikTok withdrew its 'Rewards' program after the EU found it potentially addictive for children. The Meta investigation focuses on whether Instagram and Facebook algorithms encourage addictive behaviors in children and create 'rabbit-hole effects' leading to endless harmful content. Temu is under investigation over recommendation systems that promote prohibited items and over potentially addictive design, and the AliExpress proceedings (March 2024) examine the compliance of its recommender systems. The EU has the power to order interim measures, including algorithm changes, keyword monitoring, or suspension of features.

Class action lawsuit alleging Facebook violated the Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric data from facial recognition algorithms without proper consent. The settlement required Facebook to pay $650 million to Illinois users, one of the largest privacy settlements in history. The case established that algorithmic processing of biometric data requires explicit informed consent and that violations carry substantial financial liability.

The Texas Attorney General sued Meta for violations of Texas biometric privacy law and deceptive trade practices related to facial recognition algorithms. The lawsuit alleges that Meta captured the biometric data of millions of Texans without consent through photo tagging and facial recognition features, and that Meta's algorithms amplified harmful content while the company misrepresented its safety measures. The case survived Meta's Section 230 defense.

The FTC alleged that BetterHelp, an online counseling platform, promised users their health information would be kept private but instead shared sensitive mental health data with Facebook, Snapchat, Criteo, and Pinterest for advertising algorithms. The data included email addresses, IP addresses, and details about mental health challenges and treatment preferences, which BetterHelp used to target ads at users and track them across the internet.

Lawsuit challenging President Trump's blocking of critics on Twitter. The plaintiffs argued that algorithmic curation on Twitter's platform, combined with Trump's use of @realDonaldTrump for official presidential communication, created a public forum from which government officials cannot exclude individuals based on viewpoint. The court examined how Twitter's algorithms shaped public discourse and access to government communication.

Lawsuit alleging that Baidu's search algorithm censored pro-democracy content and speech critical of the Chinese government, in violation of New York human rights laws. The plaintiffs claimed Baidu's algorithm discriminated based on political viewpoint. The court dismissed the case on First Amendment grounds, holding that search algorithms constitute editorial judgment protected by the First Amendment, even when that judgment implements political censorship.

Class actions against Robinhood following its decision to restrict the purchase of GameStop and other 'meme stocks' during the January 2021 volatility. Plaintiffs allege that Robinhood's algorithms manipulated markets, that the trading restrictions violated fiduciary duties, and that Robinhood prioritized its relationships with hedge funds over retail investors. They also allege that the algorithmic trading restrictions were applied selectively and in a discriminatory manner.

The ACLU and partners challenged U.S. Customs and Border Protection's use of facial recognition algorithms at airports and borders, alleging the technology is racially biased, violates Fourth Amendment rights, and lacks proper oversight. Evidence showed that facial recognition algorithms have higher error rates for people of color, women, and elderly individuals, leading to discriminatory enforcement.

Landmark UK case (R (Bridges) v Chief Constable of South Wales Police, 2020) challenging police use of automated facial recognition (AFR) technology. The Court of Appeal ruled that South Wales Police's use of AFR violated privacy rights (Article 8 ECHR), data protection law, and the public sector equality duty. The court found the legal framework insufficient to prevent algorithmic bias and arbitrary use, and held that the police had failed to ensure the algorithmic system complied with the Equality Act 2010.

Series of cases and regulatory actions challenging Schufa, Germany's largest credit rating agency, over the opacity of its credit scoring algorithms. Individuals exercised their GDPR Article 15 access rights to understand algorithmic decisions affecting their creditworthiness, but Schufa refused to disclose its scoring methodology, citing trade secrets. German data protection authorities and courts have required greater transparency while balancing trade secret protections, and in December 2023 the CJEU held (Case C-634/21) that Schufa's credit scoring constitutes automated decision-making under GDPR Article 22 where lenders rely on the score decisively.

The UK Information Commissioner's Office fined TikTok £12.7 million (April 2023) for failing to protect children's privacy and data. The investigation found that TikTok processed the data of children under 13 without parental consent, used this data to feed its recommendation algorithms, enabled unknown adults to send messages to children, and had insufficient age verification. The ICO found that TikTok's algorithms amplified risks to children by using their data to create addictive, personalized content feeds.

The Irish Data Protection Commission imposed a record €1.2 billion fine on Meta (May 2023) for continuing to transfer EU user data to the United States for algorithmic processing without adequate safeguards following the Schrems II decision. Meta's advertising and content recommendation algorithms rely on vast data processing, much of which occurred on US servers without proper legal mechanisms protecting against US surveillance.


Frequently Asked Questions

How do I know if an algorithm caused me harm?

What's the difference between the EU Digital Services Act (DSA) and the AI Act?

Can I sue Facebook, TikTok, or YouTube for algorithm-driven mental health harm?

What can I do if a platform's algorithm keeps showing me harmful content I've reported?

How long does it take to get compensation for algorithmic harm?

Are there organizations that can help me fight algorithmic harm?

What should I do if I can't afford a lawyer to pursue my algorithmic harm claim?

What if a job recruitment algorithm discriminated against me?

Ready to Challenge Algorithmic Harm?

Don't let platforms hide behind 'the algorithm.' You have rights, and we can help you exercise them.