Digital Rights

Platform Algorithm Harm: Your Rights Against Harmful AI Systems

Algorithmic systems on social media and technology platforms can cause serious harm through discriminatory decisions, addictive design, mental health impacts, and the amplification of harmful content. Know your rights under EU and UK digital regulations.

€500M+
Maximum fine for DSA violations
80%
of teenagers report algorithmic harm
6+
countries with strong digital rights laws
2024-2025
New DSA enforcement period

Check Whether You Have a Valid Claim

Our AI will analyze your description and guide you through the next steps

Historic multidistrict litigation consolidating hundreds of lawsuits against Meta (Facebook, Instagram), TikTok, YouTube, and Snapchat. Plaintiffs allege the platforms knowingly designed addictive algorithms that harm adolescent mental health, causing anxiety, depression, eating disorders, self-harm, and suicide. More than 40 U.S. states sued Meta in October 2023 over addictive algorithms targeting youth, and 14 states sued TikTok separately. New York City became the first major city to sue (February 2024), alleging platforms 'purposefully manipulate and addict children and teens.' The consolidated cases grew from 620 in November 2024 to 2,172 by October 2025. Internal documents (the Facebook Files and whistleblower Frances Haugen's testimony) revealed the platforms knew their algorithms harmed teens but prioritized engagement over safety.

Coordinated enforcement action by more than 40 states against Meta (October 2023) and by 14 states against TikTok (October 2024) for algorithmic harm to youth mental health. The states allege the platforms violated consumer protection laws, deceptive trade practices acts, and child safety statutes by designing addictive algorithms targeting children. Claims include: algorithms amplify harmful content (eating disorders, self-harm, suicide); engagement-maximizing features create addiction; platforms misled the public about safety measures; and inadequate age verification allows children under 13 to use the platforms and be subjected to algorithmic profiling.

The European Commission opened formal DSA proceedings against multiple platforms for algorithmic violations in 2024. The TikTok investigation (February 2024) examines addictive algorithm design, particularly as it affects children, and harmful content amplification; TikTok withdrew its 'Rewards' program after the EU found it potentially addictive for children. The Meta investigation focuses on whether Instagram and Facebook algorithms encourage addictive behaviors in children and create 'rabbit-hole effects' leading to endless harmful content. Temu is under investigation for recommendation systems promoting prohibited items and potentially addictive design, and AliExpress proceedings (March 2024) examine recommender system compliance. The EU has the power to order interim measures, including algorithm changes, keyword monitoring, or suspension of features.

Class action lawsuit alleging Facebook violated Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric data from facial recognition algorithms without proper consent. Settlement required Facebook to pay $650 million to Illinois users, one of the largest privacy settlements in history. Established that algorithmic processing of biometric data requires explicit informed consent and that violations carry substantial financial liability.

Texas Attorney General sued Meta for violations of Texas biometric privacy law and deceptive trade practices related to facial recognition algorithms. Lawsuit alleges Meta captured biometric data of millions of Texans without consent through photo tagging and facial recognition features. Also alleges Meta's algorithms amplified harmful content while misrepresenting safety measures. Case survived Meta's Section 230 defense.

FTC alleged BetterHelp, an online counseling platform, promised users their health information would be kept private but instead shared sensitive mental health data with Facebook, Snapchat, Criteo, and Pinterest for advertising algorithms. Data included email addresses, IP addresses, and details about mental health challenges and treatment preferences. BetterHelp used this data to target ads at users and track them across the internet.

Lawsuit challenging President Trump's blocking of critics on Twitter. Plaintiffs argued that algorithmic curation on Twitter's platform, combined with Trump's use of @realDonaldTrump as official presidential communication, created a public forum from which government officials cannot exclude individuals based on viewpoint. Court examined how Twitter's algorithms shaped public discourse and access to government communication.

Lawsuit alleging Baidu's search algorithm censored pro-democracy content and speech critical of the Chinese government, violating New York human rights laws. Plaintiffs claimed Baidu's algorithm discriminated based on political viewpoint. The court dismissed the case on First Amendment grounds, holding that search algorithms constitute editorial judgment protected by the First Amendment, even when that judgment implements political censorship.

Class actions against Robinhood following its decision to halt trading in GameStop and other 'meme stocks' during January 2021 volatility. Plaintiffs allege Robinhood's algorithms manipulated markets, that halting violated fiduciary duties, and that Robinhood prioritized relationships with hedge funds over retail investors. Also allege algorithmic trading restrictions were selectively applied in discriminatory manner.

ACLU and partners challenged U.S. Customs and Border Protection's use of facial recognition algorithms at airports and borders, alleging the technology is racially biased, violates Fourth Amendment rights, and lacks proper oversight. Evidence showed facial recognition algorithms have higher error rates for people of color, women, and elderly individuals, leading to discriminatory enforcement.

Landmark UK case challenging police use of automated facial recognition (AFR) technology. Court of Appeal ruled that South Wales Police's use of AFR violated privacy rights (Article 8 ECHR), data protection law, and public sector equality duty. Court found the legal framework was insufficient to prevent algorithmic bias and arbitrary use. Police failed to ensure the algorithmic system complied with Equality Act 2010.

Series of cases and regulatory actions challenging Schufa, Germany's largest credit rating agency, over lack of transparency in credit scoring algorithms. Individuals exercised GDPR Article 15 rights to understand algorithmic decisions affecting creditworthiness but Schufa refused to disclose scoring methodology, citing trade secrets. German DPA and courts have required greater transparency while balancing trade secret protections.

UK Information Commissioner's Office fined TikTok for failing to protect children's privacy and data. Investigation found TikTok processed data of children under 13 without parental consent, used this data to feed recommendation algorithms, enabled unknown adults to send messages to children, and had insufficient age verification. ICO found TikTok's algorithms amplified risks to children by using their data to create addictive, personalized content feeds.

Irish Data Protection Commission imposed record €1.2 billion fine on Meta for continuing to transfer EU user data to United States for algorithmic processing without adequate safeguards following Schrems II decision. Meta's advertising and content recommendation algorithms rely on vast data processing, much of which occurred on US servers without proper legal mechanisms protecting against US surveillance.


Frequently Asked Questions

How can I tell whether an algorithm has harmed me?

What is the difference between the EU Digital Services Act (DSA) and the AI Act?

Can I sue Facebook, TikTok, or YouTube for algorithm-driven mental health harm?

What evidence do I need to prove algorithmic harm?

Do children have special protections against algorithmic harm?

Can I claim compensation for discriminatory algorithms (e.g., in hiring or credit)?

What can I do if a platform's algorithm keeps showing harmful content I have reported?

How long does it take to obtain compensation for algorithmic harm?

What are my rights under GDPR Article 22 on automated decision-making?

Can I obtain my data to prove how a platform's algorithm treated me unfairly?

Are there organizations that can help me fight algorithmic harm?

What if I cannot afford a lawyer to pursue an algorithmic harm claim?

Ready to Challenge Algorithmic Harm?

Don't let platforms hide behind 'the algorithm.' You have rights, and we can help you exercise them.