Algorithmic systems on social media and technology platforms can cause serious harm through discriminatory decisions, addictive design, mental health impacts, and the amplification of harmful content. Know your rights under EU and UK digital regulations.
The EU Digital Services Act (DSA) is a comprehensive platform regulation requiring Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to assess and mitigate systemic risks from their algorithms, including risks to mental health, civic discourse, and fundamental rights. Platforms must provide transparency about recommendation systems, allow users to opt out of profiling-based recommendations, and establish accessible complaint mechanisms. Penalties can reach 6% of global annual turnover.
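To make the opt-out duty concrete, here is a minimal sketch of how a feed service might honor a user's choice of a recommendation option not based on profiling, such as a reverse-chronological feed. The function and field names are illustrative assumptions, not any platform's actual API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    post_id: str
    created_at: datetime
    engagement_score: float  # output of a profiling-based ranking model

def build_feed(posts: list[Post], profiling_opted_out: bool) -> list[Post]:
    """Order a user's feed while honoring a DSA-style recommender choice."""
    if profiling_opted_out:
        # No profiling: fall back to pure recency, which uses no
        # personal data beyond the posts themselves.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    # Profiling permitted: rank by the personalized model's score.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```

The point is architectural: the non-profiling path must be a real, selectable alternative rather than a degraded afterthought.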
The EU AI Act establishes risk-based regulation of AI systems, categorizing them as unacceptable risk (banned), high-risk (strict requirements), limited risk (transparency obligations), or minimal risk. High-risk AI includes systems used in employment, education, law enforcement, credit scoring, and essential services. The Act requires conformity assessments, human oversight, transparency, and accountability. Violations can result in fines of up to €35 million or 7% of global turnover, whichever is higher.
Article 22 of the GDPR provides the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal effects or similarly significantly affect the individual. Individuals have the right to obtain human intervention, express their point of view, and contest the decision. Controllers must implement suitable measures to safeguard these rights and provide meaningful information about the logic involved.
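As one illustration of what "meaningful information about the logic involved" can look like, the sketch below uses a deliberately simple linear scoring model. The feature names, weights, and threshold are hypothetical assumptions, and this is not a method prescribed by the GDPR, only one way a controller might surface the main factors behind an automated decision and offer a route to human review:

```python
# Hypothetical linear credit-scoring model; weights and features are
# illustrative assumptions, not a GDPR-prescribed methodology.
WEIGHTS = {
    "income": 0.4,
    "missed_payments": -1.2,
    "account_age_years": 0.3,
}
THRESHOLD = 0.0  # scores at or above this are approved

def decide_and_explain(applicant: dict) -> dict:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Rank factors by their absolute influence on this decision.
    top_factors = sorted(contributions, key=lambda f: abs(contributions[f]),
                         reverse=True)
    return {
        "approved": score >= THRESHOLD,
        "top_factors": top_factors,      # the "logic involved", per applicant
        "human_review_available": True,  # Art. 22(3): right to human intervention
    }

print(decide_and_explain(
    {"income": 1.5, "missed_payments": 2, "account_age_years": 4}))
# -> {'approved': False, 'top_factors': ['missed_payments',
#     'account_age_years', 'income'], 'human_review_available': True}
```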
The UK Online Safety Act 2023 requires online platforms to protect users, especially children, from harmful content and the algorithmic amplification of harm. Platforms must assess the risks posed by their services, including algorithms that recommend or prioritize content. Category 1 services have enhanced duties, including user empowerment tools. Ofcom is the regulator, with powers to impose fines of up to £18 million or 10% of global turnover, whichever is greater.
The UK Equality Act 2010 prohibits discrimination based on protected characteristics, including age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. It applies to algorithmic decision-making in employment, services, education, and public functions. Employers and service providers are liable for discriminatory algorithms they deploy, even where the discrimination was unintentional.
The California Delete Act (SB 362) establishes a one-click mechanism for consumers to request deletion of their personal data from all data brokers registered in California. It is particularly relevant for challenging algorithmic profiling built on aggregated consumer data. Data brokers must register with the California Privacy Protection Agency and honor deletion requests within 45 days. Violations are subject to civil penalties and enforcement actions.
New York City Local Law 144 requires employers and employment agencies using automated employment decision tools (AEDTs) to conduct annual bias audits covering race/ethnicity and sex, publish the audit results, and provide notice to candidates and employees. It applies to AI and algorithmic tools that substantially assist or replace discretionary decision-making in hiring and promotion. Violations are subject to civil penalties of up to $1,500 per violation.
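The arithmetic at the heart of such a bias audit is a selection rate and an impact ratio per demographic category. Here is a minimal sketch with invented counts; a real audit must use the employer's actual historical data and be conducted by an independent auditor:

```python
# Invented applicant counts by category; a real audit uses the
# employer's historical AEDT data, not toy numbers.
applicants = {"group_a": 200, "group_b": 150}
selected   = {"group_a": 50,  "group_b": 15}

# Selection rate: share of each category the tool advanced.
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each category's rate divided by the highest rate.
highest = max(rates.values())
impact = {g: rates[g] / highest for g in rates}

for g in rates:
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {impact[g]:.2f}")
# group_a: selection rate 0.25, impact ratio 1.00
# group_b: selection rate 0.10, impact ratio 0.40
# The published audit would have to disclose this disparity.
```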
The Illinois Artificial Intelligence Video Interview Act regulates employers' use of AI to analyze video interviews. It requires employers to: (1) notify applicants that AI will be used and explain how it works, (2) obtain consent, (3) limit video sharing to persons with the expertise to evaluate the AI, and (4) delete videos within 30 days of a request. It was the first law in the nation specifically addressing algorithmic hiring decisions. Violations are subject to a private right of action.
Historic multidistrict litigation consolidating hundreds of lawsuits against Meta (Facebook, Instagram), TikTok, YouTube, and Snapchat. Plaintiffs allege the platforms knowingly designed addictive algorithms that harm adolescent mental health, causing anxiety, depression, eating disorders, self-harm, and suicide. More than 40 U.S. states sued Meta in October 2023 over addictive algorithms targeting youth, and 14 state attorneys general sued TikTok separately. New York City became the first major city to sue (February 2024), alleging the platforms 'purposefully manipulate and addict children and teens.' The consolidated cases grew from 620 in November 2024 to 2,172 by October 2025. Internal documents (the Facebook Files and whistleblower Frances Haugen's testimony) revealed that the platforms knew their algorithms harmed teens but prioritized engagement over safety.
Coordinated enforcement actions by more than 40 states against Meta (October 2023) and by 14 state attorneys general against TikTok (October 2024) for algorithmic harm to youth mental health. The states allege the platforms violated consumer protection laws, deceptive trade practices acts, and child safety statutes by designing addictive algorithms targeting children. Claims include: algorithms amplify harmful content (eating disorders, self-harm, suicide); engagement-maximizing features create addiction; the platforms misled the public about safety measures; and inadequate age verification allows children under 13 to use the platforms and be subjected to algorithmic profiling.
The European Commission opened formal DSA proceedings against multiple platforms for algorithmic violations in 2024. The TikTok investigation (February 2024) examines addictive algorithm design, particularly as it affects children, and the amplification of harmful content; TikTok withdrew the TikTok Lite 'Rewards' programme after the EU found it potentially addictive for children. The Meta investigation focuses on whether Instagram and Facebook algorithms encourage addictive behaviors in children and create 'rabbit-hole effects' leading to endless harmful content. Temu is under investigation over recommendation systems promoting prohibited items and potentially addictive design, and the AliExpress proceedings (March 2024) examine recommender system compliance. The EU has the power to order interim measures: algorithm changes, keyword monitoring, or suspension of features.
In re Facebook Biometric Information Privacy Litigation: a class action alleging Facebook violated the Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric data from facial recognition algorithms without proper consent. The settlement required Facebook to pay $650 million to Illinois users, one of the largest privacy settlements in history. It established that algorithmic processing of biometric data requires explicit informed consent and that violations carry substantial financial liability.
The Texas Attorney General sued Meta for violations of Texas biometric privacy law and deceptive trade practices related to facial recognition algorithms. The lawsuit alleges Meta captured the biometric data of millions of Texans without consent through photo tagging and facial recognition features, and that Meta's algorithms amplified harmful content while misrepresenting safety measures. The case survived Meta's Section 230 defense, and in July 2024 Meta agreed to a $1.4 billion settlement with Texas.
The FTC alleged that BetterHelp, an online counseling platform, promised users their health information would be kept private but instead shared sensitive mental health data with Facebook, Snapchat, Criteo, and Pinterest for advertising algorithms. The data included email addresses, IP addresses, and details about mental health challenges and treatment preferences, which BetterHelp used to target ads at users and track them across the internet. The resulting FTC order required BetterHelp to pay $7.8 million in consumer refunds.
Knight First Amendment Institute v. Trump: a lawsuit challenging President Trump's blocking of critics on Twitter. The plaintiffs argued that algorithmic curation on Twitter's platform, combined with Trump's use of @realDonaldTrump for official presidential communication, created a public forum from which government officials cannot exclude individuals based on viewpoint. The court examined how Twitter's algorithms shaped public discourse and access to government communication.
Zhang v. Baidu: a lawsuit alleging Baidu's search algorithm censored pro-democracy content and speech critical of the Chinese government, in violation of New York human rights laws. The plaintiffs claimed Baidu's algorithm discriminated based on political viewpoint. The court dismissed the case on First Amendment grounds, holding that search algorithms constitute editorial judgment protected by the First Amendment, even when that judgment implements political censorship.
Class actions against Robinhood following its decision to halt trading in GameStop and other 'meme stocks' during the January 2021 volatility. Plaintiffs allege that Robinhood's algorithms manipulated markets, that the halt violated fiduciary duties, and that Robinhood prioritized its relationships with hedge funds over retail investors. They also allege that algorithmic trading restrictions were applied selectively and in a discriminatory manner.
The ACLU and partners challenged U.S. Customs and Border Protection's use of facial recognition algorithms at airports and borders, alleging the technology is racially biased, violates Fourth Amendment rights, and lacks proper oversight. Evidence showed that facial recognition algorithms have higher error rates for people of color, women, and elderly individuals, leading to discriminatory enforcement.
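The bias evidence in such challenges rests on straightforward error-rate arithmetic: comparing how often the algorithm falsely matches people across demographic groups. A hedged sketch with invented counts follows; real evaluations, such as NIST's face recognition vendor tests, use large labeled datasets and controlled protocols:

```python
# Invented counts for illustration only; real demographic evaluations
# use large labeled test sets, not toy numbers.
results = {
    # group: (false matches, non-matching comparisons attempted)
    "group_a": (10, 100_000),
    "group_b": (45, 100_000),
}

# False match rate: how often the algorithm wrongly declares a match.
fmr = {g: fm / trials for g, (fm, trials) in results.items()}
for g, rate in fmr.items():
    print(f"{g}: false match rate {rate:.5f}")

# At a single shared match threshold, group_b members are falsely
# flagged several times as often: the discriminatory-enforcement
# pattern alleged above.
print(f"ratio: {fmr['group_b'] / fmr['group_a']:.1f}x")
```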
R (Bridges) v Chief Constable of South Wales Police: a landmark UK case challenging police use of automated facial recognition (AFR) technology. The Court of Appeal ruled that South Wales Police's use of AFR violated privacy rights (Article 8 ECHR), data protection law, and the public sector equality duty. The court found the legal framework insufficient to prevent algorithmic bias and arbitrary use, and that the police had failed to ensure the algorithmic system complied with the Equality Act 2010.
A series of cases and regulatory actions challenging Schufa, Germany's largest credit rating agency, over the lack of transparency in its credit scoring algorithms. Individuals exercised GDPR Article 15 rights to understand algorithmic decisions affecting their creditworthiness, but Schufa refused to disclose its scoring methodology, citing trade secrets. German data protection authorities and courts have required greater transparency while balancing trade secret protections, and in December 2023 the EU Court of Justice held (Case C-634/21) that a Schufa score can itself be a decision based solely on automated processing under GDPR Article 22 where it plays a determining role in whether credit is granted.
The UK Information Commissioner's Office fined TikTok £12.7 million for failing to protect children's privacy and data. The investigation found that TikTok processed the data of children under 13 without parental consent, used this data to feed its recommendation algorithms, enabled unknown adults to send messages to children, and had insufficient age verification. The ICO found that TikTok's algorithms amplified risks to children by using their data to create addictive, personalized content feeds.
The Irish Data Protection Commission imposed a record €1.2 billion fine on Meta in May 2023 for continuing to transfer EU user data to the United States for algorithmic processing without adequate safeguards following the Schrems II decision. Meta's advertising and content recommendation algorithms rely on vast data processing, much of which occurred on US servers without proper legal mechanisms protecting against US surveillance.
Don't let platforms hide behind "the algorithm." You have rights, and we can help you exercise them.