Court Allows Wrongful Death Suit Against Meta and TikTok to Proceed

By Ashley Belanger – Updated Jul 15, 2025
Overview
In a landmark decision on July 14, 2025, New York State Supreme Court Judge Paul Goetz denied motions by Meta Platforms and TikTok owner ByteDance to dismiss a wrongful death lawsuit brought by Norma Nazario, mother of 17-year-old Zackery, who died attempting a “subway surfing” stunt. The ruling opens discovery into whether the companies’ recommendation algorithms actively targeted a minor with dangerous content, potentially setting a new precedent for algorithmic liability.
Background of the Case
Zackery Nazario opened the door of a moving subway train, climbed on top of the car, and had turned to look at his girlfriend when a beam on the Williamsburg Bridge struck his head. He fell between the cars and was fatally crushed. His mother alleges the fatal stunt was inspired by dozens of similar videos he encountered through curated recommendations on Instagram and Facebook (both Meta platforms) and on TikTok.
Key Legal Claims
- Section 230 Immunity: Defendants argued that Section 230 of the Communications Decency Act shields them from liability. Judge Goetz found the claims survive because Nazario alleges the platforms went beyond “passive hosting” by actively identifying at-risk underage users and steering dangerous content to them.
- First Amendment: Defendants asserted that algorithmic recommendations are protected speech. The court declined to extend blanket protection to algorithmic curation where a complaint plausibly alleges purposeful targeting of minors.
- Negligence & Duty of Care: The complaint asserts that both companies designed “addictive and dangerous” features without adequate warnings, breaching a duty to protect underage users from harm.
Technical Examination of Recommendation Algorithms
Modern social media platforms use multi-stage recommendation pipelines that combine the following stages (a simplified end-to-end sketch appears after this list):
- Data Ingestion: Real-time event streaming (e.g., user likes, shares, watch times) via Apache Kafka or similar systems.
- Feature Engineering: Extraction of behavioral features—session duration, interaction depth, engagement velocity—processed through Spark or Flink clusters.
- Model Inference: Deployment of deep learning models (e.g., transformer-based architectures) served via TensorFlow Serving or PyTorch TorchServe.
- Ranking & Personalization: Gradient-boosted decision trees or reinforcement learning modules rank candidate videos, optimizing for “dwell time” and “share probability.”
Experts estimate these pipelines can surface new content within 200–300 milliseconds of each user action, leaving little room for human review of potentially harmful recommendations.
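To make that loop concrete, below is a minimal, self-contained Python sketch of the four stages. Every name, feature, and weight is a hypothetical stand-in: production systems use streaming infrastructure and learned models (Kafka, Spark or Flink, served deep networks, GBDT rankers) rather than this toy scorer. What the sketch illustrates is the shape of the loop, and how a ranker optimized purely for engagement can favor extreme “challenge” content by default.

```python
# Toy sketch of the four pipeline stages described above.
# All names, features, and weights are illustrative assumptions,
# not any platform's actual code.
from dataclasses import dataclass


@dataclass
class Event:                          # Stage 1: an ingested interaction event
    user_id: str
    video_id: str
    watch_seconds: float
    shared: bool


@dataclass
class UserFeatures:                   # Stage 2: engineered behavioral features
    session_duration: float = 0.0     # total seconds watched this session
    interaction_depth: int = 0        # count of interactions (shares weighted)
    engagement_velocity: float = 0.0  # interactions per minute


def update_features(feats: UserFeatures, ev: Event) -> UserFeatures:
    """Fold one event into the running feature vector (feature engineering)."""
    feats.session_duration += ev.watch_seconds
    feats.interaction_depth += 1 + int(ev.shared)
    minutes = max(feats.session_duration / 60.0, 1e-6)
    feats.engagement_velocity = feats.interaction_depth / minutes
    return feats


def predict_dwell_time(feats: UserFeatures, tags: set) -> float:
    """Stages 3-4 stand-in: a hand-tuned linear scorer in place of a served
    deep model plus GBDT ranker. Real systems learn these weights."""
    score = 0.1 * feats.engagement_velocity + 0.01 * feats.interaction_depth
    if "challenge" in tags:   # an engagement-only objective can end up
        score *= 1.5          # boosting extreme content like this
    return score


def rank(feats: UserFeatures, candidates: list) -> list:
    """Order candidate (video_id, tags) pairs by predicted dwell time."""
    return sorted(candidates, key=lambda c: predict_dwell_time(feats, c[1]),
                  reverse=True)


if __name__ == "__main__":
    feats = update_features(UserFeatures(),
                            Event("u1", "v1", watch_seconds=45.0, shared=True))
    candidates = [("calm_video", {"music"}), ("stunt_video", {"challenge"})]
    print([vid for vid, _ in rank(feats, candidates)])
    # -> ['stunt_video', 'calm_video']
```

Note that the toy scorer never considers who the user is or how old they are; adding such a check is exactly the kind of design change discussed under “Potential Implications for Platform Design” below.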
Additional Context: Regulatory and Industry Developments
Following the ruling, U.S. lawmakers have revived discussions around algorithmic transparency bills. The Kids Online Safety Act (KOSA), reintroduced in 2025, would require platforms to give regulators visibility into recommendation decisions affecting minors. The Federal Trade Commission has also signaled forthcoming guidance on AI-driven content moderation and recommendations under Section 5 of the FTC Act.
Legal Precedents and Comparative Jurisdictions
“This case tests the bounds of Section 230 in an era of opaque AI-driven personalization,” says Professor Dana McKenzie of NYU Law. “If discovery shows these companies knew exactly which content would most deeply engage teens, they may face significant liability for foreseeable harm.”
In the European Union, the Digital Services Act (DSA), now in force, already mandates risk assessments for algorithmic systems. Very large platforms must assess and report the risks their recommender systems pose to minors, a disclosure regime that could influence U.S. courts to demand similar transparency.
Potential Implications for Platform Design
- Algorithmic Audits: Companies may implement third-party audits of ML models to ensure they don’t amplify high-risk behaviors disproportionately among underage users.
- Age-Gradient Filtering: Enhanced age verification and tiered recommendation pipelines that throttle or remove “challenge” content for registered minors (a minimal sketch of such tiering follows this list).
- Real-Time Risk Mitigation: Integration of real-time anomaly detection (e.g., via Amazon Rekognition or Google Cloud AI) to flag and demote potentially dangerous videos.
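As a companion to the age-gradient idea above, here is a hypothetical sketch of tiered filtering: candidates carrying assumed risk tags are removed outright for the youngest registered tier and heavily demoted for older minors. Tag names, age cutoffs, and the demotion factor are all illustrative assumptions, not any platform’s actual policy.

```python
# Hypothetical age-tiered filter: tag names, age cutoffs, and the
# demotion factor are illustrative assumptions, not real policy.
RISK_TAGS = {"challenge", "stunt", "dare"}  # assumed labels from an upstream classifier


def filter_for_age(candidates: list, user_age: int) -> list:
    """candidates: (video_id, tags, base_score) triples.
    Returns (video_id, adjusted_score) pairs, highest score first."""
    results = []
    for video_id, tags, score in candidates:
        risky = bool(tags & RISK_TAGS)
        if risky and user_age < 16:
            continue              # hard-remove for the youngest tier
        if risky and user_age < 18:
            score *= 0.2          # heavy demotion for 16- and 17-year-olds
        results.append((video_id, score))
    return sorted(results, key=lambda r: r[1], reverse=True)


if __name__ == "__main__":
    cands = [("cooking_howto", {"food"}, 0.8),
             ("subway_stunt", {"challenge"}, 0.9)]
    print(filter_for_age(cands, user_age=15))  # stunt removed entirely
    print(filter_for_age(cands, user_age=17))  # stunt demoted below the recipe
```

A filter like this depends entirely on reliable age signals and content labels, which is why the list above pairs it with enhanced age verification and automated risk detection.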
Next Steps in Litigation
The court has granted Nazario’s team leave to issue subpoenas and demand internal data on how Zackery’s profile was scored for risk. Meta and ByteDance must now produce:
- Model training logs and feature weightings for “challenge” content.
- Aggregate age-segmented recommendation statistics.
- Internal memos discussing the growth metrics for trending challenges.
Discovery will determine whether the platforms merely provided a neutral service or engaged in tortious conduct by “weaponizing” recommendation AI against vulnerable users.
Conclusion
Judge Goetz’s ruling underscores growing judicial scrutiny of AI-driven personalization in social media. As the lawsuit progresses, it may chart new territory on platform accountability, algorithmic transparency, and the legal limits of Section 230 immunity.