
The regulatory shift is fundamental: from “you’re liable if harm happens” to “you must proactively detect and intervene.” California’s private right of action is the watershed moment.
State-Level Regulation (United States)
New York (A3008C) — Effective November 5, 2025
- First state to regulate AI companions
- Requires a “reasonable protocol” to detect self-harm and suicidal ideation (see the sketch after this list)
- Mandatory disclosure, at session start and at least every three hours, that the user is talking to an AI, not a human
- Enforcement: AG civil penalties up to $15,000/day
- No private right of action
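What these obligations imply in code is concrete. Below is a minimal sketch, assuming a hypothetical `CompanionSession` chat loop; the keyword list, the `generate_reply` stub, and the 988 referral text are illustrative stand-ins for a trained classifier and a real crisis-referral flow, not anything the statute prescribes.

```python
import time

DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
DISCLOSURE_INTERVAL_S = 3 * 60 * 60  # NY-style re-disclosure at least every 3 hours

# Stand-in for a real self-harm classifier; a production system would use a
# trained model plus human escalation, not a keyword list.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "end my life")

def generate_reply(text: str) -> str:
    return f"(model reply to: {text!r})"  # placeholder for the actual model call

class CompanionSession:
    def __init__(self) -> None:
        self.last_disclosure: float | None = None  # None forces disclosure on first message

    def handle_message(self, user_text: str) -> list[str]:
        responses: list[str] = []
        now = time.monotonic()
        # Disclose at session start and at least every three hours thereafter.
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL_S:
            responses.append(DISCLOSURE)
            self.last_disclosure = now
        # "Reasonable protocol" to detect self-harm/suicidal ideation:
        # short-circuit to crisis resources instead of a normal model reply.
        if any(term in user_text.lower() for term in CRISIS_TERMS):
            responses.append(
                "If you're in crisis, help is available: call or text 988 "
                "(Suicide & Crisis Lifeline)."
            )
            return responses
        responses.append(generate_reply(user_text))
        return responses

session = CompanionSession()
print(session.handle_message("hey, how was your day?"))
```

The design point is the short-circuit: once ideation is detected, the normal reply path is bypassed entirely, which is far easier to audit than prompt-level guardrails.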
California (SB 243) — Effective January 1, 2026
- Requires safety protocols for crisis intervention
- Age-appropriate safeguards for known minors
- Annual reporting to the state Office of Suicide Prevention
- Private right of action: the greater of actual damages or $1,000 per violation, plus attorney's fees
- Bipartisan support (Senate 33-3, Assembly 59-1)
Federal Activity
GUARD Act (Proposed October 2025):
- Ban users under 18 from accessing AI companions
- Require age verification
- Mandate safety disclosures
If passed, the reach would be sweeping: 72% of US teens report having used these products.
Active Litigation
- OpenAI lawsuit (August 2025): wrongful death suit over a 16-year-old Californian's suicide, allegedly encouraged by ChatGPT
- Multiple wrongful death cases pending against companion platforms
Global Context
- EU: AI Act in force; manipulative AI practices are prohibited outright, and emotion-recognition systems draw high-risk classification
- UK: Online Safety Act; Ofcom enforcement
- China: labeling of AI-generated content required; algorithms must be registered with regulators
Trust and safety is no longer just a policy-team problem; it is becoming a product architecture problem.
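One way to read that claim in code: a hedged sketch, assuming a hypothetical middleware pipeline in which each statutory obligation above (age gating, crisis detection, audit logging for annual reports) is a composable check in the message path. All names here (`Context`, `age_gate`, `crisis_check`, `audit_log`) are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    user_is_minor: bool
    message: str
    actions: list[str] = field(default_factory=list)  # audit trail for reporting

# A safety check inspects the context and may record a required action.
SafetyCheck = Callable[[Context], None]

def age_gate(ctx: Context) -> None:
    if ctx.user_is_minor:
        ctx.actions.append("apply_minor_safeguards")  # SB 243-style safeguards

def crisis_check(ctx: Context) -> None:
    if "suicide" in ctx.message.lower():  # stand-in for a real classifier
        ctx.actions.append("refer_to_crisis_line")

def audit_log(ctx: Context) -> None:
    # Persisting interventions is what makes annual reporting possible.
    ctx.actions.append("log_for_annual_report")

PIPELINE: list[SafetyCheck] = [age_gate, crisis_check, audit_log]

def run_pipeline(ctx: Context) -> Context:
    for check in PIPELINE:
        check(ctx)
    return ctx

ctx = run_pipeline(Context(user_is_minor=True, message="I keep thinking about suicide"))
print(ctx.actions)  # ['apply_minor_safeguards', 'refer_to_crisis_line', 'log_for_annual_report']
```

Because every intervention lands in `ctx.actions`, the same pipeline that enforces the rules also produces the evidence trail that reporting regimes like California's will ask for.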
This is part of a comprehensive analysis. Read the full version on The Business Engineer.