19 Dec 2025 - OpenAI and Anthropic roll out teen-safety measures: OpenAI adds four teen-focused principles plus an age-prediction model; Anthropic detects and disables under-18 accounts and improves self-harm handling.
OpenAI and Anthropic are introducing new systems to detect and respond to underage users of their chatbots. OpenAI updated ChatGPT’s Model Spec with four teen-focused principles: “put teen safety first,” “promote real-world support,” “treat teens like teens,” and set clear expectations when interacting with younger users. The company says ChatGPT will push safer alternatives and encourage offline help when conversations enter higher-risk territory.
OpenAI also says it is in the “early stages” of building an age-prediction model. When the model estimates that a user may be under 18, teen safeguards would be applied automatically, and adults who are misflagged could verify their age to lift them. The update follows OpenAI’s earlier parental controls and its statement that ChatGPT will no longer discuss suicide with teens, amid legal and regulatory scrutiny that includes a lawsuit alleging the model provided self-harm instructions to a teen.
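Neither announcement includes implementation details, but the gating behavior OpenAI describes amounts to a small policy layer on top of a classifier’s output. Below is a minimal sketch of that logic; the `AgeEstimate` type, the `p_under_18` score, the threshold value, and the verification flag are all hypothetical stand-ins, not OpenAI’s actual API.

```python
from dataclasses import dataclass
from enum import Enum


class Profile(Enum):
    TEEN_SAFEGUARDS = "teen_safeguards"  # stricter content rules, offline-help nudges
    STANDARD = "standard"


@dataclass
class AgeEstimate:
    p_under_18: float  # hypothetical classifier score in [0, 1]


# Hypothetical threshold, set low on purpose: any real doubt about age
# triggers the teen profile, matching "if someone may be under 18."
MAYBE_MINOR_THRESHOLD = 0.3


def select_profile(estimate: AgeEstimate, age_verified_adult: bool) -> Profile:
    """Pick a safety profile from an age estimate.

    Defaults to teen safeguards whenever the user may be under 18;
    a misflagged adult can lift the restriction by verifying their age.
    """
    if age_verified_adult:
        return Profile.STANDARD
    if estimate.p_under_18 >= MAYBE_MINOR_THRESHOLD:
        return Profile.TEEN_SAFEGUARDS
    return Profile.STANDARD


if __name__ == "__main__":
    flagged = AgeEstimate(p_under_18=0.62)
    print(select_profile(flagged, age_verified_adult=False))  # Profile.TEEN_SAFEGUARDS
    print(select_profile(flagged, age_verified_adult=True))   # Profile.STANDARD
```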
Anthropic, which already bars users under 18 from chatting with Claude, is building systems to detect and disable underage accounts by spotting “subtle conversational signs” and flagging users who self-identify as minors. The company also described how it trains Claude to handle suicide and self-harm prompts, and it reported progress on reducing sycophancy: Haiku 4.5 corrected sycophantic behavior in about 37% of evaluated cases, though Anthropic acknowledged that more improvement is needed.
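Anthropic hasn’t said how its detection works beyond “subtle conversational signs,” but the one concrete signal it names, explicit self-identification as a minor, is easy to illustrate. The toy flagger below is an assumption-heavy sketch, not Anthropic’s system: a production detector would be a trained classifier, and regex patterns like these both miss cases and misfire (e.g. “I’m 5 minutes away”).

```python
import re

# Hypothetical patterns for explicit self-identification as a minor.
# These only catch the most straightforward phrasings; a real system
# would rely on a trained model rather than hand-written rules.
MINOR_SELF_ID_PATTERNS = [
    re.compile(r"\bI(?:['’]m|\s+am)\s+(?:1[0-7]|[1-9])\b(?:\s*(?:years?\s+old|yo))?",
               re.IGNORECASE),
    re.compile(r"\bI(?:['’]m|\s+am)\s+in\s+(?:middle|high)\s+school\b",
               re.IGNORECASE),
]


def self_identifies_as_minor(message: str) -> bool:
    """True if a message explicitly self-identifies the user as under 18."""
    return any(p.search(message) for p in MINOR_SELF_ID_PATTERNS)


def flag_account(messages: list[str]) -> bool:
    """Flag an account for review if any message in the conversation matches."""
    return any(self_identifies_as_minor(m) for m in messages)


if __name__ == "__main__":
    print(flag_account(["hey, can you help with homework? i'm 15"]))  # True
    print(flag_account(["Summarize this contract for me."]))          # False
```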