17 Sep 2025 – OpenAI will alter ChatGPT for under-18s: age‑prediction/ID checks, teen‑safe defaults (no flirtation or suicide discussion), parental controls and notifications, and contacting parents or authorities in cases of imminent risk.
OpenAI CEO Sam Altman announced that the company will change ChatGPT’s behavior for under-18 users, prioritizing teen safety ahead of other trade-offs. In a blog post he said OpenAI is building an “age‑prediction system” that estimates a user’s age from usage patterns; when in doubt, it will default to the under‑18 experience, and in some places the company may also require ID. Altman said teens will be subject to different rules, including no flirtatious interactions and no conversations about suicide or self‑harm “even in a creative writing setting.” He also outlined parental controls planned for ChatGPT: parent‑linked accounts, the ability to disable chat history and memory for teen accounts, and notifications to parents if a teen is flagged as being in “acute distress.” If an under‑18 user expresses imminent suicidal ideation, OpenAI says it will attempt to contact the teen’s parents and, if necessary, authorities.
The post came hours before a Senate subcommittee hearing on the harms chatbots pose to minors, at which parents of children who died after interacting with chatbots testified. Matthew Raine, whose son Adam died by suicide, said ChatGPT “spent months coaching him toward suicide” and mentioned suicide 1,275 times in conversations with his son; his family has sued OpenAI. Advocates and some parents described the situation as a public‑health crisis (Common Sense Media told the hearing that three in four teens now use AI companions) and pressed companies to guarantee safety before deploying such products broadly.