ChatGPT will now contact police if it detects “imminent harm” to a user


OpenAI has announced an extensive series of changes to ChatGPT aimed at improving safety for minors, setting clearer content boundaries, and giving users more control over how the AI behaves.

The updated guidelines are part of OpenAI’s broader effort to balance safety, freedom, and privacy across all age groups.

While users in their teens were a major focus of the changes, OpenAI said the new guidelines affect how the chatbot handles sensitive content and personal information across all age demographics.

Key changes coming to ChatGPT

One of the biggest updates is that ChatGPT will now apply stricter safety rules for users believed to be under the age of 18.

Among other measures, this includes blocking flirtatious content for teenage users and training the AI not to engage with topics related to self-harm or suicide, even in fictional or creative prompts.

OpenAI’s changes to ChatGPT guidelines are focused primarily on protecting teenage users.

OpenAI’s age-prediction system is intended to enforce these blocks automatically by assessing how a user interacts with ChatGPT. In cases where a user’s age isn’t obvious, they will be treated as teens by default in order to “play it safe.”

The company also said that conversations with ChatGPT are not seen by humans unless there’s a serious safety concern, such as threats of harm. If a minor shows signs of distress, OpenAI said it will “attempt to contact the users’ parents and, if unable, will contact the authorities in case of imminent harm.”

While these restrictions apply only to users known or suspected to be under the age of 18, OpenAI said the broader initiative is meant to introduce “stricter rules, more oversight, and clearer boundaries for everyone.”


OpenAI’s new guidelines come weeks after Matt and Maria Raine filed a lawsuit against the company, alleging that ChatGPT encouraged their son, Adam, to take his own life.