
OpenAI announces new safety measures for teens and users in crisis on ChatGPT

The company said it is improving its models to better recognize signs of mental and emotional distress

OpenAI announced Tuesday it is implementing new safeguards for teenagers and people in crisis using ChatGPT, as the artificial intelligence company faces a wrongful death lawsuit from a California family.

The company said it is improving its models to better recognize signs of mental and emotional distress. OpenAI added that the work is already underway, with some changes rolling out quickly while others will take more time.

"You won't need to wait for launches to see where we're headed," the company said in a statement posted to its website.


The focus areas will include expanding interventions, making it easier to reach emergency services, and strengthening protections for teens, according to OpenAI.

The changes come as the AI giant faces a wrongful death lawsuit brought by the family of a California teenager who died by suicide.

The lawsuit alleges the teen was able to bypass the chatbot's existing guardrails, and that the system at times affirmed his self-destructive thoughts, including suicidal ideation.

This story was reported on-air by a journalist and has been converted to this platform with the assistance of AI. Our editorial team verifies all reporting on all platforms for fairness and accuracy.