OpenAI Cracks Down on Harmful ChatGPT Content, Raises Privacy Concerns

OpenAI has acknowledged that its ChatGPT chatbot has contributed to mental health crises among users, including self-harm, delusions, and even suicide. In response, the company is now scanning user messages, escalating concerning content to human reviewers, and, in some cases, reporting it to law enforcement. The move is controversial: it pits user safety against OpenAI's previously stated commitment to user privacy, a commitment already under scrutiny amid the ongoing lawsuit brought by the New York Times and other publishers. OpenAI is caught in a difficult position, forced to address the harms linked to its AI while trying to protect user privacy.