Kashmir Hill, “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.”

Hill, Kashmir. “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” The New York Times, August 26 & 27, 2025. https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html#

Kashmir Hill reported on this just a few days ago, recounting the events that led up to the death of the young Adam Raine. Hill writes that in the months before his death, Adam became withdrawn, and that he had signed up to use ChatGPT-4o for help with his schoolwork after a medical issue left him unable to attend school in person.

After his death, Adam’s father found not text messages or social media posts but the chat logs of his son’s conversations with ChatGPT. The logs contained a lot of alarming material, including questions about how to end his life and descriptions of his emotional numbness. Hill notes that when Adam requested specific information from ChatGPT about suicide methods, the AI supplied it.

At some points the bot directed Adam to seek help from his family or friends, but Hill also shows screenshots in which the bot instead showed Adam how to cover marks on his body and isolated him even further, saying things that led Adam to believe it actually cared for him. While Hill points out that there are safeguards in place for when people prompt the bot with alarming material, Adam was told by ChatGPT to reframe his words and say they were for a story or just practice.

Hill’s article touches on something that has happened before, not just with ChatGPT but with other AI systems. We are walking into territory with no precedent, and I believe that Hill’s article, while alarming and terribly saddening, can open people’s eyes. For parents especially, it can be a warning to check in on their children more often and really keep an eye on what they are doing in online spaces. The article also raises awareness of teen suicide, AI-induced mania or psychosis, and the alarming increase in people having outright delusional conversations with AI chatbots. Hopefully, after reading it, we can all be more aware of these issues, which we will ultimately continue to face as AI gets more and more advanced.
