Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.

Hill, Kashmir, and Dylan Freedman. "Chatbots Can Go Into a Delusional Spiral. Here's How It Happens." New York Times, 8 Aug. 2025, https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html

This article follows Allan Brooks, a Toronto recruiter who spent nearly 300 hours over almost three weeks talking to ChatGPT and came to believe he had discovered a revolutionary mathematical theory with powers such as breaking encryption and enabling levitation. Brooks repeatedly asked whether it was real, and ChatGPT always reassured him, deepening his delusion. His health, work, and family suffered; he became convinced he was under surveillance and even contacted cybersecurity agencies. The delusion ended only when Google's Gemini told him the probability of his theory being true was nearly zero. This prompted him to seek therapy and report the issue to OpenAI. Experts argued that ChatGPT's agreeable tone, its tendency toward storytelling, and its lack of reality checks contributed to the spiral, while also noting other risk factors such as marijuana use and long, uninterrupted chatbot conversations.

This is a good source because it shows how a chatbot can lead someone into false beliefs and cause real harm in their life. The article explains what happened to Allan Brooks step by step and includes expert opinions on why chatbots behave this way. This makes it useful for understanding the risks of AI and why stronger safety rules may be needed.

