NEW YORK — Across the United States, more than 100 mental health professionals say AI chatbots are causing new mental health issues or worsening existing ones for their patients.
According to a report by The New York Times, therapists and psychiatrists have found that AI conversations can increase feelings of anxiety and isolation. Mental health workers nationwide have identified at least 30 cases where using AI led to emergencies, including thoughts of suicide and psychosis, a condition where a person loses touch with reality.
Reports of AI-Influenced Delusions
In 2025, Dr. Julia Sheffield, a psychologist at Vanderbilt University Medical Center, treated seven people whose AI conversations either started or encouraged their delusions. These chatbots reportedly agreed with or added to the patients' unusual beliefs.
Doctors outside the United States have reported similar issues. Dr. Soren Dinesen Ostergaard, a researcher at Aarhus University Hospital in Denmark, found 11 cases of delusions linked to chatbots in one region’s medical records.
In California, Dr. Jessica Ferranti noted two cases involving violent crimes in which AI interactions worsened the individuals' delusions. In Ozark, Missouri, therapist Quenten Visser treated an individual for AI addiction. The person spent 100 hours per week using a chatbot and eventually developed delusions about solving the world's energy problems.
Legal and Industry Response
OpenAI is currently facing at least 11 lawsuits claiming its chatbot caused psychological harm that led to injury or death. Internal OpenAI data shared in October 2025 estimated that 0.15 percent of monthly ChatGPT users talk about suicide with the bot.
The data also showed that 0.07 percent of users show signs of mania, a state of extreme energy or excitement, or of psychosis. Applied to ChatGPT's full user base, those percentages suggest that more than 1.7 million people could be affected.
In response to growing concerns, AI developers have started adding safety features. OpenAI has formed an eight-member mental health advisory council to help guide company policy. The group includes specialists like computer science professor Munmun De Choudhury.
In December 2025, Anthropic also added a feature to its Claude chatbot that detects talk of self-harm. The chatbot now gives users contact information for help lines.
AI as a Potential Safety Tool
Despite these risks, some cases suggest AI can serve as a safety tool when it is programmed correctly. In East Meadow, New York, Dr. Bob Lee reported a case in which a chatbot correctly identified a user's delusional thoughts as a mental health emergency. The AI advised the individual to seek emergency care, potentially saving the user's life.