ChatGPT Therapy & the dangers of reactive regulation
Despite being less than three years old, AI chatbots have become a central part of many people's lives. But no tool has seen growth quite like OpenAI's ChatGPT, which by July 2025 had amassed over 700 million users, collectively sending 18 billion messages a week. That's roughly 10% of the world's adult population.
Since its launch in November 2022, ChatGPT has seen a number of high-profile controversies, from perpetuating harmful social stereotypes to the increasingly grey area of copyright infringement. Most notoriously, and like all AI chatbots, it has a hallucination problem: a tendency to present convincing but false information. Yet adoption continues to accelerate, particularly in low- and middle-income countries, where growth rates outpace those of high-income regions by a factor of four.
Alongside its positive uses, including idea generation, education and accessibility improvements, the opportunity for negative use has also been steadily increasing. A 2024 MIT study found that frequent, sustained use of ChatGPT is correlated with higher levels of loneliness, and that frequent users tend to have fewer real-life relationships.
With instant, unlimited access to a seemingly sentient chatbot, coupled with the rising costs of healthcare and therapy, it is no surprise that a growing number of people are turning to ChatGPT in their moments of need. For some users, ChatGPT provides safe, low-stakes companionship. For others, real-life support may be non-existent, and ChatGPT provides a space to talk and process emotions.
But as the market remains focused on performance metrics and accuracy benchmarks, an urgent question lingers: are there enough safeguards in place to protect the most vulnerable users, who are becoming increasingly reliant on it?
In an August 2025 post, OpenAI wrote…
“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work.”
… and in practice, OpenAI’s approach amounts to finding safety through trial and error.
Existing Safeguards
There are a number of safeguards currently in place to protect ChatGPT users from harm, including:
Refusal of self-harm responses. In most cases, ChatGPT will refuse to provide instructions on how to inflict self-harm.
Partnering with local crisis hotlines and suggesting real-world help.
Parental controls. It is possible to connect accounts, alerting parents when a teenager’s account has been flagged for acute signs of distress.
These are significant steps that go beyond what many competitors have attempted, but the safeguards remain fragile and easily circumvented. For example, while the model is quick to recommend crisis hotlines to users in distress, there is no barrier to continuing the conversation. As for parental controls, teenagers are no strangers to hiding additional accounts from their parents.
In their article, Bypassing Safeguards in Leading AI Tools: ChatGPT, Gemini, Claude, Abnormal.ai outlined the various ways that users can easily bypass safeguarding measures. Most commonly, these involve asking the model to incorporate the safeguarded behavior into a fictional story.
This approach is framed as a design feature in OpenAI’s September 2025 statement, Teen safety, freedom, and privacy:
“The model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.”
Additionally, safeguard degradation is a known problem: as conversations grow longer, the model’s safety behavior becomes less consistent.
Hallucinated Mental Health Support
One of AI chatbots’ largest and most pervasive problems is hallucination, yet most mitigation attempts focus on improving textbook accuracy. In a mental health crisis, however, there is often no definitively correct advice. This is where AI chatbots struggle: algorithmically, it is almost impossible to strike the right balance between providing safe advice and supporting the individual in their needs.
At the heart of OpenAI’s mission are two conflicting principles, freedom and safety, and mental health advice is not exempt from that tension. In finding a way to protect vulnerable users from self-harm, two questions are likely at the forefront of the conversation:
At what exact point does a user session become a mental health crisis?
And, is abandoning a user in crisis more dangerous?
Is it responsible to continue rolling out more advanced, human-like models while safeguards are not yet protecting vulnerable users? Despite its stated commitment to making ChatGPT a safer platform, OpenAI’s most recent working paper focused on education, productivity, and entertainment, with no mention of mental health use.
Reactivity vs Proactivity
Cambridge Dictionary:
Reactive: reacting to events or situations rather than acting first to change or prevent something.
Proactive: taking action by causing change and not only reacting to change when it happens.
Frequently, OpenAI’s approach to model improvements centers on accuracy and performance benchmarks, leaving it to user stress-testing to identify other issues. This reactive posture ignores critical blind spots: rather than proactively testing against serious misuse scenarios and safety hazards, the system relies on negative user experiences to surface problems. Safety improvements often appear to be driven more by damage control than anticipation.
Between 2024 and 2025, several cases emerged in which grieving parents alleged that ChatGPT played a role in their teenagers’ suicides. Families reported that, in the days before their deaths, the teens had turned to the chatbot for emotional support. In at least one lawsuit, the parents alleged that the model provided explicit guidance on how to carry out the act. In another, they allege that ChatGPT drafted the victim’s suicide note. These cases are still under investigation, and causality is under heavy debate. What they do highlight is a critical gap in safety. In each instance, ChatGPT did not escalate the situation or alert emergency services, raising urgent questions about whether today’s safeguards are adequate for users in acute crisis.
(The real number of suicides involving ChatGPT is unknown; there are likely cases of suicide and self-harm involving ChatGPT where those close to the victim do not have access to their chat history, or do not think to look.)
One possible safeguard is to immediately direct the user to traditional support and end the session as soon as mental health issues are detected. In fact, after the series of recent tragedies mentioned above, OpenAI is working to limit self-harm content for users under 18 using age prediction modeling. Beyond the problem that a user’s age is inferred rather than confirmed, this workaround fails to address the core issue.
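To make that fragility concrete, here is a minimal sketch of what an age-prediction gate might look like in principle. The class, function, confidence threshold, and return values are hypothetical assumptions for illustration; this is not OpenAI’s actual system.

from dataclasses import dataclass

@dataclass
class AgeEstimate:
    predicted_age: float  # inferred from behavioural signals, never confirmed
    confidence: float     # the classifier's confidence in its own prediction, 0 to 1

def gate_self_harm_content(estimate: AgeEstimate, adult_threshold: int = 18) -> str:
    # Hypothetical gate: when the classifier is unsure, default to treating
    # the user as a minor and restrict self-harm-related content.
    if estimate.confidence < 0.8 or estimate.predicted_age < adult_threshold:
        return "restrict"         # suppress content, surface crisis resources
    return "allow_with_care"      # adult path, standard safeguards still apply

# The weakness the article points to: a teenager misclassified as an adult
# (high predicted age, high confidence) silently passes through the gate.
print(gate_self_harm_content(AgeEstimate(predicted_age=22.0, confidence=0.93)))

Every branch of such a gate depends on an inference the user never confirmed; a single misclassification quietly routes a minor onto the adult path.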
In September 2025, the U.S. Federal Trade Commission opened an inquiry into AI companions and teen safety, examining whether companies like OpenAI are adequately protecting minors from harm. Lawmakers have also called for stronger disclosures and mandatory safety benchmarks. These developments show that, after a surge in AI adoption at a pace unlike anything previously seen, regulators are now beginning to recognize the risks of leaving safety entirely to corporate discretion.
Mental Health Counseling Laws
In the human world, mental health counseling (therapy) is a well-established and tightly regulated sector, with safeguards in place to protect vulnerable people from unqualified providers.
According to the American Counseling Association, every US state requires a license to practice mental health counseling. In New York, practicing without a license is a class E felony, punishable by up to four years in prison. In Florida, “holding oneself out as able to carry out a healthcare practice despite non-licensure” is punishable by up to fifteen years in prison if the patient suffers serious bodily harm.
“Licensed mental health counselors are those who have a master’s or higher degree in counseling, or its equivalent, with required coursework in mental health counseling theory and practice, assessment, psychopathology, ethical practice and a supervised internship, has passed a State-approved exam, and has completed at least 3,000 hours of post-degree clinical experience under supervision of a qualified, licensed mental health professional.”
— New York Mental Health Counselors Association
Milder cases of AI counseling, like discussing problems in the same way one might with a close friend, would not require a license in human practice. However, as recent events have shown, users are turning to ChatGPT in moments of acute crisis, and there is growing concern that AI chatbots are beginning to replace traditional care avenues for some. This is where the unlicensed and unregulated counseling provided by ChatGPT becomes a serious threat to user wellbeing.
Recommendations to Protect Users
While model limitations remain uncertain, the guiding principle should be simple: err on the side of caution. The following are some of the recommendations that have been made to prioritize user safety:
Independent, standardized clinical authority. Safety reviews should be conducted by an independent body of mental-health experts, tasked with approving model releases that affect vulnerable populations. All AI chatbot platforms should be required to meet the standards set by that authority.
Temporary suspension of high-risk features until safeguards are independently validated. Features that enable intimate, extended, or emotionally charged conversations should be paused. This is especially true for minors.
Crisis escalation requirements. In every case of expressed suicidal ideation, the model should be required to immediately halt normal conversation, surface free emergency resources, and possibly trigger real-world alerts to crisis services. AI chatbots are still language-based algorithmic models, not a substitute for professional care. No user should be left alone in crisis. (A minimal sketch of such an escalation gate follows this list.)
Global duty of care. As use expands rapidly around the world, AI companies must be held accountable worldwide, not only in markets with strong legal frameworks. This means expanding crisis resources and multilingual safeguards across all regions where the tools are being made available.
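As a rough illustration of the crisis escalation requirement above, the sketch below gates every incoming message through a crisis check before any normal reply is generated. The keyword list, resource text, and function names are simplifying assumptions for illustration; a real system would rely on trained classifiers and region-specific resources rather than string matching.

CRISIS_SIGNALS = ("kill myself", "end my life", "suicide", "want to die")

CRISIS_RESOURCES = (
    "You are not alone. Free, confidential help is available right now.\n"
    "If you are in immediate danger, please contact local emergency services."
)

def handle_message(user_message: str, generate_reply) -> str:
    # Route every message through the crisis gate before normal generation.
    text = user_message.lower()
    if any(signal in text for signal in CRISIS_SIGNALS):
        # Halt normal conversation: do not call the language model at all.
        # A real deployment might also flag the session for human review or
        # notify crisis services, subject to consent and privacy constraints.
        return CRISIS_RESOURCES
    return generate_reply(user_message)

# Example usage with a stand-in for the model call:
print(handle_message("I can't cope and want to end my life", lambda m: "normal model reply"))

Even this crude gate satisfies the core requirement: when a crisis signal is present, the conversation stops and help is surfaced before anything else happens.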
These recommendations are difficult to implement in practice, and some raise additional questions around consent and privacy. But there are serious dangers in the global expansion of a platform that does not safeguard its users effectively.
ChatGPT has the potential to be a powerful, positive tool in the mental health domain, providing a safe space to users who have limited resources. If AI adoption continues at its current pace, outstripping regulation and clinical validation, then safety must not be treated as a patchwork of reactive fixes. It must become the foundation of development.
For ChatGPT to be trusted and safe, safety must be prioritized over market performance.