Posted October 03, 2025
States struggle to regulate AI chatbots for mental health therapy amid rising need for care
In the absence of stronger federal regulation, some states have begun regulating apps that offer AI “therapy” as more people turn to artificial intelligence for mental health advice (Source: “Regulators struggle to keep up with the fast-moving and complicated landscape of AI therapy apps,” Associated Press, Sept. 29).
But the laws, all passed this year, don’t fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn’t enough to protect users or hold the creators of harmful technology accountable.
The state laws take different approaches. Illinois and Nevada have banned the use of AI to provide mental health treatment. Utah placed certain limits on therapy chatbots, including requiring them to protect users’ health information and to clearly disclose that the chatbot isn’t human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.
And many of the laws don’t cover generic chatbots like ChatGPT, which are not explicitly marketed for therapy but are used by an untold number of people for it.