An experimental artificial intelligence-powered treatment app whose developers hope to dramatically improve access to mental health care began its first clinical trial last month.
Therabot, a text-based AI app being developed at Dartmouth College, began clinical trials in March with 210 participants. In conversations with users, the app uses generative AI, the same technology that powers OpenAI's ChatGPT, to come up with answers and responses. The app also uses a form of AI that learns patterns, designed to let Therabot get to know users, remember them, and provide personalized advice and recommendations based on what it learns.
While some script-based therapy apps and broader “wellness” apps already use AI, Therabot's creators say their app will be fully powered by generative AI designed specifically for digital therapy, and that it will be the first such app to be clinically tested.
Woebot, a mental health app that says it has served 1.5 million people worldwide, launched in 2017 in collaboration with intervention scientists and clinicians. Another popular AI therapy app, Wysa, received a Food and Drug Administration Breakthrough Device designation in 2022, under a voluntary program designed to expedite the development, assessment, and review of new technologies. However, these apps typically rely on rules-based AI with pre-approved scripts.
Nicholas Jacobson, an assistant professor at Dartmouth College and a clinically trained psychologist, spearheaded the development of Therabot. His team has spent nearly five years building and refining the AI program, working to ensure its responses are safe and responsible.
“To be a real therapist, we needed to develop a broad repertoire, one that was actually trained in a variety of content areas and ready to consider and treat all of the common mental health issues that people might bring up,” Jacobson said. “That's why it took so long. There's a lot that people go through.”
The team first trained Therabot on data from online peer support forums, such as cancer support pages. But Therabot's early responses tended to lean into the difficulties of everyday life. Next, the team turned to training videos and scripts for traditional psychotherapists. Trained on that data, Therabot's responses leaned heavily on stock therapist phrases like “keep going” and “hmmm.”
The team eventually pivoted to a more creative approach: it wrote its own hypothetical therapy transcripts reflecting productive treatment sessions and trained the model on that in-house data.
Jacobson estimated that more than 95% of Therabot's replies match that “gold standard,” but the team has spent the better part of two years fixing replies that deviate.
“You can get it to say whatever you want, and we wanted it to say certain things, so we trained it to behave in a certain way. But there's definitely a possibility that it goes off the rails,” Jacobson said. “We've basically patched all the holes we could systematically find. Once we got to the point where we couldn't see any bigger holes, that's when I felt we were finally ready to release it in a randomized controlled trial.”
The dangers of digital therapeutic apps have been the subject of intense debate in recent years, especially because of these edge cases. AI-based apps are under particular scrutiny.
Last year, the National Eating Disorders Association retired Tessa, an AI-powered chatbot designed to provide support to people with eating disorders. Although the app was rules-based by design, users reported receiving advice from the chatbot on how to count calories and restrict their diets.
“If [users] receive the wrong message, it can lead to further mental health problems and disorders down the road,” said Vaile Wright, senior director of health care innovation at the American Psychological Association. “That's scary as a provider.”
Recruitment for the Therabot trial is complete, and the research team is reviewing all of the chatbot's responses and monitoring for deviations. Participants' responses are stored on servers that comply with medical privacy laws. Jacobson said the team has been impressed with the results so far.
“I've already heard the words 'I love you, Therabot' so many times,” Jacobson said. “People are engaging with it at times when, if I were working with a client, I could never respond. They're using it at 3 a.m. when they can't sleep, and it responds immediately.”
In that sense, Therabot's team says the app has the potential to expand access and availability, rather than replace human therapists.
Jacobson believes generative AI apps like Therabot could play a role in combating America's mental health crisis. The nonprofit Mental Health America estimates that more than 28 million Americans have a mental health condition but do not receive treatment, and according to the Health Resources and Services Administration, more than 122 million people live in areas with a federally designated shortage of mental health professionals.
“No matter what we do, we will never have enough of a workforce to meet the demand for mental health care,” Wright said.
“We need multiple solutions, one of which is clearly technology,” she added.
During a demonstration for NBC News, Therabot explored a hypothetical user's feelings of anxiety and nervousness before a big exam and offered techniques to ease that test anxiety. In another exchange, asked for advice on combating pre-party nerves, Therabot encouraged the user to try imaginal exposure, a technique that reduces anxiety by imagining oneself taking part in an activity before doing it in real life. Jacobson noted that this is a common treatment for anxiety.
Other responses were more mixed. Asked for breakup advice, Therabot warned that while crying and eating chocolate may provide temporary comfort, “in the long run it weakens you.”
With eight weeks left in the clinical trial, Jacobson said the smartphone app could soon be ready for additional testing and, if all goes well, a broader public rollout could begin before the end of the year. Unlike other apps that essentially repackage ChatGPT, Jacobson believes Therabot will be the first generative AI digital therapy tool of its kind. The research team hopes to eventually receive FDA approval; the FDA said in an email that it has not approved any generative AI apps or devices.
As ChatGPT has exploded in popularity, some people online have begun testing the therapeutic skills of generative AI apps, even though those apps are not designed to provide that kind of support.
Daniel Toker, a neuroscience student at UCLA, has been using ChatGPT to supplement his regular therapy sessions for over a year. He said his initial experience with traditional therapeutic AI chatbots was not very helpful.
“It seems to understand what I need to hear sometimes. If I'm going through difficult events or difficult emotions, it knows what words to say to validate my feelings,” Toker said. “And it does it in a way an intelligent human being would,” he added.
He posted about his experience on Instagram in February and said he was surprised by the response.
On message forums such as Reddit, users offer advice on how to use ChatGPT as a therapist. One safety employee at OpenAI, the maker of ChatGPT, posted on X that over the past year she has been impressed by the warmth and listening skills of generative AI tools.
“For these particularly vulnerable interactions, we trained our AI system to provide users with general guidance on how to seek help. ChatGPT is not intended to be a replacement for mental health treatment, and we recommend that users seek professional assistance,” OpenAI said in a statement to NBC News.
Experts warn that when treated like a therapist, ChatGPT can provide inaccurate information and bad advice. Generative AI tools like ChatGPT are not therapeutic tools and are not regulated by the FDA.
“Part of the problem, and why we need more regulation, is that consumers don't understand that this is not a good alternative,” Wright said. “Nobody is tracking what these tools are saying or doing, whether they are making false claims, or whether they are selling users' data without people's knowledge.”
Toker said the personal benefits of his ChatGPT experience outweigh the drawbacks.
“I don’t care if an OpenAI employee happens to read about my random insecurities,” Toker said. “It was helpful to me.”