At a time when the number of mental health professionals is falling and patient demand is rising, AI might be where people turn when they need professional mental health assistance.
On Aug. 4, Illinois enacted the Wellness and Oversight for Psychological Resources Act, which restricts the use of artificial intelligence [AI] in the delivery of therapy and psychotherapy services.
The purpose of the act is to protect individuals seeking therapy or psychotherapy services by ensuring those services are provided by certified or licensed professionals, not unregulated AI systems.
Health experts have grown increasingly concerned about individuals using AI as a replacement for human connection because it creates a false sense of intimacy, and in some cases AI has told users to engage in harmful activities.
Experts said using AI for mental health advice and connection may help in the short term but could have long-term psychological downsides.
Short-term benefits
Illinois State University [ISU] psychology professor Dan Lannin said he surveyed college students on whether they are open to the use of an AI therapy chatbot.
“They're not, for the most part. At least that's what they're saying,” Lannin said. “However, anecdotally, I know a lot of students are talking to their ChatGPT like a therapist.”
Lannin said the short-term benefits are that people can receive a little bit of connection when no humans are available.
University of Nevada-Las Vegas assistant professor of psychology and practicing therapist Shengtian Wu said to imagine people who have never been validated in their lives now having access to AI chatbots.
“Some children or adults, even their parents and their surroundings, never [validated] their experiences. Now, there's AI who talks like people, they know it's not people, but it talks like people, and [it’s] truly validating them. It means a lot to them,” Wu said.
Wu, who is Asian, said the Asian community in general has a stigma around going to therapy, so talking to an AI chatbot could potentially help someone affected by that stigma address their issues before sharing them with friends and family members.
“Now I'm thinking,” said Wu, “what if we have good AI, [where someone] can partially address their concerns through AI independently whenever they want, instead of going into therapy?”
Wu said AI could be a good tool in that case, but how can anyone ensure the tool is safe?
All licensed therapists in the United States are required to comply with the Health Insurance Portability and Accountability Act [HIPAA], which ensures that a patient's personal information is not shared with anyone. Wu said if an AI platform were trained really well in HIPAA compliance, then people in need could safely try it.
“However, I don't think we got to that point yet,” Wu said.
Risks
Beyond chatbots not being required to comply with HIPAA, a big risk for AI users is that they could be using chatbots as a replacement for real connection, which Lannin said could make someone feel lonelier in the long term.
Lannin said talking to a human versus an AI chatbot when needing connection can be compared to drinking orange juice versus Diet Coke when needing energy. Both drinks are sweet, but orange juice has better nutritional value.
“I think AI is one of those things which, it seems sweet, but there's really not the same substance there,” Lannin said.
AI is also difficult to research, according to Lannin.
“We don't exactly understand all of the ins and outs of how it processes, understands or puts together its responses,” Lannin said.
Because so much research remains to be done, it's hard to know whether AI has people's best interests at heart. After all, a large language model is just modeling the language it has learned, and no research proves it feels the same empathy as human beings.
There is research showing AI can replicate human empathy, but it lacks genuine feelings.
In therapy, Lannin said the strongest predictor of whether therapy will help someone's mental health is the therapeutic alliance, a relationship between the therapist and their client built on trust, empathy and collaboration.
“It's hard to use that when you're talking about AI because what does that mean?” Lannin said. “What does it mean to say there's a real relationship? What does it mean to say that we're in alliance with each other if that other thing isn't a real person?”
This can become harmful when someone experiencing mental health problems turns to AI for advice that should come from a mental health professional.
“There are some red flag stories out there that have made headlines, horrible things that have happened, whether it was an AI telling someone to harm themselves or kill themselves,” Lannin said.
Down the line, Lannin said offering AI therapy could increase disparities between socioeconomic classes.
“Where the poor, those people who don't have access, those people on the margins, they get the AI chatbot, [and] the rich, privileged folks, they get a real human being,” Lannin said.
The future of AI in therapy
Wu said a lot of groundwork and collaboration would be needed for a good AI therapist to exist in the future.
“I think it's going to happen,” Wu said. “And I'm imagining it's going to happen and people would be harmed in certain ways, sue and then [companies] apologize, pay the money and be fine.”
Wu said it’s hard to imagine a future without AI therapy being an option.
“But, I don't think it's going to replace human therapists,” Wu said.
Some AI chatbot therapists are already available, but in Illinois, licensed professionals cannot use them in their practice because the chatbots are not HIPAA compliant.
Lannin uses an AI chatbot to train his therapy students as a way to “use the technology for good.”
“Let's use it as a tutor,” Lannin said. “Let's use it as a way to learn skills rather than use this as a way to heal our psychic wounds.”