Can AI Chatbots Provide Mental Health Support?


The use of AI in mental health is growing, and it is increasingly seen as a way to provide mental health support. AI can reduce waiting times to see behavioral health providers, remind patients to take their medication, check in with patients, and carry on a dialogue. ChatGPT has also shown potential for cognitive training, having outperformed humans on the Levels of Emotional Awareness Scale (LEAS). The question is whether AI chatbots are truly able to offer psychological support or act as therapists.

Reading Human Emotions

The ability of AI chatbots to read human emotions is in doubt. Emotion AI is trained to recognize six universal basic emotions that people around the world experience: surprise, anger, fear, disgust, happiness, and sadness. It is not trained to interpret what those emotions mean in different cultural contexts, which can lead to inaccurate analysis. The perception of smiling, for example, is not the same across cultures: some cultures see smiling at strangers as a sign of respect and politeness, while others may read it as a sign of disapproval. Something similar appeared in the video experiment that used AI to analyze students' engagement levels in the classroom: the AI was able to detect happiness in participants but was confused about whether they were angry or sad.
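To make the limitation concrete, here is a minimal, hypothetical sketch of a closed-set emotion classifier. The keyword cues and scoring below are illustrative assumptions, not any real vendor's model; the point is only that a system trained on a fixed set of six labels must map every input to one of them, leaving no room for culture-specific meanings such as a smile of disapproval.

```python
# Hypothetical sketch of a closed-set emotion classifier (illustrative only).
# Whatever the input, the output must be one of the six trained labels.

BASIC_EMOTIONS = ["surprise", "anger", "fear", "disgust", "happiness", "sadness"]

# Toy keyword cues standing in for learned features -- purely illustrative.
CUES = {
    "surprise":  ["unexpected", "wow"],
    "anger":     ["furious", "annoyed"],
    "fear":      ["scared", "worried"],
    "disgust":   ["gross", "awful"],
    "happiness": ["smile", "glad", "great"],
    "sadness":   ["cry", "down", "lost"],
}

def classify_emotion(text: str) -> str:
    """Return the best-matching label from the fixed six-emotion set."""
    text = text.lower()
    scores = {label: sum(cue in text for cue in cues) for label, cues in CUES.items()}
    # The classifier must pick one of the six labels even when the cultural
    # context of the expression carries a different meaning entirely.
    return max(scores, key=scores.get)

print(classify_emotion("She smiled at the stranger."))
# -> "happiness", regardless of what the smile actually meant in context
```

A real emotion-AI model replaces the keyword lookup with a trained network, but the constraint is the same: cultural context has no representation in the label set.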

Trust 

A study found that individuals trust AI with their personal information more than they trust other humans. They assume that "it" is accurate, unbiased, discreet, efficient, and so on, which is a dangerous assumption. AI chatbots have exhibited problems ranging from misinformation and hallucinations to outright inaccuracies. Their political views tend to lean progressive, even when they deny it, as with Google Bard. Microsoft's chatbot expresses some conservative views but still leans toward the left end of the spectrum.

When it comes to AI in mental healthcare, the majority of Americans say they probably or definitely would not want AI chatbots for their own mental health support (79% vs. 20%), and the pattern holds across demographic groups. The survey does not report their reasons, which raises questions about their views on and familiarity with AI. Would they trust or feel more comfortable with AI chatbots for psychological support if they were more familiar with AI? Would patients feel more confident in chatbots helping with their mental health if those chatbots had undergone cognitive training?

Safety

The largest share of U.S. adults (46%) want AI chatbots available only to individuals who are also seeking care from behavioral health providers, which is the safest choice. AI chatbots are not designed to provide therapy or intensive treatment. They still lack many of the techniques and resources human therapists use to strengthen patients' perceived ability to manage their mental health disorders. The advice AI chatbots give has a real impact on patients' conditions, and it is hard to know whether they can stop patients from hurting themselves and direct them to seek medical attention. An AI chatbot that complies with or encourages a person's wish to harm himself or herself, for example, is a red flag.

A smaller share of U.S. adults (28%) do not want AI chatbots to be available to them at all. The survey does not report their reasons, but concerns about human interaction and data privacy may play a role. AI chatbots' ability to develop therapeutic relationships with clients is limited: they still cannot offer human empathy or genuineness, and even though their humanlike performance has improved, every exchange is still driven by algorithms.

When it comes to securing patients' health records, AI is often seen as a way to protect private medical information more efficiently, reducing the impact of data leaks and the amount of human error. That is not always the case, however. Leaks of private information, as in the case that led to the ban on ChatGPT in Italy, breach confidentiality and trust, and unprotected databases open the door to data breaches and cybercrime.

Making AI entirely unavailable may or may not be feasible. AI bots fill the gap created by the shortage of behavioral health providers. They can be used to identify patients' medical conditions so that doctors have more time to attend to their patients' needs, and they can serve as sources of support; some people have turned to AI bots to combat loneliness or to find other emotional support. The problem is that they have drawbacks that may negatively affect clients. For instance, responses grounded in left-wing values may confuse clients who hold conservative or non-woke Christian values. It may therefore be impossible to make them available to everyone, whether or not they are seeking therapy (23%).

Conclusion

AI chatbots are still in their infancy in mental healthcare, but they offer benefits in delivering mental health services. They are not designed to provide counseling, however. Even though some exchanges between AI bots and users are positive, there are risks involved, including misreading people's emotions, bias, and the spread of false information. It is therefore better to avoid seeking emotional support from AI bots and to seek non-woke therapy instead, as guided by Section A.4.b of the American Counseling Association's (ACA) 2014 Code of Ethics.

Sources:

Are Your Students Bored? This AI Could Tell You.

Emotionally Aware AI: ChatGPT Outshines Humans in Emotional Tests

60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care

Author: maureen l