AI Controversy: Use of ChatGPT in Counseling Sessions Is Raising Questions

Using ChatGPT in counseling has landed Koko in hot water. The episode offers a few pointers on user safety

For many, finding the right therapist is a daunting task. The first hurdle is whether one can place enough trust in the therapist, and it is ironic that people flock to mobile apps seeking that very trust, only to be let down. The recent AI controversy around Koko's therapy bot highlights the grave dangers that AI apps, and unregulated tech-mediated healthcare in general, carry. Koko, a free mental health platform and peer-to-peer support community known for combining bot technology with a human interface, tried ChatGPT to see whether the AI chatbot could be effective at generating appropriate responses. The platform lets users post their mental health issues for other users to suggest ways of resolving them, a form of cognitive behavioral therapy, and it was making a real difference in the lives of hundreds of thousands of people. The episode suggests that generative AI in general, and ChatGPT in particular, could prove indispensable to real-life therapists, but only after the practice of using ChatGPT in counseling clears its legal and ethical hurdles.

When Rob Morris, Koko's founder and an MIT graduate, tweeted about the experiment, in which ChatGPT assisted with responses to over 30,000 messages for about 4,000 users, he didn't expect the technology to invite such criticism. According to his tweet, the AI-composed messages were rated as more in tune than those composed by humans alone, with response times cut by nearly 50%. He later acknowledged that the brouhaha arose from a misunderstanding and that he shouldn't have posted about it on Twitter the way he did. Koko shut the experiment down after a few days, once the team realized the emptiness of simulated empathy. Explaining the decision, Morris said the inauthentic output of a bot with no lived experience is no match for human responses: once people learned that a chatbot was generating the messages, the exchange lost its authenticity. Twitter users came down heavily on Morris for using people's information without their consent and called the experiment unethical for exploiting user data for the company's benefit.

Morris, however, is of the opinion that his tweet was taken out of context. Referring to the line that drew the most ire, "Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own," he says the word "humans" refers to himself and his team, not unsuspecting users. He further adds that users were well aware the responses were not purely human-generated but AI-assisted, and were informed of the process during onboarding.

However well users are informed about the technology in use, healthcare mediated by chatbot technology raises ethical concerns for lawmakers and regulators. The Food and Drug Administration, the regulatory body for healthcare practices in the USA, clearly stipulates that researchers must run their experiments through an IRB, a review board that ensures the safety of experiments falling under the purview of health and medicine. Those safeguards include obtaining people's informed consent. But the mushrooming of technological solutions that disrupt traditional medical care poses a great challenge to regulators, thanks to ill-defined laws and hazy boundaries between what is ethical and what is not. When experiments are conducted outside conventional settings, tech developers are often not liable for the consequences, and Koko, which deployed AI, a black-box technology, is a typical case. Koko's approach to addressing complicated mental health issues may be novel and may yield some real benefits, but because the service has not been tested against established benchmarks, it cannot replace real psychiatric counseling. On a positive note, Morris's tweet did raise awareness among users about the potential risks of resorting to AI therapy bots, which, by his account, was his sole intention in posting it.
