AI bot as a therapist: US mental health platform using ChatGPT in counselling leads to controversy
Mehul Reuben Das | Jan 12, 2023 19:45:54 IST
Mental health is a tricky subject to deal with, even with the best of intentions. Trust, in both the counsellor and the process, is essential. So where do artificial intelligence and machine learning fit into all this? An American mental health platform recently ran an experiment to find out how AI, specifically ChatGPT, could be used in counselling. Unfortunately for the platform, the experiment created more problems than it solved.
Koko is a nonprofit mental health platform that connects teens and adults seeking mental health support with volunteers through messaging apps like Telegram and Discord. On Friday, Koko co-founder Rob Morris announced on Twitter that the company had run an experiment providing AI-written mental health counselling to 4,000 people without informing them first, to see whether they could discern any difference.
Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counselling.
Koko works through a Discord server: users sign in to the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions, such as “What’s the darkest thought you have about this?”. The bot then shares a person’s concerns, written as a few sentences of text, anonymously with someone else on the server, who can reply anonymously with a short message of their own.
During the AI experiment, which applied to about 30,000 messages, volunteers providing assistance to others had the option to use a response automatically generated by OpenAI’s GPT-3 large language model, the model upon which ChatGPT is based, instead of writing one themselves.
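To make that workflow concrete, here is a minimal, hypothetical sketch of what such a draft-then-review setup could look like using OpenAI’s completion API as it existed at the time. The model name, prompt wording, and helper function are illustrative assumptions, not Koko’s actual code.

# A minimal sketch of a draft-then-review flow like the one described above.
# Assumes the pre-1.0 openai Python library (e.g. openai==0.27), current at
# the time of the experiment; the prompt and model choice are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_reply(concern: str) -> str:
    """Ask GPT-3 to draft a short supportive reply for a volunteer to review."""
    completion = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model available in early 2023
        prompt=(
            "Write a brief, compassionate peer-support reply to the "
            f"following anonymous message:\n\n{concern}\n\nReply:"
        ),
        max_tokens=120,
        temperature=0.7,
    )
    return completion.choices[0].text.strip()

# A human volunteer would review, and optionally edit, the draft before sending:
draft = draft_reply("I feel like nobody would notice if I disappeared.")
print("AI draft for volunteer review:\n", draft)

The point of the design is that the model only proposes text; a human decides whether it is sent, which is the “hybrid” arrangement described later in this piece.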
After the experiment, Morris posted a Twitter thread explaining what Koko had done, and this is where things turned ugly for the company. Morris said that people rated the AI-crafted responses highly until they learned the responses were written by AI, suggesting a key lack of informed consent during at least one phase of the experiment.
Morris received many replies criticizing the experiment as unethical, citing concerns about the lack of informed consent and asking if an Institutional Review Board (IRB) approved the experiment.
The idea of using AI as a therapist is far from new, but what set Koko’s experiment apart from typical AI therapy approaches is that patients usually know they are not talking with a real human.
In Koko’s case, the platform used a hybrid approach: rather than a direct chat with the AI, a human intermediary could preview the message before sending it. Still, critics argue that, absent informed consent, Koko violated prevailing ethical norms designed to protect vulnerable people from harmful or abusive research practices.