Can ChatGPT Fight Depression? This Start-up Tried It… and Sparked a Backlash

ChatGPT has just found itself at the center of a controversy. An American start-up used OpenAI’s artificial intelligence to support people suffering from depression… without their consent. The results of the experiment are mixed.

A few days ago, Rob Morris, founder of Koko, an online platform designed to support people with mental health issues, made a startling revelation: he had used ChatGPT, OpenAI’s artificial intelligence, to provide “mental health support for around 4,000 people”.

Created in 2015, Koko lets ordinary users offer psychological assistance to people with depressive disorders. These volunteers are connected to users through a rudimentary chatbot on a Discord server, Koko Cares. After answering a series of basic questions, a person in difficulty can talk with a volunteer, who offers free, personalized support.


ChatGPT’s “weird” empathy

As part of the experiment, Koko offered its volunteers the option of using responses generated by ChatGPT to communicate with users. Use of the AI was purely optional: if a ChatGPT response was deemed inappropriate, the volunteer took over. For the test, the chatbot was integrated directly into the start-up’s Discord server.

“We use a ‘co-pilot’ approach, with humans overseeing the AI as needed. We did this on about 30,000 posts,” says Rob Morris.
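The article does not describe Koko’s actual integration, but the “co-pilot” idea can be sketched roughly: the model drafts a reply, and a human volunteer decides whether to send it, rewrite it, or discard it. The snippet below is a minimal illustration assuming the OpenAI Python client; the model name, prompts, and console approval step are placeholders for illustration, not Koko’s code.

```python
# Minimal sketch of a human-in-the-loop "co-pilot" reply flow.
# Assumptions: the OpenAI Python client (openai >= 1.0) and a placeholder model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_reply(user_message: str) -> str:
    """Ask the model for a draft response to a support request."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "You are a peer supporter. Reply warmly and briefly."},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content


def co_pilot_reply(user_message: str) -> str:
    """Show the AI draft to a human volunteer, who sends it as-is or writes their own reply."""
    draft = draft_reply(user_message)
    print(f"AI draft:\n{draft}\n")
    decision = input("Send as-is (s) or rewrite (r)? ").strip().lower()
    if decision == "s":
        return draft
    return input("Your reply: ")
```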

At first, the experiment produced encouraging results. According to Rob Morris, the messages written by ChatGPT were better received than the texts written by the volunteers themselves:

“Messages composed by AI (and supervised by humans) were rated significantly higher than those written only by humans.”

Better still, the ChatGPT integration on Koko’s Discord cut response times in half, which is essential in the most urgent cases. Unfortunately, the results changed completely as soon as “people learned that messages were co-created by a machine”. Once the use of artificial intelligence was revealed, the “simulated empathy felt strange, empty”. That is why Koko chose to disable ChatGPT.

“Machines don’t have a lived human experience, so when they say ‘that sounds harsh’ or ‘I get it’ it sounds inauthentic. […] They don’t take time out of their day to think about you. A chatbot response generated in 3 seconds, no matter how elegant, somehow looks cheap,” explains the founder of Koko, a graduate in psychology from Princeton University and a former researcher at the MIT Media Lab.

ChatGPT is at the center of a controversy

Morris’s announcement quickly sparked controversy. On Twitter, many users argued that Koko should have informed its users about this experimental program. That is notably the view of several lawyers and ethics experts, who point out that such practices are closely regulated in the United States: conducting research on human subjects on US soil without explicit consent is prohibited, and the law requires all participants in a research program to sign a consent document.

In response to the Twitter thread, Daniel Shoskes, a former member of an Institutional Review Board (IRB), the type of committee that oversees research ethics in the United States, even recommends that Morris contact an attorney:

“you have conducted human research in a vulnerable population without IRB approval or waiver (YOU cannot decide for yourself)”.

Beyond the legal aspect, some experts point to the dangers of the experiment itself. Asked by Vice, Emily M. Bender, a professor of linguistics at the University of Washington, believes the risk of “harmful suggestions” in the most volatile situations is very real:

“Language models lack empathy, they don’t understand the language they produce, and they don’t understand the situation they are in. But the text they produce sounds plausible, and therefore people are likely to ascribe meaning to it.”

Furthermore, the use of ChatGPT may violate explicit promises made by Koko in its terms of use. On its Discord, the organization assures users that they are put in touch with “real people who really understand you”, people who “are not therapists, not counselors, just people like you”.

In his defense, Rob Morris specifies that Koko Cares users were never connected to ChatGPT directly. As promised, the person seeking help always talked to a human being; that human’s responses, however, were potentially generated by artificial intelligence. In this context, ChatGPT served mainly as an aid to help volunteers work more efficiently. The founder of Koko adds that the start-up was not obliged to comply with the rules governing research, arguing that no personal data were used and that there are no plans to publish the results.

Note that Koko is no stranger to artificial intelligence. Since its inception, the platform has used AI to organize and categorize the different types of users seeking support. In particular, Koko relies on machine learning to identify individuals at risk, then surfaces appropriate resources according to the level of distress detected by the AI.

It is partly thanks to artificial intelligence that the start-up managed to stand out and secure funding. In 2016, Fraser Kelton, co-founder of Koko, explained that Koko Cares was designed to match collective intelligence with artificial intelligence to improve people’s emotional well-being. Given its initial goals, it is not surprising that Koko quickly considered integrating ChatGPT into its platform.

In recent months, generative AI has sparked colossal enthusiasm among tech giants, everyday users, and cybercriminals alike. The ChatGPT-led revolution is also making life difficult for teachers and professors, who now face an explosion of AI-written assignments.
