Company using ChatGPT for mental health support raises ethical issues

  • A digital mental health company is drawing ire for using GPT-3 technology without informing users.
  • Koko co-founder Rob Morris told Insider that the experiment is “exempt” from informed consent law due to the nature of the test.
  • Some medical and technology professionals said they believed the experiment was unethical.

As ChatGPT’s use cases expand, a company is using AI to experiment with digital mental health care, shedding light on ethical gray areas around the use of the technology.

Rob Morris – co-founder of Koko, a free, non-profit mental health service that partners with online communities to find and treat at-risk individuals – wrote in a Twitter thread on Friday that his company used GPT-3 chatbots to help develop responses to 4,000 users.

Morris said in the thread that the company had been testing a “co-pilot approach with humans supervising the AI as needed” in messages sent through Koko’s peer-support platform, which it described in an accompanying video as “a place where you can get help from our network or help someone else.”

“We make it very easy to help other people, and with GPT-3 we are making it even easier to be more efficient and effective as a care provider,” Morris said in the video.

ChatGPT is a variant of GPT-3, a model that generates human-like text based on prompts; both were created by OpenAI.

Koko users were not initially told that the replies had been developed by a bot, and “once people learned that the messages were co-created by a machine, it didn’t work,” Morris wrote on Friday.

“Simulated empathy feels weird, hollow. Machines have no lived, human experience, so when they say ‘this looks hard’ or ‘I get it,’ it sounds inauthentic,” Morris wrote in the thread. “A chatbot response generated in 3 seconds, no matter how elegant it looks, feels somehow cheap.”

On Saturday, however, Morris tweeted “some important clarifications.”

“We weren’t pairing people up to chat with GPT-3, unbeknownst to them. (In retrospect, I might have worded my first tweet to better reflect this),” the tweet said.

“This feature went live. Everyone knew about the feature when it was live for a few days.”

Morris said Friday that Koko “pulled this off our platform pretty quickly.” He noted that AI-powered messages were “rated significantly higher than those written by humans alone” and that response times had decreased by 50% thanks to the technology.

Ethical and legal concerns

The experiment sparked an outcry on Twitter, with some public health and tech professionals calling out the company over claims that it had violated informed consent law, a federal policy that requires human subjects to provide consent before taking part in research.

“This is deeply unethical,” media strategist and author Eric Seufert tweeted on Saturday.

“Wow, I wouldn’t admit it publicly,” Christian Hesketh, who describes himself on Twitter as a clinical scientist, tweeted on Friday. “Participants should have given informed consent and this should have gone through an IRB [institutional review board].”

In a statement to Insider on Saturday, Morris said the company was “not pairing people to chat with GPT-3” and said the option to use the technology was removed after it realized the feature “felt like an inauthentic experience.”

“Rather, we were giving our peer advocates an opportunity to use GPT-3 to help them compose better responses,” he said. “They were getting tips to help them write more supportive responses faster.”

Morris told Insider that Koko’s study is “exempt” from the informed consent law, and cited previous research published by the company that was also exempt.

“Every individual must provide consent to use the service,” Morris said. “If this were a graduate study (which it isn’t; it was just a product feature being explored), it would fall into an ‘exempt’ research category.”

He continued, “This imposed no additional risk on users, no deception, and we do not collect any personally identifiable information or personal health information (no email, phone number, IP, username, etc).”

A woman searches for mental health support on her phone. Beatriz Vera/EyeEm/Getty Images



ChatGPT and the mental health gray area

Still, the experiment is raising questions about the ethics and gray areas surrounding the use of AI chatbots in healthcare more broadly, after ChatGPT had already caused turmoil in academia.

Arthur Caplan, a professor of bioethics at New York University’s Grossman School of Medicine, wrote in an email to Insider that using AI technology without informing users is “grossly unethical.”

“A ChatGPT intervention is not the standard of care,” Caplan told Insider. “No psychiatric or psychological groups have tested its effectiveness or laid out the potential risks.”

He added that people with mental illnesses “require special sensitivity in any experiment,” including “careful review by a research ethics board or institutional review board before, during, and after the intervention.”

Caplan said using GPT-3 technology in such ways could have a broader impact on its future in healthcare.

“ChatGPT may have a future, as may many AI programs such as robotic surgery,” he said. “But what happened here can only delay and complicate that future.”

Morris told Insider that his intention was to “highlight the importance of the human in the discussion between humans and artificial intelligence.”

“I hope that doesn’t get lost here,” he said.
