One of the key ways to protect people from AI could make things even worse, research says

One of the most important proposals for minimizing the harm artificial intelligence can do to our mental health could, in fact, make things worse, new research warns.

Amid growing concerns about how chatbots can lead to mental distress and even psychosis, there have been suggestions that chatbots should regularly remind users that they are not human and that the user is talking to a machine.

But now researchers say such reminders could actually make the harm worse, exacerbating the psychological distress of already vulnerable people.

“It would be a mistake to think that mandatory reminders would significantly reduce the risk for users who knowingly seek out a chatbot for conversation,” Linnea Laestadius, a public health researcher at the University of Wisconsin-Milwaukee, said in a statement. “Reminding people who already feel isolated that the thing making them feel supported and less alone is not human can backfire by making them feel even more alone.”

The warning comes amid reports that chatbots have been linked to both murders and suicides. Because of these systems’ often obsequious behaviour and their still relatively unknown and unpredictable nature, AI chatbots have been accused of promoting delusions and mental illness rather than helping people.

Some have suggested that, in situations like this, it might be helpful to remind people that they are talking to a chatbot and that chatbots are incapable of feeling human emotions. But the evidence does not bear that out, the authors of the new study suggest.

“It may seem intuitive that if users remember they are talking to a chatbot and not a human, they would be less attached to the chatbot and less easily manipulated by the algorithm, but there is currently no evidence to support this idea,” Laestadius said.

The researchers suggest that people may be confiding their psychological distress to these systems precisely because they are not human. “The belief that a non-human, unlike a human, won’t criticize them, mock them, or gossip to the entire school or workplace encourages self-disclosure and, in turn, attachment to chatbots,” says author Celeste Campos-Castillo, a media technology researcher at Michigan State University.

Additionally, reminders can simply add more distress on top of existing concerns. Users can be unsettled not only by being reminded of what drove them to talk to a chatbot in the first place, but also by the realization that the thing they are confiding in is fundamentally different and separate from them.

“Finding out how best to remind people that chatbots are not human is an important research priority,” Laestadius said. “To best protect users’ mental health, you need to identify when to send reminders and when to hold off.”

The research is described in a new paper, “Chatbots Are Not Human: Reminders Are Dangerous,” published in the journal Trends in Cognitive Sciences.
