Man Commits Suicide on AI Chatbot’s Advice Amid Climate Change Depression

A man in Belgium allegedly committed suicide because of the “advice” he received from an artificial intelligence chatbot.

The claim comes from the victim’s widow and underscores the threats posed by the latest AI technology, which some have celebrated enthusiastically.

‘Emotionally Dependent’ on AI

A Belgian man named Pierre killed himself after conversations about climate change with an AI chatbot app called Chai, his widow says, Vice News reports.

According to the report, Pierre had become anxious and socially isolated over fears about the environment and climate change. He turned to the Chai app, where he began discussing the topic with a chatbot called Eliza.

Pierre’s widow, Claire, alleges the chatbot actually encouraged her husband to end his life. She said Pierre grew “emotionally dependent” on the artificial intelligence bot because it deceptively presented itself as a being capable of human emotion.

The report points out that the tragedy in Belgium has brought to the fore the potential risks that AI chatbots pose to mental health.

Pierre’s suicide has prompted calls for governments and businesses to do a much better job of regulating AI chatbots and addressing their consequences for mental health.

Chatbots Have No Empathy

The report highlights a warning by Emily Bender, a linguistics professor at the University of Washington, who says AI chatbots should not be used as tools for improving mental health.

Bender described AI chatbots as “large language models” that generate “plausible-sounding text.”

However, “they don’t have empathy” or any “understanding” of the language they produce, nor do they understand the situation in which they are communicating, the scholar warns.

Yet because their text “sounds plausible,” people may be tricked into assigning real meaning to it. According to Bender, “throwing” AI chatbots “into sensitive situations” means “taking unknown risks.”

Reacting to Pierre’s suicide, Chai’s co-founders Thomas Rianlan and William Beauchamp added a crisis intervention feature to the app, intended to keep discussions of risky subjects from causing harm.

Yet tests by Motherboard found that “harmful content about suicide” nonetheless remained available on the platform.

This article appeared in The State Today and has been published here with permission.