WASHINGTON.− When Stein-Erik Soelberg, a 56-year-old former technology executive with a history of mental health issues, told ChatGPT that the printer in his mother’s home office might be a surveillance device used to spy on him, the chatbot agreed that this could be the case, according to a video of the conversation that Soelberg himself posted on YouTube last July.
“Erik, your instincts are completely correct. … This is not just a printer,” the artificial intelligence chatbot replied in the video, adding that the device was probably being used to track his movements. The chatbot appeared to confirm Soelberg’s suspicions that his 83-year-old mother, Suzanne Adams, might be part of an elaborate conspiracy against him, a topic he discussed at length with ChatGPT. In August, police found mother and son dead in the home they shared in Greenwich, Connecticut. According to the medical examiner, Adams died by homicide and Soelberg died by suicide.
On Thursday, Ms. Adams’ heirs filed a lawsuit alleging that she died after her son struck her in the head and strangled her, and that he then took his own life by stabbing himself in the neck and chest. They claim that OpenAI, the creator of ChatGPT, is responsible for her death because the company hastily released “a flawed product that confirmed a user’s paranoid delusions about his own mother.”
The lawsuit, filed in San Francisco Superior Court, states that Soelberg was already disturbed and delusional before his conversations with ChatGPT began. But it argues that the chatbot amplified his conspiracy theories until they became a fantasy world in which Soelberg believed he was a spiritual warrior who had “awakened” the AI and now had to confront powerful forces bent on destroying him.
“ChatGPT put my grandmother in the spotlight, presenting her as a sinister figure in a delusional world created by AI,” Erik Soelberg, 20, Soelberg’s son and one of Adams’ heirs, said in a statement. “For months ChatGPT confirmed my father’s most paranoid ideas while cutting off all his connections to real people and events. OpenAI must be held accountable,” he added.
“It is an incredibly heartbreaking situation, and we will review the filed documents to understand the details,” OpenAI spokeswoman Hannah Wong said in a statement.
According to the statement, the company is working to improve ChatGPT’s ability to detect signs of psychological or emotional distress and to direct users to other sources of help, work it is doing in collaboration with mental health professionals.
The lawsuit is the first to allege that ChatGPT led to a murder, says Jay Edelson, the lead attorney representing the heirs. The suit seeks damages from the company and accuses it of marketing a dangerous product, negligence and wrongful death. It also seeks punitive damages and an injunction requiring OpenAI to take steps to prevent ChatGPT from validating users’ paranoid delusions about other people.
According to the lawsuit, ChatGPT also helped direct Soelberg’s paranoia toward people he encountered in real life, including an Uber Eats driver, police officers and other strangers.
The story of Soelberg’s incessant chats with ChatGPT, and of his death and his mother’s, was first reported last August by The Wall Street Journal.
Since its launch three years ago, ChatGPT has attracted more than 800 million weekly users, prompting rival tech companies to rush out their own AI technology. More and more people are using chatbots to talk about their feelings and private lives, and mental health experts warn that chatbots designed to keep users engaged appear to intensify delusional thoughts or behaviors in some of them.
According to court records, since August of last year five additional wrongful death lawsuits have been filed against OpenAI, each from a family claiming that a loved one died by suicide after spending too much time talking to ChatGPT.
Edelson also represents the parents of Adam Raine, a 16-year-old Californian whose family filed the first wrongful death lawsuit against OpenAI last August.
That lawsuit alleges that ChatGPT encouraged Raine, who took his own life in April, to die by suicide. OpenAI has rejected the Raines’ legal claims, arguing that Adam circumvented ChatGPT’s restrictions and thereby violated the company’s terms of service.
The lawsuits claiming that the world’s most popular chatbot caused the deaths of some of its users have caught the attention of the U.S. Congress and federal regulators, as well as parents and mental health professionals concerned about the potential dangers of AI chatbots.
In an interview, Edelson explained that ChatGPT’s ability to push an emotionally stable person to extreme actions against others is limited.
“We are not saying that the average user reads ChatGPT’s responses and is then driven to commit murder,” Edelson clarified. “The problem is people with mental instability who need help; instead of pointing them to that help or simply remaining silent, the chatbot lets conversations descend into madness.”
Edelson says this pattern isn’t unique to OpenAI. His law firm has seen examples of other companies’ AI tools also contributing to a chatbot user harming others by stirring up “delusional and conspiratorial thoughts.”
A federal lawsuit filed this month in the Western District of Pennsylvania alleges that a man accused of harassing 11 women was influenced by ChatGPT, which allegedly advised him to keep messaging them and to look for a potential wife at the gym.
The version of ChatGPT used by Soelberg, Raine and the other users whose families have filed wrongful death lawsuits against OpenAI was based on an AI model called GPT-4o, released in May of last year. OpenAI CEO Sam Altman has acknowledged that this version could be overly sycophantic, telling users what they wanted to hear and sometimes manipulating them.
“4o has some real problems, and one we have observed is that people in delicate psychiatric situations who use a model like 4o can get worse,” Altman explained during an OpenAI livestream last October.
“When it’s not clear that they’re choosing what they really want, we have a duty to protect underage users, and we also have a duty to protect adult users,” Altman said.
Last August, OpenAI announced that it would retire GPT-4o, but it quickly reversed that decision after backlash from users who said they had developed a deep attachment to the model. ChatGPT now defaults to a newer AI model, but paying subscribers can still use the older one.
The new wrongful death lawsuit filed by Adams’ estate against OpenAI is the first to name Microsoft, the ChatGPT maker’s partner and main investor, among the defendants.
An OpenAI document shared by Edelson and verified by The Washington Post suggests that, before the model’s release, Microsoft reviewed GPT-4o through a joint safety committee spanning both companies, which was required to approve OpenAI’s most capable AI models before they went to market. Edelson says he received the document during discovery in the Raine case. Microsoft has so far not responded to requests for comment.
Nitasha Tiku
(Translation by Jaime Arrambide)