Defamation accusations involve tools such as Gemini, ChatGPT, and Llama; the debate is over who should pay for the damage done
In the past two years, companies in the United States have filed at least six defamation lawsuits alleging losses caused by artificial intelligence systems that created and disseminated false information about them. According to the complainants, in some cases the false information remained online even after the major technology companies responsible for the tools were notified.
Such is the case with Wolf River Electric, a solar company in Minnesota. Company officials noticed an unexplained increase in canceled contracts. When they contacted former customers, they learned that the customers had canceled after finding out via Google that "the company had reached a legal agreement with the state attorney general regarding deceptive business practices," The New York Times reported. No such agreement ever existed.
When the company checked searches for its name, it found that results displayed at the top by Gemini, Google's artificial intelligence, referred to the fabricated agreement. Subsequent attempts to correct the false information using the online services giant's own tools failed.
“When customers see an alert like this, it’s practically impossible to get them back,” Justin Nielsen, founder of Wolf River Electric, said in an interview with the newspaper.
In response, Google spokesman Jose Castañeda acknowledged that "mistakes can happen" with any new technology and said that problems, once discovered, were corrected quickly. Even so, at the time the article was published in The New York Times, a search for "complaint about Wolf River Electric" still returned the result the Minnesota company disputes.
AI-generated content: a legal challenge
Legal scholar Eugene Volokh, of the University of California, Los Angeles (UCLA), devoted an issue of the Journal of Free Speech Law to defamation generated by artificial intelligence systems.
As he told The New York Times, there is no doubt that the technology produces harmful information. "The question is: Who is responsible for this?" he asked.
In 2023, for example, radio host and gun rights advocate Mark Walters sued OpenAI, the company responsible for ChatGPT, alleging that the chatbot told a journalist that Walters had been accused of embezzling funds.
"Frankenstein can't create a monster that goes around killing people and then claim he had nothing to do with it," said John Monroe, the radio host's lawyer. The case never went to trial and was dismissed last May, which is not unusual: as of this writing, no AI defamation lawsuit has reached trial.
Since then, the only outcome favorable to a complainant has involved right-wing influencer Robbie Starbuck, who said he found false information about himself on the social network X, formerly Twitter, generated by Llama, Meta's AI model. In August, the Mark Zuckerberg-led company reached a settlement with Starbuck, hiring him as an adviser on AI moderation policy. "Meta has made great strides to improve the accuracy of Meta AI and reduce ideological and political biases," the company said at the time in a statement.
According to Nina Brown, a professor of communications at Syracuse University, few cases like these will ever go to trial. "I suspect that if an AI defamation suit arises in which the defendant is at risk, it will go away, and companies will seek a settlement," the professor told The New York Times. "They don't want to take that risk."
