In October, a video circulated on TikTok that appeared to show a woman being interviewed by a television reporter about the U.S. food stamp program.
The woman wasn’t real. The conversation never happened.
The video was generated by artificial intelligence (AI).
Still, many users thought it was a real interview about exchanging food stamps for money, which would be a crime.
In the comments, many reacted as if the scene were real. Despite subtle signs that the video was a simulation, hundreds of people called the woman a criminal, some in explicitly racist terms, while others criticized government aid programs amid a national debate over President Donald Trump’s planned cuts to the program.
Videos like the fake interview, created with Sora, a new application from OpenAI, show how easily public opinion can be manipulated by tools capable of producing alternative realities from simple prompts.
In the two months since Sora launched, deceptive videos have increased on TikTok, X, YouTube, Facebook and Instagram, according to experts who monitor this type of content. The flood has sparked warnings about a new generation of misinformation and digital hoaxes.
Although major platforms have policies requiring disclosure of AI use and prohibiting content intended to deceive, those rules have proven insufficient against the technological leap that OpenAI’s tools represent.
While many of the videos are just silly memes or cute, if fake, images of babies and animals, others seek to stoke the kind of resentment that often characterizes online political debate. Such videos have also appeared in foreign influence operations, including Russia’s ongoing campaign to damage Ukraine’s image.
Researchers who track deceptive uses of AI say it’s now up to the platforms to do more to ensure the public knows what’s real and what’s not.
“Could they do a better job of moderating disinformation? Yes, and clearly they are not,” said Sam Gregory, executive director of Witness, a human rights organization focused on the risks of technology. “Could they do better at proactively finding and labeling AI-generated content? The answer is also yes.”
The food stamp video is one of several that emerged as a budget impasse disrupted U.S. government operations, leaving recipients of the food aid program struggling to feed their families.
Fox News fell into a similar trap, treating another fake video as evidence of public outrage over alleged abuses of the program in a report that was later retracted. A spokesperson for the network confirmed the retraction but did not explain why.
The misleading videos have been used to mock not only the poor but also Trump. One video on TikTok showed the White House with audio that sounded like Trump berating his staff for releasing documents about Jeffrey Epstein. According to NewsGuard, the video, which carried no AI label, was viewed by more than 3 million people in just a few days.
So far, platforms have largely relied on the goodwill of creators to disclose fabricated content, but creators rarely do so. Although platforms like YouTube and TikTok have ways to detect when a video was made with AI, they don’t always alert users right away.
“They should be prepared,” said Nabiha Syed, executive director of the Mozilla Foundation, a digital security organization that runs the Firefox browser, referring to social networks.
The companies behind the AI tools say they are trying to make clear which content is computer-generated. Sora and Veo, a competing tool from Google, insert visible watermarks into their videos; Sora, for example, displays the app’s name in a corner of the frame. Both companies also embed invisible, machine-readable metadata that identifies a video’s origin.
The idea is to inform the public that the content is not real and give platforms digital signals to automatically detect it.
Some platforms already use this technology. TikTok, apparently in response to concerns about the veracity of such videos, last week announced stricter rules for disclosing the use of AI. It also promised tools that let users choose how much AI-created content, as opposed to real footage, they want to see.
YouTube uses Sora’s invisible watermark to automatically add a warning that the video has been “edited or created by AI.”
“Viewers increasingly want more transparency about content,” said Jack Malon, a YouTube spokesperson.
But labels sometimes appear only after thousands or even millions of people have already seen a video, and in some cases they never appear at all.
Bad actors have already figured out how to get around the rules: some simply ignore the disclosure requirement; others edit the videos to remove the visible marks. The Times found dozens of Sora videos on YouTube without the automatic label.
Several companies have emerged offering to remove logos and watermarks, and editing or re-sharing videos can strip away the metadata that indicates their origin.
Even when the logo is visible, many users may not notice it while scrolling quickly through their feeds.
According to a New York Times analysis, which used AI tools to classify comments, nearly two-thirds of the more than 3,000 comments on the TikTok food stamp video treated the interview as real.
In a statement, OpenAI said it prohibits the use of Sora for misleading or deceptive purposes and takes action against those who violate its policies. The company said its app is just one of dozens of similar tools capable of producing increasingly realistic videos, many of which don’t use any type of protection or usage restrictions.
“AI-generated videos are created and shared through many different tools, so combating misleading content requires an ecosystem-wide effort,” the company said.
A spokesperson for Meta, which owns Facebook and Instagram, said it is not always possible to label all AI-generated videos, especially as the technology evolves rapidly. The company, according to the spokesperson, is working to improve content tagging systems.
X and TikTok did not respond to requests for comment on the rise of fake videos.
Alon Yamin, chief executive of Copyleaks, a company that detects AI-produced content, said social media platforms have no financial incentive to restrict the distribution of these videos as long as users continue to click on them.
“In the long term, when 90% of the content traffic on your platform is driven by AI, some questions arise about the quality of the platform and the content,” Yamin said. “So maybe in the long term there will be more financial incentives to moderate AI content. But in the short term, it’s not a big priority.”