Extremist groups are beginning to train in the use of artificial intelligence, posing a new global threat
While the rest of the world rushes to harness the power of artificial intelligence, extremist groups are also experimenting with the technology, even if they don’t yet know exactly what to do with it.
For extremist organizations, AI could be a powerful tool for recruiting new members, creating realistic deepfake images and perfecting cyberattacks, national security experts and spy agencies warn.
Last month, someone on a pro-ISIS website urged other sympathizers to incorporate AI into their operations. “One of the best things about AI is how easy it is to use,” the user wrote in English.
“Some intelligence agencies fear that AI will contribute to recruitment,” the user continued. “So make your nightmares come true.”
ISIS, which seized territory in Iraq and Syria years ago but is now a decentralized alliance of groups that share a violent ideology, recognized early on that social media could be a powerful tool for recruitment and disinformation. It is therefore not surprising that the group is testing AI, national security experts say.
For loosely organized, resource-poor extremist groups, or even a malicious individual with an internet connection, AI can be used to spread propaganda and deepfakes on a large scale, expanding their reach and influence.
“For any adversary, AI really makes things a lot easier,” said John Laliberte, a former vulnerability researcher at the National Security Agency and now CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money can still make a difference.”
How extremist groups are experimenting with it
Radical groups began using AI as soon as programs like ChatGPT became widely available. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and videos.
Combined with social media algorithms, this fake content can help attract new believers, confuse or frighten opponents, and spread propaganda on a scale that was unimaginable just a few years ago.
Two years ago, such groups spread false images of the war between Israel and Hamas that showed bloodied babies left in bombed-out buildings. The images sparked outrage and polarization and overshadowed the true horrors of the war. Violent groups in the Middle East used the photos to recruit new members, as did anti-Semitic hate groups in the United States and elsewhere.
Something similar happened last year after an attack claimed by an ISIS affiliate killed almost 140 people at a concert in Russia. In the days following the shooting, AI-created propaganda videos spread across discussion forums and social media in an attempt to find new recruits.
ISIS has also created deepfake audio recordings of its own leaders reciting scriptures and has used AI to quickly translate messages into multiple languages, said researchers at SITE Intelligence Group, a company that tracks extremist activity and has studied ISIS’s increasing use of AI.
For now, more aspiration than capability
Such groups lag far behind state actors like China, Russia and Iran; for them, the use of AI is “just a wish,” according to Marcus Fowler, a former CIA agent who is now CEO of Darktrace Federal, a cybersecurity company that works with the federal government.
But the risks are too high to ignore, and they are likely to grow as cheap, powerful AI becomes more widespread, he said.
Hackers are already using synthetic audio and video in phishing campaigns, impersonating business or government leaders to gain access to sensitive networks. AI can also be used to write malicious code or to automate some aspects of cyberattacks.
Even more worrying is the possibility that extremist groups could use AI to create biological or chemical weapons, compensating for their lack of technical expertise. That risk was cited in the Department of Homeland Security’s updated national threat assessment released earlier this year.
“ISIS got on Twitter early and found ways to use social media to their advantage,” Fowler explained. “They’re always looking for the next thing to add to their arsenal.”
Taking action against a growing threat
Lawmakers have proposed several initiatives, saying there is an urgent need to act.
For example, Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said the United States should make it easier for AI developers to share information about how their products are being used by malicious actors, whether extremists, criminal hackers or foreign spies.
“Since late 2022, with the release of ChatGPT, it has been clear that the same fascination and experimentation with generative AI that the public has had would also apply to a number of bad actors,” Warner said.
At a recent hearing on extremist threats, House Democrats learned that ISIS and Al-Qaeda have held training workshops to help supporters learn how to use AI.
A bill passed by the House of Representatives last month would require national security officials to assess the AI risks posed by such groups every year.
Protecting against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill’s sponsor. “Our policies and capabilities must keep pace with the threats of tomorrow,” he said.