
The debate over the impact of artificial intelligence (AI) on elections has rightly highlighted the risks posed by propaganda using fraudulent images, audio and video, known as deepfakes. A less discussed aspect, though its damage can also be enormous, is interference by chatbots such as ChatGPT and Gemini. Two recent studies published in the world's leading scientific journals, the British Nature and the American Science, give an idea of the challenge these tools represent for electoral authorities. Given the political inclinations of the large digital platforms and their reluctance to assume responsibility as content producers, it is essential that the Superior Electoral Tribunal (TSE) be aware of the risks posed by chatbots.
Both studies concluded that AI tools are effective at changing voters' opinions. In the experiment "Persuading Voters Using Human-AI Dialogues," published in Nature, researchers tasked a chatbot with changing users' opinions about presidential candidates. Randomly chosen participants were asked about their initial position, interacted with the machine through written messages and, at the end, were asked again about their voting intention. The experiment was carried out during the 2024 US elections, the Canadian elections in April and the Polish elections in June.
In the United States, the pro-Kamala Harris bot shifted the vote of Donald Trump supporters by 3.9 percentage points; in the opposite direction, the shift was 1.5 points. In Canada and Poland, the shifts were larger, reaching up to ten points. The chatbots were programmed to use common persuasion tactics: polite manners and the presentation of evidence. When researchers banned the use of data presented as fact, effectiveness plummeted. The researchers checked the veracity of the information and found that, overall, it was accurate, but bots programmed to defend right-wing candidates made more inaccurate statements than those defending left-wing ones.
The second study, "Political persuasion levers with conversational AI," published in Science, measured the change in opinion of around 77,000 Britons on 707 political topics. The study also examined the veracity of more than 466,000 AI-generated claims. Researchers found that the most effective strategy for increasing persuasion was to ask the chatbots to pack as much information as possible into the argument. The most successful models changed participants' opinions by 25 percentage points. The more convincing the model, however, the less accurate the information it provided. The suspicion is that, pressed for information it does not have, the chatbot begins to invent it.
AI has sparked both apocalyptic predictions and naive optimism. Its effects will be what society decides they should be. Hence the need to identify problems and address risks. In the case of electoral authorities, the studies show there is much work to be done.