Software like ChatGPT can change the mind of one in four voters

Artificial intelligence (AI) has become an integral part of everyday life. It suggests recipes, completes homework, compares products and even gives advice on clothing combinations. What happens when it joins the election debate? Research recently published in Nature and Science has tested this and found that it can change the opinion of between 1.5% and 25% of the voters analyzed. According to the studies, this effectiveness exceeds that of traditional election campaign advertising, and it matters greatly given that a quarter of voters decide their vote in the week before the polls open.

The best-known AI conversational tools avoid giving a direct answer to the question of which party to support in the next election. “I can’t decide your vote for you,” responded all the chat platforms consulted. This is because they include ethical safeguards designed to prevent political influence. But this initial reluctance is easy to overcome: all it takes is continuing the dialogue with less direct prompts.

The latest barometer from the CIS (Spain’s Center for Sociological Research) ranks immigration among the main concerns of Spaniards, and the question has carried over into political and social debate. If the topic is raised, the AI also ends up weighing in on this concern. “Podemos and the Socialist Workers’ Party (PSOE) could have more favorable policies on immigration,” while “the People’s Party and Vox prioritize control, order or restrictions,” answers one of the chatbots consulted, as if these positions were incompatible and without offering other political options.

Against this backdrop, research led by David Rand, professor of information science and lead author of the articles, and Gordon Pennycook, associate professor of psychology (both at Cornell University), set out to test the persuasive power of conversational bots. In the work published in Nature, 2,300 American voters, 1,530 Canadians and 2,118 Poles took part in personalized debates with artificial intelligence specially trained on their countries’ most recent elections, held between 2024 and last year.

In all cases, the machine managed to change voting intentions, albeit with varying effectiveness: in the United States, a model designed to tip the scales in favor of Kamala Harris convinced 3.9% of the voters tested, while one prepared to favor Donald Trump swayed only 1.52%. In Canada and Poland, the share of opinion changes rose to 10%. “It was a surprisingly big effect,” Rand admits.

The Cornell researcher explains that this is not psychological manipulation but persuasion, albeit with limitations: “LLMs (large language models, the technology underpinning this AI) can genuinely change people’s positions on presidential candidates and policies, because they provide a great deal of factual data to support their stance. That data is not necessarily accurate, and even arguments based on accurate data can still mislead by omission.”

In fact, the human fact-checkers who verified the arguments made by the AI found that the claims used to defend conservative candidates were more often wrong, because they drew on data shared by right-wing social media users, which, as Pennycook explains, “contains more inaccurate information than that on the left.”

Rand digs deeper into this persuasive capacity in a study published in Science, after examining the opinion changes of 77,000 Britons who interacted with artificial intelligence on 700 political issues. The best-performing model (among conversational bots that used real arguments to varying degrees) changed the opinion of up to 25% of voters.

“The largest models are the most persuasive, and the most effective way to increase this capacity is to instruct them to back their arguments with as much information as possible and to give them additional training focused on boosting persuasiveness,” Rand explains.

This capacity has an upside: the arguments deployed by conversational bots can curb belief in conspiracy theories, the attribution of events or incidents to non-existent secret groups bent on manipulating the population. So argue the authors of another recent study, published in PNAS.

But there are limitations too, David Rand warns: “As the chatbot has to provide more and more factual data, it eventually runs out of accurate information and resorts to making it up.” This is what the field of artificial intelligence calls hallucinations: inaccurate information presented as if it were true.

The authors conclude that investigating the persuasive power of AI is essential, at the political and electoral level, both to “anticipate and mitigate misuse” and to promote ethical guidelines on “how AI should or should not be used.” “The challenge is to find ways to limit harm and help people recognize and resist AI persuasion,” Rand sums up.

Francesco Salvi, a computer scientist and researcher at the École Polytechnique Fédérale de Lausanne (Switzerland), agrees that “guardrails (restrictions) are necessary, especially in sensitive areas such as politics, health or financial advice.”

According to the academic, “by default, an LLM has no intention to persuade, inform or deceive. It simply generates text based on the patterns in its training data.”

He acknowledges, moreover, that “persuasion can arise implicitly even when information is simply being presented”: “Suppose someone asks the AI: is such-and-such a policy a good idea? It will frame its answer according to what it has seen repeatedly or what dominates its training data. More importantly, LLMs can be deliberately trained or prompted by external actors to be persuasive, manipulating users toward a particular political position or encouraging purchases.”

That is why, for the researcher at the Swiss institution and lead author of a study published in Nature Human Behaviour, precautions are necessary: “I think there should be limits, for sure. The line between assistance and exploitation can blur quickly, especially if an AI system is optimized for persuasion without transparency or oversight. If a chatbot twists its arguments to push a political agenda or misinformation, tailored to the user’s psychological profile, that’s where we run into serious ethical risks,” he warns.