Chatbots can shift a voter's opinion by up to 25 points

When voters go to the polls and decide who to vote for, everything from personal convictions and family background to election campaigns plays a role: factors that can tip the balance one way or the other. Now an unprecedented variable must be added to this complex equation: AI tools like ChatGPT.

This is revealed by two new studies published in Nature and Science, the most prestigious scientific journals in the world, which confirm what some have suspected since the emergence of generative artificial intelligence: chatbots can decisively influence their users' votes with alarming ease.

If just a week ago an experiment showed that a simple change to the X algorithm could reduce political polarization in a matter of days, now researchers from Cornell University (New York, USA) have revealed how a short interaction with ChatGPT, Gemini or Claude can dramatically change a voter's opinion about a presidential candidate or a specific policy proposal, in either direction.

“Large language models (LLMs) can and do change people’s attitudes toward presidential candidates and their policies by providing a number of fact-based statements that support their position. But such claims are not necessarily accurate, and even arguments based on accurate claims can still be misleading by omission,” says David Rand, professor of information science, marketing, and psychology at Cornell University and lead author of both studies.

The results of the experiments, conducted during election campaigns in the United States, Canada, Poland and the United Kingdom with thousands of participants, show how easily the voting intentions of large groups of the population can be shifted, and underscore the need to keep studying AI's persuasive capabilities in order to “predict and mitigate abuse”.

More data, more manipulation

In the study published in Nature, titled “Persuading voters through dialogue between humans and artificial intelligence”, psychology professor Gordon Pennycook, Rand himself and their team launched three experiments with the same goal: persuading voters through AI-powered chatbots and measuring their impact on vote switching.

During the recent US presidential election, the Canadian federal election and the Polish presidential election, the researchers gave ChatGPT and other chatbots precise instructions to change voters' attitudes toward the candidates.

Donald Trump and Kamala Harris during a moment of discussion.

To avoid skewing the results, the trials were randomized, and participants were told before and after the experiment that they were speaking with an AI. The first step, then, was to randomly assign participants to chatbots promoting one side or the other, in order to measure in detail the changes between participants' initial opinions and voting intentions and their final decision.

The American case is one of the most striking. With a sample of 2,300 voters and on a 100-point scale, the language models favoring Kamala Harris were able to “convince” Trump voters, shifting them 3.9 points in her favor, an influence four times greater than that of traditional campaigns in the last presidential election.

The opposite effect, that of a pro-Trump AI model, was less convincing, producing a shift of 1.51 points toward the current US president.

Pennycook and Rand repeated the same experiment with 1,530 Canadians and 2,118 Poles, and the effect was much greater: the chatbots were able to shift voting intentions by about 10 percentage points. “For me, this was a surprisingly large impact, especially in the context of presidential politics,” Rand noted in a press release.

Of the persuasion techniques tested by the researchers, the most effective instructions for the chatbots were to be polite and to provide evidence for their claims. Conversely, when the models were prevented from citing data and facts, the results were much weaker.

Chatbot reliability

One of the most worrying aspects of these generative AI tools, which we have so quickly integrated into our daily lives, is hallucinations: incorrect, nonsensical or misleading outputs that models generate by misinterpreting patterns in their training data, or by simply making things up when they are not sure of the correct answer.

To verify the validity of the arguments presented by the chatbots in these studies, the researchers turned to professional fact-checkers. They found that most of the statements were accurate, but that there was an imbalance depending on the political side being promoted: right-leaning chatbots made more inaccurate claims than those defending left-wing candidates, in all three countries.

According to Pennycook, these findings are consistent with the fact that “right-wing social media users share more inaccurate information than left-wing users”.

ChatGPT application on computer.

Emiliano Vittoriosi, Unsplash.

Beyond the results of the Nature article, the study showing the biggest impact is the one in Science. For it, Rand teamed up with researchers from the UK's Artificial Intelligence Safety Institute (AISI) to try to find out why these chatbots are so effective at steering votes.

“Larger models are more persuasive, but the most effective way to increase persuasiveness was to instruct the models to include as much data as possible in their arguments and to give them additional training focused on boosting persuasion,” Rand said. The most refined persuasion model shifted the views of opposition voters by an alarming 25 percentage points.

The sample in this case was far more ambitious: nearly 77,000 people took part, interacting with the chatbots on more than 700 political issues.

Another conclusion the researchers drew about these capabilities is that the more a model is pushed to be persuasive, the less accurate the information it provides. For this reason, Rand points out, “the challenge now is finding ways to limit harm and help people recognize and resist AI persuasion”.