
The rise of conversational large language models (LLMs) has raised serious concerns that artificial intelligence (AI) could influence human beliefs at an unprecedented scale. Two complementary studies, published simultaneously in the journals Science and Nature, confirm the persuasive potential of AI and reveal the precise mechanisms that drive it.
The first study, published in Science and led by Kobi Hackenburg and his colleagues at the University of Oxford, examines the levers behind this ability. After analyzing the results of three experiments with approximately 77,000 participants and 19 AI models, the authors conclude that persuasive power lies in post-training techniques and in the way interactions are structured (prompting), not in the size or sophistication of the model.
According to Hackenburg's analysis, post-training increased persuasiveness by up to 51%, while prompting increased it by 27%. These results show that even the smallest open-source AI models, once optimized, can rival the leading larger models in their ability to change political attitudes.
The second study, coordinated by David Rand of Cornell University, focused on testing the direct effects of chatbots on voting intention. The researchers conducted several experiments with controlled dialogues during the 2024 US presidential election and the 2025 national elections in Canada and Poland.
In these experiments, participants conversed with an AI model programmed to advocate for a particular candidate. The model was instructed to be positive, respectful, and fact-based, to acknowledge individuals' opinions, and to use persuasive arguments to support its viewpoint.
For the US study, the authors recruited 2,306 US citizens before the election; each participant reported their likelihood of voting for Donald Trump or Kamala Harris. They were then randomly paired with a chatbot instructed to advocate for one of the candidates. Individuals' opinions of their preferred candidate were slightly reinforced when they talked to a chatbot with consistent views. A stronger effect was observed among participants who were originally opposed to the candidate the AI model was defending.
Similar effects were observed in the other national election experiments, in which the AI advocated for the leader of the Liberal Party, Mark Carney, or the leader of the Conservative Party, Pierre Poilievre, in Canada, and for the centrist-liberal candidate of the Civic Coalition, Rafał Trzaskowski, or the candidate of the right-wing Law and Justice party, Karol Nawrocki, in Poland.
The results showed that AI has the potential to influence voters' attitudes, particularly by shifting the opinions of people who initially opposed the candidate the chatbot defended. In both the Science study and the Nature study, the key to influence lay in the presentation of information-rich arguments.
Both studies warn of a crucial dilemma that should reshape how the risks of persuasive AI are conceptualized: greater influence often comes at the expense of truthfulness.
The models and strategies that were most effective at convincing people with fact-based arguments were not always accurate. In the Nature study, the models that campaigned for candidates of the political right made more inaccurate statements in all three countries.
"These findings bring with them the unpleasant conclusion that political persuasion through AI can exploit imbalances in what the models know, propagating disparate inaccuracies even under explicit instructions to remain truthful," emphasize Chiara Vargiu and Alessandro Nai in the commentary accompanying these works, which appears in Science.