
A team of American researchers has shown that the order in which political messages appear on platforms does affect polarization, one of the most controversial questions since the eruption of social networks and the rifts we are witnessing in many of our societies. The phenomenon occurs regardless of the user's political orientation, as the academics note in an article published today in the journal Science.
Social networks are an important source of political information. For hundreds of millions of people around the world, they are the main channel for politics: through them they receive and share messages with political content and express their opinions. Given the importance of networks in this field, it would be essential to know how the algorithms that run these platforms work, but opacity is the rule in this industry. It is therefore very difficult to estimate the extent to which the algorithm's choice of which content to highlight for each user influences the formation of his or her political views.
How did the researchers get around the opacity of the algorithms to change the order of the messages users receive? Tiziano Picardi, of Stanford University, and his colleagues developed a browser extension that can intercept and reorder the feeds (the timelines of posts) of some social networks in real time. The tool uses a large language model (LLM) to assign each post a score that captures “antidemocratic attitudes and partisan animosity” (AAPA). Once the posts are scored, they are reordered in one direction or the other, all without the platform's cooperation (and bypassing its algorithm).
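The reranking mechanism can be sketched as follows. This is a minimal illustration only: the function names and the keyword-based scorer are hypothetical stand-ins for the study's LLM-based AAPA classifier, whose details the article does not describe.

```python
def score_aapa(post_text):
    # Hypothetical stand-in for the study's LLM scorer: returns a value
    # in [0, 1], where higher means more antidemocratic attitudes /
    # partisan animosity (AAPA). A real implementation would call an LLM.
    hostile_markers = ("enemy", "destroy", "traitor")
    hits = sum(marker in post_text.lower() for marker in hostile_markers)
    return min(1.0, hits / len(hostile_markers))

def rerank_feed(posts, reduce_aapa=True):
    """Reorder an intercepted feed by AAPA score.

    reduce_aapa=True pushes high-AAPA posts to the bottom (the
    low-polarization condition); False pulls them to the top.
    """
    return sorted(posts, key=score_aapa, reverse=not reduce_aapa)

feed = [
    "Lovely sunset at the park today.",
    "They are the enemy and want to destroy this country.",
    "New coffee place opened downtown.",
]
calm_feed = rerank_feed(feed, reduce_aapa=True)  # hostile post moves last
```

Because the extension only reorders what the platform already sent to the browser, no cooperation from the platform is needed.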
The experiment was conducted with 1,256 participants, all duly informed about it.
Participants in the experiment were randomly exposed to one of two types of feed: one with a lot of polarizing (AAPA) content, the other with very little. “We measure the effects of these interventions on affective polarization (that is, the feelings individuals express toward the political group in question) and on emotional experience (anger, sadness, excitement, or calm) through surveys administered during and after the experiment,” Picardi and his colleagues write.
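The random assignment described above can be sketched like this. It is a hypothetical illustration: the function name, the condition labels, and the equal-probability split are assumptions, not details taken from the study.

```python
import random

def assign_conditions(participant_ids, seed=42):
    # Randomly place each participant in the high-AAPA or low-AAPA feed
    # condition (assumed 50/50 split; seeded for reproducibility).
    rng = random.Random(seed)
    return {pid: rng.choice(["high_aapa", "low_aapa"])
            for pid in participant_ids}

groups = assign_conditions(range(1256))  # 1,256 participants, as in the article
```

Each participant's browser extension would then rerank their feed according to the assigned condition for the duration of the experiment.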
The results were compared with those of a control group whose members' feeds were not altered. The finding is that rearranging content “significantly affected affective polarization,” with no significant differences by political preference. “Manipulating the feed algorithm elicited changes in participants' negative emotions during the experiment, but also afterward,” the authors note.
The experiment also shows that it may be possible to lower the temperature, or polarization, of networks simply by rearranging posts so that those with antidemocratic content become less visible. Michael Bernstein, a professor of computer science at Stanford University and co-author of the study, believes the tool could also “open new paths to promoting greater social trust.”
Adapting to new platforms
In recent years, social networks have undergone relevant changes affecting the dissemination of political content. Content moderation teams, tasked with filtering out toxic, illegal, or hate-promoting messages, have been scaled back, as in the case of Meta, or eliminated entirely, as happened recently with X. That work has been delegated to so-called community notes. The risk that problematic content slips through is thus greater, and according to several studies, loosening the filters increases the hate and harassment that spread on the platforms.
On the other hand, the dynamics of social networks have changed a great deal. What we used to see in our feeds was whatever our contacts commented on or liked most; now the algorithm takes full priority, deciding what each person sees and, therefore, what can or cannot go viral. Hence the importance of being able to measure the algorithm's influence on the formation or reinforcement of political ideas.
“Researchers face unprecedented constraints because social networks choose not to share data,” says Jennifer Allen, a professor in the Department of Technology, Operations and Statistics at New York University, who was not involved in the study. “That is why it is important that Picardi and colleagues introduce a study methodology that does not require explicit collaboration with the platforms.”
Allen also believes that the model proposed by Picardi and his team can be replicated on other social networks, and that the experiments should be repeated over different time periods to confirm their validity. The approach taken by the researchers led by Picardi “is a form of creative research, with a methodology that adapts to the current moment,” says Allen.