- Author, Christina J Orgas
- Role, BBC News Mundo at HayFestivalArequipa
It’s evening and you decide to go out for dinner. Your partner may not know what you want to eat, but the AI does: one afternoon, it watched you watching taco videos, and it is sure you can’t stop thinking about them now.
“If we do not make decisions, others will make them for us,” writes Spanish journalist and author Laura J. de Rivera in her book Slaves to the Algorithm: A Guide to Resistance in the Age of Artificial Intelligence (title translated from the Spanish), the result of years of research.
“We live immersed in externally imposed thoughts, desires and feelings because, apparently, we humans are completely predictable. Just apply statistics to our past actions, and it is as if someone were reading our minds,” she continues.
The accuracy with which our needs and desires can be predicted is so great that Michal Kosinski, a psychologist and professor at Stanford University (USA), showed in his experiments that a well-trained algorithm, given enough data, can predict what you want or what you like better than your own mother can.
The idea that AI can predict a person’s interests with great accuracy sounds appealing in principle. But it comes at a price, de Rivera says, and a high one: “We lose freedom, we lose the ability to be ourselves, we lose imagination.”
“We work for free for Instagram when we upload our photos, so that the social network can exist and earn millions. We need to be aware of the benefits of the platforms and take advantage of them without letting the risks harm us,” she says.
BBC News Mundo (BBC Spanish Service) spoke with Rivera during the Hay Festival, which takes place between November 6 and 9 in the Peruvian city of Arequipa, an event that brings together 130 participants from 15 countries.
What is the solution so that we do not become slaves to the algorithm?
The solution, in my opinion, is very simple, affordable, free, and has no environmental impact: it is simply to think. In other words, use your mind. It is a human ability that is being neglected and lost.
Whenever we are not working or with other people, we pick up our cell phones and get distracted by the screen. We no longer think while sitting in the doctor’s waiting room or when we are bored at home.
These spaces that were once used for thinking are now completely occupied by constant distraction. Through our smartphones, we are bombarded with stimuli that keep us from thinking.
There are other things you can do, but for me this is the simplest and easiest. Only critical thinking can defend individual freedom in the face of algorithmic control and the will of others.
It is almost impossible not to hand over data when signing up to a platform. It is even harder to read all the fine print of the terms of service, or to refuse cookies every time we visit a site. Are we becoming lazy?
We are a bit lazy and a bit like puppets, but we are also lacking information.
Many people don’t realize that when they spend hours on TikTok, they are working for the platform for free. They hand over all their online behavioral data, and that data has economic value.
That is why education is key: explaining how the business model of these big platforms works.
How could Google be one of the richest companies in the world if it didn’t charge us for its services? Thinking about this is very important so that people understand how valuable all the information we give about ourselves really is.
What are the risks of artificial intelligence?
In fact, the real danger is human stupidity, because AI by itself cannot do anything to you; it is made up only of zeros and ones.
The problem is that we are so lazy that we would rather have everything done for us. That puts us in a position where we can be easily manipulated.
We live in a general numbing of the will. We surrender to the digitization of the healthcare system, to mass surveillance, and to the online education of children. We accept injustice, abuse, and ignorance as inevitable facts and, out of sheer laziness, do not rebel against them.
What are the consequences of relying entirely on the automatic predictions of an algorithmic system?
When we delegate important decisions, which may even involve life or death, the risk is very high, especially since studies show that humans tend to believe that if a computer says something, it must be true, even when we ourselves believe otherwise.
So whom will you let decide: your mother, your teacher, your boss, or an artificial intelligence?
This is a very old problem of humanity. I really like Fear of Freedom, by the psychoanalyst, sociologist, and Frankfurt School member Erich Fromm, which deals precisely with this.
Fromm argues that humans prefer to be told what to do because they fear making decisions on their own. Making decisions scares us, and we would rather be like robots, taking orders. Fromm was already saying this in the first half of the twentieth century.
Is there any way to avoid revealing our data online?
Of course. There are ways to avoid handing over our data, or to hand over only the bare minimum. But the most important thing is to understand how the platforms work. Only then can we take action, even if it is just to make life more difficult for those who profit from your data and your life. We can adopt small habits, such as refusing cookies when we enter a website.
What can we do?
We can also talk about the need for regulations that protect us, and for the companies that use AI to develop ethical standards.
Do you mean the Edward Snowden case, which revealed the mass surveillance systems used by American intelligence agencies?
Yes. For me, Snowden is one of the heroes of the century, but there are others. His case is simply the most famous.
There’s also Sophie Zhang, a data scientist at Facebook, who was fired after raising awareness internally about the systematic use of fake accounts and bots by governments and political parties to manipulate public opinion and sow hatred.
Zhang found that in many parts of the world, in Latin America, Asia, and even some parts of Europe, politicians were using fake accounts, with bogus followers, likes, and constant shares, to trick citizens into believing they had popular support and approval that was not real.
When she reported the problem to her superiors, Sophie Zhang realized, to her surprise, that no one wanted to do anything about it.
For example, it took a full year before Facebook was able to delete the network of fake followers of then-president of Honduras, Juan Orlando Hernandez, who was convicted in New York federal court of conspiracy to import cocaine into the United States and possession of automatic weapons.
Your book also mentions the case of computer engineer Timnit Gebru, co-lead of Google’s artificial intelligence ethics team, who was also fired.
Yes, for denouncing that algorithms perpetuate racial and gender discrimination. She also warned that large language models could pose a danger: people might think they are human and be manipulated. Despite a letter of protest signed by more than 1,400 company employees, Gebru ended up being fired.
Another whistleblower is Guillaume Chaslot, a former YouTube employee, who discovered that the site’s recommendation algorithm systematically pushed users toward sensationalist, conspiratorial, and polarizing content.
What hope do we have left?
We know for certain that, no matter how hard we try, software is incapable of even the slightest dose of creativity: of coming up with new options, that is, options that do not derive from the statistics of past data.
Nor will it be able to offer solutions based on empathy, on putting oneself in another person’s place, or on solidarity, on seeking one’s own happiness in the happiness of others.
These three qualities are uniquely human by definition.