
We live in a time of acceleration. Technology is no longer a specialized field but an integral part of everyday life. Every day we make decisions, inform ourselves, work and express our opinions through screens. The Internet and artificial intelligence have changed the way identities are constructed, politics is organized and power is exercised.
Social networks have become the most important public space in modern democracies. They have democratized public speech, but they have also opened a fertile field for manipulation and misinformation. In this scenario, truth no longer competes only with speed or virality, but also with the emotions that drive it. Citizens operate in an environment in which reliable information is mixed with rumors, conspiracy theories and hate speech.
According to data from Voices and WIN, three out of four Argentines believe that misinformation weakens democracy and increases polarization, yet the majority do not check what they read. Information fatigue and anxiety make it difficult to stop and compare sources. More than half find social networks aggressive, and one in two has experienced digital harassment or violence, especially young people and women.
Platforms empower but also exhaust. Eight out of ten Argentines report sleep problems or physical discomfort related to prolonged device use, and the majority admit to spending less time with family or friends. Loneliness and anxiety increase in an environment of constant connection. The paradox is obvious: it has never been so easy to communicate, and never so difficult to listen to one another.
The emergence of generative artificial intelligence has exacerbated these dilemmas. What seemed futuristic is now routine: AI writes texts, generates images, diagnoses diseases and writes code. The benefits are enormous, but they also raise ethical questions: Who controls the data? How are biases defined? What happens to work and authorship?
Knowledge about AI is growing rapidly. According to the Pew Research Center, eight out of ten adults in the world have heard of the topic. But when examining how people feel about the increasing use of AI in daily life, caution prevails: 34% of the global population say they are worried, while 16% are excited. In Argentina, generational differences are clear: only 25% of young people say they are worried, compared to 51% of those over 50.
The central question is who regulates, and by what criteria. Barely a third of Argentines trust the state's ability to do so, while globally 55% prefer national frameworks rather than leaving regulation in the hands of major powers or corporations. In this context, the European Union emerges as the most trusted actor, with data protection laws and standards aimed at balancing innovation and civil rights.
Eight out of ten Argentines defend freedom of expression online as an essential right, but demand transparency in algorithms and greater security against crime, harassment and hate speech. The majority believe that the state should promote digital education and take action against harmful content. Society recognizes that responsibility must be shared between users, companies and governments.
The relationship with technology is ambivalent. The Internet is valued for its usefulness (information, connection, work), but it also overwhelms and creates stress. Cybercrime is on the rise and almost half of users fear for their privacy. People trust educational institutions more than governments or politicians, which makes education a central actor in developing critical thinking and promoting responsible behavior.
AI can be a powerful ally in healthcare and education. It is already being used in medicine to detect diseases more precisely and predict personalized treatments. In the classroom, it can facilitate learning, help teachers personalize strategies, and provide access to quality resources. But without an ethical framework, these advances risk concentrating power and widening inequalities.
The impact on mental health is a critical dimension. Constant exposure, social comparison and the endless stream of stimuli lead to exhaustion. One in four young people suffers from symptoms of digital stress, and the fear of being disconnected is growing: many experience frustration or sadness when they cannot access their networks. Platforms designed to capture attention disrupt calm, mood and self-esteem.
The challenge is to regain control: to value the pause, the silence and the human encounter once again, and to understand that technology must be at the service of well-being, not the other way around. Digital education is not just about teaching how to use tools, but also about learning to think: distinguishing truth from manipulation, understanding how algorithms work, and cultivating empathy in virtual interaction.
Artificial intelligence amplifies both the best and worst in us. It can democratize knowledge and improve quality of life, but it can also undermine trust and deepen inequality. Therefore, regulation should not be seen as a brake, but as a guarantee for sustainable human development. The challenge is not only technological but also civic: building institutions, ethical frameworks and social practices that center human well-being.
The future will be technological and human at the same time if we succeed in aligning innovations with the common good. Machines can process data, but only humans can understand it. The question is not whether AI will change the world – it already is – but what kind of world we want to build with it.