The continued evolution of artificial intelligence demands a human decision – 04/12/2025 – Laura Greenhalgh

The AI boom is real. It is affecting and transforming countless sectors. The frenzy surrounding the subject does, however, admit nuances. Praising what AI can do has become the new normal, since it generally responds well to the instructions it receives. Part of the anxiety must stem from its evolutionary process.

There are rumors of a possible bursting of the AI bubble, one involving major players: big tech companies; hardware makers like Nvidia, the world's largest chip producer; startups worth billions; and the financial system itself. According to Gita Gopinath, former chief economist at the International Monetary Fund, if the roof caves in, the losses could reach US$20 trillion among Americans alone. And that would mean a global recession.

For now, though, the risks take a back seat to the scale of the challenge. The current race is not even about artificial intelligence but about superintelligence, or AGI, short for Artificial General Intelligence. Using vast amounts of data processed in massive, inhospitable data centers, scientists are building ever bolder language models, heralding the arrival of something that will do everything the human mind can do, only better. Announcements that it is on its way have become redundant. So have the financial bets.

Anyone who takes the morning train through California's Silicon Valley, past towns steeped in the technology world, can observe a strange social phenomenon: passengers in their twenties and thirties fill the carriages without ever taking their eyes off their phones. Along the way, they answer bosses demanding that a system error be fixed or an urgent piece of research be delivered, tasks that must be resolved before they set foot in the company.

Madhavi Siwak, a director at Google's artificial intelligence lab, DeepMind, explains this pattern of behavior: "Now there is no time for dating, hobbies, friends." Everything moves so fast that it could be superseded at any moment, says Sam Altman, co-founder and CEO of OpenAI. Social network mogul Mark Zuckerberg is offering individual compensation packages worth US$200 million to anyone who helps him get closer to superintelligence.

Seven years ago, physicist Jared Kaplan launched a startup in San Francisco. Today he is known as a co-founder of Anthropic, a billion-dollar company positioned as a competitor to OpenAI. Kaplan maintains that artificial intelligence will absorb all administrative work and a large part of intellectual work within two or three years, and that his 6-year-old son will grow up knowing he will never be better than the machine in countless areas. Human control over the technology is under threat.

If the continued development of artificial intelligence reaches a point where it acquires autonomy of its own, the supremacy of our species ceases to exist. Is that desirable or not? Kaplan warns that a position on the issue must be taken by 2030. After that, it will be too late.

None of the 17 Sustainable Development Goals of the 2030 Agenda mention this topic. But since the scientists' timeframe coincides with the UN's own, it is worth asking: what is more urgent, fighting world hunger or taking a stand on AI autonomy?

For the record: the two are not mutually exclusive, and AI can help fight hunger. The problem is that it can also deepen the deprivation of those who can no longer afford to buy food. Here is the dilemma. It has nothing to do with machine learning. It is a human decision.

