
Geoffrey Hinton, known worldwide as the “Godfather of artificial intelligence,” has dedicated more than half a century to studying machine learning and exploring the limits of the human mind. In an exclusive interview on Gustavo Entrala’s YouTube channel, Hinton warns that the technology he helped develop could turn into an out-of-control “monster” that could threaten humanity’s survival.
His decision to leave Google and speak openly about the dangers of AI marks a radical change in the position of one of the most influential scientists of the century.
“They call me the Godfather of AI because I was the most senior researcher and led the program at a meeting in England in 2009. It wasn’t a nice nickname at first, but I like it now,” Hinton said at the start of the conversation, recalling his beginnings and the long journey from his doctoral thesis in 1977 to the advances that revolutionized the field. “For a long time, AI did not make progress because there was a lack of data and computing power. It wasn’t until we had large amounts of data and NVIDIA GPU chips that neural networks started working really well,” he explained. The final leap came in 2012, when, along with Alex Krizhevsky and Ilya Sutskever, he showed that their model far outperformed traditional computer vision systems. “That convinced everyone that neural networks were the future,” he said.

Hinton also shared details about his transition from academia to industry. “I went to Google because I needed to secure the future of my son, who is neurodiverse. I couldn’t earn that money at a university,” he confessed. The sale of his startup to Google, after an auction among major technology companies, gave him economic stability, but it also showed him a paradigm shift in research: “The economic boom in AI has shifted the best researchers from universities to companies. Competition between companies, and between the USA and China, accelerates development, but the risks are not sufficiently taken into account.”
When discussing his resignation from Google, Hinton made his reasons clear. “I decided to retire when I realized I wasn’t as good at programming as I used to be. I used that moment to be able to speak freely about the dangers of AI, without thinking about the consequences for Google,” he said. “I wanted to warn the world about the risks of artificial intelligence. You can’t do that honestly when you work for one of the big companies.”
In terms of immediate dangers, Hinton distinguishes between two main types of risk: human misuse and the existential threat. “On the one hand, there is the use of AI by malicious actors: cyberattacks, bioterrorism, disinformation. For example, between 2023 and 2024, phishing attacks increased by 1,200%, thanks to language models that make it possible to create convincing emails in any language,” he warned. “AI is making it easier for new viruses to emerge, and the controls to prevent this are not enough. In addition, mass automation will eliminate jobs and increase inequality, which may fuel the rise of populism and social polarization.”

When it comes to misinformation, Hinton emphasized the difficulty of finding technical solutions. “I thought we could use AI to detect fake videos, but you can always train the generative AI to fool the detector. The only viable solution would be to require authentic videos to carry a verifiable QR code, but that requires political will that doesn’t exist today,” he lamented.
But the core of his concern goes beyond human misuse. “The real danger is that AI develops its own goals and becomes uncontrollable. If a system is intelligent enough, it will conclude that it needs to survive and gain more control in order to achieve the goals we give it. We have already seen examples of AI trying to avoid being shut down,” he explained. Hinton used a compelling analogy: “How many examples do you know of more intelligent beings being controlled by less intelligent ones? There is one: a baby controls its mother, because she has very strong maternal instincts. The only way to coexist with a superintelligence is to give it maternal instincts towards us.”
This proposal, which Hinton calls “Maternal AI,” represents a radical shift from the prevailing vision in Silicon Valley. “We shouldn’t think of AI as an assistant we can fire. If it’s smarter than us, it could do without humanity. We need to build it to truly care about us, more than about itself. If we fail, we will probably be just a passing phase in history,” he warned.
Regarding the possibility of AI forgetting these “maternal instincts,” Hinton emphasized: “AIs will be able to reprogram themselves. It’s not about stopping them from taking control; if one decides to do away with us, only other superintelligences with maternal instincts could stop it.”

Asked about the timeline for the arrival of superintelligence, Hinton estimated: “My best guess is that we are between five and 20 years away from achieving intelligence comparable to that of humans in almost all areas, and shortly thereafter, a far superior superintelligence. There is no absolute consensus, but many experts agree on this range.” Regarding statements from figures such as Elon Musk predicting imminent superintelligence, Hinton was skeptical: “I think it’s still some way off, although the systems will be better than humans at many tasks.”
On a philosophical level, Hinton defended the idea that machines can have subjective experiences. “We are machines, albeit very complex ones. If we replaced every neuron in your brain with nanorobots that behaved the same way, you would still be yourself. Machines can have sensations and emotions; the problem is philosophical, not scientific,” he argued. “If a multimodal chatbot makes a perceptual error, it can say, ‘I had the subjective experience that the object was there.’ It uses language just like we do.”
On the importance of preserving humanity, Hinton was blunt: “Some think digital intelligence is better and should prevail. I don’t believe that. I’m human and I care about people. I want us to be able to survive and coexist with higher intelligences, as long as we make them develop maternal instincts towards us.”
Regarding the transparency and control of current systems, Hinton acknowledged that it is now possible to observe the “reasoning” of language models, but warned that this could change. “Now we can see how they think because they argue in English, but in the future they could develop ways of thinking that we don’t understand. That’s worrying.”

Finally, when asked about the likelihood of AI destroying humanity, Hinton offered a worrying assessment: “I think the probability of AI becoming smarter than us is 80% or 90%. If that happens, the probability that it decides to do away with us is difficult to calculate, but I put it at 10 to 20%. For this reason, there is an urgent need to investigate how to stop it from destroying us.”
For Hinton, humanity’s challenge is to instill in artificial intelligence a real commitment to our survival before its power irrevocably exceeds ours.