- Geoffrey Hinton, the “Godfather of AI”, believes there is a 10 to 20% chance AI could pose an existential threat to humanity
- His biggest concern is the development of artificial general intelligence (AGI)
- He is calling for global cooperation, regulation, and more serious conversations about the future of AI
Geoffrey Hinton, often dubbed the “Godfather of AI”, has once again voiced grave concerns about the rapid advancement of artificial intelligence. Hinton believes there is a 10 to 20 per cent chance that AI could become powerful enough to end humanity as we know it. Even though this isn’t the first time Hinton has warned that AI could be a danger to humanity, his latest statement is raising fresh alarm.
The veteran computer scientist, known for his pioneering work in deep learning, left his role at Google in 2023 to speak more freely about the risks he sees in the future of AI. Now, in a fresh round of interviews and public appearances, he’s made it clear: his fears are growing stronger, not weaker.
“People don’t know what’s coming”
In a recent CBS News interview, Hinton expressed his concerns. “People haven’t got it yet, people haven’t understood what’s coming,” he said, adding, “I’m in the unfortunate position of happening to agree with Elon Musk on this, which is that there’s a 10 to 20 per cent chance that these things will take over, but that’s just a wild guess.”
This thought, once considered science fiction, now weighs heavily on the minds of those closest to the development of advanced AI systems.
Hinton’s fear stems from the potential emergence of artificial general intelligence (AGI)—a form of AI capable of performing any intellectual task a human can. If AI systems begin to think for themselves, develop goals of their own, or even rewrite their own code, he warns, there may be no turning back.
Smarter than us—and uncontrollable?
In a talk hosted by the Massachusetts Institute of Technology (MIT), Hinton pointed out that AI is progressing faster than even experts expected. Once machines surpass human intelligence, he warned, we may lose the ability to understand, predict or control them.
He highlighted a specific concern: the idea that AI could manipulate humans, much like how adults trick children. “You can imagine a future where AI systems can outsmart us at every turn and won’t necessarily share our values,” he said. In that scenario, it becomes dangerously easy for them to bypass human safeguards.
Central to Hinton’s worry is what researchers call the “control problem”: how do we ensure super-intelligent AI systems remain aligned with human goals? Once machines become capable of rewriting their own code, even their creators might not fully understand how they operate. At that point, ensuring they remain “friendly” becomes nearly impossible.
Not all gloom
Despite the bleak forecast, Hinton isn’t entirely pessimistic. He acknowledges that AI can do immense good, from improving healthcare to helping address climate change. But he insists we must act now to put global safeguards in place. That includes better regulation, ethical standards, and increased public awareness about what’s at stake.
He also called for more global cooperation: “Governments need to come together to manage these risks. It’s not something one country or company can fix alone.”
Hinton isn’t just another voice in the crowd—he is one of the original architects of the technology now powering tools like ChatGPT and Google Gemini. When someone with his credentials expresses fear, the world listens. Or at least, it should.