Michael Wooldridge, a leading computer scientist and author of The Road to Conscious Machines, argues that the concept of the AI singularity is fundamentally flawed.
In a recent interview with Johnathan Bi, Wooldridge called the singularity narrative “bullshit,” criticizing its dominance in AI-related discussions.
The singularity theory suggests that once machines achieve human-level intelligence, they could recursively improve themselves, spiraling out of human control. Wooldridge acknowledged this scenario has gained traction in popular culture, notably in movies like The Terminator. However, he believes this concept is implausible and distracts from genuine risks associated with AI.
“I became frustrated with that narrative,” Wooldridge said. “Whenever it comes up in serious debate about where AI is going and what the risks are, it tends to suck all the oxygen out of the room.”
Wooldridge explained that the singularity theory’s appeal lies in its combination of low probability and extreme consequences. While most experts consider runaway AI intelligence unlikely, its catastrophic potential has drawn significant attention, funding, and research effort. Wooldridge criticized this focus, especially amid the public excitement following ChatGPT’s release.
“The debate around this sort of reached slightly hysterical levels,” Wooldridge said. “My sense is the debate has calmed down a little bit and is being more focused on the actualities of where we are and what the risks are.”
Bi compared the fervor surrounding AI risks to religious apocalyptic thinking, suggesting that some are drawn to the narrative for psychological reasons. Wooldridge agreed, linking the fear of AI dominance to deep-rooted anxieties about creation and control.
“At its most fundamental level, it’s the idea that you create something — you have a child — and they turn on you,” Wooldridge said. He pointed to Frankenstein as an early example of this theme, where a man-made creation rebels against its maker.
While Wooldridge acknowledges AI’s potential risks, he emphasizes the need to focus on practical concerns rather than speculative doomsday scenarios.
Bi is a philosopher and entrepreneur who co-founded Opto Investments and has lectured extensively on René Girard’s mimetic theory. He studied philosophy and computer science at Columbia University and has worked with companies like Shogun and 51VR. Currently, he focuses on studying and producing content on the Great Books, sharing his insights through lectures and interviews.