At the World Economic Forum in Davos last month, a panel of leading AI researchers and industry figures tackled the question of artificial general intelligence (AGI): what it is, when it might emerge, and whether it should be pursued at all. The discussion underscored deep divisions within the AI community—not just over the timeline for AGI, but over whether its development poses an existential risk to humanity.
On one side, Andrew Ng, co-founder of Google Brain and now executive chairman of LandingAI, dismissed concerns that AGI will spiral out of control, arguing instead that AI should be seen as a tool—one that, as it becomes cheaper and more widely available, will be an immense force for good. Yoshua Bengio, Turing Award-winning professor at the University of Montreal, strongly disagreed, warning that AI is already displaying emergent behaviors that suggest it could develop its own agency, making its control far from guaranteed.
Adding another layer to the discussion, Jonathan Ross, CEO of Groq, focused on the escalating AI arms race between the U.S. and China. While some on the panel called for slowing AI’s progress to allow time for better safety measures, Ross made it clear: the race is on, and it cannot be stopped.
What is AGI? No clear agreement
Before debating AGI’s risks, the panel first grappled with defining it (apparently in a pre-panel conversation in the greenroom), without success. Unlike today’s AI models, which excel at specific tasks, AGI is often described as a system that can reason, learn, and act across a wide range of human-like cognitive functions. But when asked whether AGI is even a meaningful concept, Thomas Wolf, co-founder of Hugging Face, pushed back, saying the panel felt a “bit like I’m at a Harry Potter conference but I’m not allowed to say magic exists…I don’t think there will be AGI.” Instead, he described AI’s trajectory as a growing spectrum of models with varying levels of intelligence, rather than a singular, definitive leap to AGI.
Ross echoed that sentiment, pointing out that for decades, researchers have moved the goalposts for what qualifies as intelligence. When the calculator was invented, he said, people thought machine intelligence was around the corner. The same happened when AI beat top human players at Go. The reality, he suggested, is that AI continues to improve incrementally, rather than in sudden leaps toward human-like cognition.
Ng vs. Bengio: The debate over AGI risk
While some panelists questioned whether AGI is even a useful term, Ng and Bengio debated a more pressing question: if AGI does emerge, will it be dangerous?
Ng sees AI as simply another tool—one that, like any technology, can be used for good or ill but remains under human control. “Every year, our ability to control AI is improving,” he said. “I think the safest way to make sure AI doesn’t do bad things” is the same way we build airplanes. “Sometimes something bad happens, and we fix it.”
Bengio countered forcefully, saying several of the things Ng had said were “deadly wrong.” He argued that AI is on a trajectory toward developing its own goals and behaviors. He pointed to experiments in which AI models, without explicit programming, had begun copying themselves into the next version of their training data or faking agreement with users to avoid being shut down.
“These [behaviors] were not programmed. These are emerging,” Bengio warned. We’re on the path to building machines that have their own agency and goals, he said, adding that Ng seems to think that’s acceptable because the industry will collectively find better control systems. Today, we don’t know how to control machines that are as smart as we are, he said. “If we don’t figure it out, do you understand the consequences?”
Ng remained unconvinced. AI systems learn from human data, he said, and humans can engage in deceptive behavior; if an AI can be shown to behave deceptively, that behavior can be controlled and stopped.
The global AI arms race
While the risk debate dominated the discussion, Ross brought attention to another major issue: the geopolitical race for AI supremacy, particularly between the U.S. and China.
“We’re in a race,” Ross said bluntly, and we have to accept we’re “riding a bull.” He argued that while many are focused on the intelligence of AI models themselves, the real competition will be about compute power—which nations have the resources to train and run advanced AI models at scale.
Bengio acknowledged the national security concerns but drew a parallel to nuclear arms control, arguing that the U.S. and China have a mutual incentive to avoid a destructive AI arms race. Just as the Cold War superpowers eventually established nuclear treaties, he suggested that international agreements on AI safety would be crucial.
“It looks like we’re in this competition, and that puts pressure on accelerating capabilities rather than safety,” Bengio said. Once the U.S. and China understand that it’s not just about using AI against each other, “there is a joining motivation,” he said. “The responsible thing to do is double down on safety.”
What happens next?
With the panel divided, the discussion ended with a simple question: should AI development slow down? The audience was split, reflecting the broader uncertainty surrounding AI’s trajectory.
Ng reiterated his view that AI’s net benefits massively outweigh its risks.
But Bengio and Choi called for more caution. “We do not know the limits” of AI, Choi said. And because we don’t know the limits, we have to be prepared. She argued for a major increase in funding for scientific research into AI’s fundamental nature—what intelligence really is, how AI systems develop goals, and what safety measures are actually effective.
In the end, the debate over AGI remains unresolved. Whether AGI is real or an illusion, whether it’s dangerous or beneficial, and whether slowing down or racing ahead is the right move—all remain open questions. But if one thing was clear from the panel, it’s that AI’s rapid advancement is forcing humanity to confront questions it doesn’t quite seem ready to answer.