Yoshua Bengio, one of the godfathers of artificial intelligence, has once again peered into the swirling abyss of our machine-learning future and come back with concerns that might make your smartphone feel a little sinister. Speaking at an event in Montreal, Bengio warned that the very systems he helped develop might soon become not just smarter than us, but better decision-makers too, assuming we define "better" as uncaringly efficient and possibly devoid of ethics.
Bengio, who helped birth the concept of deep learning and presumably regrets not choosing a less ominous field like stamp collecting, expressed worry that artificial intelligence could surpass human intelligence within the next five to twenty years. In tech terms, that means soon, possibly just after your toaster finishes updating its firmware.
The big issue, according to Bengio, is not just that AI might be smarter than us, but that we have not exactly nailed down how to make sure these increasingly capable systems also care about things like human values, staying in their metaphorical lanes, or not treating us like inefficient biological code. In his words, we lack “guarantees” that the machines will align with human intent, which is always comforting to hear from someone with Turing Award-level expertise in the field.
Still, Bengio remains optimistic that we might get ahead of the curve if we start putting serious resources into AI safety, oversight and alignment research. You know, the kinds of things that tend to get less attention than making AI that can generate salad recipes or write teenage vampire fiction. He thinks international cooperation, science-led governance and good old-fashioned regulatory frameworks might give us a fighting chance in the looming intelligence-off.
“We are not doing enough. We need to shift some of our effort from just making AI more powerful to making sure AI does what we want,” said Bengio, who now seems to echo a growing chorus of experts wondering if humanity might be stuck in an arms race it started but cannot quite referee.
He likened the state of AI to climate change twenty years ago, which is always a reassuring comparative timeline for a global crisis that has been expertly studied, warned about and largely ignored. One can only hope that this time someone does read the footnotes.
The machines may be learning fast, but fortunately for us they still have yet to master the true human talent of completely ignoring long-term consequences in favor of shiny upgrades.

