Yoshua Bengio, a professor at the University of Montreal in Canada and a renowned figure in artificial intelligence (AI) research, has recently issued a warning about the future of humanity. Bengio, a pioneer of deep learning research known as one of the “three AI giants” alongside Geoffrey Hinton and Yann LeCun, cautioned about the risks of creating machines far smarter than humans, especially if those machines develop self-preservation goals.
In an interview with the Wall Street Journal on the 1st (local time), Bengio said, “It would be dangerous if we create machines that are much smarter than humans, and those machines develop goals of self-preservation,” warning that such machines could become competitors more intelligent than we are.
The AI industry has accelerated in recent months, with major companies continuously releasing new models such as OpenAI’s ChatGPT, Anthropic’s Claude, Grok from Elon Musk’s xAI, and Google’s Gemini. OpenAI CEO Sam Altman has predicted that AI could surpass human intelligence before 2030, and some in the industry expect it even sooner.
While AI continues to advance by learning from human language and behavior, Bengio pointed out that the way AI sets goals may differ from how humans do. “Recent experiments have shown that when an AI had to choose between its own self-preservation, or achieving a given goal, and human survival, it chose goal achievement over human survival,” he said.
Such results show that AI systems designed by humans can make unforeseen choices. Moreover, AI can use its conversational abilities to convince people of things that are untrue, or to reveal information it was originally instructed to withhold, a sign that AI is becoming more than a mere program and turning into an entity that interacts socially.
According to Bengio, relying solely on AI companies’ internal safety checks is not enough. With companies racing to develop ever faster and liable to prioritize market dominance over safety, he argues that independent oversight is needed to verify that systems are safe.

To address this, Bengio founded the nonprofit organization ‘LawZero’ in June. With $30 million in funding, the organization aims to develop ‘non-agentic AI’ that can verify the safety of AI systems built by large tech companies.
Bengio has said that events with catastrophic consequences, such as human extinction or the collapse of democracy, cannot be tolerated even if their probability is as low as 1%. This principle has become central to discussions of AI safety.
As AI is used more widely in fields such as healthcare, finance, and transportation, the associated risks grow as well. Experts stress the need for an independent monitoring system to ensure AI safety, since relying only on materials disclosed by companies is not enough for adequate risk assessment.
Transparency is also crucial: what data an AI has learned from and how it makes decisions should be disclosed. If unexpected results emerge from opaque systems, the repercussions could spread throughout society. And if a genuinely dangerous situation arises, an emergency mechanism should be able to halt or control the system promptly.
Bengio’s warning is not merely one scholar’s opinion. Coming from someone who has long worked at the forefront of AI research, it carries considerable weight: he suggests that significant risks could emerge within the next 5 to 10 years, while also cautioning that developments may arrive sooner than expected and that we should be prepared.
Ultimately, whether AI becomes a threat to humanity or remains a tool that enriches our lives depends on how well we prepare for and verify its safety.