Op-ed | 24.11.2025

The world's top AI scientist warns of the dangers of artificial intelligence. Dismissing him is risky.

He has outlined one of the best solution proposals so far.


AI-generated illustration from Sora.


Last week Oslo was visited by an artificial intelligence superstar: AI godfather Yoshua Bengio. Bengio is in Europe to warn European state leaders about the huge risks of AI, ranging from dependence on China and the US to "catastrophic abuse" and "AI taking over from humanity."

Norwegian researchers, such as Arnoldo Frigessi (University of Oslo), have a habit of dismissing such claims as alarmist. But there is reason to take the concern seriously.

We believe Bengio is contributing to one of the most important public conversations of our time, and that he has outlined one of the best solution proposals so far.

One of us is a research fellow in AI at NTNU and researches the type of models Bengio wants us to develop. The other has left research to work full time on AI risk.

We may lose control of development

When you sit there frustrated by bland answers from ChatGPT, it is easy to forget how far this technology has come in just a few years.

Two weeks ago Google published a report describing a digital threat landscape that increasingly consists of hacker attacks carried out using AI models.

Last week, the AI company Anthropic reported that its models had been used by Chinese hackers to carry out attacks against "global financial institutions and government agencies". Although the hackers succeeded in only a few of their attacks, Anthropic points out that this was "the first large-scale cyber attack carried out without significant human intervention."

In a digitized world marked by rising geopolitical tensions, AI-driven hacking is a security risk that must be taken seriously.

In the long term, however, today's AI models acting as superhackers are not the biggest concern. Much suggests that building better and more capable AI models is itself a task that AI models may come to perform better than us.

Using AI models to do AI research is a field still in its infancy. But the development is exponential, and this spring the first AI-generated research article passed peer review.

The potential for AI development that becomes self-reinforcing, and thus accelerates out of control, is precisely one of the dangers Bengio highlights.

An AI made to be controlled

The solution Bengio outlines, together with his research lab LawZero, is a "Scientist AI". This is a type of AI that differs fundamentally from current models.

Simply explained, Scientist AI is designed to build models that understand cause and effect and can form hypotheses about what will happen if a particular action is taken or a particular event occurs.

The great advantage of these models is that they are transparent to humans: they can be inspected to see how the AI "thinks". This lets us control what the AI is allowed to think, and thus also what kind of output it is allowed to generate. In short, Bengio is describing an AI model built from the ground up to be understood and controlled by humans.

Large language models, such as ChatGPT, are trained on huge amounts of data to learn to predict the next word in a sentence and thereby generate text. But ChatGPT's understanding of the world consists only of patterns learned from large amounts of data. How it thinks is largely a mystery. We therefore cannot guarantee what it can do, or that it cannot be abused.

Bengio envisions his Scientist AI as a kind of gatekeeper that moderates what models like ChatGPT are allowed to do, precisely to prevent these models from being used for cyber attacks or other dangerous activity.

An opportunity to rethink

Although it is unrealistic to compete with the tech companies on spending, we have an opportunity to rethink. Bengio presents not only a promising idea for safe AI development, but also a framework for an AI that understands the world more like we humans do.

To ensure the safest possible future, we should invest significant resources in developing Scientist AI, with both Norwegian and European research funding. A natural place to start is a large-scale joint Nordic initiative on AI safety, something Bengio himself pointed to when he visited Oslo. But as he also said: then a few zeros must be added to the AI billion.
