Op-ed | 10.12.2025

AI threats in the short and long term

First published in:
Aftenposten

The fact that AI is causing serious problems today does not mean we can dismiss the threats of the future.


AI-generated illustration from Sora.


This is the full version of the op-ed published in Aftenposten.

In our op-ed of November 24, we argued that UiO professor Arnoldo Frigessi dismissed AI researcher Yoshua Bengio's warnings about the catastrophic consequences of AI development spinning out of control. Frigessi replied in Aftenposten that we “couldn't be more wrong” and pointed to his own significant work on safe AI over many years.

We fully agree that Frigessi has contributed, and continues to contribute, to making today's AI systems more interpretable, fair and sustainable, and in that way safer. Where we and Frigessi disagree is on the risk that accelerating AI development could pose in the years and decades to come. Frigessi warns that a focus on AI threats that may arise in the future “distracts” from the AI threats we know exist today. Our point is that we must hold two thoughts in our heads at the same time.

Already today, accelerating AI development is contributing to the concentration of power and capital. The technology is being used for, among other things, destabilizing disinformation and autonomous cyberattacks. On these points there is broad agreement, but they do not rule out that AI may pose a larger and qualitatively different threat in the longer term.

Astronomical sums are being invested in both China and Silicon Valley, with the stated aim of creating superintelligence. The technological solutions needed to control such AI do not exist today. Finding them will require targeted research over many years, in addition to the important work Frigessi and others are doing to make today's AI systems safe.

We maintain our contention that Bengio's warnings about the future of AI should also be taken seriously. The Norwegian Board of Technology's recent report on superintelligence, with scenarios for 2030, shows that the Storting's advisory body also takes the issue seriously. A natural place to start would be to establish a national research institute for AI safety, which will require government appropriations beyond the existing AI billion.

Here we researchers must pull together to solve both the current and the long-term problems.
