AI threats in the short and long term
The fact that AI is causing serious problems today does not mean that we can dismiss the threats of the future.

AI-generated illustration from Sora.
This is the full version of the post published in Aftenposten.

In our post of November 24, we argued that UiO professor Arnoldo Frigessi dismissed AI researcher Yoshua Bengio's warnings about the catastrophic consequences of AI development spinning out of control. Frigessi replied in Aftenposten that we “couldn't be more wrong” and pointed to his own significant work on safe AI over many years.
We fully agree that Frigessi has contributed, and continues to contribute, to making today's AI systems more interpretable, fair and sustainable, and thereby safer. Where we and Frigessi disagree is on the risk that accelerating AI development could pose in the years and decades to come. Frigessi warns that focusing on AI threats that may arise in the future “distracts” from the AI threats we know exist today. Our point is that we must hold two thoughts in our heads at the same time.
Already today, accelerating AI development is contributing to the concentration of power and capital. The technology is used, among other things, for destabilizing disinformation and autonomous cyberattacks. There is broad agreement about these problems, but they do not rule out that AI could pose a larger and different kind of threat in the longer term.
Astronomical sums are being invested in both China and Silicon Valley, with the stated aim of creating superintelligence. The technological solutions needed to control such AI do not exist today. Finding them will require targeted research over many years, in addition to the important work Frigessi and others are doing to make today's AI systems safe.
We maintain that Bengio's warnings about the future of AI should also be taken seriously. The Technology Council's recent report on superintelligence, with scenarios for 2030, shows that the Storting's advisory body also takes the issue seriously. A natural place to start is to establish a national research institute for AI safety, which will require government appropriations beyond the existing AI billion.
Here we researchers must work as a team to solve both today's problems and the long-term ones.