Strümke's hubris
What Inga Strümke calls “futurism” in the AI debate is necessary preparedness for a possible future.

AI-generated illustration from Gemini.
In last week's Morgenbladet, Inga Strümke draws a sharp demarcation line between science and empty speculation. Langsikt's warnings that within the next decade we may get highly capable and dangerous AI systems, she dismisses as unscientific futurism.
It sounds like Strümke is calling for professional humility, but flatly dismissing serious risk from advanced AI systems is an expression of hubris, not humility.
This view stands in stark contrast to that of some of the world's foremost AI experts. Nobel laureate Geoffrey Hinton and Turing Award winner Yoshua Bengio both warn of catastrophic consequences from advanced AI. Bengio is the world's most cited computer scientist and chair of the International AI Safety Report. The report provides a substantial empirical and scientific basis for the hypothesis that we may soon get extremely capable and dangerous AI systems.
Asking policymakers to rely more on her judgment than on the experts who founded the research paradigm within which modern AI is developed collapses under its own unreasonableness.
Nor is Strümke a credible source on how the leading AI models will develop. As recently as last year, she wrote in Aftenposten that language models are hype that only appears promising because we ascribe consciousness to systems that can communicate with us. One year later, we see the same models solving previously unsolved mathematical problems and writing close to 100 percent of the code at the leading AI companies. The models are already making rapid inroads into research itself, as Tore Wig has described well in this newspaper.
A more subtle form of hubris lies in the willingness to dismiss future scenarios as implausible. That is unwise given how much has happened in just a few years. The film Her from 2013 depicts a hypothetical future in which humans talk to, and fall in love with, charming and intelligent AI systems. Today we live in that reality. It should prompt humility about how radically different the world might look in ten years.
The history of science is full of refuted claims about events that scientists dismissed as impossible or as futurism. In 1933, Ernest Rutherford, the man who had himself split the atom, called the idea of extracting energy from atomic nuclei “moonshine”. The following day, Leo Szilard read about the statement in a newspaper and conceived the idea of a nuclear chain reaction while waiting for a green light at a London intersection. Twelve years later, the atomic bomb fell on Hiroshima.
Facing an unknown future, one should not be too certain about what will happen, nor about what will not.
Does that mean we are groping in the dark? No. If we draw on different sources of knowledge and apply decision-making frameworks designed to handle uncertainty, we can be better prepared for what is coming. Such triangulation has allowed us to anticipate several developments in recent years. For example, since we founded Langsikt in 2023, we have warned policymakers about the agentic revolution that is now on our doorstep.
A contentious issue is whether to listen to the AI companies when trying to understand technological development. Strümke warns against borrowing rhetoric from companies that make money off inflated expectations about the technology's future.
I agree with Strümke a long way: we need to take corporate statements with a pinch of salt. OpenAI chief executive Sam Altman is clearly not an independent or neutral source. But if you want to understand where the technology is heading, you cannot ignore all the information coming from those who are the primary drivers of its development.
Anyone trying to form the best possible understanding of the development and its risks must weigh the inside information of the companies' employees against the outside views of independent experts. In this demanding but necessary exercise, technical expertise helps, but it must be complemented by insights from other disciplines, such as economics, political science and philosophy, to name a few.
Speaking about technological development without being a technologist does not disqualify one from making good analyses. Strümke should agree, as she frequently comments on sensible business strategies, good legislation and the societal consequences of AI, with her own background in physics and machine learning.
I would caution Norwegian decision-makers against following Strümke in discarding serious future scenarios as frivolous futurism. They have a responsibility to protect the Norwegian population against all major threats, including those that outspoken Norwegian AI researchers do not understand.