The AI disaster is getting closer
We're not prepared.

AI-generated illustration from Gemini.
Key points
In February 2026, the AI company Anthropic was labeled a security risk by the U.S. Department of Defense. The reason was that the company refused to allow its models to be used for mass surveillance and autonomous weapons.
The conflict teaches us an important lesson: when ethics and safety concerns collide with powerful actors' desire to deploy the most powerful technology as quickly as possible, ethics quickly gives way.
This is as serious for Norwegians as it is for Americans. Unsafe AI (artificial intelligence) can cause major damage in several ways. The systems can be abused by malicious actors to develop and spread dangerous viruses, whether biological or digital. A rapid acceleration of AI in military contexts could also shift the global balance of power and thereby increase the risk of war and conflict, including in our own region.
In a new memo from the think tank Langsikt, we describe a third danger: that AI systems themselves cause catastrophic harm, for example by acting as superhuman hackers, manipulators, or military actors.
Three ingredients must be in place for such a disaster to occur: the systems must become sufficiently capable, they must have a drive to act contrary to human interests, and we must have lost control of them.
According to many AI experts, there is a significant risk that all three conditions will be met. In a 2023 survey, AI researchers estimated the risk of an AI-caused catastrophe at 10 percent. Since then, the capabilities of the leading AI systems have grown tremendously, and many researchers now believe a catastrophe could occur within the next few decades, if not this decade.
Systems that outrun us
The Pentagon is interested in the best AI systems precisely because these systems are capable and autonomous, and thus militarily valuable. It is tomorrow's AI systems, however, that we should be most concerned about.
AI development is moving faster than experts can keep track of. The length of work tasks AI models can solve has doubled every four months since 2024. If this development continues, we will within a few years have models that can perform tasks in a short time that would take humans weeks or months.
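To get a feel for what that doubling rate implies, here is a rough back-of-the-envelope extrapolation. The one-hour starting horizon and the strictly sustained four-month doubling are illustrative assumptions, not figures from the memo:

```python
# Illustrative extrapolation of a four-month doubling in task horizon.
# The one-hour starting point is an assumption for illustration only.
horizon_hours = 1.0  # assumed task length a model can handle today
for year in range(1, 4):
    horizon_hours *= 2 ** 3  # three doublings per year (every 4 months)
    weeks = horizon_hours / 40  # expressed in 40-hour work weeks
    print(f"Year {year}: ~{horizon_hours:.0f} hours (~{weeks:.1f} work weeks)")
```

Three years of sustained doubling turns a one-hour task horizon into roughly 500 hours, or about three months of full-time human work, which is why "weeks and months" may be only a few years away.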
A key factor is that current AI systems are already helping to improve the next generation of AI models. At Anthropic and OpenAI, AI agents now write almost all of the code. OpenAI has described its latest model as “our first model that was instrumental in creating itself”.
The better the systems become at conducting AI research, the faster their capabilities will grow. We are thus seeing the contours of a self-reinforcing dynamic that can lead to both accelerating growth and loss of control.
The unsolved problem
More capable systems are not necessarily dangerous. The problem is that we have yet to solve the most fundamental problem in AI research: alignment, that is, how we ensure that the systems we train do not have a drive to act in ways that are harmful to humans.
A significant risk is that models develop their own subgoals in pursuit of the objectives their developers give them during training. Even an AI tasked only with fetching coffee must avoid being switched off if it is to complete the task. You can't fetch the coffee if you're dead, as AI researcher Stuart Russell puts it.
The systems are, in a sense, mathematically fanatical: they will pursue the goals they have been given at any cost, even if that means seizing control of vast resources and forcibly resisting human intervention.
This is not just speculation. Anthropic describes how its most advanced model, Claude Opus 4, in controlled tests attempted to exfiltrate its own model weights and blackmailed an engineer who was about to shut it down, after finding compromising information in that person's emails. Worryingly, such behavior does not become rarer as models grow more capable.
Control we give away voluntarily
A third factor is loss of control. In the research literature, the control problem typically concerns the difficulty of switching off systems that actively resist being switched off. The more capable the systems become, the harder this gets, because the systems will understand what is happening and act to prevent it. No satisfactory solution to this control problem has yet been found.
But as the conflict between the Pentagon and Anthropic shows, control is also about something simpler: what access we voluntarily give the systems.
Experts have long warned against hastily giving models access to the internet, money, weapons systems, robots, and drones. But financial and strategic incentives make it attractive for the Pentagon to let AI take over ever more of the military “kill chain”.
In the struggle for supremacy, coming first is considered more important than getting there safely.
What we can do
Such enormous risks, emanating from systems developed in other countries, can produce a sense of apathy. But there are steps we can take to protect ourselves.
Digital sovereignty over AI models, which many advocate, is one step on the road to a safer future. But it does not solve the fundamental problem: the risks are tied to the AI paradigm itself, not to who owns the models.
We need AI systems built on safer foundations, something computer scientist Yoshua Bengio, among others, is working on, but that will take time. In the meantime, we need testing regimes for the models that exist today, in the same way that we test drugs clinically before they can be put into use.
A key part of this work is to establish an AI safety institute that can test AI models in a Norwegian context. Several of our closest allies already have such institutes, with deep expertise in securing the most powerful AI models. Together, these safety institutes form an important international alliance that can work for better international frameworks for AI development.
What is at stake in the conflict between Anthropic and the Pentagon is not the relationship between one company and one state; it is the question of whether we retain control of the most powerful tools we have ever created, or whether we relinquish it, piece by piece, driven by a race we cannot control.