We need a research institute for AI security
Artificial intelligence will be our next societal infrastructure. In order to trust it, we need to understand how it learns, reasons and influences us in return. Norway should establish a research institute for AI security: not as a supervisory body, but as an arena for insight, values and robust societal understanding.

AI-generated illustration from Midjourney.
Key points
Artificial intelligence is changing the very foundations of how knowledge is created and used. It writes, interprets, analyzes and predicts. In more and more contexts, systems don't just learn from us; they learn for us.
Thus, it also changes what we consider valid knowledge, and how we make decisions.
We're creating the technology, but it's also starting to shape us in return. This mutual influence requires a new research perspective: not just on what AI is capable of, but on what it is.
A Norwegian research institute for AI security should be such a place. Not a new executive body, but a professional environment that connects disciplines and explores how artificial intelligence learns, prioritizes and influences society's institutions.
The government's AI billion has strengthened applied technology development, but Norway lacks an institute that systematizes and expands research on understanding, values and risks.
Most AI systems are assessed today on the basis of precision and utility, while the learning process itself remains opaque. A Norwegian institute must therefore build research in both interpretability and explainability, two complementary perspectives on machine understanding.
Explainability is about making the model's decisions verifiable for humans. Interpretability is about understanding how the system actually forms patterns and priorities. Such knowledge is a prerequisite for trust, and for responsible use in health, justice and public administration.
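The distinction can be made concrete with a toy example. In the sketch below, every name and number is purely illustrative, not drawn from any real system: a tiny linear "model" where interpretability means reading the weights themselves to see how the model forms its priorities, while explainability means attributing one specific decision to its inputs so a human can verify it.

```python
# Hypothetical linear credit-scoring model (all values illustrative).
weights = {"age": 0.4, "income": 1.2, "debt": -0.9}

def predict(features):
    # The model's decision: a weighted sum of the inputs.
    return sum(weights[k] * v for k, v in features.items())

def explain(features):
    # Explainability: attribute *this* decision to each input,
    # so a human can verify how the score came about.
    return {k: weights[k] * v for k, v in features.items()}

applicant = {"age": 1.0, "income": 2.0, "debt": 1.5}
score = predict(applicant)
attribution = explain(applicant)
# Interpretability, by contrast, is reading `weights` directly:
# here, debt always counts negatively, regardless of the case at hand.
```

In real systems the model is not a transparent weighted sum, which is exactly why both research directions are needed: attribution methods for individual decisions, and methods for understanding the model's internal structure.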
The technology we use is rarely developed on Norwegian terms. Most models are trained on data sets and ideals that reflect other societal understandings—about efficiency, competition, and control.
A Norwegian research environment must therefore be able to test and evaluate systems against our own values: transparency, equal treatment, the rule of law and trust. This is a new form of social science: research into how values actually take shape in the behavior of algorithms. Here lies a possible research contribution from Norway that other small countries can learn from.
When AI is put into use in health, energy or public administration, not only the decisions change, but also the people who make them. Roles, responsibilities and judgment shift. A research institute must be able to study this new division of intelligence between human and machine: how culture, trust and responsibility change in institutions that learn to collaborate with algorithms.
In a narrow sense, AI security is about robust systems — resistant to error, attack and manipulation.
But in a broader sense it's about robust societies: how we preserve understanding, control and value anchoring in the face of technology evolving faster than our institutions.
A Norwegian research institute should combine short-term risk research with studies of the long-term consequences: how values can be encoded and maintained over time, how societies can build institutions that withstand technological uncertainty, and how we can understand AI's existential risks without rhetoric or fear.
Security is thus not about protecting society from AI, but about protecting AI from losing contact with society.
China develops AI according to state goals. The US lets the market rule. The EU is trying to regulate. Norway can choose a fourth path: a trust-based model in which technology is developed within institutions that mirror society's values and can test them in practice.
A research institute for AI security could become such a model: a place that combines technological insight with social science judgment. Not to slow its development, but to give it direction.
This is how Norway can help build technology that not only works, but that works for us.