AI risk is many things — that's why we need multiple conversations at the same time
AI risk is not one thing. When we mix concrete, political and long-term dangers together, both debate and policymaking become more difficult.

AI-generated illustration from Sora.
Key points
This autumn, AI risk has been the subject of an unusually broad and academically strong debate in Norway. Yoshua Bengio's visit put long-term questions of control and safety on the agenda, while discussions about a possible Norwegian AI safety institute have highlighted the need for institutional action.
Several good contributions have helped nuance the picture. In joint op-eds, Tellef Raabe and Preben Ness have raised important questions about risk, preparedness and digital societal security, while Arnoldo Frigessi has clearly distinguished between concrete, operational AI risk and more speculative future scenarios. In addition, Barbara Wasson, Anja Salzmann and other academic contributors have pointed out how Norwegian research communities already work broadly and seriously on AI risk: technically, institutionally and socially.
A broad and serious AI landscape is now being built in Norway. The six new research centers for artificial intelligence operationalise AI in different ways, in health, language, industry, society and technology, with safety, quality and accountability as part of the research mission itself. In parallel, there are strong research communities in statistics, computer science, law, ethics, societal security, foreign policy and cybersecurity. This is not a country characterised by naive techno-optimism or AI optimism.
AI risk is not one thing
At the same time, the term "AI risk" is often used as if it referred to one unified challenge. In reality, it covers everything from systemic technological risk and algorithmic bias to consequences for society, democracy and values -- each requiring different tools, different responses and often interdisciplinary perspectives. AI risk is many things, involving different mechanisms, different time horizons and entirely different policy responses. When the different forms of AI risk are blended into a single argument, both the public debate and the policy that follows are weakened.
Arnoldo Frigessi makes an essential point when he insists on distinguishing between types of risk. Some of it concerns operational risk here and now: errors in models, bias in data, lack of robustness, poor testing, and the use of AI in health, justice and public administration without adequate quality assurance. This is concrete risk with concrete consequences, and it is where much of the work on reliable and safe AI actually lies.
At the same time, the debate raises another type of concern: what we might call control and governance risk. When Bengio warns, it is largely about power and institutions: who develops the most advanced systems, who controls the infrastructure, and which societal arrangements we lock ourselves into when AI becomes a fundamental part of decision-making, the economy and public governance. This is not primarily a technical issue, but a political one.
Nor is this the same as existential risk. Existential risk concerns far more hypothetical scenarios in which extremely advanced AI -- far beyond current systems -- could in principle escape human control. The probability is uncertain and the time horizon long, but the consequences are potentially vast. It is therefore legitimate to research this as well. But it is a different kind of risk from both today's operational challenges and today's governance issues.
In addition, there is a risk picture that often receives too little attention: hybrid risk. This is where AI meets IT systems, operational technology, automation, robots, sensors and drones, often connected through networks that are themselves vulnerable to cyber attacks. When AI is used to control physical systems (such as a car, a bus, a vacuum cleaner or a factory), digital errors or attacks have direct, material consequences. This is not science fiction, but an increasingly important part of reality in industry, energy, transport and emergency preparedness -- and in our everyday lives.
This is where AI risk, cybersecurity and operational risk merge. It requires interdisciplinary expertise and new forms of security thinking — not just better algorithms.
Better sorted concerns
The point is not to rank these risks against each other, but to keep them apart. There is more than enough concern and work to go around: for those who work on safe AI in practice, for those who analyze governance, power and institutions, for those who research existential scenarios, and for those who handle cyber and operational security in hybrid systems. Different risks require different means and different priorities.
It is also important to highlight the positive, which often disappears in discussions of risk. AI is already a powerful tool for better diagnostics, more efficient administration, smarter energy use and increased value creation. That the technology also carries risk is not an argument for pulling back, but for engaging more -- academically, politically and institutionally.
When the biggest global players -- OpenAI, Anthropic, DeepSeek, Google and Meta -- race to build ever more advanced models, they are not doing so for commercial reasons alone. They also act on the conviction that their own technological track is the best guarantee against serious risk in the long term. The race is thus about security, power and the future shape of society all at once.
Whatever the outcome, we will have to live in interaction with ever more advanced intelligence. That calls for better sorted concerns -- and for more people willing to work on the opportunities as well.
As Yoshua Bengio writes in Time, the most complex and still manageable AI risk arises at the intersection of technology development, commercial incentives and political governance. This is where the room for action still exists -- but also where it is quickly narrowing.
If policy is to do more than react after the fact, it requires new knowledge, institutional capacity, and active, wise governance. Clarifying the concepts around AI risk is therefore not an academic exercise, but a necessary first step towards shaping the development while that is still possible.
More from Langsikt

A golden year for AI
2026 will be a golden moment for artificial intelligence. We should enjoy it while it lasts.

What will happen to the jobs?
A framework for understanding AI's effect on the economy.

Data is not like oil. It's better.
Data lacks what we had for oil: an institutional architecture around the resource.

Pseudocode is easy -- politics is hard. The AI Commanders Build the Bridge
if/else solves nothing in an adaptive, complex system like Norway. AI policy requires systems understanding, consideration for nature, security and voter acceptance -- and it requires common principles before we can write the concrete functions.