Pseudocode is easy -- politics is hard. The AI Commandments build the bridge
if/else solves nothing in an adaptive, complex system like Norway. AI policy requires systems understanding, considerations of nature, security and voter acceptance -- and it requires common principles before we can write the concrete functions.

AI-generated illustration from Sora.
Key points
Lars Askvig has written a nice little pseudocode version of Langsikt's AI Commandments, in which he makes it look as though we could put the entire societal model into an if/else structure and then recompile Norway. That is a good rhetorical move. But it also illustrates something important: even good pseudocode assumes that you understand the architecture it is supposed to run in.
Askvig starts with: if long_term: talk() else: build() -- a kind of Python-adjacent vibecode. But politics isn't a program that starts working once you add a fictional function. It's not enough to write define(how) when the variable is called community. For something to build(), a broad majority must be on board with the reprioritization. As Jens Stoltenberg said in his budget speech:
“It's easier to allocate than to cut. But politics is to choose. For some purposes to get more, others must get less. Responsible governance is not about everything that could have been done, but about everything that can actually be done.”
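The point that build() needs a mandate before it can run can be sketched in a few lines. This is a hypothetical illustration, not Askvig's actual code; the names and the majority threshold are invented:

```python
# Hypothetical sketch: every name in "if long_term: talk() else: build()"
# must be defined before the one-liner runs, and in politics the
# definitions are the hard part.

def talk():
    return "more fog talk"

def build(reprioritization, majority):
    # Invented precondition: building only proceeds when a broad
    # majority backs the reprioritization it requires.
    if majority <= 0.5:
        raise PermissionError(
            "no mandate: a broad majority must back the reprioritization"
        )
    return f"building: {reprioritization}"

# The elegant control flow fails at runtime without the political input:
try:
    build("more compute, less of something else", majority=0.31)
except PermissionError as err:
    print(err)
```

The control flow is trivial; everything interesting happens in the preconditions that the one-liner leaves undefined.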
The AI Commandments
This is why the AI Commandments do not sit at the "how" level. There are a hundred brilliant AI projects Norway could start tomorrow -- just as many organizations start out fragmented and without direction. But in reality we can only implement the few that are politically possible, socially justifiable and acceptable within the constraints of nature, security, capacity and voter logic. The AI Commandments are not an attempt to write apps; they are an attempt to define the operating system on which politics and technology will run -- a system in which many people can actually take part in a transformation that is rapid, brutal and often incomprehensible to most.
Askvig suggests that proper use of AI should "remove the fog talk." That is a lovely engineering thought. But used_right(AI) never returns True if we have not first defined what proper use is, who holds responsibility, which datasets are legitimate, and how risk, oversight and critical infrastructure should be managed. The AI Commandments establish precisely the API that politics and technology can share, so that AI can be used in society -- not just in a development environment.
Higher-order functions
If we're going to write this as code at all, it is more precise to think in higher-order functions than in if/else branches. AI policy is not linear control flow; it is optimization under constraints:
policy = optimize(
    objectives=["value_creation", "security", "trust"],
    constraints=["nature", "budget", "capacity", "rights"]
)
Even the most elegant pseudocode helps little if you jump straight to build() without understanding what the system should optimize for. This is where the difference between politics and engineering lies: engineering works under controlled conditions. Politics must operate in a complex, volatile reality -- with feedback loops, delays and consequences that no syntax can capture.
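As a toy, runnable version of the snippet above (candidates, scores and caps are all invented for illustration):

```python
# Invented illustration of policy as optimization under constraints:
# optimize() is a higher-order function taking scoring and constraint
# functions, not an if/else over a single variable.

def optimize(objectives, constraints, candidates):
    feasible = [c for c in candidates if all(g(c) for g in constraints)]
    if not feasible:
        return None  # no option survives the constraints -- also a political answer
    return max(feasible, key=lambda c: sum(f(c) for f in objectives))

# Invented candidates: (name, value_creation, nature_cost, budget_cost)
candidates = [
    ("big_datacenter", 9, 8, 7),
    ("targeted_ai_in_health", 6, 2, 3),
    ("do_nothing", 0, 0, 0),
]

policy = optimize(
    objectives=[lambda c: c[1]],                             # value creation
    constraints=[lambda c: c[2] <= 5, lambda c: c[3] <= 5],  # nature and budget caps
    candidates=candidates,
)
print(policy[0])  # the highest-value option among those the constraints allow
```

The interesting behavior is in the constraint functions: tighten them and the "obvious" highest-value option drops out of the feasible set entirely.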
We should be wary of "vibecoding society" without a deep understanding of algorithms, operating systems and AI architecture. A society cannot be refactored like a library. It has to be governed, with principles that make action possible -- not with syntax that makes complex questions look simple on screen. And if we do vibecode, we also have to ask whether the AI that writes the code shares our cultural and democratic premises. It rarely does.
Emergent properties
Vibecoding works great on TikTok, but not at the level of society. Systems theory reminds us that complex societies do not follow if/else logic; they have emergent properties and feedback loops that cannot be abstracted away. You can write an elegant function like build(), but without an understanding of the whole -- nature, security, trust, capacity -- you don't get a system running. The AI Commandments attempt to address exactly this: the systemic architecture that must be in place before you can write the application logic.
Askvig asks for concretization, and that is a reasonable demand -- but that is precisely why the concretization must not be left to a small professional community. What Norway is actually going to build() must be determined by business, research, municipalities, civil society and voters. This is not an area where better prompting gives us the truth; it requires priorities, trade-offs and a broad democratic mandate. The AI Commandments are not the answer key, but the guarantee that the process does not start blind.
The AI Commandments are not a finished program.
They are the import statement Norway needs to start build().