Consensus on AI Act - France, Germany and Italy give in
In December, EU policymakers reached political consensus on the main points of contention in the AI Act (the AI Regulation), the EU's flagship legislation for regulating artificial intelligence. After a month of technical clarifications, formal agreement was reached on Friday.

AI-generated illustration from Midjourney
Key points
France, Germany and Italy worked against the legislation all the way, under lobbying pressure from their leading AI companies. They wanted more lenient rules for powerful AI models. Facing growing resistance both domestically and from the other member states, Germany reversed course last week, and Italy and France eventually had to follow suit. The big tech companies may keep looking for opportunities to water down the legislation until it is finally adopted by the EU Parliament in April, but in this round they have had to accept defeat.
- A victory for both safety and innovation in artificial intelligence in Europe. With stricter requirements for those who build the most powerful models, the regulatory burden falls on the largest players, who have the greatest capacity and resources to mitigate risk. At the same time, it becomes easier for both small and large players to adopt the technology safely, says Jacob Wulff Wold, advisor at Langsikt, about the development.
- Going forward, it will be interesting to see how the legislation works in practice. Much of it is high-level principles rather than concrete requirements.
In the AI Act, AI systems are classified primarily by area of application into four categories: unacceptable, high, limited and minimal risk. Regulation is proportionate to the risk, and systems with unacceptable risk are therefore prohibited.

(figure from Technology Council)
The four categories were originally defined solely by area of application, but with the emergence of general-purpose models such as the one behind ChatGPT, this had to be updated. All general-purpose models must document how they were built, and the most powerful are classified as posing a systemic risk, with requirements similar to those for high-risk applications.
It will be some time before the legislation takes effect in Norway, but in the EU, applications with unacceptable risk will be banned before the end of the year, and the requirements for new AI models take effect in spring 2025.
More from Langsikt

Data is not like oil. It's better.
Data lacks what we had for oil: an institutional architecture around the resource.

Pseudocode is easy -- politics is hard. The AI Commission must build the bridge
if/else solves nothing in an adaptive, complex system like Norway. AI policy requires systems understanding, consideration of nature, security and voter acceptance -- and it requires common principles before we can write the concrete provisions.

Norway contributes to the growth of others
AI is becoming the most important infrastructure of our time. Norway has significant financial interests but is falling behind industrially. That makes us rich as investors -- and vulnerable as an economy.

AI threats in the short and long term
The fact that AI is causing serious problems today does not mean we can dismiss the threats of the future.