Main moments
On Tuesday, Foreign Minister Espen Barth Eide said, in his semi-annual report to Parliament on important EU and EEA matters, that the EU's Artificial Intelligence Act (AI Act) "looks set to become a good regulatory framework". The truth is that lobbying pressure from German and French companies threatens an essential part of the regulation, namely the regulation of the most powerful AI systems.
So-called foundation models are models such as GPT-4 (which powers ChatGPT), trained on huge amounts of text and images. Thanks to their strong language capabilities, they can quickly be adapted to new tasks, so many small companies use them to improve their products. Proposals for the AI Act included regulation of such models. This would mean that big tech companies, such as OpenAI/Microsoft and Google DeepMind, would have to ensure that the models they offer meet safety and documentation requirements.
After intense lobbying pressure from AI companies such as Mistral and Aleph Alpha, France and Germany have reversed course. They now want to exempt foundation models from regulation.
Undermines the whole point
Exempting the largest models undermines the whole point of the regulation, namely to regulate risky AI models without slowing innovation and development in areas where there is less risk.
It increases the risks associated with the most powerful models, because foundation models are difficult to secure against harmful behavior and can be misused for malicious purposes. Securing these models must be the responsibility of the manufacturers, not the users.
It also favors the few most powerful companies at the expense of innovation and development among smaller players. It relieves the big companies of responsibility and places the burden on the many small companies that lack the expertise or capacity to adequately secure their own systems.
Unless Germany and France reverse course, the regulation could become an irresponsible framework that favors powerful companies at the expense of public safety. If the regulation is to serve our interests, the Norwegian authorities and other countries must act.
More from Langsikt

Data is not like oil. It's better.
Data lacks what we had for oil: an institutional architecture around the resource.

Pseudocode is easy -- politics is hard. The AI commission must build the bridge
if/else solves nothing in an adaptive, complex system like Norway. AI policy requires systems understanding; consideration of nature, security, and voter acceptance; and common principles before the concrete measures can be written.

Norway contributes to the growth of others
AI is becoming the most important infrastructure of our time. Norway has significant financial interests in it, but is falling behind industrially. That makes us rich as investors -- and vulnerable as an industry.

AI threats in the short and long term
The fact that AI is causing serious problems today does not mean that we can dismiss the threats of the future.
