Key points
On Tuesday, Foreign Minister Espen Barth Eide said, in his semi-annual report to Parliament on important EU and EEA matters, that the EU's Artificial Intelligence Regulation (AI Act) “looks set to become a good regulatory framework”. The truth is that lobbying pressure from German and French companies threatens an essential part of the regulation, namely the regulation of the most powerful AI systems.
So-called foundation models are models such as GPT-4 (which powers ChatGPT), trained on huge amounts of text and images. Because of their broad language capabilities, they can quickly be adapted to new tasks, and many small companies therefore use them to improve their products. Proposals for the AI Act included regulation of such models. It would mean that big tech companies, such as OpenAI/Microsoft and Google Deepmind, would have to ensure that the models they offer meet safety and documentation requirements.
After intense lobbying pressure from AI companies such as Mistral and Aleph Alpha, France and Germany have reversed course. They now want to exempt foundation models from regulation.
Undermines the whole point
Exempting the largest models undermines the whole point of the regulation, namely to regulate risky AI models without slowing innovation and development in areas where the risk is lower.
It increases the risks associated with the most powerful models, because foundation models are difficult to secure against bad behavior and can be used for malicious purposes. Securing these models must be the responsibility of the manufacturers, not the users.
It also favors the few most powerful companies at the expense of innovation and development among smaller players. It relieves the big companies of responsibility and puts the burden on the many small companies that lack the expertise or capacity to adequately secure their own systems.
Unless Germany and France reverse course, the regulation could become an irresponsible framework that favors powerful companies at the expense of public safety. If the regulation is to serve our interests, the Norwegian authorities and other countries must act.
More from Langsikt

Media crisis around the corner
Artificial intelligence can undermine the media business model.
The KI-horisonten newsletter
We publish a newsletter on the most important recent news in artificial intelligence. Subscribe to receive monthly editions by email.

We took control of the oil. Let's do the same with data and artificial intelligence
When the Americans came to Norway to drill for oil, sharp politicians and bureaucrats ensured that the oil resources benefited the entire Norwegian people. But when the same Americans come to drill for data, we do... nothing?

Ten commandments for long-term and value-creating AI
Norway needed a vision for oil — now we need one for data and artificial intelligence. An AI committee of 15 experts will draw up ten AI and data commandments for Norway. The commandments will be announced at the end of October.