We must dare to speculate about both the possibilities and the dangers of AI
The lesson from generative AI is that we must dare to look ahead in time and take uncertain scenarios seriously.

AI-generated illustration from Midjourney
In Aftenposten on 24 September, Bjørn Stærk expresses frustration that people speak in general and hypothetical terms about artificial intelligence (AI). Of course, technologists need to focus on specific applications of existing AI systems. But the public conversation should not limit itself to existing applications. If we are to be ready for the challenges ahead, we need to take a long-term perspective and dare to speculate about both the possibilities and the dangers of AI.
An example is generative AI, which had few concrete applications as recently as three years ago. The technology was therefore barely mentioned in the EU's AI regulation, the world community's foremost attempt to regulate AI. But after OpenAI launched ChatGPT, it became clear that the regulation was in no way adapted to the new technology, and the EU is now struggling to navigate a confusing landscape.
We should act with humility
The lesson from generative AI is that we must dare to look ahead in time and take uncertain scenarios seriously. Only those who took exponential growth seriously, and imagined how powerful AI systems could become if the models simply grew larger and had more data and computing power at their disposal, could foresee what the language models would accomplish. Maybe we have peaked, but it is just as likely that the models will only get better.
Perhaps Stærk is right that there are hard limits on the development of artificial intelligence; that the models are just dumb calculators that do not understand the meaning of what they produce. But here we should act with humility. Even though language models only predict the next word, they keep performing more of the tasks we thought required “real” intelligence. It is possible, of course, that the machines will never be as capable as us, but would you bet your future on it?
If AI systems become ever more powerful, more general and, yes, more intelligent, we need an idea of the challenges we face and of how we as a society can meet them. That requires more forward-thinking, not less.