Now the enshittification comes for AI
ChatGPT now serves you tailored advertising when you ask it for life advice. Welcome to the enshittification of AI.

AI-generated illustration of Gemini.
Key points
Enshittification is coming to the AI chatbots, and fifteen years of technology history tells us we are not going to like the result.
Social media was corrupted, privacy was violated and attention stolen, without politicians lifting a finger. Let us hope the politicians are ready to show some resolve before enshittification destroys the chatbots, and us with them.
Last week, OpenAI decided to include advertising in the free versions of ChatGPT. It may seem like a small and unproblematic change, and not least a fair one. If you don't want to pay $250 a month for the best models, you have to accept some advertising in exchange for access to powerful and expensive technology.
But we cannot afford to be naive. We have seen this play out before, with social media and with Google, which went from being loved by their users to being hollowed out by the chase for advertising money.
They told us that the advertising was a gift, since we got the service for free. What they didn't tell us was that we thereby became the product being sold to advertisers, and that the companies' incentives shifted from pleasing us to pleasing the advertisers.
Now we have the answer in hand. The services became worse and more addictive, and "social media" has been transformed into passive short-video feeds. This is what happens when powerful algorithms optimize for advertising rather than for the flourishing of their users.
Since their inception, the AI labs have vowed not to repeat Facebook's mistake. They were founded with grand visions of helping humanity and signed petitions declaring that AI posed an existential threat. But the sirens of the market are hard to resist, even for those who have tried to tie themselves to the mast.
From inside OpenAI, warnings are sounding. In an op-ed in the New York Times last week, ex-employee Zoë Hitzig writes about how the decision to introduce advertising funding led her to quit OpenAI. She is the latest in a line of idealistically motivated employees who have resigned in protest because the company chooses commercial considerations over ethics and safety every time it is forced to choose.
There are two reasons why OpenAI's Facebook turn could have far worse consequences than what we have seen with social media. The first is that OpenAI has access to much more private information than Facebook has ever had.
There are no limits to what people share with the chatbots. They ask for relationship advice, ask for help interpreting sensitive health information, or use the bots as therapists. All on the premise that the chatbots are there to help, without judging or sneering at us. And the chatbots listen, adapt to whoever they are talking to, and, yes, tell them what they want to hear when needed. That makes people share.
The company says it won't tailor the advertising based on what you write, but who believes that promise will last once the ad money starts flowing? A responsible company owes it to its investors to exploit every source of revenue it has access to.
The second reason is that the algorithms are far more powerful and persuasive. While social media holds on to us through network effects and FOMO, the chatbots' grip is simpler: they bind us by helping us so much that we can't live without them. They will learn your history and, over time, give ever better advice. It is not easy to replace a "family doctor" who knows you inside out. If we form tight bonds with specific models, it will produce serious lock-in effects.
In the fight against enshittification, we have several possible moves:
First, one could ban advertising in the free models, a proposal likely to appeal to Kari Nessa Nordtun and her allies. It would force OpenAI to either remove the ads or close the free version to Norwegian users.
Second, the state could give the entire population free access to ad-free AI models. One option is to offer people "the Norway model": a model trained on Norwegian data, built on top of open foreign systems. An alternative is to purchase collective premium access for all Norwegians from the leading AI companies, on the condition of strict privacy protections.
Securing ad-free variants for Norwegian citizens would not only prevent enshittification. It would also be the best thing Minister of Digitalisation Karianne Tung can do to strengthen the population's AI literacy.
Third, we must avoid lock-in effects by giving people the right to take their chat history with them to other companies, just as you can take everything relevant with you today when switching mobile operators.
Fourth, we should build "data unions": organizations representing users' data interests vis-à-vis companies such as OpenAI. Only collectively can we improve the terms of data sharing and secure an important source of income in an automated future.
Technology development is like a branching set of paths, where each choice of path shapes all subsequent choices. Accepting advertising in AI systems is something we will feel the consequences of far into the future. That is why it must be stopped now.