Op-ed

Artificial intelligence is an existential threat

First published in:
The Evening Post

When the world's top experts warn that artificial intelligence could pose a threat on par with pandemics and nuclear war, it's time to listen. A gram of foresight is worth a ton of hindsight. We should act today, rather than regret tomorrow.


AI-generated illustration from Midjourney


Increasing attention

"Reducing the risk of artificial intelligence wiping out humanity should be a global priority on a par with other major societal risks, such as pandemics and nuclear war."

These are the words of 350 experts, including world-leading scientists, technologists and industry leaders. On May 30, they signed a petition that gained international attention.

In Norway, however, the mood is different. Physicist Inga Strümke, who recently published a book on artificial intelligence, is exasperated by what she calls a "crying wolf" mood. But we cannot wait to cry wolf until the wolf has arrived.

The world community was not prepared for COVID-19, and the climate transition should have started many decades ago. We already know enough to take seriously the risk of artificial intelligence having disastrous consequences.

Harmful algorithms

That "smart" systems can have unfortunate consequences should be obvious. The algorithms used to buy and sell stocks on the New York Stock Exchange generated enormous profits until, in 2010, they contributed to a "flash crash" (a lightning-fast decline in a stock or index) that entailed losses of thousands of billions of dollars. A couple of months ago, a Belgian father of two took his own life at the urging of the chatbot Eliza, according to the newspaper La Libre.

However, such consequences are not the reason artificial intelligence is mentioned in the same breath as pandemics and nuclear war. The concern is that the technology could have disastrous consequences not just for individual people, but for humanity as a whole.

We can't wait to cry wolf until the wolf has arrived

One problem is that malicious actors can misuse the technology, for example to create weapons of mass destruction. Another is that artificial intelligence could cause societal system failure, for example by breaking the digital authentication systems we use to establish who is who, or by causing unemployment on a large scale.

The experts behind the petition, however, are most concerned that we are creating super-intelligent systems that could wipe us out. Superintelligent systems are, by definition, smarter than we are, and they pose an existential threat because it is difficult to control or stop a smarter system. Just ask every other animal on the planet: their existence depends entirely on what we humans choose to do.

Selective punishment and praise

The developers' hope is that they can create systems that act in line with our interests. But the existing paradigm, which has given us ChatGPT and AlphaFold, makes no such guarantees. These algorithms are trained like puppies: the algorithm is rewarded when it performs well. In response, it develops internal rules of thumb that maximize its chance of reward in the future.

Although selective punishment and praise often lead to purposeful behavior, we do not know what internal rules the machine forms, or whether they overlap with our interests.

If we could simply switch off systems that turn out to be harmful, we would have less to fear. But just as we could not simply turn off COVID-19 after it had spread from Wuhan, shut down the Russian government after the country invaded Ukraine, or switch off well-known computer viruses like WannaCry or Stuxnet once they had spread to the wrong machines, it may already be too late to turn off an algorithm by the time we see signs that it is malicious.

We concede that the argument is speculative. But we do not need to know for certain that artificial intelligence will wipe us out in order to act to minimize the risk. The risk of a new Chernobyl is small, yet we do not abandon strict safeguards for nuclear power plants for that reason.

How to prevent risk

So what should we do? We propose four concrete initiatives that can mitigate the greatest risks.

  • We should regulate artificial intelligence to prevent its use in developing weapons of mass destruction. Arms suppliers cannot sell weapons to terrorists; we must likewise demand that technology companies do not assist malicious actors. The companies developing the technology must be held accountable if their models are used to develop bioweapons or advanced forms of hacking.
  • We need to establish security standards for the development and deployment of new artificial intelligence models. The models must be tested and verified under the supervision of independent third parties before they are connected to the internet or put into use. It must also be possible to roll back models that have been shown to be harmful.
  • We must ensure that the benefits of artificial intelligence accrue to everyone. The corporations are taking a risk on behalf of all humanity, and they should not be allowed to privatize the profits at the same time. Possible solutions include a tax on computing power, a windfall clause or similar measures. An alternative is greater political management of the technology to prevent mass unemployment.
  • Norway should take the initiative for an international research collaboration, a CERN (the European Organization for Nuclear Research) for safe artificial intelligence. Such a collaboration could focus on technical challenges, such as how to avoid dishonest or manipulative behavior, or how to develop models that can be adjusted after they have been deployed.

About to wake up

The risk of catastrophic consequences is great enough that we need to act now. Many scientists believe it is possible to create super-intelligent systems, and that it could happen within 40 years.

This year marks 40 years since the Brundtland Commission began its work on climate and development. Many argued that the science of the early 1980s was too uncertain to justify drastic action. But with hindsight, we know that the early climate models were surprisingly accurate, and that we should have acted sooner. The same applies to artificial intelligence.

Fortunately, Norwegian politicians seem to be waking up to a reality where artificial intelligence is one of the most important topics of our time, including an expert committee proposed by the Conservative Party and a representative proposal from the Socialist Left Party. We expect politicians to take the existential risks of artificial intelligence as seriously as the technology's other aspects.

As Strümke says, “We haven't seen artificial intelligence do damage at the level of nuclear weapons yet.” Let's make sure that quote never goes out of date.
