
Statement on Biosecurity Risks at the Convergence of AI and the Life Sciences

First published by:
Nuclear Threat Initiative

This statement was developed by NTI in association with the AIxBio Global Forum.




Original statement at this link. Reproduced with permission from the original authors.

Statement

Rapid advances in artificial intelligence (AI) and its convergence with the life sciences offer incredible potential societal benefits, including advancing public health through the development of new vaccines and treatments, and by strengthening capabilities to rapidly detect new infectious disease outbreaks. These advances have the potential to reduce the burden of disease across the globe and to drive economic development. At the same time, rapid advances in AI capabilities that enable engineering of living systems—referred to here as AIxBio capabilities—also increase the risk of deliberate or accidental release of harmful biological agents, including those that could cause a global biological catastrophe that affects populations around the world.

As AIxBio capabilities continue to advance, they are likely to lower barriers to malicious actors causing harm with biology. Such capabilities could make it easier for a malicious actor to access the necessary knowledge and troubleshooting assistance to design, build, and deploy a dangerous biological agent. This could allow malicious actors to achieve their objectives significantly faster and more effectively than has been possible in the absence of AI tools.

At the same time, AIxBio capabilities could raise the ceiling on what is possible, potentially increasing the level of harm that a malicious actor can cause with biology. AI-enabled biological tools could make it possible to design pathogens that are more dangerous than what is found in nature or what humans can develop on their own with current scientific knowledge, for example, pathogens that are more virulent or more transmissible among humans. Although the timeline is uncertain, this misuse scenario could be feasible within the next few years if sufficient guardrails for AIxBio capabilities are not developed.

Recent technological progress includes AI models that can design new individual biological molecules, such as toxins, proteins found in pathogens, or proteins that bind to important targets in the human body. AIxBio capabilities are advancing rapidly, and future AI models could enable the design of more complex biological systems, for example, groups of biomolecules working together to perform more complex functions—like cell signaling or enzymatic production of materials—or genome sequences that encode entire blueprints of viruses or bacteria.

These advances could make it easier to design biological agents with novel properties tailored to specific goals. Although it is not trivial to build engineered viruses or other biological agents based on AI designs, the technological barriers to doing so continue to drop over time.

One key emerging technology that could lower the barriers for malicious actors to cause harm and change the landscape of risks is the development of AI agents optimized for scientific discovery and engineering. These agents are designed to autonomously perform multiple tasks in a row to achieve more complex goals and can be applied to the life sciences. Life science-focused AI agents are progressing rapidly in their ability to understand scientific literature, generate hypotheses, design experiments, and interpret data, and they are beginning to interface with bioscience laboratory equipment and advanced laboratory robotics. Without careful oversight, these AI agents may pursue scientific advances in unexpected ways that could unintentionally increase biosafety or biosecurity risks, or malicious actors could use them to help develop harmful biological agents.

Another concern is that AIxBio capabilities could reduce the effectiveness of biosecurity and biodefense measures, including evading biosurveillance systems for detecting infectious disease outbreaks, enabling resistance to medical countermeasures, and circumventing nucleic acid synthesis screening. A weakened global biosecurity posture could increase the perceived tactical utility of bioweapons, creating a more permissive environment for destabilizing biological attacks.

Future advances in the life sciences and AI capabilities are difficult to predict, but the rapid pace of progress in these areas requires us to be forward-thinking to anticipate emerging risks on the horizon. Bearing in mind the risks outlined above, an especially damaging scenario could involve a sophisticated malicious actor using AI-enabled biological tools to design and subsequently produce and release a biological agent with novel properties that make it significantly more dangerous than pathogens found in nature. The release of such an engineered agent could cause a high-consequence biological event with global implications that is as damaging as the COVID-19 pandemic or potentially much worse.

The profound benefits of AIxBio capabilities, combined with their potential to cause significant harm to populations around the world, demand urgent attention, international engagement with a diverse range of stakeholders, and decisive action. As AIxBio capabilities advance, tracking evolving technological developments, understanding associated biosecurity risks, and developing effective risk reduction measures will be critical. We call on national governments, industry, academia, philanthropy, and civil society to work together to develop governance mechanisms, technical guardrails, and other approaches to promote safety and security while supporting the positive potential of these powerful capabilities.

Signatories

  • Yoshua Bengio, Université de Montréal, LawZero, Mila - Quebec AI Institute
  • Ayelet Berman, Asia Centre for Health Security, National University of Singapore
  • Ayodotun Bobadoye, Global Emerging Pathogens Treatment Consortium
  • Sigrid Bratlie, Langsikt
  • Sarah R. Carter, Science Policy Consulting
  • Beth Cameron, The Pandemic Center, Brown University School of Public Health
  • Siméon Campos, SaferAI
  • George Church, Wyss Institute, Harvard University
  • Rt Hon Helen Clark, Member of The Elders, Former Prime Minister of New Zealand
  • Le Cong, Stanford University
  • James Diggans, Twist Bioscience
  • Maria Espona, ArgIQ
  • Kevin Esvelt, Massachusetts Institute of Technology
  • Anjali Gopal, Anthropic
  • Steph Guerra, Former White House Office of Science and Technology Policy
  • O’Neil Hamilton, Stimson Center
  • Andrew Hebbeler, Coalition for Epidemic Preparedness Innovations
  • Dan Hendrycks, Center for AI Safety
  • Tom Inglesby, Johns Hopkins University Center for Health Security
  • Chris Isaac, iGEM
  • Sang Yup Lee, Korea Advanced Institute of Science and Technology (KAIST)
  • Becky Mackelprang, Engineering Biology Research Consortium
  • Piers Millett, International Biosecurity and Biosafety Initiative for Science
  • Suryesh Namdeo, Indian Institute of Science
  • Cassidy Nelson, Centre for Long-Term Resilience
  • Judith Chukwuebinim Okolo, National Biotechnology Research and Development Agency, Nigeria
  • Claire Qureshi, Sentinel Bio
  • Lakshmy Ramakrishnan, Observer Research Foundation
  • David Relman, Stanford University
  • Jonas Sandbrink
  • Hayley Severance, Nuclear Threat Initiative
  • Jacob Swett, Blueprint Biosecurity
  • Nikki Teran, Emerging Technology Solutions
  • Oyewale Tomori, African Center of Excellence for Genomics of Infectious Diseases, Redeemer's University
  • Brian Tse, Concordia AI
  • Mengdi Wang, Princeton University
  • Nicole Wheeler, Advanced Research + Invention Agency
  • Jaime Yassif, Nuclear Threat Initiative
  • Andrew Yao, Tsinghua University
  • Zakariyau Yusuf, Tech Governance Project
  • Weiwen Zhang, Tianjin University Center for Biosafety Research and Strategy