The next pandemic virus could be built using AI
Almost five years after the COVID-19 pandemic began, its central question remains the subject of heated debate: Did the SARS-CoV-2 virus originate naturally, or did it leak from a scientific lab conducting gain-of-function research?
This controversy has sparked vitriol from all sides, as politicians and scientists battle publicly over data from the Wuhan market and the role of NIH-funded research. At the same time, blame games among nations have further complicated international relations, and efforts to reach a global pandemic agreement continue to stall.
This geopolitical turmoil has diverted our attention from an underlying existential threat. Engineering a pandemic is not only possible but plausible, as the convergence of genetic engineering and artificial intelligence brings both the potential to revolutionize medicine and the capacity for significant harm.
In February 2001, near the completion of the Human Genome Project, a study in the Journal of Virology on mousepox virus showed how this knowledge could be used in unexpected ways. Aiming to control the overpopulation of mice and rabbits in Australia, scientists transferred a single immune-regulating gene from mice into the mousepox virus, predicting that infection with this enhanced virus would induce sterility. The results instead proved deadly: the engineered virus killed not only the susceptible mice but also those that were genetically resistant to the original virus or vaccinated against it.
More than 20 years after this alarming result, we have more sophisticated and accessible tools to engineer viruses, and far more information about the multitude of human genes that could enhance viruses to evade our immune systems and render standard vaccines ineffective. This kind of genetic modification could easily be performed on a virus capable of infecting humans, such as monkeypox or smallpox.
Typically harmless viruses are now routinely used to safely deliver gene therapies to patients for a wide range of diseases. Scientists are manipulating these viruses to target specific tissues, such as difficult-to-access neurons in the brain; to evade the immune system so the therapy is not rejected; and to serve as viral delivery platforms that can be manufactured at large scale.
This effort has been accelerated by high-throughput genomic technologies and by generative AI that can produce and integrate vast amounts of data. The tools and information needed to build novel viruses, and to refine them for virtually any purpose, are now widely available. But the potential for misuse is just as great: an individual could use these tools today to alter a pathogen’s genetic code for destructive purposes.
A state-sponsored program, a small group or even a “lone wolf” actor likely has access to the necessary tools to initiate a catastrophic pandemic. Even more concerning, generative AI introduces a new creative capability to genetic designers, who are no longer limited by human-inspired designs. The sophistication of methods, availability of tools and generative potential of AI massively raise the risk of an intentional or accidental viral outbreak.
Engineered pathogens in the age of AI are an urgent threat to global health and security that we must address immediately.
Fortunately, the necessary solutions are the same as those required to rectify the broader shortcomings in our outbreak response systems. We must improve our public health and clinical infrastructure. The tactics needed to stop an AI-generated superbug are no different from those that will prevent flu, respiratory syncytial virus and COVID-19 from spreading in workplaces and schools.
We must dedicate the necessary resources for broader pathogen detection and surveillance; for the development of novel diagnostic, therapy and vaccine platforms; and for improved and equitable access to preventive measures and care. Given that our healthcare systems crumbled under the weight of COVID-19, a pathogen designed to maximize human harm would certainly topple existing infrastructure.
Furthermore, successful strategies implemented during the pandemic, such as ongoing genomic surveillance and public health reporting, are faltering due to reduced investment. The recent alarming infections of cattle and humans with avian flu highlight just how necessary these practices are. With proper surveillance, dangerous pathogens can be identified and outbreaks can be stopped before they cause widespread disease.
It is also time to start broader conversations about preemptive threat assessment and response. Our own lab recently revisited the mousepox example as a group exercise, aiming to devise strategies to counteract novel viruses designed to evade vaccines. While we identified potential solutions, each would require significant time and resources to pursue.
To keep pace with the joint threat of AI and genetic engineering, we cannot afford to wait for the emergence of an engineered pathogen. And, while the uncertainty of COVID-19’s origins has turned the value of virological research into a political issue, more investment in scientific investigation is necessary to bolster our national biodefense capabilities. To combat these growing threats, we must regularly conduct pandemic “wargames” and simulations. We should further consider how pathogens can be altered for malicious ends, and devise and stockpile solutions in advance, remaining one step ahead of nefarious actors.
At the same time, we must recognize that such efforts can themselves generate information capable of sparking a catastrophic event. This is the vexing double-edged challenge of a technology that can both protect and harm. Delaying the conversation about whether and how such inquiries should be pursued only allows the existing threat to grow.
Renewed effort is needed to become proactive about pandemic preparedness. WHO member nations have delayed the formation of a global pandemic agreement to 2025, and the current agreement has not been updated in nearly 20 years, since around the completion of the Human Genome Project. The technological advances in genomics and AI made since then could unleash novel engineered pathogens that take millions of lives. Collaboration toward unified action is needed now.
In 1945, the atomic bomb brought the world to its knees, its creation facilitated by coordinated national efforts. Today, society’s greatest existential threat is much closer to home: one person has the power to cause the next global pandemic and millions of deaths. The joint technological revolutions of genomics and AI hold the power to cure and to kill, and we must actively reckon with both sides of this coin. To maximize benefits and minimize harm, we must dive headfirst into the difficult debates about what these technologies mean for our existence.
Both the threat and its solutions are clear. Now, we must act.
Arya Rao is an M.D.-Ph.D. candidate in the Harvard Medical School/MIT joint degree program. Al Ozonoff is an associate professor at Harvard Medical School. Pardis Sabeti is a professor at Harvard University and the Harvard T.H. Chan School of Public Health. Together, they conduct research at the intersection of genomics, artificial intelligence and infectious disease at the Broad Institute of MIT and Harvard.