
Published on INFORMS Analytics Magazine (Joseph Byrum)
Done right, technology could sort through large genetic datasets and find combinations that supercharge the immune system to prevent disease.
The need for rapid development of vaccines and other therapeutics has never been greater. Nobody wants a repeat of 1918 or 2020, and the good news is we’ll likely be ready to beat the next once-a-century pandemic – as long as quantum computing lives up to the hype.
Until now, the gene-editing technology known as CRISPR has been one of our most advanced drug discovery tools, but it relies heavily on trial and error. CRISPR allows researchers to turn individual genes on or off to see what happens. This can yield breakthroughs, including a COVID-19 detection method [1], but our ability to make the most of CRISPR is severely limited by genetic complexity: we simply don’t know which genes do what.
Even something that should be simple, like a plant, can have a genetic code whose complexity exceeds our ability to track the possible combinations. Many traits are controlled not by one gene but by specific combinations of genes, as is the case for height or hair color in humans [2]. Coronary heart disease, diabetes and many other common disorders also fall into this “polygenic” category, which means genetic research in these areas must rely on statistics and estimates from population samples [3], and progress is slow.
Genetically Simple Living Organism
Several years ago, scientists at Stanford University decided to see just how much they could learn by examining one of the most genetically simple living organisms, Mycoplasma genitalium (Mgen) [4]. This nasty bacterium has just 525 genes – quite simple compared to the roughly 30,000 genes in a human. The scientists created a computer simulation that modeled the internal workings of a single Mgen cell, revealing the 284 genes essential to its growth and division – valuable information for those fighting Mgen-related maladies.
Extracting that information came at a high computational cost: it took 10 hours to model a single cell of that organism using the fastest computer cluster the scientists had access to at the time. One of the researchers, Markus Covert, said processing must happen much faster if we want to explore hypotheses, develop answers [5] and ultimately begin to design medicines rather than discover them.
With 30 trillion cells in the human body [6] and 30,000 genes, the number of combinations that make us who we are might as well be infinite. There’s not enough time in a human lifespan to model a system this complex even if every computer on the planet were simultaneously harnessed for the task. This is what makes us all unique – even identical twins aren’t entirely identical and can have slight differences in DNA due to mutation during early developmental stages [7].
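To get a feel for that scale, a quick back-of-envelope calculation (using the article’s figure of 30,000 genes; purely illustrative) shows how fast the combinations blow up:

```python
from math import comb, log10

GENES = 30_000  # gene count cited in the article

# Even restricting attention to small subsets of genes, the counts explode.
for k in (2, 5, 10):
    n = comb(GENES, k)
    print(f"distinct subsets of {k} genes: about 10^{int(log10(n))}")

# If every gene could simply be 'on' or 'off', the state space would be
# 2^30000 -- a number with thousands of decimal digits.
digits = int(GENES * log10(2)) + 1
print(f"2^{GENES} has {digits} decimal digits")
```

Even the crudest on/off model produces a state space wildly beyond exhaustive search, which is the point of the paragraph above: brute-force modeling is off the table.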
We may soon be able to explore these complex genetic questions by harnessing the strange behavior of subatomic particles. Unlike conventional computers that break information down into 1s and 0s, quantum machines use qubits that can exist in a state between 0 and 1 (known as superposition), and can affect the values of one another through entanglement – a property that Einstein famously described as “spooky action at a distance” [8].
Because they represent a range of values, qubits are ideal for simulating real-world problems that can’t always be neatly reduced to 1s and 0s. The power of a quantum machine scales exponentially with the number of qubits, opening the possibility of tackling previously impossible calculations.
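Both properties show up even in a toy simulation. The sketch below (plain NumPy on a classical machine, not real quantum hardware) builds the textbook Bell state: a Hadamard gate creates superposition, a CNOT gate creates entanglement, and describing n qubits already takes 2^n complex amplitudes – the exponential scaling mentioned above:

```python
import numpy as np

def n_amplitudes(n_qubits: int) -> int:
    # A classical description of n qubits needs 2**n complex amplitudes,
    # which is why simulation cost grows exponentially.
    return 2 ** n_qubits

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT gate: flips the second qubit when the first is 1, entangling the pair.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, np.eye(2)) @ state          # superpose the first qubit
state = CNOT @ state                           # entangle the two qubits

# The result is (|00> + |11>)/sqrt(2): measuring one qubit instantly
# determines the other -- the "spooky" correlation of entanglement.
print(state.round(3))
```

The amplitudes end up split entirely between |00⟩ and |11⟩, so the two qubits can no longer be described independently – a minimal picture of the entanglement the article describes.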
But there’s a catch: quantum particles don’t like to be harnessed. It takes only a stray photon, a slight temperature change or the faintest vibration to throw the particles out of alignment, triggering what’s known as decoherence.
Prototype quantum hardware has worked around these challenges. For instance, the coldest place in the universe can be found inside quantum computers that regularly run at a chilly 0.015 Kelvin – more than 300 degrees Fahrenheit below the coldest surface temperature ever recorded in Antarctica [9]. That’s also just a tiny bit above absolute zero, the theoretical temperature at which motion ceases, minimizing vibrations that would otherwise disturb the qubits.
Current quantum computing qubits are considered “noisy” – that is, they are still prone to error-inducing interference. Conventional computers run into similar problems, with stray cosmic rays known on rare occasions to flip memory bits from 0 to 1 or vice versa [10]. Enterprise computers guard against this by setting aside extra bits for error correction. An analogous technique would work on a quantum computer, but current quantum machines don’t have enough qubits to spare for this function.
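The classical redundancy idea is easy to sketch. The toy below uses a 3x repetition code with majority voting – far cruder than the SECDED Hamming codes real ECC memory uses, but the same principle of spending extra bits to absorb a flipped one:

```python
def encode(bits):
    # Store every data bit three times (the "extra bits" set aside for safety).
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote over each group of three copies recovers the original bit
    # even if any single copy in the group was flipped.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
stored = encode(data)
stored[4] ^= 1            # a "cosmic ray" flips one stored bit
print(decode(stored))     # -> [1, 0, 1, 1]: the flip is corrected
```

Quantum error correction follows the same spirit – encode one logical qubit across many physical qubits – but, as the paragraph above notes, today’s machines don’t yet have qubits to spare for it.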
Will Hurdles Ever Be Overcome?
This is the point where skeptics suggest these hurdles will never be overcome and quantum computing will remain an unfulfilled dream along the lines of fusion energy and flying cars. Unlike those endeavors, however, quantum computing already has an entire supporting infrastructure in place: software development kits and programming languages work right now on cloud platforms, offering access to early quantum hardware with enough power to solve simple problems.
There are also hybrid solutions that use quantum hardware to perform a first run at more complex problems, narrowing possible answers so that a conventional supercomputer can take over and zero in on the best result. All the big names in technology, including Google, Amazon, Microsoft, IBM, Honeywell and others, are investing in a way that suggests they think quantum computing is inevitable.
The reward for getting this right is massive, as quantum hardware could sort through a large genetic dataset to find combinations associated with particular disorders, or the combinations that could supercharge the immune system to prevent disease. We would model biological processes and understand what’s happening at a deeper level, enabling speedy design of treatments and therapies never before possible. We’d finally realize the full potential of CRISPR without the trial and error.
It’s just the sort of power we need to identify serious problems and rapidly come up with solutions. Done right, it will mean doing away with the need to ever use the word pandemic again.
References
- Joung, J., Ladha, A., Saito, M., Kim, N. G., Woolley, A. E., Segel, M., et al., 2020, “Detection of SARS-CoV-2 with SHERLOCK one-pot testing,” New England Journal of Medicine, Vol. 383, No. 15, pp. 1492-1494, https://www.nejm.org/doi/full/10.1056/NEJMc2026172.
- National Human Genome Research Institute, “Polygenic Trait,” Genetics Glossary, https://www.genome.gov/genetics-glossary/Polygenic-Trait.
- Lvovs, D., Favorova, O. O., and Favorov, A. V., 2012, “A Polygenic approach to the study of polygenic diseases,” Acta naturae, Vol. 4, No. 3, pp. 59-71, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3491892/.
- Karr, J. R., Sanghvi, J. C., Macklin, D. N., Gutschow, M. V., Jacobs, J. M., Bolival, B., Jr., Assad-Garcia, N., Glass, J. I., and Covert, M. W., 2012, “A whole-cell computational model predicts phenotype from genotype,” Cell, Vol. 150, No. 2, pp. 389-401, https://doi.org/10.1016/j.cell.2012.05.044.
- Macklin, D. N., Ruggero, N. A., and Covert, M. W., 2014, “The future of whole-cell modeling,” Current Opinion in Biotechnology, Vol. 28, pp. 111-115, https://doi.org/10.1016/j.copbio.2014.01.012.
- Sender, R., Fuchs, S., and Milo, R., 2016, “Revised estimates for the number of human and bacteria cells in the body,” PLoS Biology, Vol. 14, No. 8, e1002533, https://doi.org/10.1371/journal.pbio.1002533.
- Jonsson, H., Magnusdottir, E., Eggertsson, H. P. et al., 2021, “Differences between germline genomes of monozygotic twins,” Nature Genetics, Vol. 53, pp. 27-34, https://doi.org/10.1038/s41588-020-00755-1.
- Born, M., 1971, “The Born-Einstein Letters,” Macmillan, p. 158.
- Scambos, T. A., Campbell, G. G., Pope, A., Haran, T., Muto, A., Lazzara, M., et al., 2018, “Ultralow surface temperatures in East Antarctica from satellite thermal infrared mapping: The coldest places on Earth,” Geophysical Research Letters, Vol. 45, pp. 6124-6133, https://doi.org/10.1029/2018GL078133.
- Sivo, L. L., Peden, J. C., Brettschneider, M., Price, W., and Pentecost, P., 1979, “Cosmic ray-induced soft errors in static MOS memory cells,” IEEE Transactions on Nuclear Science, Vol. 26, No. 6, pp. 5041-5047, https://doi.org/10.1109/TNS.1979.4330269.

Joseph Byrum is an accomplished executive leader, innovator, and cross-domain strategist with a proven track record of success across multiple industries. With a diverse background spanning biotech, finance, and data science, he has earned over 50 patents that have collectively generated more than $1 billion in revenue. Dr. Byrum’s groundbreaking contributions have been recognized with prestigious honors, including the INFORMS Franz Edelman Prize and the ANA Genius Award. His vision of the “intelligent enterprise” blends his scientific expertise with business acumen to help Fortune 500 companies transform their operations through his signature approach: “Unlearn, Transform, Reinvent.” Dr. Byrum earned a PhD in genetics from Iowa State University and an MBA from the Stephen M. Ross School of Business, University of Michigan.