Published on INFORMS Analytics Magazine (Joseph Byrum)
Author’s Note: This blog series Understanding Smart Technology – And Ourselves examines our relationship with advancing technologies and the fundamental choices we face. As we stand at the threshold of an uncertain future shaped by artificial intelligence, the author challenges readers to consider whether we should embrace these transformative changes or resist them in defense of our humanity. Drawing from historical patterns of technological adoption and resistance, the series promises to deliver nuanced perspectives on our technological trajectory, beginning with a comprehensive overview of our current understanding of smart technology and its implications for society. Read Part 4 where the author discusses the “known knowns” of the man-machine interaction.
“Always we learn things and then we forget them.”
Dave Eggers, “You Shall Know Our Velocity” (Vintage, 2003)
Having looked at what we know with certainty about the underpinnings of smart technology, we can now turn to the issues at the edge that remain unresolved or unknown. Even some questions covered in the previous blog – how our minds work and how machines work – are worth exploring from a different angle.
The Purpose of Forgetfulness
As humans, we learn from experience, which is a process that deep learning seeks to replicate. We also fail to learn from experience – sometimes spectacularly so. As one team of psychologists explained, “recalling failures has substantially different affective and cognitive consequences than does recalling successes” [1]. That is, we often fail to learn from bad memories because we would rather avoid confronting negative truths.
Intentional forgetting is in the human DNA as well. There’s good reason to believe that forgetting serves a vital purpose in our decision-making process [2]. Specifically, by regularly sweeping obsolete information from our mind, we become more effective in making choices in dynamic environments. As with all things in life, it’s about striking a balance – remembering what’s important and forgetting what isn’t.
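There is a loose parallel in analytics. Estimators that track a changing quantity often perform better when they deliberately discount old observations. The sketch below is purely illustrative and is not drawn from the studies cited above; the data stream and the decay rate are invented for the example.

```python
# Illustrative sketch: deliberately "forgetting" old data helps when the world changes.
# The data stream and decay rate below are invented for the example.

def running_average(values):
    """Weights every observation ever seen equally (never forgets)."""
    return sum(values) / len(values)

def forgetful_average(values, decay=0.5):
    """Exponentially down-weights older observations (gradually forgets)."""
    estimate = values[0]
    for v in values[1:]:
        estimate += decay * (v - estimate)
    return estimate

# A quantity that hovers near 10, then shifts to about 50.
stream = [10, 11, 9, 10, 12, 50, 51, 49, 50, 52]

print(round(running_average(stream), 1))   # 30.4 (dragged down by stale history)
print(round(forgetful_average(stream), 1)) # 49.7 (tracks the new reality)
```

The forgetful estimator tracks the shift because stale observations fade from its memory; the estimator that remembers everything is anchored to a world that no longer exists.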
Reconsidering the Limits of Computer Science
In a recent experiment, researchers stored a set of commonplace digital items in a decidedly uncommon medium. They took a computer operating system, a French film, an Amazon gift card, a computer virus, the engraved design from the plaque of a Pioneer spacecraft, and a scholarly article. The files were compressed into a single master file, the data were split into short strings of binary code made up of ones and zeros, and those strings were then converted into one of the building blocks of living organisms: DNA [3]. This raises important questions about how sharp the line between digital code and biological code really is.
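The encoding used in the cited experiment was considerably more robust and error-tolerant than anything this simple, but the core idea can be sketched in a few lines: pair up the bits of a file and map each two-bit pair to one of the four DNA bases. Everything below, including the sample message, is a simplified illustration rather than the researchers’ actual method.

```python
# Simplified illustration of storing binary data as DNA bases (two bits per nucleotide).
# The real study used a far more robust, error-tolerant scheme; this shows only the core idea.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a strand of A/C/G/T."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Turn a strand of A/C/G/T back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"OS")          # 'CATTCCAT' for the two bytes of "OS"
assert decode(strand) == b"OS"  # the round trip is lossless
print(strand)
```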
Machines aren’t supposed to have much in common with organic things, yet as Self-Aware Systems founder Steve Omohundro put it, “Computers are essentially mathematical engines that should behave in precisely predictable ways. And yet software is some of the flakiest engineering there is, full of bugs and security issues” [4]. In a 2002 study, the National Institute of Standards and Technology estimated that bad programming costs the U.S. economy roughly $60 billion a year – an amount that has no doubt grown as we have grown more reliant upon machines [5].
Meanwhile, software grows ever more complex. A common measure of software complexity is the number of lines of code in an application or machine. The Space Shuttle’s flight software had about 400,000. The Mars Curiosity rover had five times that. The Boeing 787 has over 10 million lines of code, and the average modern production car is estimated to run 100 million lines across all its systems [6]. For comparison, the mouse genome (one rough yardstick of genetic coding complexity) contains about 2.7 billion DNA base pairs [7], while the human genome has about 3 billion [8]. That wonderfully compact code carries the construction plan for a human brain of roughly 86 billion neurons [9]. The coding in our machines may already surpass the complexity of the simplest living organisms, such as bacteria, but we are still far from having machines that rival mammalian brains in complexity.
Since software complexity continues to grow exponentially, however, it does seem to be a matter of when, rather than if, that will happen.
As software becomes more complex, we expect more from it. We tend to move the goalposts as to what impresses us as “artificial intelligence.” There was a time when the first spell checkers in word processors seemed like artificial intelligence to us. These days, we are growing accustomed to digital personal assistants recognizing our spoken commands. We routinely rely on search engines to autocomplete our entries without thinking much about it. Our expectations of what machines can do for us keep rising.
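One way to see how quickly yesterday’s “artificial intelligence” becomes today’s plumbing: the heart of a basic autocomplete feature fits in a few lines of code. The query log and ranking below are invented for illustration; real search engines layer personalization and statistical language models on top of the same basic idea.

```python
# A bare-bones autocomplete: suggest the most popular logged queries that start
# with what the user has typed. The query log is invented for illustration.

QUERY_LOG = {
    "artificial intelligence": 120,
    "artificial flowers": 45,
    "artificial turf": 30,
    "art museums near me": 200,
}

def autocomplete(prefix, limit=3):
    """Return up to `limit` logged queries beginning with the prefix, most popular first."""
    matches = [q for q in QUERY_LOG if q.startswith(prefix.lower())]
    return sorted(matches, key=QUERY_LOG.get, reverse=True)[:limit]

print(autocomplete("art"))         # ['art museums near me', 'artificial intelligence', 'artificial flowers']
print(autocomplete("artificial"))  # ['artificial intelligence', 'artificial flowers', 'artificial turf']
```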
Algorithms can be as imperfect as the individuals who coded them [10]. In 2012, a snippet of bad code in an automated trading program at Knight Capital Group triggered $460 million in trading losses [11], earning it a spot on the list of most expensive computer glitches of all time [12].
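The Knight Capital episode is often cited as an argument for simple, independent safeguards around automated systems. The sketch below is not Knight’s code or any real trading system’s; it is a generic, hypothetical illustration of a pre-trade risk check, the kind of “off switch” that can stop a runaway algorithm, with all limits invented for the example.

```python
# Hypothetical illustration of a pre-trade risk check ("off switch") wrapped around
# an automated trading strategy. All limits and order details are invented.

MAX_ORDERS_PER_MINUTE = 100
MAX_GROSS_EXPOSURE = 1_000_000  # dollars

class KillSwitch(Exception):
    """Raised to halt the strategy when a risk limit is breached."""

def check_risk(orders_this_minute, gross_exposure):
    """Independent check that runs before every order, outside the strategy logic."""
    if orders_this_minute > MAX_ORDERS_PER_MINUTE:
        raise KillSwitch("order-rate limit breached; halting strategy")
    if gross_exposure > MAX_GROSS_EXPOSURE:
        raise KillSwitch("exposure limit breached; halting strategy")

def send_order(order, orders_this_minute, current_exposure):
    # A bug in the strategy cannot bypass this check, because it sits in front of
    # the order gateway rather than inside the strategy itself.
    check_risk(orders_this_minute, current_exposure + order["notional"])
    print(f"order accepted: {order}")

try:
    send_order({"symbol": "XYZ", "notional": 2_000_000}, orders_this_minute=5, current_exposure=0)
except KillSwitch as reason:
    print(f"trading halted: {reason}")
```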
Words can also fail us. Human language is more mysterious than we tend to acknowledge. It reflects the complexity of the brain, a frontier we must continue to explore if our technology is ever to become truly smart in ways that are helpful to humanity. Philosopher Benedetto Croce, writing on general aesthetics a century ago, opined that “It is false to say that the verb or noun is expressed in definite words, truly distinguishable from others. Expression is an indivisible whole” [13].
Linguist Noam Chomsky once held out the possibility of a universal grammar, which, if properly analyzed, could bring human and computer language closer. The idea was that there were innate properties of communication shared between the thousands of different languages spoken around the world. But in recent years, his writings have come close to admitting that arriving at a universal grammar may be impossible [14].
This has implications for the interface between man and machine, a topic to be explored in the next blog entry.
References and Notes
1. Hristina Nikolova, Cait Lamberton and Kelly L. Haws, April 2016, “Haunts or helps from the past: Understanding the effect of recall on current self-control,” Journal of Consumer Psychology, http://www.sciencedirect.com/science/article/pii/S1057740815000728.
2. Blake A. Richards and Paul W. Frankland, June 2017, “The Persistence and Transience of Memory,” Neuron, https://www.ncbi.nlm.nih.gov/pubmed/28641107.
3. “Researchers Store Computer Operating System and Short Movie in DNA,” March 2, 2017, https://phys.org/news/2017-03-short-movie-dna.html#jCp.
4. http://www.thedailybeast.com/this-is-what-happens-when-you-teach-machines-the-power-of-natural-selection. Omohundro’s website is https://selfawaresystems.com/author/.
5. “The Economic Impacts of Inadequate Infrastructure for Software Testing,” National Institute of Standards and Technology, 2002, https://www.nist.gov/sites/default/files/documents/director/planning/report02-3.pdf. As noted in a recent blog post (“The High Cost of Technical Debt,” by Samuel Mullen), this amount would be worth roughly $109 billion in today’s dollars, based on Gross Domestic Product, http://samuelmullen.com/articles/the-high-cost-of-technical-debt/.
6. http://www.informationisbeautiful.net/visualizations/million-lines-of-code/
7. https://www.genome.gov/10002983/2002-release-draft-sequence-of-mouse-genome/
8. https://www.genome.gov/11006943/human-genome-project-completion-frequently-asked-questions/
9. Suzana Herculano-Houzel, 2009, “The Human Brain in Numbers: A Linearly Scaled-up Primate Brain,” Frontiers in Human Neuroscience, https://doi.org/10.3389/neuro.09.031.2009.
10. https://venturebeat.com/2017/05/06/ai-powered-trading-raises-new-questions/
11. Jessica Silver-Greenberg, Nathaniel Popper and Michael J. de la Merced, Aug. 3, 2012, “Trading Program Ran Amok, With No ‘Off’ Switch,” The New York Times (DealBook), https://dealbook.nytimes.com/2012/08/03/trading-program-ran-amok-with-no-off-switch/?_r=0.
12. https://money.cnn.com/2012/08/09/technology/knight-expensive-computer-bug/index.html
13. Benedetto Croce, 1920, “Aesthetic as Science of Expression and General Linguistic,” translated from the Italian by Douglas Ainslie, New York, Noonday Press, p. 146.
14. Noam Chomsky, May 17, 2004, “Biolinguistics and the Human Capacity,” delivered at MTA Budapest, https://chomsky.info/20040517.

Joseph Byrum is an accomplished executive leader, innovator, and cross-domain strategist with a proven track record of success across multiple industries. With a diverse background spanning biotech, finance, and data science, he has earned over 50 patents that have collectively generated more than $1 billion in revenue. Dr. Byrum’s groundbreaking contributions have been recognized with prestigious honors, including the INFORMS Franz Edelman Prize and the ANA Genius Award. His vision of the “intelligent enterprise” blends his scientific expertise with business acumen to help Fortune 500 companies transform their operations through his signature approach: “Unlearn, Transform, Reinvent.” Dr. Byrum earned a PhD in genetics from Iowa State University and an MBA from the Stephen M. Ross School of Business, University of Michigan.