Published on INFORMS Analytics Magazine (Joseph Byrum)
How linear thinking has been a hindrance to the development of AI
The dream of creating artificial general intelligence has captivated our imagination for generations. Alan Turing was among the first to think through what it would look like to have a thinking machine. As he described his famous imitation game in 1950, he said it was only a matter of time – he estimated 50 years [1] – before a machine could pass for a human in a conversation.
Given that he was working in an era when the state-of-the-art machine was an 8-ton vacuum tube behemoth with a memory capacity of 48 kilobytes [2], his timeline was prescient.
In essence, Turing’s view was that you could come up with a sophisticated program – one large enough to account for every realistic conversational possibility – that could pass as human a given percentage of the time. Building such a system was a matter of investing time in developing the software and increasing the capabilities of the hardware. “As I have explained,” he wrote, “the problem is mainly one of programming. Advances in engineering will have to be made too, but it seems unlikely that these will not be adequate for the requirements.”
Considering how close today’s ubiquitous digital assistants have come to understanding speech and responding to certain types of inquiries, Turing’s prediction was certainly within the ballpark. Surely, he would have been impressed with current technology’s ability to understand and replicate the nuances of speech – factors that were never part of the original game.

Turing’s outlook was fundamentally linear: Throw more time and computing resources at the problem and eventually it will be solved. That line of thinking has persisted to this day, and we now have software and hardware far beyond anything that could have been imagined in 1950. Today’s most powerful supercomputer can do in one second what would take three million years to process on the most powerful machine of Turing’s era [3].
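For anyone who wants to check that figure, reference [3] spells out the arithmetic, and a few lines of Python reproduce it. The 2,000 operations-per-second UNIVAC rate and the 200,794.9 TFLOP/s benchmark for Oak Ridge’s AC922 system are the numbers cited in that reference.

```python
# Rough check of the "three million years" comparison, using the figures
# cited in reference [3]: a UNIVAC-class machine at roughly 2,000 operations
# per second vs. the Oak Ridge IBM Power System AC922 at 200,794.9 TFLOP/s.
univac_ops_per_second = 2_000
summit_ops_per_second = 200_794.9e12   # 200,794.9 teraflops
seconds_per_year = 31_557_600          # Julian year (365.25 days)

# One second of the supercomputer's work, replayed at UNIVAC speed.
univac_seconds = summit_ops_per_second / univac_ops_per_second
univac_years = univac_seconds / seconds_per_year
print(f"{univac_years:,.0f} years")    # roughly 3.2 million years
```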
You have to wonder: even with all that power now at our disposal, why are Alexa, Siri, Google Assistant and Bixby still not quite right?
Linear reasoning is as common now as it was decades ago. Judged by our entertainment, the prevailing assumption of our era is that it’s only a matter of time before we make super-intelligent machines capable of guiding humanity into one of three possibilities: utopia, slavery or annihilation – rarely anything in between. “Star Trek” stands out by depicting a fictional world in which artificial general intelligence can play a positive role. One of “Star Trek: The Next Generation’s” key crew members, for instance, was an android powered by artificial intelligence (AI).
In the original series episode “The Ultimate Computer,” an AI called M-5 kills two crewmen. When the ship’s officers confront the machine with what it has done, it repents, saying, “Murder is contrary to the laws of man and God” [4]. So, for that series, even an AI that goes rogue can be redeemed.
The less cheery, remorseless depictions of AI in fiction are easier to find. The standouts are “2001: A Space Odyssey,” “The Matrix” and “The Terminator,” in which machines wipe out humans either individually or en masse. These fictional works are built upon the same basic assumption of linear progression to an inevitable AI future. The only difference is whether that future is positive or a vision of doom. Why choose one view over the other?
The Linear AI Model Might Be a Dead End
Perhaps linear thinking has been a hindrance to the development of AI. The thought has always been that an artificial general intelligence system must eventually work as long as we can get enough circuits to match the neurons and connections in the human brain. By replicating the mechanics of the brain, we can achieve machine intelligence. This presumes intelligence itself is linear, an assumption roughly akin to saying that if we give an infinite number of monkeys typewriters, inevitably one of them will randomly type out a Shakespeare play. It’s fundamentally a numbers game.
Even if you could get a monkey to produce a line or two of Shakespeare by complete happenstance, which is unlikely enough on its own, you won’t have discovered anything worthwhile. You don’t have a monkey that can create; you have a facsimile of a poet with no understanding.
The linear model of AI is also a numbers game, which is why it may be a dead end. Take one of the most popular applications of AI around today, the autonomous vehicle, which is clearly built upon the linear model. Autonomous vehicles process sensory data and information without true understanding.
A self-driving car takes in the information about its speed, its location and the conditions of the road. The programming has observed the “correct” reaction to obstacles in a number of circumstances, so it is able to extrapolate that, right now, the new object detected on the road must also be an obstacle. And so it swerves out of the way. All the same, the system does not know why it’s doing what it’s doing. It does not comprehend.
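To make the point concrete, here is a deliberately toy sketch of that kind of pattern matching. It is purely illustrative: the sensor readings, training examples and reaction labels are hypothetical, and real driving systems are vastly more sophisticated, but the underlying move is the same, matching a new situation to the closest one previously observed.

```python
# Illustrative toy only: reacting by similarity to previously observed
# situations, with no understanding of what the detected object is.
# All readings and labels here are hypothetical.

# (speed m/s, distance to object m, apparent object size m) -> observed reaction
observed_reactions = {
    (20.0, 15.0, 0.8): "swerve",
    (20.0, 40.0, 0.3): "stay_in_lane",
    (10.0,  8.0, 1.2): "brake_hard",
}

def react(speed: float, distance: float, size: float) -> str:
    """Return the reaction recorded for the numerically closest past situation."""
    def squared_error(example):
        s, d, z = example
        return (s - speed) ** 2 + (d - distance) ** 2 + (z - size) ** 2
    nearest = min(observed_reactions, key=squared_error)
    return observed_reactions[nearest]

# A never-before-seen object lands near an old example, so the car "decides"
# to swerve without comprehending why swerving is the right thing to do.
print(react(19.0, 14.0, 0.9))  # -> swerve
```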
Putting Cognition into AI

If you consider what it means for a human to be intelligent, the answer is not simple. When the philosophers of the ancient world [5] went about looking for the one property that distinguishes rational man from irrational animals, they went with this: Man is a risible animal. That is, because humans are rational, they can appreciate and tell jokes. They can be creative.
Hyenas can make what sounds like laughter, but this, again, is just an imitation without understanding. It isn’t actually laughter. They don’t enjoy limericks.
Making a good joke isn’t the end result of a linear process, as if the smarter you are, the funnier you must be. It’s a very specific ability. Some of the greatest minds in science have a reputation for being humorless, while at the same time you wouldn’t want your favorite comedian to be designing rockets.
French psychologist Alfred Binet developed the first practical intelligence test in the early 1900s, the forerunner of the modern IQ test, and performance on such tests came to be treated as a measure of general intelligence, the g factor. But research has also identified specific intelligences, or s factors, covering a number of abilities such as language, spatial reasoning and analytical skill. Theories vary, counting anywhere from three to 56 factors. But the basic phenomenon these theories attempt to describe is that some people, to give an example, are very good at math but not particularly good at practical reasoning, while for others the reverse holds. Both are highly intelligent, but in distinct ways.
In creating AI systems, it makes more sense to focus on specialized intelligence. So instead of, for example, trying to create a robot medical assistant that makes human doctors obsolete by reasoning like a super-intelligent human doctor, you create a system that focuses on a subset of specific tasks a doctor must do. Record-keeping, particularly since the advent of electronic medical records, has become a nightmare for the profession.
Imagine a digital assistant that transcribes the doctor’s diagnosis as the doctor talks with the patient, relieving the paperwork burden and leaving the human with more time to see additional patients. When dealing with an unusual case, the system could automatically search the medical literature for examples of symptoms that match the patient’s. It would also automatically cross-reference any drugs the patient might be taking and highlight potential harmful interactions with any new drugs the doctor might want to prescribe.
Each step such a system takes would be fully transparent. That is, its algorithms can be reviewed and adjusted for better results, and each choice can be documented – why it recommended this drug instead of that one given the medical history. Think of it as Jarvis for your physician, an AI-enabled Iron Man suit that doesn’t replace medical expertise but amplifies it. The human supplies judgment and expertise, while the machine supplies a near-infinite memory capacity and the ability to perform mind-numbingly boring tasks with perfection.
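A minimal sketch suggests what such a documented, reviewable decision step might look like. It is only an illustration of the transparency idea, not a clinical tool; the interaction table, drug names and function names are hypothetical placeholders.

```python
# Sketch of a transparent recommendation step: every flag comes back with the
# rule that produced it, so a human can review why a drug was questioned.
# The interaction table below is a hypothetical illustration, not clinical data.

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "elevated statin exposure",
}

def review_prescription(current_drugs: set[str], proposed_drug: str):
    """Return (ok, audit_trail): the decision plus the reasons behind it."""
    audit_trail = []
    for drug in current_drugs:
        note = KNOWN_INTERACTIONS.get(frozenset({drug, proposed_drug}))
        if note:
            audit_trail.append(f"flagged {proposed_drug} with {drug}: {note}")
    if audit_trail:
        return False, audit_trail
    return True, [f"no interactions found for {proposed_drug} in the table"]

ok, reasons = review_prescription({"warfarin"}, "ibuprofen")
print(ok, reasons)  # False ['flagged ibuprofen with warfarin: increased bleeding risk']
```

The point is not the lookup itself but the shape of the output: the recommendation arrives together with the reasons behind it, so the human can audit, adjust or override it.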
That type of artificial intelligence is far more useful today, right now, than the dream of general intelligence systems that render humans obsolete. Augmented intelligence systems also have the side benefit of being real, not theoretical.
Augmented intelligence systems exist in the world of finance, in defense and in oil and gas exploration. They provide specialized cognition for the most difficult or most burdensome tasks in each of these industries. The creative spark and the humor come from the human operators, who remain integral to success.
Of course, the biggest downside to artificial specialized intelligence is that once the idea is popularized, science fiction may never recover. Specialized AI is far too controllable. You know exactly what is going on, and it won’t lead us to a doomsday scenario. So, when sci-fi movies have to invent a new stock villain, you’ll know the idea has finally caught on.
References
1. A.M. Turing, “Computing Machinery and Intelligence,” 1950, https://www.abelard.org/turpap/turpap.php
2. https://apps.dtic.mil/dtic/tr/fulltext/u2/694600.pdf
3. UNIVAC ≈ 2,000 operations/second vs. Oak Ridge National Laboratory IBM Power System AC922: 200,794.9 TFLOP/s. 200.7949×10^15 operations ÷ 2,000 = 1.004×10^14 seconds; ÷ 31,557,600 seconds per year ≈ 3,181,403 years. https://www.top500.org/lists/2018/11/
4. http://memory-alpha.wikia.com/wiki/The_Ultimate_Computer_(episode)
5. Porphyry, Isagoge to Aristotle’s Categories 2.4, 3rd century AD.

Joseph Byrum is an accomplished executive leader, innovator, and cross-domain strategist with a proven track record of success across multiple industries. With a diverse background spanning biotech, finance, and data science, he has earned over 50 patents that have collectively generated more than $1 billion in revenue. Dr. Byrum’s groundbreaking contributions have been recognized with prestigious honors, including the INFORMS Franz Edelman Prize and the ANA Genius Award. His vision of the “intelligent enterprise” blends his scientific expertise with business acumen to help Fortune 500 companies transform their operations through his signature approach: “Unlearn, Transform, Reinvent.” Dr. Byrum earned a PhD in genetics from Iowa State University and an MBA from the Stephen M. Ross School of Business, University of Michigan.