Anxiety Over Artificial Intelligence


Published on INFORMS Analytics Magazine (Joseph Byrum)

Now is the time to address misunderstandings, before it’s too late.

Operations research (O.R.) has had it easy. From its earliest days, the discipline harnessed the incredible power of mathematical algorithms to drive efficiency without drawing undue public attention. Militaries have relied on O.R. since World War I to move supplies more efficiently and to ensure strike missions reach their targets. In business, the Franz Edelman Award finalist projects alone have brought home a tidy $250 billion in cost savings.

Despite these notable achievements, good luck finding a random person on the street who can explain what O.R. stands for. The same can’t be said for artificial intelligence: “AI” is among the most recognizable sets of initials today.

One reason for the notoriety is widespread media coverage. AI is a novelty with a lot of promise, and that makes it the subject of intense speculation. The McKinsey Global Institute has called AI the next digital frontier. With up to $30 billion invested annually, companies are placing a significant bet on AI’s ability to improve R&D, produce more accurate forecasts, drive productivity and customize the whole experience for the end user.

So far, so good. But not all of the attention is quite so positive.

Hollywood has cast AI in central roles for decades, usually as the bad guy. In 1968, theater audiences watched in amazement as the HAL 9000 supercomputer disobeyed, and ultimately tried to kill, its human masters. The 1983 film “WarGames” raised the stakes with an autonomous military computer system itching to trigger World War III with no more remorse than it would take to make the opening move in a game of chess. The following year, “The Terminator” kicked off a massively successful movie franchise featuring Skynet, an all-powerful AI system so committed to wiping out humanity that it sent robots through time to ensure anyone who happened to get in its way would never have a chance to exist.

All of this is sheer fantasy, but sometimes that fantasy ends up shaping reality.

AI in its Infancy

Today’s AI systems are very much in their infancy. We’re nowhere near the point where machine learning algorithms could become self-aware, much less develop an unrelenting grudge against mankind. Even the most sophisticated military AI projects only scratch the surface of what’s theoretically possible.

Because AI is so new, this is our one and only opportunity to explain to a skeptical public what the technology actually is and is not, before science fiction screenwriters and celebrities completely drown out the conversation with compelling, but not exactly accurate, stories.

Analytics experts can play an important role in providing needed clarity. The use of mathematical algorithms to solve business problems and make better decisions through optimization creates a kinship between AI and O.R.
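
To make that kinship concrete, consider the kind of problem O.R. practitioners formulate every day. The sketch below is a toy product-mix linear program; the numbers are invented, and the use of SciPy’s linprog solver is simply one convenient assumption for illustration.

    # A classic O.R.-style formulation: choose production quantities to
    # maximize profit subject to resource limits. All figures are made up.
    from scipy.optimize import linprog

    # Maximize 3x + 2y (linprog minimizes, so we negate the objective)
    # subject to: x + y <= 4 (machine hours), x + 3y <= 6 (labor hours)
    result = linprog(
        c=[-3, -2],
        A_ub=[[1, 1], [1, 3]],
        b_ub=[4, 6],
        bounds=[(0, None), (0, None)],
    )
    print(result.x, -result.fun)  # optimal plan and the profit it earns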

AI is a system in which algorithms, or computer programs, make choices. It differs from traditional analytics in that those choices are made in response to some form of learning: a process by which the program alters itself in response to a changing environment. This “learning” capability, made possible by techniques such as artificial neural networks loosely modeled on the human brain [1], is a distinctive trait of AI, and it implies a certain amount of autonomy.
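
A minimal sketch can make the “learning” distinction concrete. The toy perceptron below adjusts its own parameters whenever a stream of labeled examples proves it wrong; the data and update rule are illustrative inventions, not any particular product’s method.

    # A program that alters itself in response to its environment: each
    # mistaken prediction triggers an update to the model's parameters.
    def perceptron_update(weights, bias, x, label, lr=0.1):
        prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1
        if prediction != label:  # only rewrite the behavior when it was wrong
            weights = [w + lr * label * xi for w, xi in zip(weights, x)]
            bias += lr * label
        return weights, bias

    # The "environment": a stream of (features, label) observations.
    stream = [([1.0, 0.5], 1), ([-0.5, -1.0], -1), ([0.8, 0.2], 1)]
    weights, bias = [0.0, 0.0], 0.0
    for x, label in stream:
        weights, bias = perceptron_update(weights, bias, x, label)
    print(weights, bias)  # the parameters, hence the behavior, have changed

A hand-crafted rule would behave the same way forever; here, the program’s behavior after the loop differs from its behavior before it, which is the modest sense in which such a system is autonomous.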

Traditional O.R. applications are created by data scientists and mathematicians who craft custom algorithms to optimize the task at hand. AI takes this a step further, optimizing the algorithm-creation process itself and greatly reducing the need for human intervention. It’s O.R. on steroids.
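
One way to picture “optimizing the algorithm-creation process” is automated hyperparameter search, a simple relative of what is often labeled AutoML. In the hedged sketch below, a search loop, rather than a human analyst, picks a model setting; the validation_error function is a made-up stand-in for training a model and scoring it on held-out data.

    # Automating a step a human would otherwise hand-tune: try many candidate
    # configurations and keep the one with the lowest validation error.
    import random

    def validation_error(learning_rate):
        # Stand-in for "train a model, score it on held-out data."
        return (learning_rate - 0.3) ** 2 + random.gauss(0, 0.001)

    random.seed(42)
    candidates = [random.uniform(0.01, 1.0) for _ in range(50)]
    best = min(candidates, key=validation_error)
    print(f"selected learning rate: {best:.3f}")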

Analytics professionals are as familiar as anyone can be with the inherent limitations of computer algorithms. Perhaps one day these learning algorithms will become so effective that the machines that employ them could become a threat, but worrying about that now would be like the Wright brothers worrying about the implications of space travel a few minutes after landing their first prototype airplane.

Narrow and General AI

Business-focused AI systems today are narrow, focused on solving discrete problems. It is possible that an artificial general intelligence might one day be simulated by combining multiple narrow AI systems [2], but we’re not there yet.

General AI is what’s responsible for all the drama, which is why it’s important to draw the distinction between narrow and general AI. If we don’t take the opportunity right now to spell out why a smart algorithm will always be bounded by its programming, we will, as a community, face problems in the future. History is full of examples of what happens when pioneers fail to educate the public about new technologies.

Two centuries ago, British textile workers rioted. Angry mobs smashed machines and set fire to newly automated factories to protest the onset of the first Industrial Revolution. In hindsight, it’s clear the machinery the Luddites fought so hard against brought more jobs, not fewer, to a textile industry that grew to clothe half the world. Yet the rioters didn’t see it that way. The Luddites weren’t so much afraid of the machines themselves as of the social upheaval they represented.

This isn’t just a historical curiosity. Similar doubts about biotechnology spurred several European nations to band together to outlaw crops genetically modified to be higher-yielding, more nutritious or more resistant to dangerous pests. These nations banned the technology based on a speculative possibility of harm, despite overwhelming evidence of its safety; there has never been a single documented instance of GMO poisoning. Better safe than sorry, the thinking went. Even in the United States, where the technology is widespread, a large segment of the public remains skeptical of biotech.

That happens because doubters move in and fill the void whenever the pioneers of a new technology fail to address its risks and benefits head-on. Regulators, who will never fully understand complex adaptive systems, are attuned to the general sentiment. They won’t hesitate to act on public doubts about AI, and that would be a shame. We can’t afford to lose one of the most promising technological advances of our lifetime.

AI Not Without Risk

None of this is to say that we need to pretend AI is without risk. Quite the opposite: the risk is simply of a much different sort. Mechanical looms and steam engines did indeed cause disruption in the 18th century, but the problems hardly qualified as apocalyptic.

So it is with the AI of the present. The downsides are real, but they’re manageable. It’s more important to be honest about what the technology can and cannot do, and to prepare for what lies ahead by thinking through the likely scenarios.

We must, as the Centre for the Study of Existential Risk at the University of Cambridge suggests, weigh the potential benefits against the potential for harm, and plan accordingly. The most obvious issue is the impact of AI on employment, a topic that has been explored in extensive detail.

One of the less examined possibilities is that relieving people of repetitive tasks and driving efficiencies in critical functions could make life really dull. As Bill Gates put it, “What if people run out of things to do?” The Microsoft co-founder used those words as the title of his review of a book asking whether people become unhappier the more society is perfected.

This raises the important point that, no matter how good AI becomes, it cannot solve all of our problems. It will never be a cure-all for human error. Artificial intelligence is best seen as a supplement, an augmentation, of human abilities, not a replacement.

The first official crash investigation involving a “self-driving” car illustrates the point. A federal panel looking into the fatal incident concluded that the driver of a Tesla Model S operating on Autopilot was killed because of “inattention due to overreliance on vehicle automation, which resulted in the car driver’s lack of reaction” to the truck that was turning left into his path at an intersection.

Like any tool, AI can be dangerous when misused. We must be honest about that and explore all of the possibilities. It’s the unknown that’s scariest of all, and fear is the biggest threat to technological advances.

Expanding knowledge, and with it eliminating the unknown, is the best way to alleviate anxiety and reduce the natural impulse of politicians to ban what they don’t understand. If we wait until we need to plead the case for AI and O.R. to lawmakers, the battle will already have been lost.

Notes

  1. Core technologies typically associated with AI include deep and/or machine learning, natural language processing platforms, predictive application programming interfaces and speech or image recognition. Source: “Artificial Intelligence Industry: An Overview by Segment,” July 25, 2016, https://www.techemergence.com/artificial-intelligence-industry-an-overview-by-segment/
  2. U.S. Deputy Secretary of Defense Bob Work told attendees at a conference at the Johns Hopkins University Applied Physics Laboratory: “We’ve never gotten to the point where we’ve had enough narrow AI systems working together throughout a network for us to be able to see what type of interactions we might have.” Source: “War Without Fear: DepSecDef Work on How AI Changes Conflict,” May 31, 2017, http://breakingdefense.com/2017/05/killer-robots-arent-the-problem-its-unpredictable-ai/