When Good A.I. Goes Bad

Image © Kheng Ho Toh | 123rf.com

Published on INFORMS OR/MS Today (Joseph Byrum)

Face-off: The legal and ethical questions surrounding artificial intelligence. 

About as many people say artificial intelligence is going to destroy humanity as say it will solve all of our problems.

Perhaps it’s more likely that the outcome will lie somewhere between the two extremes: society as a whole will benefit, but it won’t be perfect. On the optimistic side of the scale, there’s plenty of reason to be upbeat about where A.I. projects are headed.

We know from the field of operations research that algorithms can be effective in optimizing just about every critical business process. Adding the power of machine learning could be the key to popularizing O.R. in industries that do not currently take advantage of what it has to offer, since developing customized O.R. tools takes a sizable investment of time and money.

Now imagine a software tool that absorbs a company’s operational data and automatically models alternative scenarios to come up with suggestions for better, more efficient processes. The system learns as it goes, as opposed to being set up with pre-written equations. Such an A.I. system would open the potential for faster and more certain results while lowering the barrier to entry for startups and smaller firms that may currently think of O.R. as a luxury they can’t yet afford.

Now, give the machine-learning algorithms some autonomy to adjust to conditions in real time, and you have an A.I. system that can make a realistic contribution to business efficiency. Even in the nonprofit world, such systems could assist researchers in analyzing massive data sets in scientific fields, combing through the vast emptiness of space to discover new planets, for example [1].

More A.I. Control, More Room for Error

But the more control A.I. systems have, the more room there is for error. An A.I. system is, at bottom, software running on computer hardware, and both come with inherent limitations: it’s impossible to anticipate all the ways in which something might go wrong.

The year began with the news that, for over a decade, computer chips from the main suppliers have been vulnerable to the highly sophisticated exploits known as Meltdown and Spectre. Nobody noticed the potential for these flaws because the chips have become so complex. Intel’s first processor, introduced in 1971, had a manageable number of transistors – 2,300 [2]. Now the company packs 100 million of them into a square millimeter [3], and Intel is well on its way to selling, within the next decade, a chip with as many transistors as there are neurons in the human brain (100 billion) [4].

Unlike the brain, which evolved over the course of hundreds of thousands of years, those computer chips haven’t been around for very long. If the hardware can go bad, what happens to the decisions of an A.I. that depend on the silicon functioning properly?

The legal and ethical questions surrounding A.I. are perhaps most fully explored in the context of self-driving cars, yet even there we seem to lack clear answers. Consider what happens if an autonomous car decides to swerve into a car in an adjacent lane instead of striking a pedestrian crossing the road. Who’s responsible for the crash? The self-driving car’s owner, the pedestrian or the developer of the autonomous technology that made the choice to swerve?

Evil A.I. Unlikely

Contrary to the plot of so many sci-fi blockbusters, a self-aware, evil A.I. purposely aiming at pedestrians is just as unlikely as one launching nuclear missiles. Far more mundane errors are likely to trouble advanced A.I. programs, because no matter how perfect and benign the underlying algorithms, and no matter how flawless the computing hardware, things can still go very wrong.

An instructive example can be found in the fate of the Mars Climate Orbiter. NASA launched this probe in 1999, but the orbiter never actually reached the red planet. It didn’t blow up on the launch pad or meet a spectacular demise. Instead, the probe quietly went missing after having traveled more than 400 million miles. It turns out that the engineers at mission control failed to convert readings from pound-force seconds into the metric equivalent, newton-seconds [5], before feeding that data into the guidance system.
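To put the slip in numerical terms, here is a minimal sketch (purely illustrative; the function and the sample value are hypothetical, not NASA’s actual ground software): one pound-force second equals roughly 4.45 newton-seconds, so every figure that skipped the conversion understated the thruster’s effect by that same factor.

```python
# Illustrative sketch of the unit mismatch, not NASA's actual software.
# One pound-force second (lbf*s) equals 4.4482216 newton-seconds (N*s).
LBF_S_TO_N_S = 4.4482216

def to_newton_seconds(impulse_lbf_s: float) -> float:
    """Convert a thruster impulse from pound-force seconds to newton-seconds."""
    return impulse_lbf_s * LBF_S_TO_N_S

reported = 10.0                        # ground software output, in lbf*s
correct = to_newton_seconds(reported)  # ~44.5 N*s, what the guidance side expected
as_ingested = reported                 # the same number, wrongly read as N*s

print(f"correct impulse: {correct:.1f} N*s")
print(f"as ingested:     {as_ingested:.1f} N*s (understated by {correct / as_ingested:.2f}x)")
```

A single automated check on a known test case would have exposed a factor-of-4.45 discrepancy long before the spacecraft reached Mars, which is exactly the kind of validation step that was skipped.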

NASA’s internal assessment confirmed that there was nothing wrong with the orbiter itself. Instead, the “verification and validation” process for the project failed to include the software that controllers on the ground were using. The failure to double-check their work cost the agency $125 million [6], not counting the toll exacted by late-night comedians joking about NASA’s ability to do simple math.

Back on Earth, an automated trading program at Knight Capital Group “ran amok” [7] in 2012, losing $460 million in what is surely the most expensive cut-and-paste error of all time. Once again, the algorithm itself was fine, but in the process of setting up new code on the company’s servers, a technician accidentally allowed an obsolete piece of code to go live. That code took unwanted market positions worth $6.6 billion.

The Securities and Exchange Commission [8] provided insight into what went wrong. Investigators later determined that Knight’s system lacked trading limits that would have halted the runaway orders, or at least set off alarms, when the system first began placing them. More importantly, there was no policy in place to double-check the computer code before it went live. A simple test of the system would have revealed the fatal flaw.
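As a hedged sketch of the kind of safeguard that was missing (the class, the thresholds and the dollar figures below are hypothetical, not Knight’s actual systems), a pre-trade check caps both single-order size and aggregate exposure, rejecting anything that would breach a limit and raising an alarm instead of letting the order reach the market.

```python
# Hypothetical pre-trade risk check; names and thresholds are illustrative only.
class RiskLimitBreach(Exception):
    """Raised when an order would exceed a configured trading limit."""

class PreTradeCheck:
    def __init__(self, max_order_value: float, max_gross_exposure: float):
        self.max_order_value = max_order_value
        self.max_gross_exposure = max_gross_exposure
        self.gross_exposure = 0.0

    def approve(self, quantity: int, price: float) -> None:
        """Reject any order that would breach the per-order or aggregate limit."""
        order_value = abs(quantity) * price
        if order_value > self.max_order_value:
            raise RiskLimitBreach(f"single order too large: ${order_value:,.0f}")
        if self.gross_exposure + order_value > self.max_gross_exposure:
            raise RiskLimitBreach("aggregate exposure cap exceeded")
        self.gross_exposure += order_value

# The check trips long before positions can grow into the billions.
check = PreTradeCheck(max_order_value=5_000_000, max_gross_exposure=100_000_000)
check.approve(quantity=10_000, price=50.0)         # fine: a $500,000 order
try:
    check.approve(quantity=2_000_000, price=50.0)  # a $100 million order is blocked
except RiskLimitBreach as err:
    print("order rejected, alarm raised:", err)
```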

As happened with the NASA project, validation was the first thing jettisoned in a high-pressure environment. The teams involved likely felt a need to cut corners to get their job done faster. This is, after all, a universal temptation, even in the field of O.R. The brilliance of a well-written algorithm naturally grabs all of the attention, but it’s difficult to get excited about validation, the step that ensures the data are accurate and the results are correct.

The lesson as we develop increasingly powerful A.I. solutions is that the A.I. can never substitute for due diligence. The more we turn decisions over to A.I., the more we must ensure that procedures are in place to catch the mistakes made not by the A.I., but by the humans who set up and operate the systems. The verification and validation steps must never be skipped, and managers need to make sure that this corner isn’t cut.
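What such a procedure might look like is sketched below (the fingerprint-and-smoke-test approach, the names and the placeholder hash are assumptions for illustration, not a prescribed standard): before an automated system is allowed to act, a release check verifies that the deployed code matches the build that was actually reviewed and that it still produces known-good answers on a simple test.

```python
# Illustrative pre-release verification sketch; names and checks are assumptions.
import hashlib
from pathlib import Path

# Fingerprint of the build that passed review (placeholder value for illustration).
APPROVED_SHA256 = "0" * 64

def fingerprint(artifact: Path) -> str:
    """Hash the deployed artifact so stale or mismatched code is detectable."""
    return hashlib.sha256(artifact.read_bytes()).hexdigest()

def cleared_for_release(artifact: Path, smoke_test) -> bool:
    """Refuse to go live unless the code matches the approved build and a smoke test passes."""
    if fingerprint(artifact) != APPROVED_SHA256:
        print("ABORT: deployed code does not match the reviewed build")
        return False
    if not smoke_test():
        print("ABORT: smoke test failed on known inputs")
        return False
    return True
```

Neither check is sophisticated; the point is that it runs every time, so a rushed operator cannot skip it.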

A.I. systems will still make mistakes, and some of them may even create spectacular headlines. When that happens, don’t be surprised to see a rush to blame the A.I. Just know that there’s a good chance the real cause was a lack of validation.

References

  1. https://www.wired.com/2017/03/astronomers-deploy-ai-unravel-mysteries-universe/
  2. https://www.intel.com/content/www/us/en/history/museum-story-of-intel-4004.html
  3. https://spectrum.ieee.org/nanoclast/semiconductors/processors/intel-now-packs-100-million-transistors-in-each-square-millimeter
  4. https://www.theinquirer.net/inquirer/news/2321275/ces-microprocessors-to-be-emotionally-smarter-than-human-brain-within-next-decade-says-intel
  5. ftp://ftp.hq.nasa.gov/pub/pao/reports/1999/MCO_report.pdf
  6. http://articles.latimes.com/1999/oct/01/news/mn-17288
  7. “Trading Program Ran Amok, With No ‘Off’ Switch,” The New York Times, by Jessica Silver-Greenberg, Nathaniel Popper and Michael J. De La Merced, Aug. 3, 2012. https://dealbook.nytimes.com/2012/08/03/trading-program-ran-amok-with-no-off-switch/?_r=0
  8. https://www.sec.gov/litigation/admin/2013/34-70694.pdf