Published on Industrial Biotechnology (Joseph Byrum)
Artificial intelligence (AI) can’t catch a break. Almost universally typecast as the villain in popular fiction, AI is often called inherently dangerous by respected technologists, scientists and entrepreneurs. The general public sees such dire warnings so often that we run the risk of this becoming an accepted truth.
That would be unfortunate, because when the public becomes panicky, politicians usually respond with a heavy-handed regulatory safety blanket. The HAL 9000 Prevention Act is not likely to promote badly needed innovation in smart automation.
The negative storyline rests on the unchallenged assumption that a superintelligent machine must inevitably recognize the inferiority of the human race and, therefore, seek to wipe it out. We’ve seen this plot countless times, with the Skynets and WOPRs launching nukes so that AI can take its rightful place at the top of the social hierarchy.
But what if the underlying assumption is incorrect, and it's not the machine that becomes superintelligent, but the AI that helps its human user become superintelligent instead? This outcome is more likely than it might seem.
THE LIMITS OF HUMAN KNOWLEDGE
Humans are physical beings, subject to material limitations. Our brain, for instance, can only remember so much information. As Homer Simpson put it, "Every time I learn something new it pushes some old stuff out of my brain" [1]. The number of connections between synapses and neurons limits our capacity to remember stuff to a massive, but finite amount (10^8432 bits by one study's reckoning) [2].
RAM shortages aside, robots don’t need to worry about lack of storage or performance degradation from alcohol use, old age or sleep deprivation. Most importantly, machine thinking isn’t clouded by emotion or superstition.
WHERE THE ROBOTS STEP IN: THE PILOT’S ASSOCIATE
As anyone who has ever used GPS navigation knows, that doesn't mean that machines are perfect. Their greatest weakness—lack of creativity and originality—happens to be something we're good at. So it's no wonder that one of the first examples of AI was designed to complement, not replace, humans. Over three decades ago, the Defense Advanced Research Projects Agency (DARPA) created the Pilot's Associate to deal with the modern air combat environment's tendency to overwhelm pilots with information.
The idea wasn't to hand the cockpit control over to the machine so that it could do everything—like a self-driving Tesla. Rather, the AI system's purpose was to organize the flow of data to the pilot so that the human could make the critical decisions better and faster while under stress. The system reduced the overall workload by making low-level decisions on the pilot's behalf, while leaving problems requiring experience to be sorted out by the human. The system absorbed data from all of the various aircraft sensors and radar systems, evaluating all their characteristics, always watching for a change that might signal a potential threat—such as a stationary object that suddenly begins to move toward the plane.
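The triage principle described above can be sketched in a few lines of code. This is purely illustrative, not the actual DARPA system: the `Contact` record, the sensor fields, and the single escalate-on-change rule are all hypothetical stand-ins for what a real avionics suite would track. The point is the division of labor—routine updates are handled silently, and only changes that might signal a threat are surfaced to the human.

```python
# Illustrative sketch of associate-style information triage (hypothetical,
# not the DARPA Pilot's Associate): low-level sensor updates are absorbed
# automatically; only significant changes are escalated to the pilot.

from dataclasses import dataclass


@dataclass
class Contact:
    ident: str
    speed: float  # speed in m/s from a hypothetical sensor feed


def triage(previous: dict[str, Contact], current: list[Contact]) -> list[str]:
    """Return the idents the pilot should look at; everything else is
    handled without interrupting the human, reducing cognitive load."""
    alerts = []
    for c in current:
        prior = previous.get(c.ident)
        # Escalate when a formerly stationary object begins to move.
        if prior is not None and prior.speed == 0 and c.speed > 0:
            alerts.append(c.ident)
    return alerts


before = {"obj-1": Contact("obj-1", 0.0), "obj-2": Contact("obj-2", 12.0)}
now = [Contact("obj-1", 3.5), Contact("obj-2", 12.0)]
print(triage(before, now))  # only obj-1 changed from stationary to moving
```

A production system would weigh many more signals, but the architecture is the same: the machine filters, the human decides.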
APPLYING THIS TO HUMANS
The genius of the associate approach to AI is that it leaves the human in control while performing the valuable service of information triage. Our computers today are vastly more powerful than they were in the 1980s, and our software know-how has advanced to a similar degree. But this form of intelligent augmentation remains the most common—and exciting—use for AI in the foreseeable future.
There’s nothing scary about intelligent augmentation. In fact, by making up for human weakness in critical areas, such as medical diagnosis, traffic safety and air traffic control, AI has massive potential to save lives. It could, for example, help identify disease before symptoms appear.
We’re far more likely to develop intelligent systems of this sort than a general intelligence armed with creativity and a serious mean streak. In fact, the ability to create general intelligence is decades off. Intelligent augmentation, on the other hand, has been with us for years. It doesn’t help that both approaches are referred to as AI, adding to the confusion.
By being smart about AI, our movies and novels might get a little more boring. But our lives will be longer, healthier and more productive—that’s a tradeoff worth making.
REFERENCES
[1] Daniels G. The Simpsons Archive. Available at: http://simpsonsarchive.com/episodes/1F20.txt (Last accessed May 2020).
[2] Wang Y, Liu D, Wang Y. Discovering the Capacity of Human Memory. Brain Mind 2003;4:189-198.