Artificial Intelligence: The Values That Should Guide the AI Revolution


Published on INFORMS Analytics Magazine (Joseph Byrum)

Advanced artificial intelligence algorithms have the ability to take over tasks traditionally reserved for skilled human operators, such as driving a truck or performing a medical diagnosis. What once was the stuff of science fiction is now reality. This technology has made tremendous leaps in the last decade, yet it remains nowhere near its full potential.

It is still early, and we have the opportunity to guide AI’s development in a rational way, following a set of clear principles. Thinking through those principles provides insight into what a fully developed, ethical AI system ought to look like.

A number of organizations, including the Association for Computing Machinery (ACM) [1], the Future of Life Institute (FLI) [2], the Institute of Electrical and Electronics Engineers (IEEE) [3] and Google [4], have asked their experts to think through every possible scenario, from avoiding Hollywood’s overused vision of megalomaniacal AI to preventing a programmer’s implicit biases from infecting algorithms that ought to be free from prejudice.

Reviewing these concepts is important in understanding how to best take advantage of AI’s potential. Fortunately, a consensus is emerging on the main principles that should be respected by technology developers.

Human Control

The first and most important principle is that we must maintain human control of AI systems. One example would be to have an easily accessible “off” switch to ensure humans can intervene and stop problems from rapidly growing into crises when AI steps out of line. In Stanley Kubrick’s film “2001: A Space Odyssey,” it was so hard for the human crew to unplug the uppity HAL 9000 system that the lip-reading AI realized what was happening and declared, “This mission is too important for me to allow you to jeopardize it.”

But it’s more than just an off switch. The current state of self-driving cars illustrates the risk of making human involvement an afterthought. At the current level of technology, human intervention is essential for AI systems that have yet to master every aspect of the chaotic road environment. At the same time, however, humans don’t really do much while the car drives itself; their barely engaged attention lapses.

This scenario played out in a tragic accident in March 2018 in Tempe, Ariz. [5]. A self-driving Volvo operating as part of Uber’s test program struck and killed a darkly clad pedestrian who crossed from the middle of a dark road – something the system’s designers had not anticipated. The forward- and side-mounted cameras, radar and lidar sensors did detect an unknown object, but by the time the human safety driver responded by hitting the brakes, it was too late.

Pushing too many functions off on the AI can create a dangerous complacency, which is why it’s important to maintain human control.
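
What an accessible “off” switch looks like in software can be made concrete with a small sketch. The Python below is purely illustrative – the class and function names are hypothetical, and a real safety-critical system would need hardware interlocks as well – but it shows the core pattern: the autonomous loop runs only as long as a human-held stop signal stays clear, and any operator input halts it immediately.

```python
import threading
import time

class HumanOverride:
    """A software 'off' switch: any human signal stops the system at once."""

    def __init__(self):
        self._stop = threading.Event()

    def engage_stop(self):
        # Called from the operator's console, or wired to a physical button.
        self._stop.set()

    def stopped(self):
        return self._stop.is_set()

def autonomous_loop(override):
    # The AI acts only while the human interlock remains clear.
    while not override.stopped():
        # ... perceive, decide, act ...
        time.sleep(0.1)  # one control cycle
    print("Human override received: entering safe state.")

if __name__ == "__main__":
    override = HumanOverride()
    worker = threading.Thread(target=autonomous_loop, args=(override,))
    worker.start()
    time.sleep(0.5)         # the system runs autonomously...
    override.engage_stop()  # ...until a human hits the off switch
    worker.join()
```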

Human Safety

Ensuring human safety is another key principle. Isaac Asimov developed his three laws of robotics, designed to protect humanity, as a defense against a cheap plot device: robots turning on their creators, which had already become a cliché by the 1950s. While the laws initially served a fictional purpose, they remain an enduring statement of programming safeguards that still make sense today:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The basic concept is simple, but real-world issues tend to become more complicated at the edges of implementation. Self-driving cars once again pose the classic dilemma. If a robotic car rounds a corner and suddenly comes across a group of school children crossing the road, does the car continue straight, endangering the children, or swerve into an obstacle that risks the life of the owner? The right answer is fodder for endless discussion and debate, but what matters most is that AI systems are developed to handle realistic scenarios and the consequences of the choices made. AI should be better than humans at making the right choice in terms of human safety.
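
One way to read Asimov’s laws is as a strict priority ordering: a proposed action is vetoed by the highest-ranked law it violates, and lower-ranked laws matter only once the higher ones are satisfied. The Python sketch below is a hedged illustration – the boolean predicates are hypothetical and hide the genuinely hard problem of recognizing “harm” in the real world – but it captures that ordering:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # would acting injure a person?
    inaction_harm: bool    # would *not* acting let a person come to harm?
    obeys_order: bool      # is the action consistent with human orders?
    preserves_robot: bool  # does the action avoid destroying the robot?

def permitted(action: Action) -> bool:
    """Evaluate the three laws in strict priority order (illustrative only)."""
    # First Law: no injury to humans, by action or by inaction.
    if action.harms_human or action.inaction_harm:
        return False
    # Second Law: obey humans, unless that conflicts with the First Law.
    if not action.obeys_order:
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    return action.preserves_robot

# An order to harm a person is refused: the First Law outranks obedience.
harmful_order = Action(harms_human=True, inaction_harm=False,
                       obeys_order=True, preserves_robot=True)
print(permitted(harmful_order))  # False
```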

Human Well-Being

“Human well-being” means the AI should work on humanity’s behalf, not the other way around. Machines are supposed to be labor-saving devices, but often we find ourselves spending endless hours supplying data to algorithms (which is the primary purpose of social media, at least from the perspective of Facebook and Twitter), or spending long hours working to be able to afford the latest expensive gadget.

AI systems must also be fed a never-ending stream of data to function effectively, and that feeding should require minimal human effort. AI best serves human well-being when such systems take over repetitive tasks with a level of accuracy and precision available only to systems that are immune to boredom and fatigue. This is an area where AI can easily fill in for a human weakness.

Human Freedom

The principle of “human freedom” means we must remain free to make our own choices. The temptation to let AI take over and make every decision while humans relax must be resisted. Handing our decision-making over to AI would conflict with the previous principle of human well-being because it would, in effect, enslave mankind.

The primary problem with human judgment is that there are so many variables involved in any particular choice that we often rely on intuition or luck in making a selection in the face of nearly infinite options. We might decide what car to buy because we like the color or make a stock pick based on a small slice of information such as the price-to-earnings ratio or a recent earnings report.

AI’s strength is its ability to process all available data, sorting through what’s relevant so that the algorithm can present options to the human based on analysis rather than superstition. Operating under this principle, AI doesn’t substitute its judgment for human decisions. Rather, it augments the power of human decisions by focusing human effort.
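
As a toy illustration of that division of labor, consider the stock-picking example above. In the hypothetical sketch below, the algorithm does the exhaustive scoring and filtering across every candidate, then hands a short list to the human, who makes the final call; the scoring weights are invented for illustration, not investment advice.

```python
def shortlist(options, score, k=3):
    """Rank every option by an analytical score and keep the top k.

    The algorithm does the exhaustive part; the human still chooses.
    """
    return sorted(options, key=score, reverse=True)[:k]

# Hypothetical candidates, described by more than one headline number.
stocks = [
    {"ticker": "AAA", "pe": 14.0, "debt_to_equity": 0.4, "growth": 0.09},
    {"ticker": "BBB", "pe": 32.0, "debt_to_equity": 1.8, "growth": 0.22},
    {"ticker": "CCC", "pe": 11.0, "debt_to_equity": 0.2, "growth": 0.05},
]

def value_score(s):
    # Illustrative weights only; a real model would be estimated from data.
    return s["growth"] * 100 - s["pe"] * 0.5 - s["debt_to_equity"] * 5

for candidate in shortlist(stocks, value_score):
    print(candidate["ticker"])  # the human reviews this list and decides
```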

Transparency

The term AI covers a wide variety of technologies. It can refer to deep-learning algorithms that draw patterns out of data, which the system then uses to adjust its own parameters. This can often lead to situations where human programmers have no idea why the AI did what it did.

Amazon used neural network algorithms of this sort in its Rekognition facial recognition system [6], which was designed to enable “immediate response for public safety and security.” When a public interest group put Rekognition to the test, it falsely matched 28 members of Congress to criminal mugshots [7].

Such goofs are inevitable with deep-learning algorithms that alter themselves in ways that can’t readily be explained. Developing systems with transparency as a primary principle can help reduce the embarrassing mistakes caused when strange inputs yield even stranger outputs. In some cases, government regulatory agencies expect and demand accountability, making transparency even more important.
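
One common mitigation, in the spirit of both transparency and accountability, is to refuse to treat a low-confidence match as an answer. The sketch below is hypothetical – the threshold value and function names are assumptions, not any vendor’s API – but it shows the pattern: uncertain identifications are routed to a human reviewer, and every decision is logged so it can be audited later.

```python
REVIEW_THRESHOLD = 0.99  # assumed policy value, not a vendor recommendation

def log_decision(candidate_id, confidence, action):
    # An audit trail is the simplest form of transparency.
    print(f"{action}: candidate={candidate_id} confidence={confidence:.2f}")

def handle_match(candidate_id, confidence):
    """Route a face-match result by confidence (illustrative only)."""
    if confidence >= REVIEW_THRESHOLD:
        log_decision(candidate_id, confidence, "flag_for_review_by_officer")
        return "flag_for_review_by_officer"
    # Below threshold: the match is a lead at best, never an accusation.
    log_decision(candidate_id, confidence, "send_to_human_reviewer")
    return "send_to_human_reviewer"

handle_match("subject-042", 0.87)  # routed to a human, not acted on
```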

Putting it All Together

AI is not a magic solution to all of life’s problems. It is best seen as a tool that, when developed in accordance with the principles above, can enhance human-led projects. Augmented intelligence systems apply AI’s strengths to compensate for human weaknesses, in keeping with those same principles of good practice.

Think of the Iron Man suit from the movies of the same name. The suit’s AI system feeds the most relevant information to Tony Stark so that he can make the ultimate choice about the best course of action.

Combining the AI’s data-processing abilities with human judgment gives the whole system the ability to perform better than either AI alone or humans on their own. Such mutual dependence also ensures the AI will never judge human beings to be obsolete, significantly reducing the chance of a robot-triggered nuclear holocaust. This approach leaves humans in control, respects human freedom, and ensures there is always someone who can explain the reasoning behind the decisions made at each critical step.

Following these principles in AI development will promote AI systems that are likely to enhance our lives – though perhaps at the cost of making the movies more boring.

  1. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf
  2. https://futureoflife.org/ai-principles/
  3. http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf
  4. https://www.blog.google/technology/ai/ai-principles/
  5. https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx
  6. https://aws.amazon.com/rekognition/
  7. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28