Artificial Intelligence: Machines, Man and Intelligence

Published on INFORMS Analytics Magazine (Joseph Byrum)

A.I.: a reflection on the fundamental differences between machines, mankind and the notion of intelligence itself.

A deeper look at artificial intelligence, starting with the technology’s development from its most basic building blocks.

Discussions of artificial intelligence often veer in strange directions. On the one hand, you have the sort of doomsday scenarios that are staples of science fiction – a disobedient HAL 9000 goes on a killing spree, for instance. Or, at the other end of the spectrum, you have marketing departments adding “A.I.” to the most pedestrian of electronic devices in an attempt to capitalize on media hype. Thank you, but my toaster does not need A.I. [1].

To better understand what A.I. is, and isn’t, this first installment in the series will examine the technology’s development from its most basic building blocks, starting with a reflection on the fundamental differences between machines, mankind and the notion of intelligence itself. With this firm foundation, future articles will build upon the ideas of what’s truly possible with intelligent automation.

The World of Machines

At the most basic level, we expect digital machines to perform their designated functions in a repeatable way. Given the same input, they’re supposed to produce the same output. If that doesn’t happen, the device has malfunctioned.

In the world of machines, we find mechanistic determinism. This can be seen in the algorithms that generate pseudo-random numbers. We have to call them “pseudo-random” because the numbers can never be truly random. Instead, they are produced in a mechanistic way that is merely clever enough to appear random.
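
The determinism is easy to see in code. The sketch below is a minimal Python example of a linear congruential generator (the constants and seed are illustrative, not from any particular system): given the same seed, it produces exactly the same “random” sequence every time.

    # Minimal sketch of mechanistic determinism: a linear congruential
    # pseudo-random number generator. Constants and seed are illustrative.
    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        """Yield n pseudo-random integers from a linear congruential generator."""
        state = seed
        for _ in range(n):
            state = (a * state + c) % m
            yield state

    print(list(lcg(seed=42, n=3)))  # same seed ...
    print(list(lcg(seed=42, n=3)))  # ... identical "random" output, every time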

The World of Brains

We don’t expect machines to be spontaneous like humans, but we humans do have a few mechanistic features. We have arms and legs that can lift and pull, eyes, ears and other senses that detect the world, and a brain that seems to control our physical interactions. You could argue that just about everything in the body can be explained mechanistically – until you look at the brain.

The past 30 years have seen rapid progress in unraveling the brain’s physical features. We have mapped its neuro-anatomy in great detail [2]. We know it is made of billions of neurons with hundreds of billions, even trillions of connections between the neurons. We have learned that the connections may not be permanent; the brain has neuro-plasticity that allows it to change the connections under some circumstances [3].

With modern instrumentation, the functional interactions within the brain at the tissue level are beginning to be understood. The brain is not a uniform mass of neurons that are connected in any old way. Instead the brain is segregated into separate parts, all made of neurons, yet each mass of tissue somehow functionally distinct from the other parts. We have been able to follow the flow of information from sensation to perception to cognition and then to action by watching the electrical and chemical activations of the brain in living people as they perform tasks.

Progress in neuroscience is exciting, but also sobering. The brain is nothing like a computer as we can envision computers today. Despite the progress, we have no idea how to describe many of the primitive aspects of cognition in the brain that we know are there, such as memory, language creation and imagination. The massive connectionism of the brain hides its capabilities in its complexity [4].

The World of Mind

Long before modern psychology and neuroscience arrived, philosophers raised the key questions relating to how we form thoughts and ideas. If each person has an individual mind, is that mind made of the same physical material as the body, or is it somehow different? If the mind is made of something different and is not physical, how does it interact with the body? Under a purely mechanistic view of the physical world, these questions were unresolvable.

While controversy still exists about how to characterize the mind (especially related to free will), the development of digital computers has provided a metaphor for the mind – it is a program that runs in the brain. The brain, as the platform for the program, clearly does have physical existence. Because the mind, like a computer program, is composed of energy and information, it has real existence inside the brain – just not a permanent physical existence.

The “mind as program” metaphor must be treated with great care. After all, we have no idea how that really happens in the brain. We also have no clear idea how the program in the brain is created in the first place. Finally, while a computer program is a sequence of instructions, the massive connectionism of the brain suggests that the brain is highly parallel in its functioning.

Even so, the mind-as-program metaphor formed the foundation for artificial intelligence. Since the dawn of computing, scientists and philosophers have wondered whether we could write a computer program that gives a computer a mind. Even pioneers such as Charles Babbage and Lady Lovelace discussed this possibility before a functioning computer was ever made.

Writing a computer program of any complexity is a disciplined engineering endeavor. A key step in designing a computer program is to determine what functions or behaviors the program has to perform and in what order they should take place. For large, complex programs, the functions and interactions are usually organized into higher-level program units with lower-level functions within them. If we want to write a program for artificial intelligence, we will first need to define the functions of the mind.
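
To make the engineering point concrete, here is a deliberately simplified sketch of what such a decomposition could look like in Python. The unit names (Perception, Memory, Reasoning) and their interfaces are hypothetical placeholders; identifying the real functions of the mind is precisely the open problem this series is concerned with.

    # Hypothetical decomposition of "functions of the mind" into program units.
    # Names and interfaces are illustrative only, not an established taxonomy.

    class Perception:
        def sense(self, raw_input):
            # Lower-level function: turn raw signals into a form other units can use.
            return {"observation": raw_input}

    class Memory:
        def __init__(self):
            self.store = []
        def remember(self, item):
            self.store.append(item)

    class Reasoning:
        def decide(self, observation, memory):
            # Placeholder logic: attend to anything not seen before.
            return "investigate" if observation not in memory.store else "ignore"

    class Mind:
        # Higher-level unit that coordinates the lower-level functions.
        def __init__(self):
            self.perception = Perception()
            self.memory = Memory()
            self.reasoning = Reasoning()
        def step(self, raw_input):
            observation = self.perception.sense(raw_input)
            action = self.reasoning.decide(observation, self.memory)
            self.memory.remember(observation)
            return action

    print(Mind().step("a sound in the dark"))  # -> investigate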

Functions of the Mind

Early in the 20th century, the industrialization of the western countries drove an interest in understanding individual differences in training and task aptitudes across people. This interest led to the first formulations of “intelligence” as an attribute of a person. Initial models of intelligence were test-driven (IQ tests), and this model persists today – equating intelligence with doing well on a test.

Modern scientific models of intelligence are considerably broader, often defining between nine and 30 separate dimensions for the concept. While these broader models are more effective in characterizing the individual differences between people, they are not generative models. That is, these models do not tell the designer of a computer program which functions of the mind are necessary to produce intelligent behaviors.

In the mid-20th century, psychology itself became embroiled in the mind/body problem. Researchers argued that since the mind cannot be seen, it must not exist, and that the whole idea was therefore an empty, unscientific construct. Under this view of radical behaviorism, only behaviors existed; psychology as a science should have no interest in claiming how those behaviors were created.

The development of programmable computers at the end of World War II, and their rapid advancement in the post-war years, allowed the mind-as-program metaphor to take shape. The stage was set for the birth of cognitive psychology – the scientific study of how thinking and thoughtful behavior are created in the mind. By systematically exploring the functions of the mind, cognitive psychology has provided many of the key insights needed to advance the field of A.I. At the same time, A.I. has provided many of the computational models needed to study and validate hypotheses about how humans think. Each field builds off the other.

The First Wave of A.I. (1975 to 1990)

The starting point for formulating possible functions of the mind, and for building the first A.I. programs, was the field of logic. The early Greeks were the first to systematize the rules of logic, and ever since, the syllogism and the manipulation of logically derived mathematical expressions have been the tools of choice for solving complex problems. It was also fairly easy to create computer programs that could perform logical operations.

As computer memory, storage and program sizes grew through the early 1970s, increasingly complex logic programs could construct elaborate proofs and formulate new scientific conjectures. A.I. was beginning to be noticed, and the field was strongly anchored in the belief that deductive reasoning was the key function of the mind. By building a deductive logic system with two parts, a rule engine and a separately stated rule set, the engine could execute the deductive logic algorithms to solve the problem defined by the rule set. This very powerful idea, the separation of function from knowledge, was the true contribution of the First Wave.
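
A toy forward-chaining example makes the separation of function from knowledge concrete. In the Python sketch below, the engine knows nothing about the domain; all of the knowledge lives in the separately stated rule set, which can be extended or swapped without touching the engine. The facts and rules are invented purely for illustration.

    # Toy forward-chaining rule engine. The engine is generic; the rule set
    # (conditions -> conclusion) is stated separately and can be changed
    # without modifying the engine itself.

    rules = [
        ({"has_feathers"}, "is_bird"),
        ({"is_bird", "can_fly"}, "can_migrate"),
    ]

    def forward_chain(facts, rules):
        """Apply rules repeatedly until no new facts can be deduced."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_feathers", "can_fly"}, rules))
    # deduces "is_bird", then "can_migrate"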

Deductive logic, however, has some traps that seriously limit its utility. The first is focus of attention: in a large rule set, many deductions can be made at any point in the execution of the rules, and most of them are obvious and uninteresting, irrelevant to the desired or expected output. The second is that executing the rules in a different order can produce a different outcome. But the real blow to deductive logic and rule sets was that the rule sets proved difficult to build and maintain at scale. The result was that A.I. was unable to meet the enthusiastic expectations of its proponents, to the disappointment of all.

The backlash from the failed expectations became known as The Great A.I. Ice Age to those in the business of making and selling A.I. systems. The Ice Age lasted from 1990 until around 2010, a time in which many of the pioneering companies and products disappeared. During the Ice Age, one could not say the phrase “artificial intelligence” without noticing eyes rolling.

During the Ice Age, small groups continued to diligently work on alternative ways to define the functions of the mind. With a growing body of findings from cognitive science about human thinking, new views of knowledge and logic processing were emerging, but they largely went ignored. One bright spot in this period was the DARPA Strategic Computing Program (SCP) and its offshoots. By the time it shut down around 1992, SCP teams had made the first driverless, autonomous land vehicle and the first real-time cognitive engine known as the Pilot’s Associate [5]. The discoveries in these programs continued to slowly incubate within U.S. and European defense research establishments, albeit with very thin funding.

The Second Wave of A.I. (2010- )

We are now in the Second Wave of A.I. Massive computing, powerful digital sensors and signal processing have made new algorithms possible. Driverless cars are leaving the test track and driving on our highways, and A.I. is once again hotly pursued as a means of boosting human cognitive performance.

Most importantly, A.I. has left the nursery to face the real world. In many ways, it is also leaving the careful hands of the researcher and plunging into the full complexity and raggedness of incomplete and inconsistent information, conflict and uncertain futures. If A.I. is going to earn its stripes, it must be able to deal with the real world, an imperfect environment of people, systems and enterprises.

In the business world, economic value is created by the efforts of people, so the efficiency and competence of people have always been of primary importance. Historically, this has split into two main concerns: labor and decision-making. In the cobbler’s shop of the Middle Ages, labor and decision-making were united in a small group. As industrialization changed the scale of businesses, larger operations began to separate labor from decision-making, and the management role was born.

Industrialization and scale also created physical demands far beyond the strength of laborers. So machines were needed to make possible large structures, faster transportation and efficient production. As machines improved during the 18th and 19th centuries, the labor of people turned more and more toward the service of the machines.

The Information Age is bringing a similar transformation to the decision-making role in enterprises. Computers are already integral to successful management, with even the smallest of businesses reliant upon their digital connection to sales, distribution and financial networks. Just as machines took on responsibilities that were once the sole domain of laborers, computers are now redefining the decision-making role, as people more and more find themselves in the service of the computer.

Given these powerful trends, what should be the role of intelligent machines in the future? Many Second Wave A.I. practitioners dodge this question, and as a result, help fuel Hollywood’s doomsday forecast for A.I., which says intelligent machines are merely biding their time before taking over the world. There are many technical and practical reasons why the doomsday forecast is unlikely, but the question remains. As designers, builders and customers of A.I. systems, we get to choose what the A.I. should be like. We have many options besides total surrender of decision-making authority to intelligent information systems.

One of the powerful ideas that took root during the A.I. Ice Age was that an intelligent computer could act as a trusted companion to a person, augmenting that person’s abilities while preserving his or her authority and responsibility for decisions. If A.I. can create a mind as a program, why not create a social mind that understands its human counterparts, can interact richly with humanity and is competent to support our needs without taking over? This was exactly what the DARPA SCP Pilot’s Associate had created by 1992.

Professor David Woods and his colleagues at Ohio State described this notion of humans and intelligent machines serving as companions for one another as a joint cognitive system [6]. Unlike A.I. driven by a goal of system autonomy, a joint cognitive system is designed and built with the functions of the mind as a program that gives the computer part of the joint system the capabilities it needs for situation and task sharing and for coordination with humans in ways that humans understand. Some members of the Second Wave of A.I. have also embraced this idea under the term “augmented intelligence.” Importantly, there is a sizable body of research demonstrating that joint cognitive systems outperform computer-only and human-only decision systems, particularly in real-world settings full of messy data, ambiguity and conflict.
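
As a purely illustrative sketch of the design difference (the names and the scenario below are hypothetical, not drawn from any of the systems mentioned above), the interaction loop of a joint cognitive system keeps the machine in the role of assessing and proposing, while decision authority stays with the human counterpart.

    # Illustrative-only sketch of a joint cognitive system interaction loop:
    # the machine part assesses the situation and proposes options with
    # inspectable rationales; the human part retains decision authority.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        option: str
        rationale: str  # explanation the human counterpart can inspect and challenge

    def machine_assess(situation: str) -> list[Recommendation]:
        # Stand-in for the computer part of the joint system.
        return [
            Recommendation("reroute", f"severe weather reported: {situation}"),
            Recommendation("hold course", "risk assessed as marginal"),
        ]

    def human_decide(options: list[Recommendation], chosen_index: int) -> Recommendation:
        # Stand-in for the human part: authority never leaves this step.
        return options[chosen_index]

    options = machine_assess("storm cell ahead")
    decision = human_decide(options, chosen_index=0)
    print(f"Machine proposed {len(options)} options; human authorized: {decision.option}")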

The Intelligent Enterprise

The joint cognitive system view is the first step in creating the intelligent enterprise – a large, complex organization, with global distribution of its resources and processes, run by people who make consistently excellent adaptations to rapid changes in the world. These people also share the execution of those rapid adaptations seamlessly across the entire organization, without loss of consistency and coherence.

In the next installment of this series, we will travel further down this path, exploring the intelligent enterprise of the future in greater detail.

References

  1. https://juneoven.com/the-oven
  2. http://www.med.harvard.edu/aanlib/
  3. https://www.nature.com/articles/nrneurol.2010.200
  4. https://www.sciencedirect.com/science/article/pii/S1364661315000236
  5. http://www.dms489.com/PA/PA_index.html
  6. https://www.crcpress.com/Joint-Cognitive-Systems-Patterns-in-Cognitive-Systems-Engineering/Woods-Hollnagel/p/book/9780849339332