Published on INFORMS Analytics Magazine (Joseph Byrum)
Author’s Note: This blog series, Understanding Smart Technology – And Ourselves, examines our relationship with advancing technologies and the fundamental choices we face. As we stand at the threshold of an uncertain future shaped by artificial intelligence, the author challenges readers to consider whether we should embrace these transformative changes or resist them in defense of our humanity. Drawing from historical patterns of technological adoption and resistance, the series promises to deliver nuanced perspectives on our technological trajectory, beginning with a comprehensive overview of our current understanding of smart technology and its implications for society. Read Part 8, where the author discusses the economic impacts of smart technology.
Thoughtful people recognize the need for ethical guidelines in the use of smart automation. Some of these ethical-use initiatives are global in scope, some national and some industry-specific. Typical guidelines seek to prohibit artificial intelligence (AI) or robotic systems from harming humans and require transparency and accountability in their operation. But this is a fast-evolving field: no matter how good the guidelines are today, we know we will need to revisit them tomorrow. These emerging norms will focus on a few specific areas.
Human-AI Relations
We must specify the degree of human control that should be maintained over artificial intelligence and autonomous systems. In other words, how independent will we allow our creations to become? It is possible that at some point in the distant future the general intelligence of machines could surpass that of humans. What then? What will life be like if computers can read and understand information as we do?
This possibility raises difficult questions. If we entertain the idea of fully independent systems, we will need to create the legal space for them to exist. This is where legal personhood for some autonomous systems comes in. Can we then be sued by our own creations? Will we confer rights on these creations of ours?
In addition, certain types of AI pose unique trust challenges because, unlike previous technologies, there is an opaqueness inherent in their algorithms. Neural networks teach themselves to recognize patterns by processing thousands of examples. As part of that learning process, a neural network continually reconfigures its internal connections in ways that humans cannot readily interpret.
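To make that opacity concrete, here is a minimal sketch, using scikit-learn on synthetic data purely for illustration (not any production system), that trains a small network and then prints the weight matrices it learned. Those numbers are all there is to the model’s “reasoning,” and they do not map onto rules a person can read.

```python
# Illustrative sketch only: train a tiny neural network on synthetic data,
# then look at what it "learned." The point is that the learned parameters
# are just matrices of numbers, not human-readable rules.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# A synthetic pattern-recognition task: 1,000 examples, 20 features each.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] * X[:, 3] - X[:, 7] > 0).astype(int)  # an arbitrary hidden rule

# The network reconfigures its internal weights as it learns.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)
print("training accuracy:", net.score(X, y))

# Every decision the trained network makes is driven by these weight
# matrices: hundreds of floating-point numbers with no individually
# meaningful interpretation.
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
print(net.coefs_[0][:2, :5])  # a glimpse: just numbers, not rules
```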
Let’s take a hypothetical. Imagine an AI credit-rating algorithm refused you a new mortgage. That would be quite a departure from the current system in which such decisions are based on easily understood factors like FICO scores. It may not be possible for you to ever know why an AI algorithm rejected your request, and the mortgage provider might not know either. This problem becomes acute when otherwise well-behaved AI malfunctions. What if your self-driving car crashes into a truck because that truck was painted silver and looked like open space to it? Investigators may never know why that happened. Unlike ordinary software, neural networks do not lend themselves to forensic investigations in which you can isolate the bug to a particular subroutine or specific line of code.
Nvidia has already demonstrated a vehicle that taught itself how to drive by reviewing data on how humans drive, for example, how they move the steering wheel under a wide range of road conditions [1]. Once trained, the system’s neural network absorbs vehicle sensor information, processes it and issues commands to the steering wheel, brakes and engine. Even the system’s designers are unable to explain how the neural network makes these decisions. Deep learning has proved to be very powerful at solving specific problems like image identification, voice recognition and language translation. There is hope that this technique could help diagnose deadly diseases, but it is important to make these systems transparent to their creators and accountable to their users. Otherwise, we will not be able to predict or learn from failures.
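Nvidia has not published its system in this form; what follows is only a schematic sketch, in PyTorch, of the general end-to-end pattern described above: camera frames go in, a steering command comes out, and the mapping in between is learned from examples of human driving rather than written as explicit rules. The network shape, input size and training data here are invented for illustration.

```python
# Schematic sketch of end-to-end "behavioral cloning" for steering.
# This is not Nvidia's actual network; it only illustrates the pattern
# described above: sensor data in, control command out, learned from
# recordings of human driving.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers extract visual features from camera frames.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
        )
        # Fully connected layers map those features to one number:
        # the predicted steering angle.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(48, 100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, frames):
        return self.head(self.features(frames))

# Training pairs each recorded camera frame with the steering angle the
# human driver chose at that moment (random dummy data stands in here).
model = SteeringNet()
frames = torch.randn(8, 3, 66, 200)   # a batch of camera frames
human_angles = torch.randn(8, 1)      # what the human driver did
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(5):  # a few illustrative training steps
    predicted = model(frames)
    loss = nn.functional.mse_loss(predicted, human_angles)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```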
AI and Society
Google’s AI research division, Google Brain, launched a new initiative called PAIR, which stands for “People + AI Research” [2]. Its purpose is to conduct fundamental research on making future AI systems people-centric. It is part of a philosophy the Google community calls “human-centered machine learning,” in which machine learning algorithms are used to solve problems with human needs and behaviors in mind [3]. It starts with the pragmatic principle that machine learning may not be the most suitable way to solve every problem – there are often simpler and better ways.
The initiative also recommends interacting with real users while systems are still in the prototype stage, so that designers can understand the mental models people form when they interact with the machine. Designers must also build such systems with failure in mind: algorithms will inevitably miscategorize input data a certain percentage of the time, for example, misidentifying one out of four cat pictures as a dog picture. System designers need to anticipate such errors and consider the consequences.
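One concrete way to build with failure in mind, sketched below purely as an illustration (the 0.9 threshold and the human-review fallback are hypothetical design choices, not anything PAIR prescribes), is to act automatically only on confident predictions and route uncertain ones to a person.

```python
# Illustrative sketch: anticipate misclassification rather than assuming
# the model is always right. The threshold and fallback are arbitrary
# choices made for this example.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str          # e.g. "cat" or "dog"
    confidence: float   # the model's estimated probability for that label

def handle(prediction: Prediction, threshold: float = 0.9) -> str:
    """Act automatically only when the model is confident; otherwise defer
    to a person, because some fraction of inputs will always be
    miscategorized."""
    if prediction.confidence >= threshold:
        return f"auto-accepted as {prediction.label}"
    return "sent to human review (low confidence)"

# A cat photo the model has mistaken for a dog, but with low confidence:
print(handle(Prediction(label="dog", confidence=0.55)))  # -> human review
print(handle(Prediction(label="cat", confidence=0.97)))  # -> auto-accepted
```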
Such guidelines will help direct efforts as AI systems grow increasingly powerful, and that growing power will have a huge impact on society. Automation is making a lot of boring and unpleasant human tasks redundant, but we cannot categorically say that even this is a good thing. Humans can take pleasure and derive a sense of purpose from seemingly unnecessary physical labor like chopping wood or polishing brass. At the same time, we need the mental stimulation of complex thinking, lest our brains atrophy.
For years to come, a major question will be how to divide labor between humans and machines. Of potentially greater concern: who will share in the benefits of automation, and how can we help dislocated workers make career transitions?
As we approach the third decade of the 21st century, is there some new world view we need to discover and adopt? Stay tuned.
References
1. https://blogs.nvidia.com/blog/2016/05/06/self-driving-cars-3/
2. https://ai.google/pair/
3. https://www.fastcodesign.com/90132700/googles-rules-for-designing-ai-that-isnt-evil

Joseph Byrum is an accomplished executive leader, innovator, and cross-domain strategist with a proven track record of success across multiple industries. With a diverse background spanning biotech, finance, and data science, he has earned over 50 patents that have collectively generated more than $1 billion in revenue. Dr. Byrum’s groundbreaking contributions have been recognized with prestigious honors, including the INFORMS Franz Edelman Prize and the ANA Genius Award. His vision of the “intelligent enterprise” blends his scientific expertise with business acumen to help Fortune 500 companies transform their operations through his signature approach: “Unlearn, Transform, Reinvent.” Dr. Byrum earned a PhD in genetics from Iowa State University and an MBA from the Stephen M. Ross School of Business, University of Michigan.