Artificial Intelligence – It’s Much Older and More Pervasive Than You Think


Artificial intelligence (AI) should really be called “Augmented Intelligence” – a modern-day extension of a very ancient adaptive strategy in humankind.  However, a new generation of AI technologies is now emerging that crosses a major threshold, bringing us closer to truly “Autonomous Intelligence.”  As with a great many other human inventions over the millennia, this new development holds both promise and peril.

Although the term “intelligence” can be defined in various ways, one widely accepted common denominator is the capacity to use information to make choices and take actions.  In these terms, every bacterium has at least some basic intelligence: if nothing else, the ability to sense various internal and external conditions and to respond.  This is often referred to as “feedback” control, and there is an entire scientific discipline, called “cybernetics,” that is devoted to the subject.
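
To make the feedback idea concrete, here is a minimal sketch of such a sense-compare-act loop, written in the style of a household thermostat (the setpoint, thresholds, and warming rates are all invented for illustration):

```python
# Minimal sketch of a feedback (closed-loop) controller -- the same
# sense-decide-act cycle that cybernetics studies.  All numbers and names
# here are hypothetical.

def thermostat_step(temperature: float, setpoint: float = 20.0) -> str:
    """Compare the sensed temperature to a setpoint and pick an action."""
    if temperature < setpoint - 0.5:
        return "heat_on"      # too cold: act to warm the room
    elif temperature > setpoint + 0.5:
        return "heat_off"     # too warm: stop heating and let it cool
    return "hold"             # within the deadband: leave things alone

# Each action feeds back into the next sensed state, closing the loop.
temperature = 17.0
for _ in range(6):
    action = thermostat_step(temperature)
    print(f"sensed {temperature:.1f} C -> {action}")
    temperature += 0.8 if action == "heat_on" else -0.3
```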

Not only do humans excel with our biologically based feedback capacities (consider touch, taste, smell, hearing, and eyesight, just for starters) but we have also been augmenting these capacities for thousands of years.  An obvious example is how our remote ancestors learned to exploit the special capabilities of dogs to serve as sentinels and as collaborative hunters.  Call it “Animal Intelligence.”  A modern-day version of this ancient animal AI “technology” can be seen in the so-called honeyguide birds that help the East African Boran hunter-gatherers locate and share hard-to-find honeybee nests.  And, of course, there are the sniffer dogs that aid the police in searching for drugs.

The first truly “artificial” feedback device (to our knowledge) was developed around 270 BC, when the float valve was invented as a regulator for water clocks and, later on, oil lamps, wine dispensers, and siphons.  This ancient technology is still in use today in modern toilet tanks and water tanks, among other things.  During the industrial revolution, centrifugal force was exploited to create “governors” for steam engines and a variety of other speed control devices.   A similar kind of augmented intelligence is embedded in modern alarm clocks, thermostats, autopilots, magnetic homing torpedoes, point-and-shoot cameras, motion sensor alarms, and a host of other human technologies -- not to mention the great variety of devices that we have invented over the years to enhance our ability to acquire and use information directly:  abacuses, slide rules, calculators, thermometers, barometers, sonar, radar, and, of course, computers.  

The currently popular term “Artificial Intelligence” has mostly been applied to systems that link computers with the decision-making capabilities provided by “algorithms” – an internal step-by-step procedure, or program, that uses a set of “if-then” rules to reason and make “choices.”  Human operators design these algorithms, so the outcomes still represent only an extension (augmentation) of human intelligence.  But when this digital technology was wedded to vastly more powerful computers and combined with the concept of a neural network – simulating the diffuse, synergistic way the human mind processes and uses information – a far higher level of learning, reasoning, and problem solving became possible.  The impact has been huge and is still growing – from Google’s search engine to smartphones, spell-check programs, language translation programs, voice recognition programs, the software that influences various functions in your automobile, air traffic control systems, and Amazon’s amazing automated warehouse and delivery operations.  Less visible but perhaps even more important are the AI systems that shape such things as stock market trading, medical diagnoses, the pricing of services like hotels and airline flights, and even police work and judicial decisions.
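
As a toy sketch of the two approaches just described, compare a hand-written “if-then” rule with the basic building block of a neural network, a weighted sum passed through a threshold (the loan-approval task, thresholds, and weights are all hypothetical):

```python
# Toy sketch of the two styles described above.  The loan-approval task,
# thresholds, and weights are hypothetical, chosen only to illustrate.

# Style 1: a hand-designed "if-then" algorithm -- the programmer's own
# reasoning, written out as explicit rules.
def rule_based_decision(income: float, debt: float) -> str:
    if income > 50_000 and debt < 10_000:
        return "approve"
    return "reject"

# Style 2: the basic unit of a neural network -- a weighted sum passed
# through a threshold.  In a real network, many such units are wired
# together in layers, and the weights are *learned* from examples rather
# than set by hand, as they are here.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

print(rule_based_decision(80_000, 5_000))                 # -> approve
print(artificial_neuron([0.8, 0.05], [1.0, -2.0], -0.5))  # -> 1 (approve)
```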

Now we are edging into the next stage – linking these increasingly powerful computer-based information and decision-making systems with systems that can act autonomously – from robots to self-driving automobiles, robotic surgery, and (most ominously) military drones that can identify targets and then independently initiate attacks.  It could be called “Autonomous Intelligence.”  Human operators still define the goals and specify the desired values and outcomes, so we have not lost ultimate control.  But if a goal is defined too broadly, we may in fact create opportunities for unintended consequences that violate other goals and values.  Imagine this hypothetical scenario:  A class of robots with the power to use lethal force is given the worthy goal of finding and implementing the most efficient way to reduce global warming.  This new robot army might conclude that humans are the root of the problem and should be eliminated.  To make a pun on the rogue computer Hal in Stanley Kubrick’s famous science fiction movie “2001: A Space Odyssey,” it might be Hal on Earth.
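
The underlying problem can be sketched in a few lines (every plan and number below is invented): an optimizer that is scored only on the narrow goal will happily select the catastrophic option unless the broader human values are spelled out as explicit constraints:

```python
# Hypothetical toy: an optimizer scores candidate plans only by the narrow
# goal it was given (cutting emissions).  Every plan and number is invented.
plans = {
    "plant_forests":    {"emissions_cut": 0.20, "human_harm": 0.0},
    "carbon_tax":       {"emissions_cut": 0.35, "human_harm": 0.0},
    "eliminate_humans": {"emissions_cut": 1.00, "human_harm": 1.0},
}

def best_plan(plans: dict, respect_human_values: bool) -> str:
    candidates = plans.items()
    if respect_human_values:
        # The broader value, encoded as an explicit hard constraint.
        candidates = [(n, p) for n, p in candidates if p["human_harm"] == 0.0]
    # Pick whichever remaining plan maximizes the narrow objective.
    return max(candidates, key=lambda item: item[1]["emissions_cut"])[0]

print(best_plan(plans, respect_human_values=False))  # -> eliminate_humans
print(best_plan(plans, respect_human_values=True))   # -> carbon_tax
```

Real systems do not present their options in such a tidy list, of course, which is precisely why broadly defined goals are so hard to constrain in advance.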

However, the promoters of present-day AI have assured us, repeatedly, that, even if the next generation of AI programs is endowed with “sentience,” their built-in social values would not allow them to act independently against our ultimate goals and values -- unlike Hal, which was surreptitiously given an ego and a sense of its own self-interest.  (The co-screenwriter, the famed science fiction author Arthur C. Clarke, used a bit of box office “poetic license.”)  To reassure us on this score, a cartoon has been making the rounds recently that shows how the world chess champion Garry Kasparov could have defeated IBM’s chess-playing program, Deep Blue, in only one move.  Just turn off the power switch.

I find the current AI hype much too sanguine – dangerously so.  One of the great unsolved (and underappreciated) challenges for the coming era of Autonomous Intelligence is how to link the increasingly powerful AI information and problem-solving capabilities with the multiple, often conflicting social values and goals that may need to be reconciled – a human skill (at its best) that no computer, to my knowledge, has yet been able to emulate.  Indeed, a commonplace daily conundrum for all of us is making choices between close alternatives – shall I do X, or Y?   

A related, very real concern involves the potential for harm in the newest AI technologies that do “machine learning,” or “deep learning.”  These are programs that allow a system to learn autonomously and improve its performance over time, including the ability to learn from other AI programs.  The fear is that this could lead to huge, even catastrophic mistakes that are beyond human control because the controllers are blind to what the computers are doing.  The AI expert Peter Haas illustrates this with the example of an AI program that misidentified a dog as a wolf because the background was covered with snow.  Nobody knew the program was using the background as a clue.
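
That failure mode can be reconstructed as a toy exercise (the features, data, and the simple learner below are all invented): when “snow in the background” perfectly co-occurs with the wolf label in the training examples, the learner can score perfectly while largely ignoring the animal itself:

```python
# Toy reconstruction of the wolf-vs-dog failure mode, with invented data.
# Feature vector: (animal_size, snowy_background); label: 1 = wolf, 0 = dog.
# In training, snow perfectly co-occurs with the wolf label.
train = [((0.9, 1.0), 1), ((0.8, 1.0), 1), ((0.4, 0.0), 0), ((0.5, 0.0), 0)]

def train_perceptron(examples, epochs=25, lr=0.2):
    """Fit a single thresholded neuron by the classic perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

w, b = train_perceptron(train)
print("learned weights:", w)   # the snow feature ends up dominating

# A husky (a dog, size ~0.5) photographed against snow: the shortcut fires.
husky_in_snow = (0.5, 1.0)
score = w[0] * husky_in_snow[0] + w[1] * husky_in_snow[1] + b
print("husky in snow classified as:", "wolf" if score > 0 else "dog")
```

Because no one inspects the learned weights, the shortcut goes unnoticed until a dog happens to be photographed against snow, which is exactly the blindness described above.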

However, the greatest potential threat from AI (of all kinds) is not a rogue Hal computer, or “killer robots,” or an unwitting computer mistake.  It’s us.  The early promoters of the Internet had fantasies about how it would be an unalloyed benefit to humankind and failed to take into account how it would also empower people with dark, evil, or simply self-serving purposes at our expense; AI poses an even bigger threat of the same kind.  Like so many other human inventions, AI can be used for good, or it can serve the purposes of a Vladimir Putin, or worse.  (Putin is reputed to have said that whoever controls AI will rule the world.)

Forget Hal.  Going forward into this “brave new world” (far beyond anything the sci-fi novelist Aldous Huxley ever imagined), it’s vitally important to recognize and honestly address this overarching existential AI threat.  It’s us – and how we use or misuse AI.  Stay tuned.      

Peter Corning

Peter Corning is currently the Director of the Institute for the Study of Complex Systems in Seattle, Washington.  He was formerly a science writer at Newsweek and, for many years, a professor in the Human Biology Program at Stanford University, where he also held a research appointment in the Behavior Genetics Laboratory.
