5
Artificial Intelligence
AI, or artificial intelligence, is a common term in the Star Trek universe. Yet it's rarely explained or even documented. In many ways it seems as much technobabble as “dilithium crystals.” However, if we take a closer look at the computers of Trek, we can deduce quite a bit about their AI abilities from the way they act.
Landru is a massive computer that has ruled Beta III for hundreds of years (“Return of the Archons,” TOS). Landru acts to protect and preserve the culture of the world. It is self-aware and destroys what it considers threats to society, including busybody space travelers. In fact, it is so protective that it has insulated the planet from all outside influences or change for centuries, reducing its human population to childlike servitude.
Landru is an artificially intelligent machine. It thinks and analyzes information, but only in a very basic way. It views the world in terms of yes and no, true or false, black or white. There is no “maybe” or adaptability in its programs. The complex idea of harm has been narrowed down to the simple, linear concept of physical harm—and the opposite idea, good, has been equated with physical safety. Landru is another anachronism blown up to gigantic speed and power, although in this case the parody is clearly intentional. It is a creation of the 1960s, when artificial intelligence was viewed primarily as the reduction of all thought processes to a series of if/then questions. This reasoning style was inadequate to deal with ambiguity or conflicting values.
Is AI the strict logic of Landru, or something entirely different?
By definition, artificial intelligence has to do with the ability of computers to think independently. Of course, the concept revolves around the basic question of how we define intelligence. Machine intelligence has always been a compromise between what we understood of our own thought processes and what we could program a machine to do.
Norbert Wiener, one of the greatest scientists of the twentieth century, was among the first to note the similarities between human thought and machine operation in the science of cybernetics that he helped found. Cybernetics is named after the Greek word for helmsman. Typically, a helmsman steers his ship in a fixed direction: toward a star or a point on land, or along a given compass heading. Whenever waves or wind throw the ship off this heading, the helmsman brings it back on course. This process, in which deviations result in corrections back to a set point, is called negative feedback. (The opposite, positive feedback, occurs when deviations from a set point result in further deviations. An arms race is the classic example.) The most famous example of negative feedback is a thermostat. It measures a room's temperature, then turns the heat on or off to keep the room at a desired temperature. Wiener theorized that all intelligent behavior could be traced to feedback mechanisms. Since feedback processes could be expressed as algorithms, this meant that theoretically, intelligence could be built into a machine.
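Wiener's thermostat is easy to express as an algorithm. Here is a minimal Python sketch of that negative-feedback loop; the temperatures, the dead band, and the toy room model are our own illustrative choices:

```python
# A minimal negative-feedback loop: a thermostat (all values illustrative).
def thermostat_step(room_temp, set_point, heater_on):
    """Correct deviations from set_point by switching the heater."""
    if room_temp < set_point - 0.5:   # too cold: deviation below the set point
        return True                   # correction: turn the heat on
    if room_temp > set_point + 0.5:   # too warm: deviation above the set point
        return False                  # correction: turn the heat off
    return heater_on                  # within the dead band: leave it alone

# Toy simulation: the room warms 0.8 degrees per step when heated, cools 0.3 otherwise.
temp, heater = 15.0, False
for step in range(20):
    heater = thermostat_step(temp, set_point=20.0, heater_on=heater)
    temp += 0.8 if heater else -0.3
    print(f"step {step:2d}: temp={temp:.1f}, heater={'on' if heater else 'off'}")
```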
This simple way of looking at human logic and applying it to machines provided a foundation for computer-science theory. Early artificial intelligence attempted to reduce our thought processes to purely logical steps and then encode the steps for use by a computer.
As noted in Chapter 1, a computer functions at its lowest level by switching between two states: binary one for TRUE, and zero for FALSE. Its logic circuits operate on combinations of these ones and zeros. This carried some inherent limitations: It meant that computers could calculate only through long chains of yes-no, true-false statements of the form “if A is true, go to step B; if A is false, go to step C.” Statements had to be entirely true or entirely false. A statement that was 60 percent true was vastly more difficult to deal with. (When Lotfi Zadeh began introducing partially true statements into computer science beginning in the 1960s—for example, “The sky is cloudy”—many logicians argued that this was not an allowable subject. The field of logic that deals with partially true statements is called fuzzy logic.) Ambiguity, error, and partial information were much more difficult to handle. Computers, whose original function, after all, was to compute, were much better equipped to deal with the clean, well-lighted world of mathematical calculation than with the much messier real world. It took some years before computer scientists grasped just how wide the chasm was between these worlds. Moreover, binary logic was best suited to manipulating symbols, which could always be represented as strings of ones and zeros. Geometric and spatial problems were much more difficult. And cases where a symbol could have more than one meaning provoked frequent errors.
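To see the difference, here is a minimal Python sketch contrasting binary truth with a fuzzy degree of truth; the membership function is invented for illustration:

```python
# Binary logic forces "the sky is cloudy" to be entirely TRUE or FALSE.
# Fuzzy logic assigns it a degree of truth between 0.0 and 1.0.
def cloudiness(cloud_cover_percent):
    """Illustrative membership function for 'the sky is cloudy'."""
    return max(0.0, min(1.0, cloud_cover_percent / 100.0))

for cover in (0, 40, 60, 100):
    binary = cover > 50                # crisp: one threshold, all or nothing
    fuzzy = cloudiness(cover)          # fuzzy: a matter of degree
    print(f"{cover:3d}% cover -> binary: {binary}, fuzzy: {fuzzy:.2f}")
```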
This older school of AI is what we call the top-down approach—the heuristic IF-THEN method of applying intelligence to computers. Very methodical, very Spocklike, very much like the Emergency Medical Hologram on Voyager, and corresponding to the way computers think on the original series.
A breakthrough decade for top-down AI was the 1950s. Herbert Simon, who later won a Nobel Prize for economics, and Allen Newell, a physicist and mathematician, designed a top-down program called Logic Theorist. Although the program's outward goal was to produce proofs of logic theorems, its real purpose was to help the researchers figure out how people reach conclusions by making correct guesses.
Logic Theorist was a top-down method because it used decision trees, making its way down various branches until arriving at either a correct or an incorrect solution.
A decision tree is a simple and very common software model. Suppose your monitor isn't displaying anything—that is, your computer screen seems to be dead. Figure 5.1 is a tiny decision tree that might help deduce the cause of the problem.
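In code, such a tree is just nested if-then questions. Here is a minimal Python sketch in the spirit of Figure 5.1; the questions and their order are our own, not the figure's exact logic:

```python
# A tiny top-down diagnostic in the spirit of Figure 5.1. The questions and
# their order are illustrative; a real tree would run many pages.
def diagnose_monitor():
    if input("Is the monitor's power light on? (y/n) ") != "y":
        return "Check the power cable and the wall outlet."
    if input("Is the video cable firmly attached? (y/n) ") != "y":
        return "Reseat the video cable."
    if input("Does the computer beep or boot normally? (y/n) ") != "y":
        return "Suspect the computer, not the monitor."
    return "Try the monitor on another computer to isolate the fault."

print(diagnose_monitor())
```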
Using this approach, Logic Theorist created an original proof of a mathematical theorem, and Simon and Newell were so impressed that they tried to list the program as coauthor of a technical paper. Sadly, the AI didn't land its publishing credit. The journal in question rejected the manuscript.
In “The Changeling” (TOS), a top-down computer traveling through space, Nomad, beams onto the Enterprise. It scans a drawing of the solar system and instantly knows that Kirk and his crew are from Earth. An insane robot with artificial intelligence, Nomad mistakenly thinks that Kirk is “The Creator,” its God. According to Spock, a brilliant scientist named Jackson Roykirk created Nomad, hoping to build a “perfect thinking machine, capable of independent logic.” But somehow Nomad's programming changed, and the machine is destroying what it perceives to be imperfect lifeforms. Spock eventually concludes that “Nomad almost renders as a life-form. Its reaction to emotion [like anger] is unpredictable.”
In 1956, Dartmouth College in New Hampshire hosted a conference that launched AI research. It was organized by John McCarthy, who coined the term “artificial intelligence.” In addition to McCarthy, Simon, Newell, and Logic Theorist (we must list the first recognized AI program as a conference participant), the attendees included Marvin Minsky, who in 1951 with Dean Edmonds had built a neural-networking machine from vacuum tubes and B-24 bomber parts. Their machine was called Snarc.
FIGURE 5.1 Decision Tree. A very simple decision tree that helps determine why your monitor isn't displaying anything. The real logic for the tree would be far more complex. Decision trees for expert systems—diagnostics and problem solving—are often ten or twenty pages long. One of the authors of this book wrote hundreds of pages of computer diagnostic decision trees in the 1980s. The real decision tree to diagnose a monitor malfunction was perhaps five pages long.
As far back as this 1956 conference, artificial intelligence had two definitions. One was top-down: make decisions in a yes-no, if-then, true-false manner—deduce what's wrong by elimination. The other was quite different, later to be called bottom-up: in addition to yes-no, if-then, true-false thinking, AI should also use induction and many of the subtle nuances of human thought.
The main problem with the top-down approach is that it requires an enormous database to store all the possible yes-no, if-then, true-false facts a computer would have to consider during deduction. Searching that database and arriving at conclusions would take an extremely long time; the machine would have to make its way through mazes upon mazes of logic circuits. This is not at all the way humans think. An astonishing number of thoughts blaze through the human brain all at the same time. In computer lingo, our brains are massively parallel processors.
What top-down AI brings to the table are symbolic methods of representing some of our thought processes in machines. Put more simply, top-down AI codes known human behaviors and thought patterns into computer symbols and instructions.
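A minimal sketch of that idea: human know-how hand-coded as IF-THEN production rules, with a simple forward-chaining loop to fire them. The rules and facts here are invented for illustration:

```python
# Top-down symbolic AI in miniature: behavior hand-coded as IF-THEN rules.
# Facts and rules are invented for illustration.
rules = [
    ({"crew_member_upset"}, "offer_comfort"),
    ({"offer_comfort", "is_thirsty"}, "bring_drink"),
    ({"offer_comfort", "likes_music"}, "play_soothing_music"),
]
facts = {"crew_member_upset", "is_thirsty"}

# Forward chaining: fire every rule whose conditions are all known facts,
# and keep going until nothing new can be concluded.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # the machine only "knows" what was explicitly coded in
```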
Perhaps the greatest boost to the top-down philosophy was the 1997 defeat of world chess champion Garry Kasparov by the IBM supercomputer Deep Blue. Though not artificially intelligent, Deep Blue used a sophisticated IF-THEN program in a convincing display of machine over man.
Chess, however, is a game with a rigid set of rules. Players have no hidden moves or resources, and every piece is either on a square or not, taken or not, moveable in well-defined ways or not. There are no rules governing every situation in the real world, and we almost never have complete information. Humans use common sense, intuition, humor, and a wide range of emotions to arrive at conclusions. Love, passion, greed, anger: how do you code these into if-then statements?
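Deep Blue's strength came from searching the tree of legal moves. Here is a minimal minimax sketch on a far simpler game, Nim (take one to three stones; whoever takes the last stone wins); the game and the scoring are our stand-ins, not Deep Blue's actual program:

```python
# Minimax search on Nim: players alternately take 1-3 stones; whoever takes
# the last stone wins. A toy stand-in for chess-style game-tree search.
def minimax(stones, maximizing):
    if stones == 0:
        # No stones left: the player who just moved took the last one and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the take that leads to the best guaranteed outcome."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, maximizing=False))

print(best_move(7))  # prints 3: leaving 4 stones is a guaranteed loss for the opponent
```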
A great example of top-down thinking is Data's inability to understand jokes and other nuances of human emotion. It takes Data six years to comprehend one of Geordi's jokes. When O'Brien is upset, Data asks if he wants a drink, a pillow, or some nice music. Data goes through a long list of “comfort” options, none of which makes sense to O'Brien. This is why the top-down approach is inadequate: we can't program all possibilities into a computer.
From the very beginning of AI research, there were scientists who questioned the top-down approach. Rather than trying to endow the computer with explicit rules for every conceivable situation, these researchers felt it was more logical to work AI in the other direction—to take a bottom-up approach. That is, figure out how to give a computer a foundation of intrinsic capabilities, then let it learn as a child would, on its own, groping its way through the world, making its own connections and conclusions. After all, the human brain is pretty small and doesn't weigh much, and is not endowed at birth with a massive database archiving every situation it will face.
Top-down AI uses inflexible rules and massive databases to draw conclusions, to “think.” Bottom-up AI learns from what it does, devises its own rules, creates its own data and conclusions—it adapts and grows in knowledge based on the network environment in which it lives.
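A classic bottom-up example is the perceptron, which learns a rule from examples instead of being handed one. Here is a minimal sketch; the training data and learning rate are our illustrative choices:

```python
# Bottom-up in miniature: a perceptron learns the AND function from examples
# rather than from hand-coded rules. Data and learning rate are illustrative.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(25):                          # a few passes over the examples
    for (x1, x2), target in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output              # feedback: how wrong were we?
        weights[0] += rate * error * x1      # nudge the weights toward the answer
        weights[1] += rate * error * x2
        bias += rate * error

for (x1, x2), _ in examples:
    out = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print(f"{x1} AND {x2} -> {out}")
```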
Rodney Brooks, a computer scientist at MIT, is one of bottom-up AI's strongest advocates. He believes that AI requires an intellectual springboard similar to animal evolution; that is, an artificially intelligent creature must first learn to survive and prosper in its environment before it can tackle such things as reasoning, intuition, and common sense. It took billions of years for microbes to evolve into vertebrates. It took hundreds of millions of years to move from early vertebrates to modern birds and mammals. It took only a few hundred thousand years for humans to evolve to their present condition. So the argument goes: The foundation takes forever, yet human reasoning and abstract thought take a flash of time.¹
Therefore, current research emphasizes “survival” skills such as robotic mobility and vision. Robots must have visual sensors and rudimentary intelligence to avoid obstacles and to lift and sort objects.
How are the two approaches different? Captain Kirk, searching desperately for clues to a murder, instructs the ship's computer to identify similar crimes taking place on other planets over the course of the past several hundred years. Meanwhile, Jack the Ripper's essence invades the ship's computer and takes control. Spock issues a “class A compulsory directive” to the computer, instructing it to “compute to the last digit, the value of pi.” The computer churns and grinds, doing nothing but calculating the endless digits of pi (“Wolf in the Fold,” TOS). Both actions, searching a huge database for a limited set of attributes and devoting the machine's entire processing capability to calculating a linear sequence of digits, mark this as a top-down machine.
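Spock's directive is easy to parody in code: a program grinding out digits of pi simply never finishes. Here is a sketch using the slow Leibniz series (we cut it off after a few million terms; a true “to the last digit” directive would loop forever):

```python
# The ship's computer, in code: pi has no last digit, so the job never ends.
# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... (very slow to converge).
pi_over_4, sign = 0.0, 1.0
for k in range(5_000_000):        # a real "last digit" directive would loop forever
    pi_over_4 += sign / (2 * k + 1)
    sign = -sign
    if (k + 1) % 1_000_000 == 0:
        print(f"after {k + 1:,} terms: pi ~ {4 * pi_over_4:.7f}")
```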
Some years later, the Enterprise-D is caught in an asteroid field by a booby-trapped derelict spaceship. Any use of the Enterprise's engines is dangerous. Geordi has the computer call up a simulation of Dr. Leah Brahms, who designed the starship's propulsion unit. Within a short time, Geordi and Leah are working together to solve the problem that threatens the crew's existence (“Booby Trap,” TNG). The Leah simulation actually reasons and reaches conclusions about a novel situation, much as a human would do. The simulation is so human-like that Geordi grows quite attached to it, causing himself considerable embarrassment when the real Leah Brahms shows up a few months later.