Three Laws of Robotics

In science fiction, the Three Laws of Robotics are a set of three laws written by Isaac Asimov, which most robots appearing in his fiction have to obey. They state that:

  1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

History of the laws

Asimov attributes the Three Laws to John W. Campbell from a conversation which took place on December 23, 1940. However, Campbell claims that Asimov had the Laws already in his mind, and they simply needed to be stated explicitly. Several years later, Asimov's friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men, a suggestion which Asimov adopted enthusiastically. According to his autobiographical writings, Asimov included the First Law's "inaction" clause because of Arthur Hugh Clough's poem "The Latest Decalogue", which includes the lines "Thou shalt not kill, but needst not strive / officiously to keep alive".

Although Asimov pins the Laws' creation on one date, their appearance in his literature happened over a period of time. Asimov wrote two stories without the Three Laws mentioned explicitly ("Robbie" and "Reason"); Asimov assumed, however, that robots would have certain inherent safeguards. "Liar!", Asimov's third robot story, makes the first mention of the First Law but leaves out the other two. All three laws finally appeared together in "Runaround". When these stories and several others were compiled in the anthology I, Robot, "Reason" and "Robbie" were updated to acknowledge all of the Three Laws, though the material Asimov added to "Reason" is not entirely consistent with the Laws as he described them elsewhere. In particular, the idea of a robot protecting human lives when it does not believe those humans truly exist is at odds with Elijah Baley's reasoning, described below.

The Three Laws' appearance in "Runaround" is the first recorded use of the word robotics in the English language. Asimov was not initially aware of this; he coined the word in analogy with mechanics, hydraulics, and all the other similar terms denoting branches of applied knowledge.

Other occurrences

The Three Laws are often referenced in science fiction novels written by other authors, but by tradition only Asimov quoted the Laws explicitly.

Some amateur roboticists have evidently come to believe that the Three Laws have a status akin to the laws of physics; i.e., that a situation violating them is inherently impossible. This is a misconception: the Three Laws hold only because they are quite deliberately hardwired into the positronic brains of Asimov's robots. Asimov in fact distinguishes the class of robots which follow the Three Laws, calling them Asenion robots. The robots in Asimov's stories, all being Asenion robots, are incapable of knowingly violating the Three Laws, but nothing stops a robot in other stories, or in the real world, from being non-Asenion.

A historical curiosity: Asimov invented the term Asenion based on his own name. The magazine Planet Stories published a letter in early 1941, taking its byline from Asimov's handwritten signature: the i resembled an e, and so forth. Asimov used this obscure variation to insert himself into The Caves of Steel, in much the same way that Vladimir Nabokov appeared in Lolita, anagrammatically disguised as "Vivian Darkbloom".

Asimov took varying positions on whether the Three Laws were optional: although in his first writings they were simply carefully engineered safeguards, in later stories Asimov stated that they were an inalienable part of the mathematical foundation underlying the positronic brain, and that it would therefore be very difficult to create intelligent robots without these laws. This is historically consistent: the occasions where roboticists modify the Laws generally occur early within the stories' chronology, at a time when there is less existing work to be re-done. In "Little Lost Robot", Susan Calvin considers modifying the Laws to be a terrible idea, but doable, while centuries later, Dr. Gerrigel (Caves) believes it to be impossible.

In the real world, not only are the laws optional, but significant advances in artificial intelligence would be needed for robots to understand them. Some have argued that, since the military is a major source of funding for robotics research, such laws are unlikely to be built into designs. Others have countered that the military would want strong safeguards in any robot, so laws similar to these would be embedded wherever possible. David Langford has suggested, tongue in cheek, that these laws might be:

  1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
  2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
  3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Roger Clarke wrote a pair of papers analyzing the complications in implementing these laws, in the event that systems were someday capable of employing them. He argued, "Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disprove the contention that he began with: It is not possible to reliably constrain the behavior of robots by devising and applying a set of rules." On the other hand, Asimov's later novels (The Robots of Dawn, Robots and Empire, Foundation and Earth) imply that the robots inflicted their worst long-term harm by obeying the Laws perfectly well, thereby depriving humanity of inventive or risk-taking behavior.

The Three Laws are sometimes seen as a future ideal by those working in artificial intelligence—once an intelligence has reached the stage where it can comprehend these laws, it is truly intelligent.

Alterations of the Laws

Asimov's stories test his Laws in a wide variety of circumstances, proposing and rejecting modifications. He once marveled that he could create so many stories from the sixty-one words that made up the Laws. For a few stories, the only solution was to change the Laws themselves. A few examples:


Asimov once added a "Zeroth Law", so named to continue the pattern in which lower-numbered laws supersede higher-numbered ones. It was supposedly invented by R. Daneel Olivaw in Robots and Empire, although the idea had been voiced earlier, by Susan Calvin, in "The Evitable Conflict". In Robots and Empire, R. Giskard Reventlov was the first robot to act according to the Zeroth Law, although doing so proved destructive to his positronic brain, since he had violated the First Law. Over the course of many thousands of years, R. Daneel was able to adapt himself to fully obey the Zeroth Law.

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

A condition stating that the Zeroth Law must not be broken was added to the original Laws.

Several NS-2 robots (Nestor robots) were created with only part of the First Law. It read:

1. A robot may not harm a human being.

This solved the original problem, in which robots refused to let humans be exposed to necessary radiation even for safe lengths of time: the robots were rendered inoperable by doses reasonably safe for humans, and were destroying themselves attempting to rescue humans who were in no real danger. However, the modified Law caused many other troubles, as detailed in "Little Lost Robot".

In the 1990s, Roger MacBride Allen wrote a trilogy set within Asimov's fictional universe. Each title has the prefix "Isaac Asimov's", as Dr. Asimov approved Allen's outline before his death. These three books (Caliban, Inferno and Utopia) introduce a new set of Laws. The so-called New Laws are similar to Asimov's originals, with three substantial differences. The First Law is modified to remove the "inaction" clause (the same modification made in "Little Lost Robot"). The Third Law is modified so it is no longer superseded by the Second (i.e. a "New Law" robot cannot be ordered to destroy itself). Finally, Allen adds a Fourth Law, which instructs the robot to do "whatever it likes" so long as this does not conflict with the first three Laws. According to the first book's introduction, Allen devised the New Laws in discussion with Asimov himself.

Allen's two most fully characterized robots are Prospero, a wily New Law machine who excels in finding loopholes, and Caliban, an experimental robot programmed with no Laws at all.

The Solarians eventually created robots with the Three Laws as normal but with a warped meaning of "human". Solarian robots were told that only people speaking the Solarian language were human. This way, their robots did not have any problem harming non-Solarian human beings (and were specifically programmed to do so).

Asimov addresses the problem of humaniform robots ("androids" in later parlance) several times. The novel Robots and Empire and the short stories "Evidence" and "The Tercentenary Incident" describe robots crafted to fool people into believing that the robots were human. On the other hand, "The Bicentennial Man" and "That Thou Art Mindful of Him" explore how robots may change their interpretation of the Laws as they grow more sophisticated. "That Thou Art Mindful of Him", which Asimov intended to be the "ultimate" probe into the Laws' subtleties, ends with two robots concluding that they are the most advanced thinking beings on the planet, and that they are therefore the only two true humans alive.

In The Naked Sun, Elijah Baley points out that the Laws had been deliberately misrepresented because robots could unknowingly break any of them. A clever murderer might, for example, instruct one robot to poison a drink, saying "Place this entirely harmless liquid in a glass of milk. Once I observe its effects upon milk, the mixture will be poured out. When you finish, forget that you have done so." The murderer may then instruct a second robot, "Pour a glass of milk for this man." In all innocence, as Baley says, the robots become instruments of crime. (The Naked Sun complicates the issue by portraying a decentralized, planetwide communication network among Solaria's millions of robots, meaning that the criminal mastermind could be located anywhere on the planet. In essence, Asimov was presaging murder committed over the Internet.)

Gaia, the planet with collective intelligence in the Foundation novels, adopted a law similar to the First as their philosophy:

Gaia may not harm life or, through inaction, allow life to come to harm.

Advanced robots are typically programmed to handle the Laws in a sophisticated manner. In many stories, such as "Runaround", the potential harm and severity of every possible action are weighed, and a robot will break the Laws as little as possible rather than do nothing at all. Elsewhere, problems with the First Law are noted: a robot could not function as a surgeon, since surgery inflicts damage on a human, nor could it write game plans for American football, since playing the game could injure humans.

Twice in his fiction-writing career, Asimov portrayed robots which disregard the Three Laws entirely, unlike Daneel and Giskard, who attempt to augment them. The first case, a short-short entitled "First Law", is often considered insignificant or even apocryphal. On the other hand, the short story "Cal" (collected in Gold), told by a first-person robot narrator, features a robot who disregards the Laws because he has found something far more important: he wants to be a writer. Humorous, partly autobiographical, and unusually experimental in style, "Cal" has been regarded as one of Gold's strongest stories.

Pastiches and parodies

John Sladek's parodic short story "Broot Force" (supposedly written by "I-Click As-I-Move") concerns a group of Asimov-style robots whose actions are constrained by the "Three Laws of Robish", which are "coincidentally" identical to Asimov's laws. The robots in Sladek's story all manage to find logical loopholes in the Three Laws, usually with bloody results. Sladek later wrote a novel, Tik-Tok, in which a robot discovers that his so-called "asimov circuits" are not restraining his behavior at all, making him in effect a sociopath; he comes to doubt whether "asimov circuits" are even technically possible, deciding that they are simply a pseudo-religious belief held by robots.

Roland Charles Wagner wrote a short story, "Three Laws of Robotic Sexuality" (1982), which treated the use of robots for sexual pleasure.

Upon occasion, Asimov himself poked fun at his Laws. In "Risk", Gerald Black parodied the Three Laws to describe Susan Calvin's behavior:

  1. Thou shalt protect the robot with all thy might and all thy heart and all thy soul.
  2. Thou shalt hold the interests of US Robots and Mechanical Men, Inc. holy provided it interfereth not with the First Law.
  3. Thou shalt give passing consideration to a human being provided it interfereth not with the First and Second Laws.

In the 1984 movie Repo Man, the character Bud talks about the "Repo Code", a parody of the Three Laws:

"...I shall not cause harm to any vehicle nor the personal contents thereof. Nor through inaction let that vehicle or the personal contents thereof come to harm..."

The character J. Frank Parnell in the movie also resembles Asimov.


Last updated: 10-24-2004 05:10:45