Three Laws of Robotics

From Wikipedia, the free encyclopedia



The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by the science fiction author Isaac Asimov and later expanded upon. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
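The ordering of the Laws is strict: each law yields to the ones before it. Purely as an illustration (Asimov never formalized the Laws as code, and every name below is hypothetical), this precedence can be sketched as a lexicographic comparison, in which any action that satisfies a higher law beats any action that violates it:

  from dataclasses import dataclass

  @dataclass
  class Action:
      harms_human: bool       # would this action injure a human being?
      allows_harm: bool       # would inaction here let a human come to harm?
      obeys_orders: bool      # does this action follow standing human orders?
      self_destructive: bool  # would this action destroy the robot?

  def violations(a: Action) -> tuple:
      """Score an action against the Three Laws, highest-priority law first."""
      first  = a.harms_human or a.allows_harm  # First Law breach
      second = not a.obeys_orders              # Second Law breach
      third  = a.self_destructive              # Third Law breach
      return (first, second, third)

  def choose_action(candidates: list) -> Action:
      # Tuples compare lexicographically and False < True, so an action safe
      # under a higher law always beats one that breaches it; ties cascade down.
      return min(candidates, key=violations)

Under this toy model a robot ordered to destroy itself obeys: the obedient, self-destructive action scores (False, False, True), which beats the disobedient, self-preserving (False, True, False), exactly the Second-over-Third precedence the Laws prescribe.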

These laws form an organizing principle and unifying theme for Asimov's robot-based fiction, appearing in his Robot series, the stories linked to it, and his Lucky Starr series of young-adult fiction. The Laws are incorporated into almost all of the positronic robots appearing in his fiction and cannot be bypassed, being intended as a safety feature. Many of Asimov's robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself. Other authors working in Asimov's fictional universe have adopted the Laws, and references to them, often parodic, appear throughout science fiction as well as in other genres.



This cover of I, Robot illustrates the story "Runaround", the first to list all Three Laws of Robotics.




The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. He also added a fourth law, the Zeroth Law, to precede the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The Three Laws, and the zeroth, have pervaded science fiction and are referred to in many books, films, and other media.
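In terms of the illustrative ordering sketched earlier, the Zeroth Law simply prepends a new highest-priority component (again hypothetical; the harms_humanity flag is not part of the earlier Action type):

  def violations_with_zeroth(a) -> tuple:
      # The Zeroth Law outranks everything: harm to humanity vetoes first.
      return (a.harms_humanity,) + violations(a)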



Contents




  • 1 History

  • 2 Alterations

    • 2.1 By Asimov

      • 2.1.1 First Law modified

      • 2.1.2 Zeroth Law added

      • 2.1.3 Removal of the Three Laws

    • 2.2 By other authors

      • 2.2.1 Robert Bloch's Response

      • 2.2.2 Roger MacBride Allen's trilogy

      • 2.2.3 Foundation sequel trilogy

      • 2.2.4 Robot Mystery series

      • 2.2.5 Additional laws

  • 3 Ambiguities and loopholes

    • 3.1 Unknowing breach of the laws

    • 3.2 Ambiguities resulting from lack of definition

      • 3.2.1 Definition of "human being"

      • 3.2.2 Definition of "robot"

    • 3.3 Resolving conflicts among the laws

  • 4 Other occurrences in media

    • 4.1 The Three Laws in film

  • 5 Applications to future technology

  • 6 See also

  • 7 References

    • 7.1 Bibliography

    • 7.2 Notes

  • 8 External links

History



A typical robot before Asimov's Laws, seen in a Superman cartoon. Asimov's First Law would prohibit this robot from attacking humans.

Before Asimov began writing, the majority of artificial intelligences in fiction followed the Frankenstein pattern, which Asimov found unbearably tedious. He explained in 1964:

... one of the stock plots of science fiction was ... robots were created and destroyed by their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings? With all this in mind I began, in 1940, to write robot stories of my own – but robot stories of a new variety. Never, never, was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust.[1]

This was not an inviolable rule. In December 1938 Lester del Rey published "Helen O'Loy", the story of a robot that is so much like a person that she falls in love with her creator and becomes his ideal wife. The next month Earl and Otto Binder published a short story, "I, Robot", featuring a sympathetic robot named Adam Link who was misunderstood and motivated by love and honor. This was the first of a series of ten stories; the next year "Adam Link's Vengeance" (1940) featured Adam thinking "A robot must never kill a human, of his own free will."[2]

On 7 May 1939 Asimov attended a meeting of the Queens Science Fiction Society, where he met Binder, whose story he had admired. Three days later Asimov began writing "my own story of a sympathetic and noble robot", his 14th story.[3] Thirteen days later he took "Robbie" to John W. Campbell, the editor of Astounding Science-Fiction. Campbell rejected it, claiming that it bore too strong a resemblance to del Rey's "Helen O'Loy".[4] Frederik Pohl, editor of Astonishing Stories magazine, published "Robbie" in that periodical the following year.[5]

Asimov attributes the Three Laws to John W. Campbell, from a conversation that took place on 23 December 1940. Campbell claimed that Asimov had the Three Laws already in his mind and that they simply needed to be stated explicitly. Several years later Asimov's friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men, a suggestion that Asimov adopted enthusiastically.[6] According to his autobiographical writings, Asimov included the First Law's "inaction" clause because of Arthur Hugh Clough's poem "The Latest Decalogue", which includes the satirical lines "Thou shalt not kill, but needst not strive / officiously to keep alive".[7]

Although Asimov pins the creation of the Three Laws to one particular date, their appearance in his literature happened over a period of time. He wrote two robot stories with no explicit mention of the Laws, "Robbie" and "Reason". He assumed, however, that robots would have certain inherent safeguards. "Liar!", his third robot story, makes the first mention of the First Law but not the other two. All three laws finally appeared together in "Runaround". When these stories and several others were compiled in the anthology I, Robot, "Reason" and "Robbie" were updated to acknowledge all Three Laws, though the material Asimov added to "Reason" is not entirely consistent with the Three Laws as he described them elsewhere.[8] In particular, the idea of a robot protecting human lives when it does not believe those humans truly exist is at odds with Elijah Baley's reasoning, as described below.

During the 1950s Asimov wrote a series of science fiction novels expressly intended for young-adult audiences. Originally his publisher expected that the novels could be adapted into a long-running television series, much as The Lone Ranger had been for radio. Fearing that his stories would be adapted into the "uniformly awful" programming he saw flooding the television channels,[9] Asimov decided to publish the Lucky Starr books under the pseudonym "Paul French". When plans for the television series fell through, Asimov decided to abandon the pretence; he brought the Three Laws into Lucky Starr and the Moons of Jupiter, noting that this "was a dead giveaway to Paul French's identity for even the most casual reader".[10]

In his short story "Evidence" Asimov lets his recurring character Dr. Susan Calvin expound a moral basis behind the Three Laws. Calvin points out that human beings are typically expected to refrain from harming other human beings (except in times of extreme duress like war, or to save a greater number), which is equivalent to a robot's First Law. Likewise, according to Calvin, society expects individuals to obey instructions from recognized authorities such as doctors and teachers, which parallels the Second Law of Robotics. Finally, humans are typically expected to avoid harming themselves, which is the Third Law for a robot.

The plot of "Evidence" revolves around the question of telling a human being apart from a robot constructed to appear human – Calvin reasons that if such an individual obeys the Three Laws he may be a robot or simply "a very good man". Another character then asks Calvin if robots are very different from human beings after all. She replies, "Worlds different. Robots are essentially decent."



In a later essay Asimov points out that analogues of the Laws are implicit in the design of almost all tools:

  1. A tool must not be unsafe to use. Hammers have handles, screwdrivers have hilts.

  2. A tool must perform its function efficiently unless this would harm the user.

  3. A tool must remain intact during its use unless its destruction is required for its use or for safety.[11]

In The Robots of Dawn, the third in the Robot series, Dr. Han Fastolfe states that the planet Aurora was an attempt to create an entire planet which obeys the Laws of Robotics.

Alterations

By Asimov

Asimov's stories test his Three Laws in a wide variety of circumstances, leading to proposals and rejections of modifications. Science fiction scholar James Gunn wrote in 1982, "The Asimov robot stories as a whole may respond best to an analysis on this basis: the ambiguity in the Three Laws and the ways in which Asimov played twenty-nine variations upon a theme".[12] While the original set of Laws provided inspiration for many stories, Asimov introduced modified versions from time to time.

First Law modified

In "Little Lost Robot" several NS-2, or "Nestor" robots, are created with only part of the First Law. It reads:

1. A robot may not harm a human being.

This modification is motivated by a practical difficulty: robots have to work alongside human beings who are exposed to low doses of radiation. Because their positronic brains are highly sensitive to gamma rays, the robots are rendered inoperable by doses reasonably safe for humans. The robots are being destroyed attempting to rescue humans who are in no actual danger but "might forget to leave" the irradiated area within the exposure time limit. Removing the First Law's "inaction" clause solves this problem but creates the possibility of an even greater one: a robot could initiate an action that would harm a human (dropping a heavy weight and failing to catch it is the example given in the text), knowing that it was capable of preventing the harm, and then decide not to do so.[13]
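Continuing the earlier hypothetical sketch, the Nestor variant simply drops the inaction term from the First Law check, which is exactly what opens this loophole:

  def nestor_violations(a) -> tuple:
      first  = a.harms_human        # inaction clause removed: only active harm counts
      second = not a.obeys_orders
      third  = a.self_destructive
      return (first, second, third)

Initiating the harmful process (dropping the weight while still able to catch it) involves no direct harm, and once the weight is falling, failing to catch it is mere inaction, which the truncated law no longer forbids.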



Gaia, the planet with collective intelligence in the Foundation novels, adopts a law similar to the First Law and the Zeroth Law as its philosophy:

Gaia may not harm life or, through inaction, allow life to come to harm.
