The Laws of Robotics
1 Infinite Loop
20-12-2003, 06:38
I have submitted a UN Proposal. I would like it if everyone endorsed it, but please feel free to use this thread to discuss it. To find it in the UN Proposal Registry, please search for the title
The Laws of Robotics
Here is the Proposal in its entirety:
Laws of Robotics
Introduction:
Seeing as how many of our brother and sister nations are on the verge of creating, have already created, or are in some cases already utilizing independent automatons, robots, and androids, we of Infinite Loop, in order to protect the human race from the possibility of "rogue man-killing" robots and from the threat of extermination at the hands of robotic warriors such as have been seen in recent films, urge the United Nations to ratify and institute the three Asimovian Laws of Robotics.
1) A robot may not harm a human being, or through inaction allow a human being to come to harm.
2) A robot must obey all orders given to it by a human, so long as those orders do not conflict with the First Law.
3) A robot must protect its own existence, so long as doing so does not conflict with either the First or Second Law.
These Laws will be hardcoded into the robot's operating system, as well as onto a backup chip permanently affixed to the robot's central network bus, so that all information must pass through this chip with no means of bypass.
The chip shall further be designed so that it cannot be bypassed: it shall control the robot's power supply and/or motor functions.
All robots currently in operation must within 3 years be fitted with Asimov circuits (as hardcoding the Laws would in some cases damage the robot). All robots found after this time without a valid Asimov circuit shall be immediately destroyed.
Furthermore, this Proposal, should it pass, will establish the Robot Constabulary, or R.C.: a division within local law enforcement charged with enforcing this and all further robot laws. The R.C. shall be the sole robotic law enforcement division, backed up by federal or equivalent law enforcement divisions.
This Proposal is further open to future amendments, to be submitted via the normal UN submission system.
I thank you for taking the time to read and consider this proposal, and I urge you to support it so as to safeguard our future and prevent the Terminator or the Matrix from coming about. Also, hello.
Hikaru Motenai
People's General Secretary
Infinite Loop
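(OOC: For the technically curious, here is a minimal sketch of what the Asimov circuit's veto logic might look like. Everything in it is hypothetical illustration: the Command fields and the harm-prediction flags are assumptions, not anything specified in the proposal.)

```python
# Hypothetical sketch of an Asimov circuit: a mediation layer that every
# actuator command must pass through, with no software bypass.
from dataclasses import dataclass

@dataclass
class Command:
    issued_by_human: bool   # was this order given by a human?
    harms_human: bool       # would executing it harm a human? (assumed oracle)
    endangers_self: bool    # would executing it destroy the robot?

def asimov_veto(cmd: Command, inaction_harms_human: bool) -> bool:
    """Return True if the command may reach the actuators."""
    # First Law: never harm a human; act if inaction would harm one.
    if cmd.harms_human:
        return False
    if inaction_harms_human:
        return True
    # Second Law: obey human orders (the First Law was already checked above).
    if cmd.issued_by_human:
        return True
    # Third Law: self-generated commands pass unless self-destructive.
    return not cmd.endangers_self
```

The hard part, of course, is the harms_human flag: that one boolean hides all the real work of predicting harm.)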
We of Naleth, having a private sector devoted almost entirely to computer (and related) technologies, recognize the threats that robots could pose if not properly controlled and regulated. This proposal has our support.
(OOC: Plus I like Asimov's writing ;))
1 Infinite Loop
20-12-2003, 08:15
I thank you. The reason we write this is that just the other day, one of our Cylon planetary survey robots attempted to attack a freighter carrying supplies to the Martian colony of Packilvania. The unit was of course disposed of by one of our skilled Viper pilots. We are currently refitting all of our Android and Cylon units with Asimov circuits as quickly as possible.
The only problem I see with this is that an intelligent robot is, in my view, a sentient being, and as such you cannot program specific behavior without violating that being's rights.
I don't like murder or suicide, but I wouldn't want to reprogram a person so that they can't do these things; I would just expect them not to do them.
Also, I don't believe in using artificial intelligence for slave labor. If the computer is not sentient, then there is no problem... but sentient beings should not be at the bidding of any other being.
I would vote against these laws, even though I am an Asimov fan.
I like it... not in the UN, so I have just about nothing to do with this, but it looks pretty good to me.
1 Infinite Loop
20-12-2003, 11:29
The only problem I see with this is that an intelligent robot is, in my view, a sentient being, and as such you cannot program specific behavior without violating that being's rights.
I don't like murder or suicide, but I wouldn't want to reprogram a person so that they can't do these things; I would just expect them not to do them.
Also, I don't believe in using artificial intelligence for slave labor. If the computer is not sentient, then there is no problem... but sentient beings should not be at the bidding of any other being.
I would vote against these laws, even though I am an Asimov fan.
These apply only to robots and similarly simple androids; artificially intelligent machines would naturally be exempt. For example, Andrew, the Positronic Man, naturally has an Asimov circuit, as he is not created to be sentient but acquires sentience during his life. Mr. Data, however, is sentient from the get-go and therefore has none.
This is intended to keep Terminators, Sentinels, and those groovy Matrix machines from taking over and killing all humans.
(I cannot believe I forgot to add that part. Oh well, at least I did leave it open for amendments.)
We'll need a definition of 'robot' before we make any decisions. (OOC: Isaac Asimov rules. :) )
Robot meaning a machine with the ability to think on its own.
(Probably)
Asimov is the best and I would vote yes for this proposal.
Youngtung
20-12-2003, 20:42
The Empire would gladly give the go-ahead to this proposal; however, there is one problem that we can foresee. We would like to propose that every robotic being be implanted with a small electrostatic discharge device in case we lose control of the artificial life-form. If we lost control, we could simply hit a switch and the life-form would immediately be deactivated. This is the Empire's proposition.
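(OOC: A toy sketch of the kill-switch idea. The dead-man's-switch behavior, where power is also cut automatically if the control link goes silent, is an extra assumption on top of the Empire's proposal, and the timeout value is invented for illustration.)

```python
# Hypothetical kill-switch watchdog: deactivate the unit when the operator
# throws the switch, or automatically if the operator's signal is lost.
import time

HEARTBEAT_TIMEOUT_S = 2.0  # assumed: link considered lost after 2 s of silence

class KillSwitch:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.triggered = False

    def heartbeat(self):
        """Call whenever a valid operator signal arrives."""
        self.last_heartbeat = time.monotonic()

    def trigger(self):
        """Operator hit the switch: deactivate immediately."""
        self.triggered = True

    def power_allowed(self) -> bool:
        link_lost = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S
        return not (self.triggered or link_lost)
```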
But if robots can be defined as machines capable of independent thought (actually, some humans can be defined that way too, but I digress), wouldn't you be violating robot rights?
1 Infinite Loop
21-12-2003, 07:10
Would you rather have a few robots sitting around thinking "Man, them humans sure are stinky," or would you rather have a million T-800s and Cylons roaming around killing every human they find?
Kill switches, as Youngtung proposed, might be a good idea. One concern, of course, would be the possibility of improper shutdown of important robots at particularly important moments (presumably by anti-machine or anti-government groups).
I'm not sure if this comes from any of Asimov's writings, but I've seen mention of a "0th Rule" that allows robots to harm human beings if that harm would prevent a greater harm to society. Such a rule might or might not be a good idea, and would obviously be difficult to program, but I'll mention it just to get the idea going.
Would you rather have a few robots sitting around thinking "Man, them humans sure are stinky," or would you rather have a million T-800s and Cylons roaming around killing every human they find?
Those aren't the only two options. Every sentient being has a right to self-determination. I would hesitate to call any creature with such a strong instinct to kill sentient. A wise robot would be much like a human -- yes, it would have the capacity to kill, but it would not be likely to do so, and if it did, it would face the justice of its peers.
Until we reach the point of sentience in machines, however, I support Asimov's robot rules as the best I've ever come across.
I think the main stumbling block for this proposal will be finding a good definition of "robot" that everyone can agree on. Heck, I can't come up with one that I stay happy with. Here's what Dictionary.com said, just as an example:
Robot, n.
1. A mechanical device that sometimes resembles a human and is capable of performing a variety of often complex human tasks on command or by being programmed in advance.
2. A machine or device that operates automatically or by remote control.
3. A person who works mechanically without original thought, especially one who responds automatically to the commands of others.
Fallen Eden
21-12-2003, 08:11
The best definition of "robot" is "a machine that operates independently of external control."
This resolution is supported, for robots that cannot display human intelligence, as determined by three judges educated in robotics and psychology.
Shaviv
Emissary
1 Infinite Loop
21-12-2003, 09:27
Well, by "robot" I am primarily going by the Asimovian concept, and the only self-aware robot in his writing that I can think of is Andrew Martin, and I seriously doubt Andrew would even need an Asimov circuit.
On the sci-fi side, imagine how much easier a few situations would have been to deal with. For example, Dr. Smith couldn't have told Robot to kill the Robinsons if Robot had an Asimov circuit.
Or Aimee in that Mars movie: if she had an Asimov circuit she couldn't have tried to kill the Martian dudes.
Maximilian in The Black Hole.
Nomad in Star Trek. I would add V'Ger, but although at the most basic level it is a robot, it wasn't more than a satellite when launched.
The Cylons in Battlestar Galactica.
And my last example is a pair of classic movies, Westworld and Futureworld:
both groovy films, but with Asimov circuits they would have been less interesting.
Bearbrass
21-12-2003, 10:28
The rules sound right in theory, but most of Asimov's robot stories exploit their loopholes.
But maybe Bearbrass could support it, 1IL, if you change your Delegate's vote on the Spam proposal.
As I recall, most of Asimov's loopholes arise from specific situations like "Well, if we build a mind-reading robot, what would happen?" or "Suppose we eliminate the first rule, what would happen?" And I don't recall any of them ever ending in someone's death, which would of course be the primary reason to reject the rules.
Would you rather have a few robots sitting around thinking "Man, them humans sure are stinky," or would you rather have a million T-800s and Cylons roaming around killing every human they find?
I never know who you're replying to... if you're replying to me, quote something; otherwise I'd never know. What makes you think they'd be killing humans? Can you not see how discriminatory this is? Replace "humans" with some race or religion, and "T-800" and "Cylons" with two other races or religions... how does it read now? :)
Endolantron
21-12-2003, 20:59
First of all, I can only hope that this "Laws of Robotics" proposal is not to include sentient, inorganic entities with human intelligence, because if it does, then 10.7% of my nation's population, including some government leaders, would suddenly be forced into slavery after 348 years of civil and political freedoms. I'm not asking that the machines be allowed to hurt anyone they weren't defending against, but they should be allowed to at least keep the freedoms they have. By the way, 99.2% of the humans in my nation sympathize with them.
Um... I might want to mention that, as you may suspect, I have been roleplaying my nation as being in the future... year 2573 and rising, to be precise. The thing about 10.7% of my nation's population being inorganic comes with that roleplay, especially if I did in fact lead a nation into the 26th century, C.E. ...and I'm nearly certain that roleplaying a nation in the year 2573 ought to be acceptable.
Anyway, I just hope the proposal does not include any machine with human intelligence for the reasons stated above.
Also, wouldn't the robot law proposal outlaw the use of automated military vehicles, especially during a war when they're most needed?
The extended laws may also be considered:
The Meta-Law: A robot may not act unless its actions are subject to the Laws of Robotics.
Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.
Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law. A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.
Law Three: A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law. A robot must protect its own existence as long as such protection does not conflict with a higher-order Law.
Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order Law.
The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.
However, this does cause the problem of what falls under Law Zero, as most automatons would be unable to comprehend which actions qualify, and it leaves open the way for many new "interpretations" of what is best for humanity. Therefore, I would suggest its omission.
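(OOC: The "higher-order Law" wording amounts to a strict precedence ordering, which is easy to state in code even when the individual verdicts are not. A sketch, with every verdict reduced to a hypothetical flag check:)

```python
# Hypothetical sketch: the extended laws as a strict precedence list.
# Each law inspects a proposed action and returns "forbid", "require",
# or "pass" (no opinion); the verdict of the highest-order law wins.
def meta_law(a):   return "forbid" if not a.get("within_laws") else "pass"
def law_zero(a):   return "forbid" if a.get("injures_humanity") else "pass"
def law_one(a):    return "forbid" if a.get("injures_human") else "pass"
def law_two(a):    return "require" if a.get("ordered_by_human") else "pass"
def law_three(a):  return "forbid" if a.get("destroys_self") else "pass"
def law_four(a):   return "require" if a.get("programmed_duty") else "pass"

PRECEDENCE = [meta_law, law_zero, law_one, law_two, law_three, law_four]

def adjudicate(action: dict) -> str:
    for law in PRECEDENCE:
        verdict = law(action)
        if verdict != "pass":
            return verdict  # higher-order laws preempt lower-order ones
    return "permit"
```

Note that the precedence list is the trivial part; flags like injures_humanity are exactly the open interpretation problem mentioned above.)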
United Typos
22-12-2003, 02:38
... Some robots and robotic systems are specifically designed to be weapons. So, will those countries who wish to use AI in weapons (as most weapons have AI now) stop progressing in that development, and never use these weapons, after this resolution passes?
And seeing as robots are machines that operate outside human control, would semi-intelligent machine systems like satellites and even factory-line robots need to have an uncompromisable Asimov circuit? And if so, how so?
Rational Self Interest
22-12-2003, 05:31
That this sort of absurdity - a few simplistic and poorly thought out ideas from children's fiction - is taken seriously in relation to real issues is quite surprising to Rational scientists working in the fields of robotics and artificial intelligence.
All of these laws are the very vaguest kinds of guidelines, and susceptible to infinite variations of interpretation by sapient minds. To a non-sentient robot, they are completely meaningless - any computerized mind capable of controlling specific actions on the basis of these crude and vague guidelines would necessarily be of at least human intelligence.
It would have to solve ethical questions such as determining when human life begins and ends, which humans have yet to solve to our own satisfaction.
It would have to calculate the probabilities of all possible future consequences of specific actions.
It would have to weigh the relative ethical values of various possible futures; for instance, deciding whether a 10% probability of death for an already condemned criminal outweighs a 90% probability of serious injury to an important leader.
It would (hopefully) have to decide when to place reasonable limits on the application of the laws. If robots actually took the first law literally, it would be necessary for them to take all control in society away from humans (to prevent us from hurting ourselves and each other). Such dangerous activities as rock-climbing, driving, going outside, or eating solid food would have to be forcibly prevented, lest a robot by inaction allow a human being to fall, crash, be struck by lightning, or choke.
We respect the achievements and vision of researchers in artificial intelligence, but we do not seriously suppose that machines programmed by humans can ever provide final answers to questions of ethics that are insoluble to humans, let alone that non-sapient robots can be in any way usefully governed by rules that are meaningful only to sapients, to which those rules are extremely ambiguous and confusing.
Hideki Yukawa
Commissioner of Technological Oversight
Federation of Rational Self Interest
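(OOC: To make the Commissioner's weighing problem concrete: any expected-harm calculation needs numeric ethical weights, and nothing in the Laws says what those weights should be. The numbers below are arbitrary, which is precisely the point.)

```python
# All weights here are arbitrary illustrations; change them and the
# "correct" decision flips, and the First Law gives no way to fix them.
HARM_WEIGHTS = {"death": 1.0, "serious_injury": 0.4}

def expected_harm(outcomes):
    """outcomes: list of (probability, harm_kind) pairs."""
    return sum(p * HARM_WEIGHTS[kind] for p, kind in outcomes)

option_a = [(0.10, "death")]           # condemned criminal, 10% risk of death
option_b = [(0.90, "serious_injury")]  # important leader, 90% risk of injury

# 0.10 vs. 0.36: option A looks "better", but only under these weights.
print(expected_harm(option_a), expected_harm(option_b))
```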
Bearbrass
23-12-2003, 05:24
Santin said
As I recall, most of Asimov's loopholes arise from specific situations like "Well, if we build a mind-reading robot, what would happen?" or "Suppose we eliminate the first rule, what would happen?" And I don't recall any of them ever ending in someone's death, which would of course be the primary reason to reject the rules.
It's a while since I've read the robot novels but, if I recall, the mind-reading robot used the 3 laws to evolve a 'zeroth' law, which involved putting the interests of humanity ahead of those of individual humans (always dangerous).
That led him to stand by while Earth was turned into a radioactive wasteland. Which probably involved a few people dying of cancer along the way, though Asimov glosses over that bit.
Eh, hardly original but I'll give it a vote come U.N. floor time... must get round to reading "Do Androids Dream Of Electric Sheep"...
Cheerio, Merry What-ever, A Rep of Komokom. :D
Whereas we see the likelihood of Rational Self Interest's predictions based on these circuits, we suggest that if (and hopefully when) this proposal is amended and re-submitted for consideration, the abstract First Law be changed to read "...allow a human to come to immediate harm."
In implementation, this would mean that a robot must only prevent harm to humans in its immediate area. Potential future harm from actions that have not even occurred need not be prevented by robots, as it is up to human society as a whole to decide what to do with itself, not robots.
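(OOC: In code terms, the amendment narrows the First Law's inaction clause from "any foreseeable harm" to "harm here and now". A sketch, with the sensor-range and imminence parameters invented for illustration:)

```python
# Hypothetical "immediate harm" predicate for the amended First Law:
# a duty to intervene exists only for harms that are both nearby
# (within sensor range) and imminent (about to happen now).
SENSOR_RANGE_M = 10.0      # assumed: the robot's "immediate area"
IMMINENCE_WINDOW_S = 5.0   # assumed: "immediate" means within 5 seconds

def must_intervene(threat) -> bool:
    """threat: any object with .distance_m and .eta_s attributes."""
    return (threat.distance_m <= SENSOR_RANGE_M
            and threat.eta_s <= IMMINENCE_WINDOW_S)
```

Distant or merely speculative future harms then fall outside the robot's duty entirely, leaving those decisions to human society.)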