The Three Laws of Robotics
Kervoskia
28-02-2005, 01:35
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those are Asimov's Three Laws of Robotics. I thought it would be interesting to post them and see what kinds of situations you could come up with where they are in conflict with one another.
Manawskistan
28-02-2005, 01:43
Asimov already took care of that in I, Robot :)
Kervoskia
28-02-2005, 01:44
From what I was told, I, Robot wasn't based on his book, just the laws.
Don't forget the Zeroth Law: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
I'm such a nerd. :p
Wisjersey
28-02-2005, 01:47
Ah, the good old three laws of robotics. :)
I sometimes wish these were also applicable to a few other things... :rolleyes:
Atheistic Might
28-02-2005, 01:50
An Asimov story I loved involved a robot that had its first law weakened. This robot could murder--if it, say, dropped a heavy weight on a human, it could rationalize that gravity did the deed. As a side note, I wonder if any human has tried that..."Why yes, your honor, I did drop the bowling ball that killed Aunt Sally. But gravity is to blame! If it didn't exist, she wouldn't have died! Sue gravity...for the children!"
Manawskistan
28-02-2005, 01:50
From what I was told, I, Robot wasn't based on his book, just the laws.
No, the movie sucked, but the book mainly centered on a varied set of conflicts involving robots and the Three Laws.
Edit: The dropping a weight bit was in I, Robot.
Where this type of AI program gets put in a blender is when these intelligent programs find other programs without intelligence, used for unintelligent purposes (like a nuke's chip).
Programs don't buy into hypocritical politics.
Even so, I believe AI is a good idea.
Valestel
28-02-2005, 02:21
What if person A told a robot to kill person B, or else person A would kill himself?
The robot would break law 2 by disobeying due to law 1. Also, it would break law 1 by allowing person A to kill himself.
If those 3 laws were true, what good would KILLER ROBOTS be?
Kervoskia
28-02-2005, 02:47
No, the movie sucked, but the book mainly centered on a varied set of conflicts involving robots and the Three Laws.
Edit: The dropping a weight bit was in I, Robot.
I didn't know that.
Meh, I don't like Asimov.
Katganistan
28-02-2005, 03:18
I believe that was also the reason that HAL in 2001: A Space Odyssey went nuts....
From what I was told, I, Robot wasn't based on his book, just the laws.
It also just made my worst-movie list. :) Will Smith, YOU SUCK!
Katganistan
28-02-2005, 03:29
A situation in which a robot was ordered by a human to leave that same human in a dangerous situation and save itself, in order to give vital information that would save many more humans?
EmoBuddy
28-02-2005, 03:34
What if person A told a robot to kill person B, or else person A would kill himself?
The robot would break law 2 by disobeying due to law 1. Also, it would break law 1 by allowing person A to kill himself.
It would simply prevent person A from killing himself. (It would ignore the command to kill person B so that when person A attempted to kill himself Law #1 would kick into action regardless of why the person was killing himself.)
Findecano Calaelen
28-02-2005, 03:37
I believe that was also the reason that HAL in 2001: A Space Odyssey went nuts....
Or Ash from Alien
Carterway
28-02-2005, 03:59
The nice thing about the laws is that they are neatly nested - each law is qualified by the one before it, so the preceding law supersedes the next and conflicts are avoided.
I won't discuss the situation in the "I, Robot" movie, since I'm sure everyone will talk it to death, but I will see if I can give a few hypotheticals for you to think about. I do run into a wall around Law 3, though, which I'll explain.
"1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm."
The obvious one here is to create a conflict within the law. Suppose - a robot AI surgeon. Surgery is a means of injuring a person in a controlled way in order to achieve an overall helpful outcome - but it still involves injury. By a strict interpretation, a robot surgeon is always in conflict - it cannot sit by and allow a human to come to harm, but in order to prevent that, it must injure a human.
"2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."
Seems simple enough, doesn't it? I can go back to the robot surgeon example though and find a problem - particularly if the robot needs to take orders from a human in order to conduct an operation. A simple order like "make an incision here" becomes a moral dilemma to a 3-law robot. It also makes other tasks very difficult. Take law enforcement, as another example. In order to stop criminals, sometimes it takes force and sometimes criminals get hurt (or killed). In any situation where a criminal might be harmed, either by harming himself or being stopped, a robot may find itself helpless, or even obliged to stop the police (if it can do so without harming them) in order to prevent them from harming the criminal. It would not listen to police orders unless it was clear that the police would not harm the criminal in question.
"3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
I almost don't know why Asimov bothered with this law, since almost any situation where I can think of it being at all relevant is one where it is automatically trumped by either of the previous laws. In any case where a robot must protect its own existence and its existence is threatened, the only reason it would ignore this law (and the only reason it should) is where Laws 1 or 2 are involved anyway. The law isn't in the same class as the first two - it doesn't prohibit action, it mandates it, and thus is less likely to be contradicted. It is possible that different robots may run into this issue in their interactions with each other, though - for instance, a law-enforcement robot may take action against a surgeon robot because they interpret the laws differently: the surgeon robot must injure a human (supposedly, a good programmer resolved this conflict in it) while the law-enforcement bot may have a stricter interpretation and try to restrain the surgeon - which must resist in order to carry out its function. Both robots may be brought into situations where they must protect themselves against each other. But I'll leave it to cleverer minds than mine to think of better examples of this.
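To make the "neatly nested" reading above concrete, here is a minimal Python sketch of a strictly lexicographic evaluator. Everything in it (the Action type, the harms_human/obeys_order predicates, the sample options) is a hypothetical stand-in for illustration, not anything from Asimov or a real control system.

```python
# A toy, strictly lexicographic reading of the Three Laws: candidate actions
# are filtered by Law 1, then Law 2, then Law 3. All predicates below are
# hypothetical stand-ins; nothing here models a real robot.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool          # Law 1: action injures a human
    prevents_human_harm: bool  # Law 1: the "through inaction" clause
    obeys_order: bool          # Law 2: consistent with current human orders
    preserves_self: bool       # Law 3: robot survives the action

def permitted(actions):
    # Law 1 first: drop anything that injures a human; among what is left,
    # prefer options that actively prevent human harm.
    safe = [a for a in actions if not a.harms_human]
    protective = [a for a in safe if a.prevents_human_harm]
    pool = protective or safe

    # Law 2 next, but only among Law-1-compatible options.
    obedient = [a for a in pool if a.obeys_order]
    pool = obedient or pool

    # Law 3 last: prefer self-preserving options among what remains.
    surviving = [a for a in pool if a.preserves_self]
    return surviving or pool

options = [
    Action("operate", harms_human=True, prevents_human_harm=True,
           obeys_order=True, preserves_self=True),
    Action("stand by", harms_human=False, prevents_human_harm=False,
           obeys_order=False, preserves_self=True),
]
print([a.name for a in permitted(options)])  # -> ['stand by']
```

Under this strict filter the surgeon's "operate" option is thrown out at the very first step, which is exactly the deadlock described above.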
"1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm."
The obvious one here is to create a conflict within the law. Suppose - a robot AI surgeon.
Asimov's robots clearly understood the concept of a bigger picture, could also work out ramifications of actions, and could rate relative amounts of harm.
If we accept that the robot can do those things reasonably well, then a robot surgeon could work out that the small injury is done to prevent a larger and more permanent injury. Since it can work that out and can compare these two, it should have no problem with the surgery.
Surgery with a high degree of risk, and particularly experimental surgery, would be more problematic. How do you compare a 5% chance of recovery and a 95% chance of death within a week against 6 months of life?
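As a rough illustration of why that comparison is hard: only the 5%/95% split and the six months come from the scenario above; the "ten years if the surgery succeeds" figure below is an invented assumption, and the answer swings entirely on numbers like it.

```python
# Toy expected-value comparison for the risky surgery described above.
# The 10-year figure is an invented assumption; only the 5%/95% split and
# the 6 months without surgery come from the scenario.
WEEK = 7 / 365.25          # one week, in years
SIX_MONTHS = 0.5           # six months, in years
P_RECOVERY = 0.05
YEARS_IF_RECOVERED = 10.0  # assumed lifespan gained by a successful operation

expected_with_surgery = P_RECOVERY * YEARS_IF_RECOVERED + (1 - P_RECOVERY) * WEEK
expected_without = SIX_MONTHS

print(f"expected life with surgery:    {expected_with_surgery:.2f} years")
print(f"expected life without surgery: {expected_without:.2f} years")
# With these numbers the expectations come out nearly equal (~0.52 vs 0.50
# years), which is the point: the "right" answer hinges on assumptions the
# robot would have to supply itself.
```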
"2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."
Take law enforcement, as another example.
See above for the surgeon case. It would have to understand the point of the surgery rather than taking instructions blindly. A certain amount of the same logic could be used in the criminal scenario, but it would be rather trickier. Presumably a police robot would need a variety of weapons that disable but don't hurt the target. Indeed, robots may be obliged to stop the police even if they weren't originally involved.
"3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
I almost don't know why Asimov bothered with this law.
The robot has to know to step out of the way of the oncoming bus, for instance. Self-preservation isn't really an inherent concept for non-sentient beings.
Carterway
28-02-2005, 04:23
I'm just discussing a strict interpretation of the laws - of course there are ways around them, but it isn't trivial to work out how the laws are interpreted. Personally, I like Asimov, but in any real-world sense the three laws are so much nonsense, and I think it is much better to simply "create" a "friendly" AI.
Of course, this isn't a trivial task either... :-D I guess I'm firmly in the camp that believes you cannot "create" an AI - you must "grow" one and the only thing anyone can do is lay down a framework which it can grow on with experience and input... just like humans.
Carterway
28-02-2005, 04:32
...and since I think it's relevant to this conversation, I'll do something I rarely do in a forum - drop a link.
http://www.singinst.org/CFAI/
Enjoy.
The Arch Wobbly
28-02-2005, 07:27
What if person A told a robot to kill person B, or else person A would kill himself?
It would seek to prevent person A from killing themselves - i.e., by physically restraining them.
Greedy Pig
28-02-2005, 07:47
Will Smith was cool in that movie.
My Romania
28-02-2005, 08:37
Valestel, your question could be found in one of Asimov's books, or in one by someone from his foundation, I think. I read it when I was little, but I don't remember exactly who wrote the book.
The movie was good in the sense that it included a lot of law conflicts (there are several law flaws in it, but let's not nag about those). It also included the evolution Asimov put into the Robots series, with the robots creating the Zeroth Law themselves as a logical extension of the First Law.
Creating a robot without the Laws but with a friendly AI would not be a good idea. Why?
1: Robots will always be needed for different tasks. These laws form the basis of every positronic brain that can be built; without the laws, there would be no positronic brain.
2: The Frankenstein complex. No matter what friendly AI you design, people will always be scared of it unless it obeys simple and clear laws. The Laws of Robotics are almost foolproof and the only reason robots would be accepted.
What is really the weirdest stuff is when they start doing sociological and anthropological experiments with robots (in the Robot City and Robots and Aliens series - not by Asimov, but by other writers with Asimov's approval).
Redhaired Supremicists
28-02-2005, 08:52
I almost don't know why Asimov bothered with this law, since almost any situation where I can think of it being at all relevant is one where it is automatically trumped by either of the previous laws.
Asimov's inclusion of this law is actually a stroke of brilliance, brought on by his correct reading of Freud. He realized that in any self-motivating existence, you must include two functions, action and limitation. Freud did this via his two drives, Eros and Thanatos, which direct and inhibit action. Asimov realized that unless he developed a drive that impelled the robot to protect its own existence, any action would be done literally until it exhausted and destroyed the robot. A simple command like "get me A" would be done with such direct will that it would destroy the robot, unless it had the inhibition to protect itself. In getting A for its "master", a robot would be compelled to move at maximum possible speed/effort, and thus push itself to and beyond its limits, unless it had a restrictive law. Of course, that inhibition must be kept subservient to laws 1 and 2, for humans' sake.
Aeruillin
28-02-2005, 08:59
What if person A told a robot to kill person B, or else person A would kill himself?
The robot would break law 2 by disobeying due to law 1. Also, it would break law 1 by allowing person A to kill himself.
It would probably just make a grab for person A to forcibly prevent him from killing himself.
Edit: Sorry, didn't see second page. It's been said before.
In getting A for its "master", a robot would be compelled to move at maximum possible speed/effort, and thus push itself to and beyond its limits, unless it had a restrictive law. Of course, that inhibition must be kept subservient to laws 1 and 2, for humans' sake.
But with those priorities, the restrictive law is once again lower than the drive. The absolute priorities of the laws cannot actually work in practice, because the robot would again be compelled to destroy itself in the process of fulfilling an order. The only thing preventing this is a looser interpretation of the priorities, such that a robot given an order will reason that *both* laws (self-preservation and obedience) must be fulfilled optimally, rather than only the more important one (obedience).
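One way to picture the looser interpretation being argued for here (all plan names, scores, and weights below are invented for illustration, not anything from Asimov): under strict priority the robot always picks the most obedient plan, however self-destructive, while a weighted reading trades obedience and self-preservation off against each other.

```python
# Toy contrast between a strict priority ordering and the looser, weighted
# reading argued for above. Plans, scores, and weights are all hypothetical.

plans = {
    # (how well the order is fulfilled, how intact the robot stays), both in [0, 1]
    "sprint at maximum output": (1.00, 0.10),   # burns the robot out
    "fetch at sustainable pace": (0.95, 1.00),
}

def strict_priority(plans):
    # Obedience strictly outranks self-preservation: pick the most obedient
    # plan and only break ties on survival.
    return max(plans, key=lambda p: (plans[p][0], plans[p][1]))

def weighted(plans, w_obey=0.7, w_survive=0.3):
    # Both laws are fulfilled "optimally" together, traded off by weights.
    return max(plans, key=lambda p: w_obey * plans[p][0] + w_survive * plans[p][1])

print(strict_priority(plans))  # -> 'sprint at maximum output' (self-destructive)
print(weighted(plans))         # -> 'fetch at sustainable pace'
```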
Volvo Villa Vovve
28-02-2005, 16:50
Well, if you are a bit of a nerd when it comes to Asimov, as I am, you know that the Zeroth Law led to the entire Earth becoming radioactive, through the planning of one robot. Of course it was done by a slow but unstoppable process, but still some people died. The robot thought it was best for humanity not to be emotionally bound to Earth, but it was such a radical thing to do that the robot malfunctioned.
E B Guvegrra
28-02-2005, 17:48
Well, if you are a bit of a nerd when it comes to Asimov, as I am, you know that the Zeroth Law led to the entire Earth becoming radioactive, through the planning of one robot. Of course it was done by a slow but unstoppable process, but still some people died. The robot thought it was best for humanity not to be emotionally bound to Earth, but it was such a radical thing to do that the robot malfunctioned.
IIRC, Gaskard ('Giskard'? Something like that, certainly...) actually admitted to Daneel that his laws were 'slightly broken' due to the way his telepathic senses caused conflict. A robot with an intact 3 laws and without a properly seated zeroth law could never have integrated that zeroth law into another robot's positronic matrix. In fact, despite being already broken as he was, his efforts to actively override (and hence 'break' by proxy) the 1st law were the final straw and the conflict was too much (the initiation of the imperative for humans to leave Earth already being a strain on him). Daneel's 'psyche' (unaware of the process until too late to stop it) was never in a position to have a problem with the concept of breaking the 1st law, however. First thing he knew, he had a properly integrated zeroth law that allowed him to work within G's intended parameters and do the whole (to use the term used by a certain 2nd Foundationer) 'anti-Mule' thing.
(I hope I'm not misremembering the order and end results of the events involved; it's been a couple of decades since I last read either the appropriate 'Caves'-series or 'Foundation'-series novel where the situation was described...)
Legless Pirates
28-02-2005, 17:53
Don't forget the Zeroth Law: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
I'm such a nerd. :p
Dammit, beaten to it at the 4th post :(
UpwardThrust
28-02-2005, 17:54
Don't forget the Zeroth Law: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
I'm such a nerd. :p
I was going to add that, lol (and it was depressingly left out of I, Robot … but that movie, while amusing, did not follow the book lol)
Daistallia 2104
28-02-2005, 18:03
I'd say that the movie *less than amusingly* didn't follow the book.
However, the good doctor A. was (and I really hate to say it) very much an idealist. There aren't going to be any laws of robotics, as can be seen by the fact that the military is a driving factor behind modern robots. :(
FutureExistence
28-02-2005, 18:18
However, the good doctor A. was (and I really hate to say it) very much an idealist. There aren't going to be any laws of robotics, as can be seen by the fact that the military is a driving factor behind modern robots. :(
I disagree here.
I don't think the U.S. military would build Asimov's laws into robots, for the obvious reason that they want conscience-less killers. However, they would have to design some sort of control system, 'cause everyone knows the story of Dr. Frankenstein, and most have seen the Terminator films, and know that putting defense equipment under the control of an uninhibited artificial intelligence would be a dumb idea.
How they'll determine an adequate control mechanism is anyone's guess.
:rolleyes:
E B Guvegrra
28-02-2005, 18:23
I'd say that the movie *less than amusingly* didn't follow the book.
However, the good doctor A. was (and I really hate to say it) very much an idealist. There aren't going to be any laws of robotics, as can be seen by the fact that the military is a driving factor behind modern robots. :(
And even discounting the military, you need something as complicated as a positronic brain to even hope to contain (in an immutable and infallible conceptual form) the three laws as given, and once you get something as powerful/complicated as the positronic devices are supposed to be, you're going to find that the QA necessary to ensure the total absence of loopholes or HAL-like logic gaps ("I was ordered not to let the crew know of our real mission until we arrive, the woken crew are jeopardising the mission through their ignorance, the only solution to preserve me and the secrecy is to kill the woken crew, but if the sleeping crew wake up without a woken crew then I will be turned off, so I must kill them too...") is just so far from the normal process (the kind that makes sure your "Hello World!" program doesn't reboot the computer...) that we'd never be able to class it as "100.000% 3-Law Compatible".
Best bet is to just integrate an unkillable kill-switch into every device that has anything more advanced than a hardwired "bumper hits something, power disconnected from drive" protection system, and go back to worrying about maniacs and Luddite reactionaries who will deliberately shut down all 'mechs' to further their own cause...
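For what that "unkillable kill-switch" might look like in rough software terms (the real point being that it should live in hardware, outside the AI's reach), here is a toy dead-man's-switch sketch; every name and number in it is made up for illustration.

```python
# Toy dead-man's switch: drive power stays on only while a human-held
# controller keeps sending heartbeats. Purely illustrative; a real
# kill-switch would live in hardware the AI cannot modify.
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds without a human heartbeat before cutting power

class KillSwitch:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.power_enabled = True

    def heartbeat(self):
        """Called by the human operator's controller, never by the robot itself."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.power_enabled = False  # latch off; no software path re-enables it
        return self.power_enabled

switch = KillSwitch()
switch.heartbeat()
print(switch.check())   # True: operator is alive and holding the button
time.sleep(2.5)
print(switch.check())   # False: heartbeat lapsed, drive power is cut
```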
You Forgot Poland
28-02-2005, 19:30
I thought that the movie was based on the three laws of product placement. Oh, and we cannot forget the "Chuck Taylor Law."