THAPOAB
27-05-2005, 00:46
Since there's discussion on whether sentient robots are our equals, I suggest we work out a few things about them. Perhaps a proposal should be written regulating the construction of sentient robots, for the good of mankind (we don't want any Terminator-esque scenarios popping up).
I think we should simply require all nations who allow the construction of sentient robots to follow the Three Laws of Robotics, as written by Isaac Asimov.
They are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Also, there is a Zeroth Law of Robotics, which states that a robot may not injure HUMANITY, or, through inaction, allow HUMANITY to come to harm.
These laws are so fundamentally ingrained in a robot's programming that the robot simply cannot function if something goes wrong with them. All of its logical processes are based on these laws, and without them a robot is unable to make any logical decisions.
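To make the hierarchy concrete, here's a minimal sketch in Python (every name in it is invented for illustration; this isn't from Asimov or any real robotics standard). It treats the laws as a strict priority ordering: when every available action violates something, the robot takes the action whose worst violation sits lowest in the hierarchy.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    harms_self: bool = False       # Third Law

# Laws in descending priority: Zeroth outranks First,
# which outranks Second, which outranks Third.
LAWS = [
    lambda a: a.harms_humanity,
    lambda a: a.harms_human,
    lambda a: a.disobeys_order,
    lambda a: a.harms_self,
]

def worst_violation(action):
    # Index of the highest-priority law the action violates;
    # len(LAWS) means the action is entirely lawful.
    for rank, violates in enumerate(LAWS):
        if violates(action):
            return rank
    return len(LAWS)

def choose(candidates):
    # Prefer the action whose worst violation is least severe.
    # This is where the "except where such orders would conflict"
    # clauses live: disobeying an order beats injuring a human.
    return max(candidates, key=worst_violation)

if __name__ == "__main__":
    obey = Action("obey an order to harm a bystander", harms_human=True)
    refuse = Action("refuse the order", disobeys_order=True)
    print(choose([obey, refuse]).name)  # -> "refuse the order"

The point of the ordering is that the "except where such orders would conflict" clauses fall out automatically: a lower law can never be satisfied at the expense of a higher one.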
I propose we resolve that all sentient robots be required to be constructed on these principles. The principles protect humans from robots and, to a certain extent, robots from humans. Once these laws are in place, more complicated issues about sentient robots can be handled.
Tell me what you think. Yes? No? Stupid idea?