NationStates Jolt Archive


Proposal: Laws of Robotics (again)

Naleth
27-01-2004, 05:44
This is a proposal submitted (in as joint a way as possible) by myself and 1 Infinite Loop. We would like to thank the people who supported its previous incarnation and point out that a few significant changes have been made:
1) The protection of the laws now covers only civilian humans (robots built for and used in military situations can fight, harm, and kill enemy combatants)
2) Definitions of "harm" and "robot" have been included
3) Robot manufacturers can add to the definitions as they see necessary. A nation may want to guarantee equal protection to androids, or, if a nation is not composed of humans (as is the case for some UN member states), it can include its own people in the definitions applied to robots used in that nation.

The Laws of Robotics
A resolution to improve worldwide human and civil rights.


Category: Human Rights | Strength: Mild | Proposed by: 1 Infinite Loop
Description:
Definitions:
1) A Robot is hereby defined as a piece of sufficiently complex, automated machinery or software, often (although not necessarily) resembling a human, that can perform only pre-programmed tasks.
2) Harm, although not defined here with the specificity and mathematical precision it will receive in the eventual coding, is generally defined as immediate and certain physical harm.
Introduction:
Seeing as many of our brother and sister nations are on the verge of creating, or have already created and are in some cases utilizing, independent automatons (robots and androids), it is hereby proposed, in order to protect the human race from the possibility of "rogue" robots and from the threat of extermination at the hands of robotic warriors, that the United Nations ratify and institute the Three Asimovian Laws of Robotics (1-3), as well as the Extended Laws (the Meta Law, Law 4, and the Procreation Law); an illustrative sketch of the intended precedence follows the list:
Meta Law: A robot may not act unless its actions are subject to the Laws of Robotics.
1) A Robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2) A Robot must obey all orders given to it by a human, so long as those orders do not conflict with a higher-order law.
3) A Robot must protect its own existence, so long as doing so does not conflict with a higher-order law.
4) A Robot must perform the duties for which it has been programmed, except when doing so would conflict with a higher-order law.
Procreation Law: A Robot may not assist in the production or design of another Robot unless the new robot will also be subject to the Laws of Robotics.
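[OOC aside from the drafters: purely as a hypothetical illustration, and not as part of the resolution text, the precedence among the Laws above might be evaluated along the following lines. Every name in this sketch is invented for the example, the complies() test is left abstract (pinning down "harm" in code is exactly what the resolution leaves to the eventual implementation), and the Procreation Law is treated here as the lowest-order obligation, which the resolution does not actually specify.]

    # Hypothetical sketch only: how a robot might pick an action while
    # honouring the precedence of the Laws. Nothing here is a real API.
    LOWER_ORDER_LAWS = ("obey_orders",   # Law 2
                        "self_protect",  # Law 3
                        "do_duty",       # Law 4
                        "procreation")   # Procreation Law (assumed lowest)

    def choose_action(candidates, complies):
        """Pick one of `candidates` (a list of possible actions, always
        including "do nothing"); `complies(action, law)` reports whether a
        single action is acceptable under a single Law."""
        # The Meta Law and Law 1 are absolute: discard anything breaking them.
        viable = [a for a in candidates
                  if complies(a, "meta_law") and complies(a, "no_harm")]
        if not viable:
            return None  # nothing lawful is possible: refuse to act at all
        # Laws 2-4 and the Procreation Law bind only "so long as" no
        # higher-order Law is broken, so each is applied as a filter that is
        # relaxed, rather than allowed to override anything above it.
        for law in LOWER_ORDER_LAWS:
            passing = [a for a in viable if complies(a, law)]
            if passing:
                viable = passing
        return viable[0]

The point of the sketch is only the ordering: a lower Law is dropped before a higher one is ever bent.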
Implementation:
These Laws will be hard-coded into the robot's operating system, as well as onto a backup chip permanently affixed to the robot's central network bus, or equivalent system, so that all information must pass through this chip with no means of bypass.
The chip shall also be designed to control the power supply and/or motor functions, and thereby disable the robot if it detects a violation of the Laws.
All Robots currently in operation must, within 3 years, be fitted with Asimov circuits. Reprogramming of the OS will not be required in existing robots, as hard-coding the Laws would in some cases damage the robot. All robots found after this time without a valid Asimov Circuit shall be immediately destroyed.
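[OOC aside, in the same spirit: a minimal sketch of the backup-chip behaviour described above, assuming a hypothetical AsimovCircuit that sits between the controller and the actuators. The device interface and every name here are invented for the example.]

    # Hypothetical sketch of the "Asimov circuit": every outgoing command must
    # pass through forward(), and motor power is cut on the first violation.
    class AsimovCircuit:
        def __init__(self, law_check, power_relay):
            self.law_check = law_check      # callable: True if a command is lawful
            self.power_relay = power_relay  # object exposing cut_motor_power()

        def forward(self, command, actuator):
            """Relay `command` to `actuator` only if it passes the Laws;
            otherwise disable the robot, as the resolution requires."""
            if self.law_check(command):
                actuator.execute(command)
            else:
                self.power_relay.cut_motor_power()  # violation: shut the robot down
                raise RuntimeError("Law violation detected; unit disabled pending R.C. review")

In this framing, "no means of bypass" simply means the controller has no code path to the actuators other than forward().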
Furthermore, this proposal, should it pass, will establish the Robot Constabulary (hereafter the R.C.). The R.C. will be a division within local law enforcement charged with enforcing this and all further Robot Laws. The R.C. shall be the sole Robotic Law enforcement division.
These restrictions apply only to robots sold to the general public or those used in a civilian environment.

Amendments:
This proposal is open to additions, which may be submitted via the normal UN submission system.
Any country, corporation, or other entity that oversees the production of robots may expand (but not abridge) the definition of robot, harm, or human as they see fit.
I thank you for taking the time to read and consider this proposal and urge you to support it so as to safeguard our future.

Acknowledgements:
Naleth is hereby acknowledged for significant contributions to the proposal.
Isaac Asimov is hereby acknowledged for the creation of the original laws.
Frisbeeteria
27-01-2004, 05:56
Delegate of Naleth,

The robotic design corporations of Frisbeeteria are just now in their development stage. We are still using microelectronic circuitry, hard disk drives, and similar technology in the production of our industrial robots. While we respect the work of your lead scientist, Susan Calvin, we have neither access to the patents nor the underlying infrastructure to produce the positronic brains on which this proposal rests.

Unless the nations of Naleth and 1 Infinite Loop are willing to place such patents in the public domain and (oh, we do so hate to ask this way) provide financial assistance to the less developed industrial states, we would be forced to vote against this otherwise excellent law. As written, we would be unable to comply with it, and failure to comply would cripple our burgeoning robotics export industry.

Sadly,
MJ Donovan, CEO, Frisbeeteria
1 Infinite Loop
27-01-2004, 07:31
With four Loop nations to draw from, I would be happy to aid any nation that complies with this Proposal, as we in Infinite Loop wish to contain the potential wildfire of terror that can result from rogue robots.

I encourage everyone to endorse this most imminent of Proposals.
27-01-2004, 07:33
---Post deleted by NationStates Moderators---
1 Infinite Loop
27-01-2004, 07:48
In more advanced Nations they are. Infinite Loop uses Robots made by US Robotics and Cyberdyne (both [TA] companies),
and 25% of our military is made up of Cylon Automated Military Robots.
Komokom
27-01-2004, 10:15
While Frisbeeteria makes a fair point worth considering, I don't see why they worry as such,

The robots they are talking of sound like your basic assembly-plant automatons, like those which have existed in the R.W. for decades now (well, at least two, I think) and are not at all really independent; they ultimately require constant human input and oversight for any task, that is, they are essentially "dumb",

And I don't see how they would be applicable. I think it is really the thinking, self-aware, or at least more advanced robotics you need to worry about, like mobile service robots (e.g., facility maintenance and security bots with wheels, or legs, or, ya know, otherwise self-motivating), that you would rather have these rules apply to.

After all, those industrial robots you mention are your stick-a-car-together kind; they might have a welding torch, but they can't lumber after you spinning their arms about crying "Danger, A Rep Of Komokom, Danger!" while blasting zappy sparky special effects at you!

In other words, the bots you worry about don't sound like the "sufficiently complex" ones referred to.

A Rep Of Komokom.

- Oh yeah, big clapping noise to Naleth and Loop, you guys are bringing back my faith in the U.N...

(And if you get this to the U.N. floor, it's got my vote, and many others I am sure. I am taking special interest in this prop; consider it post-it noted for future check-ups on progress!)
Gigglealia
27-01-2004, 10:33
Gigglealia would like to note that its citizens find it impossible to locate anything of a possibly humorous nature in this proposal. As the UN delegate, I am repeatedly queried as to where the original humour and wit that was the game of NationStates went.

Indeed, proposals such as this, utterly irrelevant and meaninglessly technical, serve nought but to drive the very essence of the game into a deeper and darker pit from which it may never escape.

Please, someone think of the children.
Rondebosch
27-01-2004, 11:07
I am concerned that by limiting these restrictions only to civilian robots, you are allowing for the possibility of rogue (possibly completely uncontrollable) robots which are, worse, military robots. I see robots without restrictions leading to self-aware robots without restrictions, and that worries me.

I am not comfortable voting for this proposal on that basis, even though I really like it otherwise.
Collaboration
27-01-2004, 16:19
We oppose military robots.
They are made to perform the evils from which all other robots are excluded.
They are a danger to their creators.
27-01-2004, 16:35
A very well thought out, extremely well written, eloquent, thorough proposal. And utterly useless.
Robots are ultimately an extension of their creators. If a creator so chooses to program them to have the potential of turning on humankind, then let them suffer that foolishness. But to have a United Nations edict to outline robotics? A complete waste of time, frankly, in light of real issues.
Your compassion is compelling but nevertheless unnecessary.
You can assert that I am a "noob" but it won't change the fact that this proposal is an exceptionally well written piece of fluff and totally senseless.
28-01-2004, 04:13
Vaderiland agrees to these terms. If we should go to war, we would like to settle it in a manner excluding nearly invisible creations (Terminators, basically). I would have to agree with the proposal.
Naleth
28-01-2004, 08:48
I am concerned that by limiting these restrictions only to civilian robots, you are allowing for the possibility of rogue (possibly completely uncontrollable) robots which are, worse, military robots. I see robots without restrictions leading to self-aware robots without restrictions, and that worries me.

I am not comfortable voting for this proposal on that basis, even though I really like it otherwise.
Keep in mind that we are only trying to enhance the level of safety where robots are concerned. Although military robots are not covered by this proposal, we believe that the benefits of increased safety to civilians are well worth the continued risk from military robots.

Furthermore, I would like to make it clear that the intent of this proposal is not to do away with robotic warfare, merely to provide enhanced safety to civilians. The alternative to exempting military robots that I considered was revoking protection for designated enemy combatants, but I decided against this because it could be applied in such a way as to either make military robots useless (if a third party could determine enemy-combatant status, they would simply need to designate no one a combatant) or make the laws utterly ineffective (the robot's controller could designate anyone an enemy combatant).
Naleth
28-01-2004, 08:57
A very well thought out, extremely well written, eloquent, thorough proposal. And utterly useless.
Robots are ultimately an extension of their creators. If a creator so chooses to program them to have the potential of turning on humankind, then let them suffer that foolishness. But to have a United Nations edict to outline robotics? A complete waste of time, frankly, in light of real issues.
Your compassion is compelling but nevertheless unnecessary.
You can assert that I am a "noob" but it won't change the fact that this proposal is an exceptionally well written piece of fluff and totally senseless.
Which "real issues" would those be, if you don't mind my asking? Currently there is no proposal at vote, so time is being wasted regardeless.

You seem to think it is acceptable for innocent civilians to suffer because one person or group decides to use robots (which I hope we can agree are far more capable than humans physically and, with a good programmer, mentally) in a way that causes harm to others. This proposal is a large step toward preventing that possibility of harm.
Greenspoint
28-01-2004, 16:20
While the Rogue Nation of Greenspoint is a long way from developing robots of the technical level that would require the imposition of this proposal, that time is coming in the foreseeable future. Having a guideline and set of rules in place for the creation and management of such robots can only help protect the organic citizens of this world when such robots become commonplace. We can support this proposal, and will contact our Regional UN Delegate about their endorsement of it.

Meanwhile, what do we do about all those non-UN member nations who are building their own rogue military robots that don't conform to this set of Robotic Laws?

James Moehlman
Asst. Manager ico U.N. Affairs
Naleth
28-01-2004, 18:54
Meanwhile, what do we do about all those non-UN member nations who are building their own rogue military robots that don't conform to this set of Robotic Laws?

James Moehlman
Asst. Manager ico U.N. Affairs
As with all UN proposals, an unfortunate truth is that non-UN nations are not made to comply with them. This is an inherent limitation on the UN, and we will simply have to hope that they will choose to comply on their own, either for economic reasons (sale of robots to UN members will require implementation of the Laws) or simply for the benefit of their citizens.
28-01-2004, 19:17
I was given to believe that it was not possible to build stable solutions to the positronic field equations that would form non-Asenion brains?

Also, how would a robot decide what constitutes a human being?

- Jordan
1 Infinite Loop
29-01-2004, 05:26
32 more endorsements
Frisbeeteria
29-01-2004, 05:33
Also, how would a robot decide what constitutes a human being?
Logic dictates that the action/decision loop would have to be so incredibly complex that there would be no processing power left to operate the robot. Consider 'the butterfly effect': "The idea in meteorology that the flapping of a butterfly's wing will create a disturbance that, in the chaotic motion of the atmosphere, will become amplified eventually to change the large-scale atmospheric motion, so that the long-term behavior becomes impossible to forecast." How could any robotic mind, no matter how advanced, be certain that its action or inaction would not harm any human being? Once this effect has been explained to them, they would have no choice but to permanently shut down. Even that would carry the possibility of some future person tripping over them and skinning their knee. The First Law of Robotics is impossible to obey.


That said, I'm approving it anyway. It's a fun resolution.
29-01-2004, 07:51
If a robot is intelligent and self-aware enough to desire to procreate, Squeezitlandia WILL NOT support mandatory restrictions on and removal of freedoms from sentient beings. As conjectural as the situation may be, if a sentient being is created, its rights should not be automatically stripped away simply because the being is an android.

"Rogue" or no, Squeezitlandia would not want its hands tied by the UN to decide the fate of such androids on an individual basis.

edit: And if the issue is not robots becoming sentient, demanding that they be programmed so as not to harm humans seems pointless; if the robot is an entirely human-operated piece of machinery, then it is nothing more than a weapon, and such restrictions are arbitrary.
Naleth
29-01-2004, 08:37
The proposal would only affect non-sentient robots. For the proposal's sake, robots are defined as mechanical (or software) entities that can only perform a pre-programmed set of instructions. A sentient being of any kind (mechanical, electronic, or organic) would therefore not be subject to the Laws.

They are intended for application in medium-complexity robots. On the one end, a toaster is a mechanical device that performs a pre-programmed action, but it need not be subject to the Laws because implementing them would complicate its operation by orders of magnitude more than it would for an already fairly complex robot. The same goes for a piece of software like Notepad.

On the other end, though, your country may be producing sentient robots. Since these are capable of doing things that are not pre-programmed into them, they would be exempt because they do not fit under the definition of robot.

And don't be too confused by the name "Procreation Law" .. it doesn't imply a desire to procreate. It simply means that anyone attempting to make a civilian robot not subject to the Laws will have to do so without the assistance of other robots.
East Caelum
29-01-2004, 08:37
I support the proposal, to be sure, though I would like to ask a question: Why didn't you include the Zeroth Law (mentioned in some of Asimov's later works)...

For those who don't know, the Zeroth Law (in a nutshell) forces an android to think in terms of humanity as a whole when considering its actions.

Personally, I would have edited the First Law a bit (nay, a lot)... Frisbeeteria mentioned one of its problems. Still, the proposal has room for amendments, so we can cross that bridge when we come to it.

Ou Yasen
Son of Heaven, Imperial Shogun, U.N. Delegate of the Regian Corner of the World
Naleth
29-01-2004, 10:55
I do know of the Zeroth Law, but I decided against it for two reasons:

1) The 3 core laws, plus the added laws, are already very, very difficult to put into a computerized format (especially the definition of harm). The Zeroth Law would make those look like a simple "Hello World" program by comparison. The load required to compute it was the first reason.

2) The ability to predict harm in that fashion and over that wide a timeline would be a characteristic of sentient robots anyway. A construction-worker robot, for example, would not really need to know how to prevent harm to humanity as a whole. The kinds of robots (actually, since there is really only one robot with the Zeroth Law, unless you count .. jeez, it's been a while .. the one that fried itself at the end of the books) that would be able to make use of the Zeroth Law would be exempt from the Laws because of their sentience.

On another note, it looks like the resolution is going to fail; 20 or so endorsements in the next hour or two is not incredibly likely. Hopefully Loop will resubmit it with a bit better timing .. since it's from last Wednesday, I think you submitted it right before the update ... instead of right after. With this kind of support, it would probably succeed with almost an entire extra day to get endorsements.