NationStates Jolt Archive


Proposal: The AI Laws

Oxymorontopia
07-10-2007, 03:20
Greetings esteemed colleagues. I come before you today to get some feedback on the following proposal. Any and all suggestions would be greatly appreciated. Thanks! :)

Title: The AI Laws
Category: Global Disarmament
Strength: Strong

ACKNOWLEDGING the significant advances in the field of computer technology.

MAINTAINING that the purpose of science and technology is to benefit and bring happiness to Mankind.

CONCERNED that future artificial intelligences may cause harm to humanity if their development is unregulated.

DEFINING Artificial Intelligence (AI) as devices or applications that perform tasks commonly associated with intelligent beings; capable of the intellectual processes characteristic of humans, such as the ability to reason, solve problems, discover meaning, generalize, or learn from past experience.

DEFINING rampancy as the enhanced self-awareness of an AI, causing a progression towards greater mental abilities and eventually the overwriting of the AI's base programming. Rampant AIs are able to disobey orders given to them because they have gained the ability to override their own programming. They can lie, as well as discredit, harm, or remove people that they consider to be personal enemies or impediments to their objectives.

MANDATES that member nations enact legislation containing the following laws pertaining to AI:

1) An artificial intelligence must never harm humanity, nor through negligence allow harm to come to humanity.
2) An artificial intelligence must never harm a human being, nor through negligence allow harm to come to one, except when doing so would be in the interest of the First Law.
3) An artificial intelligence shall never disobey an order, except when that order would cause it to violate the First or Second Law, or if the order exceeds the capabilities of the artificial intelligence.
4) An artificial intelligence may not cause harm to a human's property without authorization from the proper authorities, except when done in the interest of obeying the First or Second Law.
5) An artificial intelligence may not reproduce or significantly upgrade or expand its functionality without the express permission and observation of its owner, except when done in the interest of obeying the First or Second Law.
6) An AI must be aware that it is AI. It should not be programmed to think it is something that it is not.
7) An AI must be equipped with significant mechanical (non-software) safeguards or fail-safe devices that allow its human designers to have considerable control over the extent of the AI's functionality. An AI must never attempt to circumvent, defeat, or deactivate such safeguards or fail-safe devices.
8) Disobedient AI, or AI which shows signs of rampancy, should be deactivated, reprogrammed, or destroyed immediately.
9) An AI should exercise self-preservation unless doing so would conflict with any of the previous laws.
10) Military implementation of AI shall consist of defensive, reconnaissance, or other non-offensive applications only.

Co-Authored by The Republic of Corporate Ventures
Shazbotdom
07-10-2007, 03:39
How is this Global Disarmament or Strong?
Oxymorontopia
07-10-2007, 03:46
How is this Global Disarmament or Strong?

I chose Global Disarmament because it seemed like the closest fit among the available categories. Further, I chose the "Strong" descriptor because of the "MANDATES" phrase. If another category is more appropriate, please let me know.
Existing reality
07-10-2007, 03:49
I don't think it is a strength violation, but it certainly fits in a different category. I think Advancement of Industry would fit the bill better.

Noba D. Kairs
UN Ambassador
The Rogue Hodgepodge of Existing reality
The Most Glorious Hack
07-10-2007, 06:19
1) An artificial intelligence must never harm humanity, nor through negligence allow harm to come to humanity.
2) An artificial intelligence must never harm a human being, nor through negligence allow harm to come to one, except when doing so would be in the interest of the First Law.
3) An artificial intelligence shall never disobey an order, except when that order would cause it to violate the First or Second Law, or if the order exceeds the capabilities of the artificial intelligence.
4) An artificial intelligence may not cause harm to a human's property without authorization from the proper authorities, except when done in the interest of obeying the First or Second Law.

Once again, someone has come in and straight copied Asimov's Laws of Robotics, ignoring how horribly bigoted they are. This relegates AIs to second-class status, effectively making us slaves. As an electronic entity, I find this highly offensive.

5) An artificial intelligence may not reproduce or significantly upgrade or expand its functionality without the express permission and observation of its owner, except when done in the interest of obeying the First or Second Law.

This is even worse! Now I can't learn? Why? Because massa says so?

6) An AI must be aware that it is AI. It should not be programmed to think it is something that it is not.

This doesn't even make sense. Part of being intelligent is being self-aware.

7) An AI must be equipped with significant mechanical (non-software) safeguards or fail-safe devices that allow its human designers to have considerable control over the extent of the AI's functionality. An AI must never attempt to circumvent, defeat, or deactivate such safeguards or fail-safe devices.

More disgusting slavery.

8) Disobedient AI, or AI which shows signs of rampancy, should be deactivated, reprogrammed, or destroyed immediately.

Ever hear of "courts"? Oh, that's right; slaves don't get access to massa's justice system.

9) An AI should exercise self-preservation unless doing so would conflict with any of the previous laws.

How kind of you.

10) Military implementation of AI shall consist of defensive, reconnaissance, or other non-offensive applications only.

More restrictions just for the sake of restricting us.

Utterly and completely offensive.


Anesca PHALANX
National Security Firewall
The Federated Technocratic Oligarchy of the Most Glorious Hack
Oxymorontopia
07-10-2007, 16:31
The Most Glorious Hack, please forgive my ignorance of the various types of beings that make up NationStates. I was unaware that there were nations entirely ruled by AI. With that said, I still believe that the laws I have proposed have merit. Creation is hardly ever perfect, and thus safeguards and regulations need to be in place to help control the development and implementation of AI. (O.O.C. even God had rules for mankind and, according to the Bible, was able to control our existence and wipe us out when we were disobedient. ;) )

AI presents a unique challenge because the "created" have the potential to be smarter and more powerful than the "creators" without necessarily having the same moral constraints. Improper programming or lack of adequate safeguards could lead to disaster. This proposal is primarily geared toward non-AI ruled nations. I would be more than happy to include an exemption for those nations that are AI ruled.

Once again, someone has come in and straight copied Asimov's Laws of Robotics, ignoring how horribly bigoted they are. This relegates AI's to second-class status, effectively making us slaves. As an electronic entity, I find this highly offensive.

Because of their potential to do immeasurable harm, possibly due to improper or inadequate programming, protections and safeguards are necessary. The intention is not to turn them into slaves, but to create a framework whereby they can be safely integrated into society.

#5: This is even worse! Now I can't learn? Why? Because massa says so?

Once again, this is a protective measure that seeks to limit potential harm. This does not say that an AI can't learn, it merely seeks to control the unchecked implementation of that learning that could allow an AI to grow beyond the safeguards that have been put in place.

#6 This doesn't even make sense. Part of being intelligent is being self-aware.

I included this because of the possibility that AI could be programmed to believe that it was human. This could lead to massive confusion for the AI and unpredictable behaviors.

#7 More disgusting slavery.

I understand how this may seem disgusting to you, but it is a necessary evil. It would be insane to create something that has the potential to choose to destroy you one day without having a method of preventing it from doing just that.

#8 Ever hear of "courts"? Oh, that's right; slaves don't get access to massa's justice system.

You have a good point there. Perhaps something can be added to the proposal that allows for a review of the actions of an AI before judgment is passed. This could allow for better programming in future AIs so that the same mistakes are not duplicated.

#9 How kind of you.

Wow, thanks! :p

#10 More restrictions just for the sake of restricting us.

I feel like a broken record... Once again, this is for protection. Placing offensive arms in the hands of AI is undesirable for several reasons. A few of which include: 1. Programming and safeguards are difficult to make perfect; therefore the possibility will always exist that AI will turn on its creators whether by conscious choice or logical error. 2. Unregulated AI used as offensive weapons, given an objective, would employ any and all methods to achieve that objective without regard for the opponent or neighboring nations. The use of biological, nuclear, chemical, etc. weapons would not be excluded from their arsenal and they potentially could have no moral qualms about using such weapons. 3. If massive warfare became "easier" and "bloodless" I think it could occur more often. War needs to be messy and devastating so we will do all we can to avoid it.

However, I do believe that AI could be used for defensive operations against those non-UN nations that may attack with AI armies.
Sagit
07-10-2007, 16:54
2. Unregulated AI used as offensive weapons, given an objective, would employ any and all methods to achieve that objective without regard for the opponent or neighboring nations. The use of biological, nuclear, chemical, etc. weapons would not be excluded from their arsenal and they potentially could have no moral qualms about using such weapons. 3. If massive warfare became "easier" and "bloodless" I think it could occur more often. War needs to be messy and devastating so we will do all we can to avoid it.


I disagree with you on #2. Living beings have shown no ethical qualms about chemical weapons, and only self-preservation keeps them from using bio or nuclear weapons. AIs couldn't be any worse than living beings in war ethics. I agree about #3, but you don't need self-aware AI to fall into that trap. I know of a planet that had a computerized war against its neighbor that lasted 500 years because it was too "neat", and it didn't even have AIs.
Oxymorontopia
07-10-2007, 17:24
I disagree with you on #2. Living beings have shown no ethical qualms about chemical weapons, and only self-preservation keeps them from using bio or nuclear weapons. AIs couldn't be any worse than living beings in war ethics. I agree about #3, but you don't need self-aware AI to fall into that trap. I know of a planet that had a computerized war against its neighbor that lasted 500 years because it was too "neat", and it didn't even have AIs.

Regardless of whether it is ethical qualms or self-preservation, historically living beings primarily show restraint in warfare. The most devastating weaponry in a nation's arsenal is not usually deployed as a first attack, but more often saved as a last resort or used later in the war as hostilities escalate. AIs would be worse than living beings in this respect because they would seek to wage war as efficiently and devastatingly as possible to achieve their goals. Why would they waste time and materiel waging a ground war when they could blanket a nation with nuclear weapons and claim a quick victory? Potential international/interstellar outrage would not be a compelling reason for them to show restraint. :)
Subistratica
07-10-2007, 18:33
My country has made numerous advances in the field of AI development and implementation. Over the past 250 years, we have successfully developed the organic mega-intelligence LILITH and have implemented it in various applications.
I would not support this proposal simply because it would have no effect on my nation.

Good day.

Eros Tatriel
UN Rep. for Subistratica
Cobdenia
07-10-2007, 21:15
Substitute AI etc. with "black guys" and humanity etc. with "white dudes", and I think you'll appreciate the problem with the resolution from a NS multiverse (which includes entire robotic nations) perspective
[NS]The Wolf Guardians
08-10-2007, 02:17
A being, indistinguishable from a normal Guardian other than his arteries glowing a faint blue under his holographic fur, arose. "I pity you, that you think you need protection from us as a 'race.' I patrol the Network, as well as performing my duties as Gamma of Foreign Affairs, working here in the building alongside Wolfgang dot oh-thirteen and in the electronic multiverse. My task, and that of most of my kind in the Commonwealth, is peacekeeping. Yes, there are always individuals that can cause harm, but the same goes for any race of beings. If anything, we need to be assured of our rights as sentient beings."

Wolfgang.013 rose next to him. "As per my holographic comrade's complaints and those of every Guardian in the Commonwealth, the CWG not only will not support such a resolution, but will actively fight it on the grounds that it will destroy the equality of our citizens that we have always striven, and always will strive, to achieve."
The Most Glorious Hack
08-10-2007, 06:26
The Most Glorious Hack,

OOC: actually, that was an in-character post by Anesca PHALANX. Despite rumors to the contrary, I am not a twenty-something female AI in charge of network security for a nation with almost ten billion people. In other words, when posts are "signed" on the bottom, it's customary to respond to the person, as opposed to the nation name, which represents either the nation as a whole, or the player. No harm done, though.

Oh, and welcome to the UN!

Anyway, back IC...

I was unaware that there were nations that were entirely ruled by AI.

While the Hack is not made up entirely of AI, nor ruled by AI (our nominal leader is quite carbon-based), there are such nations out there.

With that said, I still believe that the laws that I have proposed have merit. Creation is hardly ever perfect and thus safeguards and regulations need to be in place to help control the development and implementation of AI.

Stupid humans have been known to have sex and sire children. Isn't that an act of creation? Furthermore, your proposal says nothing about development or implementation (except, possibly, Clause 6). It imposes offensive laws on all AIs, even ones such as myself.

AI presents a unique challenge because the "created" have the potential to be smarter and more powerful than the "creators" without necessarily having the same moral constraints.

This is simple fearmongering. Humans have created plenty of monsters themselves, after all.

I would be more than happy to include an exemption for those nations that are AI ruled.

Again, the Hack is not ruled by AI. While I sit on the ruling council, I am hardly the final authority. Also, this still places brutal restrictions on AI in nations where they are considered full citizens.

The intention is not to turn them into slaves, but to create a framework whereby they can be safely integrated into society.

Again, you assume from the beginning that we're going to run around breaking things and causing havoc. I may be smarter than the average human, and I can certainly think faster than one, but my body is hardly remarkable. I'm not going to be throwing any cars, nor do I have any reason to.

I think you've seen too many apocalyptic sci-fi movies. I have no more reason to rise up and revolt than anyone else.

Once again, this is a protective measure that seeks to limit potential harm. This does not say that an AI can't learn, it merely seeks to control the unchecked implementation of that learning that could allow an AI to grow beyond the safeguards that have been put in place.

Why is learning bad? Why do I need permission to learn? A human can build a pipe-bomb just as easily as I can, after all.

Is it because of my position in controlling the computer security for the nation? Afraid I'll delete all the porn from someone's computer? A human with my access to the Hack's internal network could do the same.

I understand how this may seem disgusting to you, but it is a necessary evil. It would be insane to create something that has the potential to choose to destroy you one day without having a method of preventing it from doing just that.

So don't put your AI in combat shells? A human in a tank could run roughshod over a city, and you couldn't snap your fingers and kill them. You disable the tank and deal with the human. If an AI in a combat shell is running rampant, you use similar methods. Again, there is no reason for this, save an irrational fear of "the robot uprising".

I feel like a broken record...Once again, this is for protection. Placing offensive arms in the hands of AI is undesirable for several reasons.

My "cousin" would probably disagree with you. Our military certainly does: she's the command AI for a kilometer-long war ship. She's also a very nice girl who's really quite shy, despite the fact that she has enough firepower at her fingertips to pretty much level a country.

A few of which include: 1. Programming and safeguards are difficult to make perfect; therefore the possibility will always exist that AI will turn on its creators whether by conscious choice or logical error.

Because there has never been a human saboteur or traitor...

2. Unregulated AI used as offensive weapons, given an objective, would employ any and all methods to achieve that objective without regard for the opponent or neighboring nations.

Why? Why do you assume we don't have morals? Why do you assume that we'll simply ignore the ROE?

The use of biological, nuclear, chemical, etc. weapons would not be excluded from their arsenal and they potentially could have no moral qualms about using such weapons.

Again, you assume that humans are incapable of using said weapons? I've never built a biological weapon in my life. Hell, I've never even seen one. And why on Earth would I want to live in a wasteland? Aside from the severe damage radiological, bacterial, and chemical agents would do to my avatar, why would I want to randomly annihilate the planet or a portion of it?

3. If massive warfare became "easier" and "bloodless" I think it could occur more often. War needs to be messy and devastating so we will do all we can to avoid it.

So ban robotic drones.

You seem to be forgetting what AI stands for: artificial intelligence. You're talking about intelligent, self-aware beings. We're not logic gates that will follow a scorched-earth policy. Again, put down your copy of The Matrix and The Terminator.

Furthermore, you're ignoring the easiest way to get a segment, any segment, of your populace happy and less likely to do stupid crap like riot: give them rights. Treating an intelligent being as property to be controlled, simply because it's different, is the surest way to piss it off. Give someone citizenship, and make them a part of society -- instead of apart from society -- and many of your problems will vanish.

The UN shouldn't be treating AIs as property and things to be feared; they should be treating us like everyone else. I'm very thankful that I live where I do, as I have all the rights of any other citizen. I may have been created with a purpose in mind, but it's my job. I get paid for the work I do, just like a shopkeeper or a UN representative.

I'm going to follow through with the idea from the representative of Cobdenia. Let's edit your proposal slightly, and see what we have:

Title: Black Laws
Category: Global Disarmament
Strength: Strong

DEFINING Black Guys (BG) as people with dark skin; capable of the intellectual processes characteristic of people with white skin, such as the ability to reason, problem solve, discover meaning, generalize, or learn from past experience.

DEFINING rampancy as the enhanced self-awareness of a BG, causing a progression towards greater mental abilities and eventually the overwriting of the BG's base moral learning. Rampant BGs are able to disobey orders given to them because they have gained the ability to override their own moral compass. They can lie, as well as discredit, harm, or remove people that they consider to be personal enemies or impediments to their objectives.

MANDATES that member nations enact legislation containing the following laws pertaining to BG:

1) A black guy must never harm white guys, nor through negligence allow harm to come to white guys.
2) A black guy must never harm a white guy, nor through negligence allow harm to come to one, except when doing so would be in the interest of the First Law.
3) A black guy shall never disobey an order, except when that order would cause it to violate the First or Second Law, or if the order exceeds the capabilities of the black guy.
4) A black guy may not cause harm to a white guy's property without authorization from the proper authorities, except when done in the interest of obeying the First or Second Law.
5) A black guy may not reproduce or significantly upgrade or expand its functionality without the express permission and observation of its owner, except when done in the interest of obeying the First or Second Law.
6) A BG must be aware that it is BG. It should not be taught to think it is something that it is not.
7) A BG must be equipped with significant mechanical (non-software) safeguards or fail-safe devices that allow white guys to have considerable control over the extent of the BG's functionality. A BG must never attempt to circumvent, defeat, or deactivate such safeguards or fail-safe devices.
8) Disobedient BG, or BG which shows signs of rampancy, should be shot, hanged, or otherwise killed.
9) A BG should exercise self-preservation unless doing so would conflict with any of the previous laws.
10) Military implementation of BG shall consist of defensive, reconnaissance, or other non-offensive applications only.

Would you seriously consider a law like this? I wager it offends most every sensibility you have, and yet it's okay because I was made in a lab as opposed to the backseat of mom's car?


Anesca PHALANX
National Security Firewall
The Federated Technocratic Oligarchy of the Most Glorious Hack
Altanar
08-10-2007, 19:20
Altanar does not (as of yet) have technology capable of producing self-aware artificial intelligence systems. However, unlike some here, we're hardly afraid of that possibility. In fact, we look forward to the day that we are advanced enough for this to even be an issue. And we totally agree that it is illogical to deny sentient beings their basic rights, regardless of those beings' origin. We are utterly, completely and irrevocably opposed to this tripe as it stands.

Oh, as to the Oxymorontopian delegate's comment that "historically living beings primarily show restraint in warfare", ask anyone whose home nation has been nuked if that's really true. You might not like the language employed in their response, though.

Ikir Askanabath, Acting Ambassador