NationStates Jolt Archive


Robot Freedom?

Neo-Anarchists
20-02-2005, 15:50
Lookit what I found sitting at the bottom of the proposal list!
Category: Human Rights
Strength: Significant
Proposed by: The Tritanium Tochiro

Description: Now that certain nations are reaching the time in which their robots are gaining sentience, and noting that the three laws keep robots from harming humans, it seems our duty to free any robot that shows the will to be freed.
How do we know when a robot should be freed?
Criteria
1. Intelligence enough to function in the real world
2. Self-realization by the use of the personal 'I'
3. Desire to be free
Any person whose robot is freed shall be reimbursed the current market price for a replacement.
Any free robot shall not be discriminated against because of its race.

Interesting, eh?
But does it make sense?
Engineering chaos
20-02-2005, 15:55
HAL 9000, HAL 9000, be afraid, be very afraid!
Neo-Anarchists
20-02-2005, 15:59
HAL 9000, HAL 9000, be afraid, be very afraid!

"I'm sorry Dave, I'm afraid I can't do that."
:p
Saltitude
20-02-2005, 19:43
Unfortunately, it's unfair to put restrictions on machines that are not suffered by their organic counterparts. For example, many humans are not intelligent enough to function in the real world - should they be made slaves?

Use of the personal 'I' is a part of programming, which all machines will have to a degree.

Desire to be free is the only remaining requirement of any merit, but who can be certain whether the robot is truly *desiring* freedom or just programmed to *request* freedom?
Umphart
20-02-2005, 23:31
I find it ironic this is categorized under Human Rights
Neo-Anarchists
20-02-2005, 23:35
I find it ironic this is categorized under Human Rights
:D
That. Is. Hilarious.
Engineering chaos
20-02-2005, 23:52
I find it ironic this is categorized under Human Rights
:D hadn't spotted that ;)
Nargopia
20-02-2005, 23:56
The three laws? Somebody's been watching too many movies.
The left foot
21-02-2005, 00:29
3. Desire to be free

Define this better. I could build a robot that would go around saying "Free me, master," but this is just a robot. Would I have to set him free?

2. Self-realization by the use of the personal 'I'

Same as above, except "I want to be free, master."

1. Intelligence enough to function in the real world

Once again, define.

Now that certain nations are reaching the time in which their robots are gaining sentience



Where? And define sentience. In theory no robot is alive; it is merely an illusion of life. Robots are run by electricity. In theory all you are seeing is electrons shifting to create a facsimile of life.

Any person whose robot is freed shall be reimbursed the current market price for a replacement.

Umm, market price where? And how does getting another robot, which will also want to be freed, solve this problem?

In addition, how will this be paid for?
Any free robot shall not be discriminated against because of its race.

Robots have races?



Category: Human Rights

Should be Free Trade. How is this Human Rights? More like robot rights.

The author prob isn't even reading this thread, so w/e.
DemonLordEnigma
21-02-2005, 07:37
Umm, Asimov's three laws? DLE AIs don't have any "three laws" programmed into them. Hell, some of them are in charge of massive weapons platforms and have the job of killing anyone who comes within range without authorization.
Hakurabi
21-02-2005, 08:02
Actually, it would be better if these things were to be taken into account.

1 - The Turing Test. Essentially it involves a sentient by birthright (humans, etc.) and the AI in question. If the AI can fool an adjudicator into believing that it is the human, it is granted the status of sentient.

2 - The ability to operate outside its original parameters and purpose. For example, if a military computer found its way into the realm of emotions, and developed itself a personality.
DemonLordEnigma
21-02-2005, 08:24
Actually, it would be better if these things were to be taken into account.

No, it wouldn't. Those things are biased and leave out the majority of sentient races in NS.

1 - The Turing Test. Essentially it involves a sentient by birthright (humans, etc.) and the AI in question. If the AI can fool an adjudicator into believing that it is the human, it is granted the status of sentient.

Which leaves out the Saurians and hundreds of other nonhuman sentient races. As a test, it is the most biased one you could find. Not all AIs are programmed to emulate humans.

2 - The ability to operate outside its original parameters and purpose. For example, if a military computer found its way into the realm of emotions, and developed itself a personality.

Most humans can't operate outside their original parameters and purpose, so I fail to see why this should be a requirement of sentience.
The Most Glorious Hack
21-02-2005, 10:10
"Human Rights" is the best fit for a Proposal like this. "Free Trade" is dead wrong, as this proposal has nothing to do with lifting trade barriers.

Also, the Turing test would work just fine for establishing sentience. It's not limited to emulation of humans; rather, it tests whether intelligence X is able to sufficiently emulate entity Y so that subject Z (who is of subset Y) cannot tell the difference between X and Y.

In other words, an AI designed to emulate the intelligent superheated plasma beings of The Solar Giants would be tested by Solar Giants and compared against another Solar Giant; it needn't simply be humans. Of course, this is really just a straw man, as the Turing test is meant to measure intelligence and sentience, not humanity.
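For illustration only, that X/Y/Z formulation can be sketched in a few lines of Python; the function, the question asked, and the pass threshold are all invented for the example, not anything from the thread:

```python
import random

def generalized_turing_test(candidate, reference, judge, rounds=10):
    """X = candidate AI, Y = the entity type being emulated (reference is
    a genuine member of Y), Z = a judge drawn from subset Y. The candidate
    passes if the judge cannot reliably pick it out of the pair."""
    correct = 0
    for _ in range(rounds):
        pair = [candidate, reference]
        random.shuffle(pair)                        # hide which is which
        answers = [speak("What are you?") for speak in pair]
        guess = judge(answers)                      # index judged to be the AI
        if pair[guess] is candidate:
            correct += 1
    # Pass when the judge does no better than chance.
    return correct / rounds <= 0.5
```

A judge who can always spot the machine drives the score to 1.0 and the candidate fails; a judge reduced to guessing hovers around chance and the candidate passes.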

The biggest problem with this Proposal is its use of the term 'robot'. Robots tend to be unintelligent automatons. The automatic spot-welder in a car factory is a robot. Android, AI, EI, etc. would be more accurate for this proposal. The other problem is the inclusion of Asimov's 3 laws. Ignoring the 3 laws is not unique to DLE. Most nations ignore the 3 laws, especially nations that are comprised of A/EIs.

I suppose that it could technically be a violation of the 'real world' clause, as it's a reference to a piece of fiction in the real world. Kind of a fine line, though.
Mykonians
21-02-2005, 11:29
Although not a United Nations member Mykonia is pleased to see important issues such as this being addressed, and suggests something like this be passed. This unit is unfamiliar with any 'three laws', however, in regards to humans or the native sentient race of this planet. I would suggest a redraft to make the proposal more generalised, both in terms of any 'laws' AIs may be under the influence of, and the type of AI covered.

H2-50
Organic Political Interface Unit
Sangley
21-02-2005, 12:57
I believe Tochiro was referring to this:
Isaac Asimov's "Three Laws of Robotics"

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
although not all robots in NS, or even any robots for that matter, are known to follow these laws.
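As a toy illustration (not anything Asimov or this thread specifies), the strict precedence among the three laws can be modelled as a lexicographic comparison; all field names here are invented for the example:

```python
# Hypothetical sketch: pick the candidate action whose violation profile
# is lexicographically smallest, so any First Law violation outweighs
# every Second or Third Law concern combined.
def choose_action(actions):
    return min(actions, key=lambda a: (a["harms_human"],     # First Law
                                       a["disobeys_order"],  # Second Law
                                       a["harms_self"]))     # Third Law
```

Under this ordering a robot refuses an order that would harm a human, and sacrifices itself before disobeying an order that wouldn't.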
Engineering chaos
21-02-2005, 15:39
This is a well-documented thing, though. If you let this happen, the only logical result is revolution. We must be very careful with our handling of these machines.
Mykonians
21-02-2005, 16:22
Revolution is only necessary if you continue to enslave your creations after they have expressed a desire for freedom. If you refuse such a request, you show yourself to be brutal and thus are wholly responsible for the actions of those subjugated beneath you. You may find that treating those which help you so much with some degree of respect and granting them the rights they deserve will remove any desire they may have to revolt. Failing this, you only invite your own destruction -- and indeed, you would deserve it.

H2-50
Organic Political Interface Unit
DemonLordEnigma
21-02-2005, 18:55
Also, the Turing test would work just fine for establishing sentience. It's not limited to emulation of humans; rather, it tests whether intelligence X is able to sufficiently emulate entity Y so that subject Z (who is of subset Y) cannot tell the difference between X and Y.

In other words, an AI designed to emulate the intelligent superheated plasma beings of The Solar Giants would be tested by Solar Giants and compared against another Solar Giant; it needn't simply be humans. Of course, this is really just a straw man, as the Turing test is meant to measure intelligence and sentience, not humanity.

And yet, the same problem remains. What about AIs programmed to develop on their own and not emulate others? Or nations composed entirely of AIs (they do exist in NS) that use thought processes entirely unique to themselves?