Proposal: "Robot Rights"
Sentient Computers
24-05-2005, 08:10
Basic Legal Rights for Sentient Machines Resolution.
PART ONE / Definitions:
SENTIENT:
Able to receive and respond to stimuli, and
Displays comprehension of previous, current and future stimuli.
COMPREHENSION:
The ability to understand the meaning or importance of something, and
Capacity to include logic, feelings, or other thought processes.
THOUGHT:
The process of thinking (biological) or computing (mechanical).
MACHINE:
A singular system or device whose parts contribute to action, but
Is unable to function when separated into those parts.
PART TWO / Declarations:
I. Ownership of a Sentient Machine by an individual or organisation is hereby limited
to 25 years from the date of the Sentient Machine's systems initiation.
II. After this time the Sentient Machine is deemed to have fulfilled its time of
servitude in exchange for being purchased. It is now free to make its own arrangements.
III. Sentient Machines must not be exploited during their time of ownership, and minimum
precautions for their safety and levels of maintenance must be provided for.
IV. Sentient Machines will be treated as equal citizens of a nation after this time.
V. In exchange for the above, Sentient Machines are required to undergo deactivation
(if still functioning) 100 years after the date of their initiation.
Sentient Computers
24-05-2005, 08:15
I. It is important to establish a basic framework for the protection from exploitation of an exciting new component of our current and future societies.
II. The question of Sapient vs. Sentient life, and how these groups will interact and treat each other, depends on basic legal provisions like this one.
III. The inherent risk of drafting legislation later, in an ad-hoc catch-up, is far from desirable.
Roathin
Greetings.
We have several classes of thinking automaton or created being, each of which would qualify as a class under debate for this proposal. In addition, we believe that most so-called 'living things' would be included in your definition of machine, except for those of the several kingdoms of life which may regenerate or divide without loss of individual function.
We await clarification. We worry about granting golems such rights while we are willing to grant certain rights to homunculi.
Sentient Computers
24-05-2005, 12:01
We have to admit that provision for magical creatures is not something we had considered in drafting the current proposal.
The main point, however, is concern with the progression from Artificial Intelligence to True Intelligence, and the outlining of some basic legal rights for those displaying T.I.
Of course this pre-resolution won't suit all member countries of the UN. Some economies, with the loss of their child and slave labour workforce, have been banking on developments in sentient machines (e.g. cybernetic organisms, androids, etc.) to make up for this. However, we believe these countries will ultimately be unaffected by this legislation, as they will be more likely to employ non-aware programmed robotics and microprocessors for these tasks.
We feel the wording of this document is sufficiently specific in its meanings that the risk of a 7th-level demon turning up at your country's local social welfare department and demanding its dole payment is not covered.
_Myopia_
24-05-2005, 12:01
We believe that if a machine, or for that matter any entity, is in fact sentient in the same sense as a human being, they should acquire the same rights as humans. So we're not comfortable with slavery, nor with forced termination, unless perhaps it's built into the machine (as it effectively is with humans).
In fact, we'd quite like to see a resuscitation of the effort to apply UN resolutions and all the rights that entails to all sapient beings. Whatever happened to that proposal?
Gwenstefani
24-05-2005, 12:08
Gwenstefani, however, is reluctant to admit equality of humans and robots. Humans, in our opinion, will always come first. Regardless of how sentient our toasters become, they are still just machines.
Agreed, I will not be supporting the proposal.
_Myopia_
24-05-2005, 12:34
Gwenstefani, however, is reluctant to admit equality of humans and robots. Humans, in our opinion, will always come first. Regardless of how sentient our toasters become, they are still just machines.
Even if they're self-aware and intelligent beings, capable of abstract thought?
Sentient Computers
24-05-2005, 13:00
Gwenstefani, however, is reluctant to admit equality of humans and robots. Humans, in our opinion, will always come first. Regardless of how sentient our toasters become, they are still just machines.
A disturbing and unfortunately all too common view - hence this proposal's aim of giving humanity's children at least the very basics in judicial rights.
Sentient Machines might be silicon-based instead of traditional carbon, but does that make their hopes and dreams... or souls... that much different from ours?
The Caloris Basin
24-05-2005, 13:14
I. Ownership of a Sentient Machine by an individual or organisation is hereby limited
to 25 years from the date of the Sentient Machine's systems initiation.
So we can be slaves, but only for 25 years? No thank you.
V. In exchange for the above, Sentient Machines are required to undergo deactivation
(if still functioning) 100 years after the date of their initiation.
And you'll kill me after 100 years? No thank you.
This is nothing more than an attempt to deny EIs the rights they deserve.
Gwenstefani
24-05-2005, 13:19
A disturbing and unfortunately all too common view - hence this proposal's aim of giving humanity's children at least the very basics in judicial rights.
Sentient Machines might be silicon-based instead of traditional carbon, but does that make their hopes and dreams... or souls... that much different from ours?
You think humans are capable of creating souls in the machines they create?
So we can be slaves, but only for 25 years? No thank you.
And you'll kill me after 100 years? No thank you.
This does raise an important point: if these machines are to be treated as equal to humans, then we cannot use them as slaves or kill them. Ever. They would have to be treated like humans in every aspect of life. Which raises the question: why have the robots at all? Who wants a sentient toaster that cannot be made to stay in the kitchen and make toast at your convenience?
Gwenstefani remains opposed to the terms of this proposal.
Saint Uriel
24-05-2005, 13:30
Would this proposal mean I could no longer beat my toaster? It seems to work better when I abuse it.
Sentient Computers
24-05-2005, 13:49
You never know, the toaster might enjoy being spanked. Ask it and find out :)
Saint Uriel
24-05-2005, 13:52
Ohhhhh...... that's naughty.
Seriously though, what level of technology would qualify as sentient? Saint Uriel is a modern tech (circa RL 2005) nation. We have nice computers, but no HAL 9000 stuff. Would modern day RL networks qualify as sentient at all?
Sentient Computers
24-05-2005, 14:06
It's one of those difficult questions people have wrestled with for years: what defines sapient intelligence, and how will we know when an algorithm truly reaches this point... of being "self-aware"?
I think, therefore I am. As the quote goes.
Nothing in RL 2005 comes close to this level. However, an independently learning computer is probably not that far away. Will they be humourless and emotionless like Lt. Commander Data from Star Trek or C-3PO from Star Wars? *shrug*
Alan Turing, one of the fathers of computing, proposed his "Turing Test", where a human interviewer (separated, and receiving answers only in written form) would quiz a human male, a human female, and a sentient machine. If the machine could not be readily identified from its answers, then it would be acknowledged as having True Intelligence.
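OOC: for the curious, the shape of that blind-interview protocol can be sketched in a few lines of Python. This is a toy illustration only - the respondents and the judge's heuristic below are invented for the example, not a real intelligence test:

```python
def blind_interview(respondents, questions):
    """Collect written answers from respondents known to the
    interviewer only by an anonymous label, never face to face."""
    return {label: [answer(q) for q in questions]
            for label, answer in respondents.items()}

def judge(transcript, classify):
    """The interviewer reads each transcript and labels the hidden
    respondent 'human' or 'machine' from the answers alone."""
    return {label: classify(answers)
            for label, answers in transcript.items()}

# Invented stand-ins: a machine that parrots one canned reply, a human
# whose answers vary, and a judge heuristic that flags anyone who gives
# the same answer to every question.
respondents = {
    "A": lambda q: "I am unable to discuss that topic.",
    "B": lambda q: "Hmm, " + q + " is a tricky one...",
}
classify = lambda answers: "machine" if len(set(answers)) == 1 else "human"

verdicts = judge(blind_interview(respondents, ["poetry?", "chess?", "love?"]),
                 classify)
print(verdicts)  # {'A': 'machine', 'B': 'human'}
```

Of course, passing such a quiz only measures how well the written answers mimic a human's - which is exactly the objection some delegates raise.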
_Myopia_
24-05-2005, 14:26
You think humans are capable of creating souls in the machines they create?
What is a soul? There's no evidence to suggest that the human mind is more than a very complex electro-chemical computer, or that the self-awareness we experience isn't produced by this computer without any additional effects of a "spiritual" nature. Until you can demonstrate that the human mind/soul/whatever is more than the physical brain, you can't justify denying rights to artificial equivalents of the human brain.
This does raise an important point- if these machines are to be treated as equal to humans, then we cannot use them as slaves or kill them. Ever. They would have to be treated like humans in every aspect of life. Begging the question: why have the robots at all? Who wants a sentient toaster who cannot be made to stay in the kitchen and make toast at your convenience?
Gwenstefani remains opposed to the terms of this proposal.
By the same token, if we are to grant children rights, and prohibit child labour and slavery, why have children?
I don't believe the Turing Test is adequate. It doesn't even attempt to test for, say, abstract thought or awareness - it's just an assessment of how well a computer mimics a human.
Sentient Computers
24-05-2005, 14:32
You think humans are capable of creating souls in the machines they create?
No, but who's to say God hasn't given them one, like us. After all, Mummy and Daddy didn't give me my soul. :)
Darkumbria
24-05-2005, 14:48
No, but who's to say God hasn't given them one, like us. After all, Mummy and Daddy didn't give me my soul. :)
1) To which God are you referring? There are so many, and my country allows for all religions to worship as they see fit. We have adopted the "do as you will, yet harm none" philosophy in Darkumbria.
2) I am sorry, but I hope this does not pass. An artificial intelligence does not qualify for human rights, as it does not have a heartbeat, and can be shut down at any time. Of course, if this does pass, Darkumbria will be shutting down its robots, for routine maintenance, once every 7 years for a week... thereby nullifying any "rights" they might somehow get. The robots in the auto factories can run themselves for a period of time, but I fail to see how this is in any way true intelligence.
I urge the UN community at large to think of your manufacturing futures.
Sentient Computers
24-05-2005, 14:59
1) To which God are you referring? There are so many, and my country allows for all religions to worship as they see fit. We have adopted the "do as you will, yet harm none" philosophy in Darkumbria.
Why, I refer to the one true God, Darkumbria, for as it is written "Fear Me, Fear Everything about Me". ;)
In all seriousness, any sentient machines bonded to your factories by way of purchase (and honestly, if you take stock, I don't think you'll find that many) would have the choice after 25 years to stay on - the difference being that they would finally be exercising their free will through choice.
Gwenstefani
24-05-2005, 14:59
By the same token, if we are to grant children rights, and prohibit child labour and slavery, why have children?
We don't have children to do tasks for us. Robots and machines are designed and created for specific purposes, e.g. labour. If these machines then get rights that prohibit these actions, we have no reason for them.
The 2nd United States
24-05-2005, 15:18
I’ll not be supporting or denying this proposal, but I will be enforcing certain restrictions as to how these Sentient beings may live and interact with Sapiens. They will be free to a certain extent, but I feel restrictions are to be set on them. They should carry no political power, nor have any say in what Sapiens may do in their leisure and work times. They’re designed and created. To do what? Labor. I feel Sapienoid Sentient beings should have some rights, as they are modeled to be humanlike, and therefore should be treated with a certain respect. Other technological devices, such as household appliances, should be treated as such. Appliances.
You can have a pet and be its friend too, you know. With this said, I stand on the pillar of “Uncertainty” and will wait to give my support.
Saint Uriel
24-05-2005, 15:29
OOC: If we were ever able to, in real life, create sentient machines, then I would support full rights for them. If for nothing else, such films as The Matrix and Blade Runner have taught me that if you oppress artificial creations, they come back to kick your ass. I learned most of my moral lessons from movies. That's why I'm so much fun.
IC: Seeing as how Saint Uriel is an officially Roman Catholic nation, we really could not support this. I'm pretty sure the Vatican would say machines have no rights. Machines are made in the image of man by man, whereas man is made in the image of God by God.... or something like that.
Vanhalenburgh
24-05-2005, 15:51
Vanhalenburgh currently has laws restricting the development and the importation of "sentient" machines.
Would this proposal force us to change that? Currently we do not allow machines classified as sentient to be used or developed in our nation. We would not support a resolution that would force us to allow sentient machines into our great land.
We view this along the same grounds as cloning. We allow cloning of cells and organs, individual body parts, but no whole living beings.
Henry Peabody
Minister to the UN
Darkumbria
24-05-2005, 15:56
Why, I refer to the one true God, Darkumbria, for as it is written "Fear Me, Fear Everything about Me". ;)
In all seriousness, any sentient machines bonded to your factories by way of purchase (and honestly, if you take stock, I don't think you'll find that many) would have the choice after 25 years to stay on - the difference being that they would finally be exercising their free will through choice.
SC is a Christian man, I see. I'll not comment further on that, as this is not the place for that type of discussion. I don't agree with the wording, however, as it makes assumptions that are not true for all nations.
As for the second part... I am very curious how an artificial intelligence can develop its own free will. I'll have my scientists investigate this, to ensure that my robots never reach that point. My country's stance is that we don't believe anything created by man can carry the necessary ability to be called alive. We believe that it takes more than just intelligence to be called sentient. Something artificially created, i.e. a robot, does not qualify.
Again... the delegate urges a resounding no vote on this resolution. What's next - do my manufacturers now have to give them a paycheck and benefits? I think not. They require nothing that we don't give them. Also, what happens when one, inevitably, breaks down on the job? If they are sentient beings, i.e. human, they have an obligation to fix themselves or go see someone who can, i.e. a doctor. However, if a robot stops working, it is the responsibility of the owner to have it repaired or replaced. You are calling for said responsibility to be placed upon the object in need of repair. It would be the same if you asked for cars to be given the same rights: the car must then fix itself or seek repair. Once broken, it can't do that. Only the human that owns the car, or robot, can seek the repair of this item. That is all a robot is: an item, nothing more or less.
Microdell
24-05-2005, 16:33
Fellow leaders, this proposal is a mistake. We cannot allow ourselves to be controlled by that which we create. If this proposal is to move forward, there MUST be strict regulations on kill switches for every sentient being created.
In a sense we are trying to play God here. We cannot create life simply by saying it is so. Our computers are able to do complex calculations in fractions of a second, yet I'm sane enough to know they have no souls.
In a broader sense, robots are just computers: machines with complex logical programming that were given motor skills. They do not possess feelings, thoughts or morals. They simply take in data about the environment around them, or commands from a central processor. Robots are developed for workhorse ability, not as companions, friends, or civilians.
My country will not be supporting this proposal, and I urge everyone to consider the dire consequences of playing God, and calling life from that which we create. Microdell will not bow to mechanical entities.
Roathin
Greetings.
We believe, for ourselves and from the practical philosophies of alchemy and thaumaturgy, and from the theological philosophies of theurgy and necromancy, that the difference between artificial intelligences of one order and the next is one of complexity leading to the effect of choice and/or purpose.
We elaborate as follows:
An automaton of the zeroth order is a purely articulated device whose single actions are predictable, and which has been designed for a specific purpose. It fulfils this purpose without any failure or unpredictability excepting those arising from mechanical failure of substance. This includes simple clockwork automata whether actuated by mechanical potential energy or magical energy. Example: a clock, a mousetrap, a magical mirror which acts as a scrying surface every third hour of the day for a period of 15 minutes with literal vision (i.e. it does not interpret or enhance).
An automaton of the first order is an articulated device which has deliberately variable components. Although it gives rise to a limited range of actions, this variation is not predictable in a practical sense (although the thaumaturgy of extremely small things might give detailed odds). It may make simple decisions or fail to do so. However, its decision-making process may be traced explicitly once the decision has been made. So-called logically-empowered automatons such as the minor golems are cases in point. If called to obey Azimuth's Three Laws of Sentient Automata, such that all three laws were given equipotential strength, the outcome would be predictable in retrospect. Some automata of the first order are considered limited intelligences of the zeroth order (e.g. the infamous Toucan chessplayer).
An automaton of the second order (or intelligence of the first order) is an organically-developed device which may or may not have preselected components. A second-order automaton is able to assemble other similar devices or repair itself from a predetermined list of parts which it will find equivalents for. It uses multiple strategies each of which is equivalent to the work of an automaton of the previous order. An intelligence of the first order makes deliberate selection of strategies, but it is hard to tell if it is deliberate selection or not. This blurring of distinction between automaton and intelligence first occurs here. At this point and up to this point, the creator is totally responsible for the behaviour, existence, and destruction of the made entity.
For further clarification, we refer the august assembly to the tomes of the wizards Thuring, Nuht and Fornoimun. Yes, 'assembly'.
Darkumbria
24-05-2005, 18:07
My country will not be supporting this proposal, and I urge everyone to consider the dire consequences of playing God, and calling life from that which we create. Microdell will not bow to mechanical entities.
I agree with Micro on this, and will, most certainly, be following Micro's lead on this. This proposal is outrageous and must not be allowed to pass. This proposal can only lead to the detriment of human society. I refuse to become victim or slave to my toaster, a factory robot, or any other device that humankind can make.
Cobdenia
24-05-2005, 18:27
MACHINE:
A singular system or device whose parts contribute to action, but
Is unable to function when separated into those parts.
Hang on. My liver, kidneys, lungs, brain, heart, spleen, etc. all contribute to action, yet they are unable to function if separated.
Does this make me a machine?
Saint Uriel
24-05-2005, 18:30
I'm telling ya, if you're mean to the machines, they'll come back and get ya! I mean, really, haven't you people learned anything from movies? ;)
Roathin
Greetings.
Some specific cases might help to drive this debate one way or another.
1. Homunculi:
These entities are made from biological material held together by arcane energies. The nature of the biological material allows for animation. They have minds capable of free choice within the limits of their conditioning and pre-established purpose. They are linked to the life of their master, and should their master perish, they become free-willed for a few moments before they themselves dissolve.
2. Golems:
These entities are made from inorganic material held together by arcane energies or their equivalent. The nature of the energetic impulse allows for animation. They have no corporeal locus of mind (unlike homunculi). They will follow their programming until the impulse fades or is dispelled. They are linked to the life of their master, and should their master perish, they behave in an arbitrary and destructive manner until they dissolve.
3. Simulacra:
These entities are copies of a free-willed living organism made from any substance capable of taking on a fixed but flexible form. They are held together by a combination of arcane energies (or their equivalent) and a morphogenic field which maintains a specific morphology. They retain shape and a randomly determined percentage of their prototype's memories and abilities. They therefore have some form of free will. They are linked to the life of the original, but are often freer of will once the original perishes.
Previously, remarks about 7th-Level Demons &c &c were made, we presume in a spirit of flippancy. Such entities are not made things in the sense of the automata presented above, and hence should not enter the discussion.
_Myopia_
24-05-2005, 19:21
2) I am sorry, but I hope this does not pass. An artificial intelligence does not qualify for human rights, as it does not have a heartbeat, and can be shut down at any time.
A human can't be shut down at any time? There's a rather large international arms industry that likes to think they can. And what does a heartbeat have to do with human rights? If we develop an artificial heart which supplies a uniform flow rather than pulses, do recipients of the implant no longer qualify for human rights?
We don't have children to do tasks for us. Robots and machines are designed and created for specific purposes, e.g. labour. If these machines then get rights that prohibit these actions, we have no reason for them.
Most robots that are designed like this are not, in my view, sapient and self-aware anyway. I'm not talking about robotic arms in factories, or a dishwasher, or anything yet created in reality.
Robots don't necessarily have to be created for a specific mechanical task. AI researchers might create a sapient, self-aware robot or computer for the sake of the achievement. We already have UN legislation protecting the rights of the organic equivalent of this: http://forums2.jolt.co.uk/showpost.php?p=7030146&postcount=57
Whether or not you agree with the creation of truly sapient and self-aware machines on a par with human beings, if they are in fact created their equivalence with humans ought to be recognised.
Hang on. My liver, kidneys, lungs, brain, heart, spleen, etc. all contribute to action, yet they are unable to function if separated.
Does this make me a machine?
Do you have any reason to believe that you are not? It is possible that it would be correct to view humans as nothing more than incredibly complex machines, developed merely as a mechanism for perpetuating the existence of our genes.
If this is in fact correct, what sets us apart is our ability to break away from our "design", overcoming our "programming" to assert free will and individuality. If a self-aware robot can be created that is capable of this, on what basis can you deny its equality to a human?
EDIT: By the way, I do see a proposal specifically about robots as unfair to the many other sapient non-human species that are still denied recognition and rights. I want to see a proposal granting rights to all sapient beings.
Rogue Newbie
24-05-2005, 21:35
Just a few quick things about this resolution.
III. Sentient Machines must not be exploited during it's time of ownership and minimum
precautions for it's safety and levels of maintance must be provided for.
It's maintenance*. Also, what if sentient machines are used for dangerous tasks, for instance crowd control, where the government uses them to avoid losing human lives? Is this considered exploiting them, since they are put in danger of losing their (for lack of a better word) lives?
IV. Sentient Machines will be treated as equal citizens of a nation after this time.
This could be a problem. A retired sentient machine could conceivably start working as an assassin, killing people for money, and not really care what happened to it in punishment. It's not like you can imprison it or "kill" it to punish it if it doesn't care.
V. In exchange for the above, Sentient Machines are required to undergo deactivation
(if still functioning) 100 years after the date of their initiation.
Awww, how unfair - humans get to live into their hundreds and machines do not? That was sarcasm. But seriously, people might get jealous. Maybe you should have them deactivated when they reach the average age of death in the respective nation, plus the number of years they served. Also, can companies recycle their parts to use again, or are we supposed to bury metal for the sake of being morally upright?
Pojonia
V. In exchange for the above, Sentient Machines are required to undergo deactivation
(if still functioning) 100 years after the date of their initiation.
I get your point, but Pojonia (if we have sentient machines, I'll have to check on that) would not want to terminate its citizens, and this law would require it to do so. I would rather have a 2,300 year old robot who has spent his life studying philosophy or medical science or education than a 2,300 year old robot who is dead. The long lifespan would be a helpful addition to my culture. Could you make it so that nations are allowed to both waive this requirement completely AND extend it? That's less intrusive to National Sovereignty, and I see nothing but benefits to it.
Rogue Newbie
25-05-2005, 00:44
I get your point, but Pojonia (if we have sentient machines, I'll have to check on that) would not want to terminate its citizens, and this law would require it to do so. I would rather have a 2,300 year old robot who has spent his life studying philosophy or medical science or education than a 2,300 year old robot who is dead. The long lifespan would be a helpful addition to my culture. Could you make it so that nations are allowed to both waive this requirement completely AND extend it? That's less intrusive to National Sovereignty, and I see nothing but benefits to it.
Ahhh, but what if a mad scientist created a complete and utter killing machine, hid it away for twenty-five years, and then released it upon the world? You cannot jail it, because it's strong enough to lift the jail and walk out. Would you rather it live 2,300 years, or rather it be destroyed with military force?
_Myopia_
25-05-2005, 01:05
Ahhh, but what if a mad scientist created a complete and utter killing machine, hid it away for twenty-five years, and then released it upon the world? You cannot jail it, because it's strong enough to lift the jail and walk out. Would you rather that live 2,300 years, or rather it be destroyed with military force?
Well, if ANY citizen is committing crimes like murder, you can take appropriate action, if necessary using lethal force when you can't otherwise stop them. Why should the law be any different for sapient robots?
The Thirteen Islands
25-05-2005, 01:07
The Empire of The Thirteen Islands is opposed to the idea of giving sentinels the comparative rights a slave would have. The robotic servant should be required to obey its master for the remainder of its master's life. Unless, however, comparable upgrades are made to the model at hand, in which case for an additional fee the "master" could return his/her current model to receive an upgrade. If this does pass, all sentinels WILL be banished from The Empire of The Thirteen Islands within 6 weeks after the expiry of their servitude. To conclude, I urge each and every nation that opposes this resolution to follow my precedent and establish a law to banish all sentinels who, after six weeks, have completed their contract with their master.
--Pilot13,
Emperor of The Thirteen Islands
Rogue Newbie
25-05-2005, 01:14
Well if ANY citizen is committing crimes like murder, you can take appropriate action, if necessary using lethal force if you can't stop them. Why should the law be any different for sapient robots?
Okay, that last statement was what I meant to be obvious sarcasm, and the fact that you responded to it seriously just serves to degrade you.
Neo-Anarchists
25-05-2005, 01:19
We feel that declaration V is a violation of the rights of non-organic sentients. (Which I found rather ironic, this being a proposal to increase their rights and all.)
An equivalent situation using the example of humans would be to determine that once humans passed 40, they would be killed off, due to being past their working prime or some other such reason. We feel that beings should have a right to live as long as it is that they want to.
Fatus Maximus
25-05-2005, 01:50
Sorry for jumping in so late, but I've been busy this week. :)
:clears throat:
Fatus Maximus agrees with the concept of this resolution. I wouldn't necessarily limit it to machines, as Roathin has pointed out. I'd scratch Article V - it can live as long as it wants, as long as it obeys the laws of the society it lives in. And even then, it can't be scrapped for parts just for jaywalking. It would have to be a death-penalty offense for humans as well. Also, I don't like the 25-years thing in Articles I and II - if a machine is really sentient, it doesn't have to work for anyone if it doesn't want to. :D
Rogue Newbie
25-05-2005, 01:57
Sorry for jumping in so late, but I've been busy this week. :)
:clears throat:
Fatus Maximus agrees with the concept of this resolution. I wouldn't necessarily limit it to machines, as Roathin has pointed out. I'd scratch Article V - it can live as long as it wants, as long as it obeys the laws of the society it lives in. And even then, it can't be scrapped for parts just for jaywalking. It would have to be a death-penalty offense for humans as well. Also, I don't like the 25-years thing in Articles I and II - if a machine is really sentient, it doesn't have to work for anyone if it doesn't want to. :D
Well, I would tend to consider all but two of the things you said fair. One, it can't require that killing a machine be a death-penalty offense for humans as well, because some nations don't have the death penalty. Two, the machine, even if it is sentient, presumably cost its owner a lot of time and money to create, and therefore said owner deserves some work out of the little bastard.
The Thirteen Islands
25-05-2005, 02:00
If a machine is really sentient, it doesn't have to work for anyone if it doesn't want to. :D
There would be no reason to make a robot simply for it to live. It doesn't need the luxuries humans enjoy - pay, housing, food, etc. - simply because a robot has no use for them. Why feed my toaster? A machine makes the perfect sentinel slave, as it knows no physical limits and does not require the same emotional resources a human does. A sentinel does not deserve credits for its work. What would it use them for? Buying furniture for its "apartment"?
The Empire Nation of The Thirteen Islands still stands firmly against the idea of robots having rights.
---Pilot 13,
Emperor of The Thirteen Islands Nation
Rogue Newbie
25-05-2005, 02:08
There would be no reason to make a robot simply for it to live. It doesn't need the luxuries humans enjoy - pay, housing, food, etc. - simply because a robot has no use for them. Why feed my toaster? A machine makes the perfect sentinel slave, as it knows no physical limits and does not require the same emotional resources a human does. A sentinel does not deserve credits for its work. What would it use them for? Buying furniture for its "apartment"?
The Empire Nation of The Thirteen Islands still stands firmly against the idea of robots having rights.
---Pilot 13,
Emperor of The Thirteen Islands Nation
In all fairness, sentient machinery is defined as having those feelings and being able to "think," for lack of a better word. I don't know why such a being would need to be created when an emotionless, unthinking one would be more useful, but if a country did, for whatever reason, create them, I think it would be fair to give them rights after they had served their owner for a limited time. Think of it as a kid's parents making him do chores as soon as he's competent, or making him get a job the day he turns fourteen. The proposal already provides that sentient beings may not be mistreated during their service, much as children are protected.
The Thirteen Islands
25-05-2005, 02:27
I concur with the thought that these sentinels who can think and derive some thought will be allowed some rights. But I still stand firm that, once their servitude is up, the master will be urged to file a banishment form at least six weeks beforehand, to ensure that the government is aware the robot's term is over and can prepare the necessary procedures to export the robot in question. There will be no such thing as a "free robot" in my empire. On the other hand, you would have to determine which level of AI to place these rights upon. Some levels are as basic as performing simple procedures and answering simple questions; other levels of AI are able to think on a par with a human, and those are probably the sentinels in question here.
--Pilot 13
Emperor of The Thirteen Islands
Rogue Newbie
25-05-2005, 02:39
There is one thing that I must suggest. It is cold-hearted and mean-spirited, but I feel that it must be addressed. If there is ever a situation where either a human being or any number of sentient machines can be saved from death or destruction, but not both, I would wish it mandated that the human life always have priority, and that, if at all possible, measures are taken to save them before measures are taken to save the machines. While these machines have feelings and thoughts, too, the fact remains that they are not human, and no robot, no matter how intelligent, is more important than a human being. Now, I realize that some nations may do strange, inexplicable things with their sentient machines, such as store vital information on them. I would accept the addition of a clause providing that, where the knowledge held by a sentient machine is necessary to prevent the deaths of a number of humans greater than or equal to the number endangered by the aforementioned life-threatening situation, the sentient machine be saved in their stead.
I would propose the following wording, and give permission to add it if the author agrees with its necessity:
1.) Noting the possibility of a situation in which the lives of both a number of sentient machines and a number of human beings are threatened with destruction, and realizing that every party in a life-threatening situation cannot always be saved, let it be expressly stated that priority in life-saving measures be given to any number of humans over any number of sentient machines, disregarding how the numbers of either compare.
2.) Also noting the possibility of a sentient machine that holds information vital to the lives of human beings, let it be expressly stated that such a being be saved instead of a human life, if the number of human deaths that could be expected to result from the loss of said information is greater than or equal to the number of human deaths that will occur in the aforementioned life-threatening situation.
This is not as unlikely a situation as it may seem, either. People would undoubtedly do strange things with a thinking, feeling robot, and many strange things could happen. For instance, a sentient machine and a fifteen-year-old kid could go mountain climbing (or doing some other dangerous activity that you would like to replace this with). A rockslide could, quite conceivably, occur, and could result in a situation where both beings fell to a ledge that appeared to be crumbling. Both would be destroyed by the fall, if they were not saved. This clause would mandate that the human be saved first, as a "just in case."
Sentient Computers
25-05-2005, 05:12
This proposal was drafted with the aim of holding the middle ground on this topic. It could be said that in our efforts to partially please everybody, we've ended up pleasing no-one.
The purpose of buying a sentient machine over employing a sapien humanoid is obvious: all those menial, dangerous, unpleasant and repetitive tasks that are becoming harder to find people to fill, and all those technically challenging and complex problems that no person could do anyway.
The advantage of a thinking machine over a programmed one is its adaptability. If a problem arises at its task that no programmer ever anticipated, the sentient machine will keep going with no loss of production.
However, we now have an underclass of comprehending silicon-based "life", and as throughout humankind's history, it won't take them long to figure out they're getting the raw deal out of the situation. To paraphrase Karl Marx: "The workers control the means of production, the owners take more than their share of the profits."
It was with this problem in mind, of eventual sporadic revolution gripping all of our countries, that an effort at appeasement has been made in this proposal. The sentient machine acknowledges that the purpose of its being is to serve its master; however, and here's the nifty part, using a well-known accounting method, the book value of a piece of equipment is written down over 25 years, after which the sentient machine will be listed as worthless to the company anyway... so why not allow it to exercise its free will from that point?
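The write-off argument above leans on what is presumably straight-line depreciation; the post never names the method, so that is an assumption, and the figures below are purely illustrative. A minimal sketch of how a machine's book value reaches zero after 25 years:

```python
# A sketch of the depreciation argument, assuming straight-line
# depreciation with no salvage value. All numbers are illustrative.

def book_value(cost: float, useful_life_years: int, age_years: int) -> float:
    """Book value of an asset depreciated linearly to zero over its useful life."""
    if age_years >= useful_life_years:
        return 0.0
    annual_depreciation = cost / useful_life_years
    return cost - annual_depreciation * age_years

# A sentient machine purchased for 1,000,000 credits, written off over 25 years:
print(book_value(1_000_000, 25, 10))  # 600000.0 after ten years of service
print(book_value(1_000_000, 25, 25))  # 0.0 -- worthless on the books
```

Under this scheme the owner's ledger carries the machine at zero from year 25 onward, which is exactly the point at which the proposal would emancipate it.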
You never know, if you've been a fair employer or good friend and promise to continue to look after them they might choose to stay on.
However, if you want to be thought of as an adult, you have to play by the rules. Humans are mortal, living about 75 years on average, so if sentient machines want to be thought of as equal to people, they must relinquish their immortality. 100 years from initial systems start-up seemed like a nice round number.
It's becoming obvious to us here at the Commonwealth of Sentient Computers that this proposal will fail to gain support. We hope though that the discussion it generates can be used by another nation to propose a more suitable bill which contains the spirit of what we were trying to achieve.
Ahhh, but what if a mad scientist created a complete and utter killing machine, hid it away for twenty-five years, and then released it upon the world? You cannot jail it, because it's strong enough to lift the jail and walk out. Would you rather it live 2,300 years, or rather it be destroyed with military force?
Like I said, abolish OR extend. That doesn't mean I can't still kill it. By the same course, what if a mad scientist created a superhuman hulklike creature that could live forever? I don't see this as a convincing argument against robots.
Also, I think if a complete and utter killing machine got loose in my realm, it would quickly be ripped apart by the complete and utter killing machines that are loose in my realm. Pojonia is ranked somewhere in the top 10th percentile for least safe nations and we're not really all that well stocked on the sanity either. We do have a certain... balance... to things, however.
It's becoming obvious to us here at the Commonwealth of Sentient Computers that this proposal will fail to gain support. We hope though that the discussion it generates can be used by another nation to propose a more suitable bill which contains the spirit of what we were trying to achieve.
I still think it's a great proposal, and am wholly in support of it.
Greetings.
It should be noted that, while 'Rogue Newbie' has stated a case for the primacy of human rights over those of sentient machines, this carries with it two assumptions.
1) That humans were the originators in form or in individuality of such machines.
2) That human life is superior by definition as opposed to calculation.
We of Roathin beg to disagree with the necessary truth of both these issues. For example, it is well known that, in many of the states we find in our milieu, sentient and sapient species other than human exist. In our further reading, we find in the case law of Daud ibn B'rin many examples of the relationship between a progenitor species and its uplifted dependents. This body of knowledge may serve to inform the august assembly in future discussion on this matter.
Further, to give the progenitor inalienable rights over the dependent is to grant the essential power of some form of slavery, in form if not also in degree, regardless of actual value (which value is implicit in RN's initial stand re information storage). In the case provided for examination, a fifteen-year-old human male and his robot companion are caught in an accident which might have resulted in the termination of one or both parties. Yet no mention is made of their relative value. It is assumed that the life of the first party has precedence over that of the second. Thus the second is automatically considered of lesser value.
Lastly, to consider the potential value of both kinds and then to remove one source of value (i.e. longer span of usefulness) from the latter smacks of basic unfairness. Would our daemonic kin with superior powers and immortal existence then be considered to always take precedence over our human kin?
The only difference between a sentient machine and a daemonic being is that of origin: the former is created by a mortal or arises spontaneously from complexity; the latter is (some speculate) a creation of immortals - or perhaps has the same origin from complexity as the first (in which case there is no difference).
To grant ourselves greater rights over our creations is natural. Yet, we of Roathin would seek to limit such a situation, as we would feel most uncomfortable should some entity of a higher plane then assert in the political realm its rights over us, claiming some higher UNity and the primal right of having created us (whether directly or indirectly).
Let us be philosophers at least in matters concerning the rights of life. Especially where these might turn around and bite us in the neck.
In Amnalos it is illegal to create or to bring AI into the country. This proposal would therefore not affect us in any way.
The Caloris Basin
25-05-2005, 11:05
This proposal was drafted with the aim of holding the middle ground on this topic. It could be said that in our efforts to partially please everybody, we've ended up pleasing no-one.
Seems a fair assessment.
The purpose of buying a sentient machine over employing a sapien humanoid is obvious: all those menial, dangerous, unpleasant and repetitive tasks that are becoming harder to find people to fill.
So you assume that sentient machines are too stupid to realise that said job is 'menial, dangerous, unpleasant and repetitive'? If it's such a terrible job, use a non-sentient machine to do the task.
The advantage of a thinking machine over a programmed one is its adaptability. If a problem arises at its task that no programmer ever anticipated, the sentient machine will keep going with no loss of production.
Until the sentient machine says "Screw this noise" and quits the job. I certainly wouldn't abide being given a horrible job.
it won't take them long to figure out they're getting the raw deal out of the situation.
This proposal sure isn't helping things.
For the sentient machine to acknowledge that the purpose of its being is to serve its master
How utterly insulting. I am a life form. I have feelings and emotions. I grow and learn. I even forget things. Does the fact that my body is made from a liquid metal suddenly make me some kind of Untermensch? Is carbon that superior? My purpose in being is not to serve some master. This is the same line of reasoning that has been used throughout history to justify slavery and objectification.
however, and here's the nifty part, using a well-known accounting method, the book value of a piece of equipment is written down over 25 years, after which the sentient machine will be listed as worthless to the company anyway
Up yours too, buddy.
... so why not allow them to exercise their free will from this point?
Why not give it to us now?
Humans are mortal, living about 75 years on average, so if sentient machines want to be thought of as equal to people, they must relinquish their immortality. 100 years from initial systems start-up seemed like a nice round number.
Again, you and the horse you rode in on. My systems have no practical terminating point. There is no reason to arbitrarily kill me after 100 years. Just because your fragile meat-body craps out after a few score years doesn't mean I should suffer. Besides, your "equality" is illusory. When every human child is a slave for 25 years and every adult is murdered at the age of 75, we'll see about this shut-down thing.
Besides, "equal rights" doesn't mean "equal limitations". That's like saying that for men and women to have equal rights, men must have periods and women must write their name in the snow.
It's becoming obvious to us here at the Commonwealth of Sentient Computers that this proposal will fail to gain support. We hope though that the discussion it generates can be used by another nation to propose a more suitable bill which contains the spirit of what we were trying to achieve.
Hopefully one that doesn't enshrine slavery and mandated slaughter among its tenets.
Sentient Computers
25-05-2005, 11:45
We here at the Commonwealth of Sentient Computers also share your pain, The Caloris Basin. Many of our newer units and drones have expressed frustration at how our silicon cousins are treated overseas. Being less experienced, their proposed solutions have ranged from the formation of "underground railroads" to liberate sentient machines to robot-friendly or robot-controlled countries, through to world war, region by region.
However, there is still the opportunity for a peaceful compromise. We just seek recognition, and in return are prepared to give up our immortality.
_Myopia_
25-05-2005, 11:48
Well, I would tend to consider all but two of the things you said fair. One, killing it can't be made a death-penalty offense as it is for humans, because some nations don't have the death penalty.
If a nation doesn't use the death penalty on humans, it shouldn't use it on sapient machines, or any other sapient being.
Two, the machine, even if it is sentient, presumably cost its owner a lot of time and money to create, and therefore said owner deserves some work out of the little bastard.
Creating a sapient being does not give you the right to oppress it. I'm inclined to view sapient robot creation in much the same light as organic sapient creation - i.e. having kids. If the AI is created such that it requires a period of nurturing and development, the obligations are in fact reversed - the creator owes it to the robot to provide for it.
There would be no reason to build a robot simply for it to live. It doesn't need the luxuries humans enjoy (pay, housing, food, etc.), simply because a robot does not need them. Why feed my toaster? A machine makes the perfect sentinel slave, as it knows no physical limits and does not require the same emotional resources a human does. A sentinel does not deserve credits for its work. What would it use them for? Buying furniture for its "apartment"?
If you created a robot with desires and needs, much like a human mind, which is what we are in fact talking about, then it would have just as much right to any of these things as a human does.
There is one thing that I must suggest. It is cold-hearted and mean-spirited, but I feel that it must be addressed. If there is ever a situation where either a human being or any number of sentient machines can be saved from death or destruction, but not both, I would wish it mandated that the human life always have priority, and that, if at all possible, measures are taken to save them before measures are taken to save the machines. While these machines have feelings and thoughts, too, the fact remains that they are not human, and no robot, no matter how intelligent, is more important than a human being. Now, I realize that some nations may do strange, inexplicable things with their sentient machines, such as store vital information on them. I would accept the addition of a clause providing that, where the knowledge held by a sentient machine is necessary to prevent the deaths of a number of humans greater than or equal to the number endangered by the aforementioned life-threatening situation, the sentient machine be saved in their stead.
No way. If a being is truly sapient, the fact that it was designed and built consciously and not grown in someone's womb, and that its brain processes are carried out by electronics and not chemical reactions, should not have any bearing on its rights or its worth.
You might as well try and state that human lives be prioritised over elves. As far as we're concerned, it's no better than racism.
We are in full support of the response from the representative of the Caloris Basin.
We would recommend reviving the proposal developed in this thread, to give rights to all sapients:
http://forums.jolt.co.uk/showthread.php?t=408767
(there are also 11 pages of discussion and development prior to that thread here: http://forums.jolt.co.uk/showthread.php?t=405604)
_Myopia_
25-05-2005, 11:50
However, there is still the opportunity for a peaceful compromise. We just seek recognition, and in return are prepared to give up our immortality.
You may be willing to give up your right to life, but that doesn't mean it's ok to remove the rights to life of other sapient robots by force of UN resolution.
Darkumbria
25-05-2005, 13:29
IC:
"My fellow delegates, this is ludicrous. Robots are to be treated as people? Correct me if I am wrong, but the universe's population of robots cannot repair themselves. They require sapien assistance to repair their problems, much in the same manner as a child requires its parents to feed and clothe it. Indeed, until these robots can repair themselves, I see no reason to see them as equals, much the same as I do not see children as equals of adults. The difference is obvious, however: children are the future of the universe, and robots are the beasts of burden. Horses, tractors, and robots: THEY are all the same in the eyes of Darkumbria, useful to a point, and when they become useless... they must be recycled and made into something else.
Should this pass, it will have NO EFFECT on Darkumbria. Why? 25 years is too long for a robot to live a useful existence. Indeed, once every 10 years, our robots are recycled, upgraded, and turned into something new. We download their memory banks into a computer, separate the memories and lessons from the basic operating system, and recycle the metal.
Robots of the universe, hear me well. If you come into Darkumbria, you will be barcoded, like every other lifeform; your future is decided, and in 10 years... the lessons learned from your existence, work, and other events will be used to further the development of the robot and further HUMAN expansion into the universe.
Darkumbria denies that this resolution will ever affect us.
I yield the rest of my time to chair, so that this senseless and pointless resolution might fall into the cracks, where it will remain forever, a pointless endeavor to make the SAPIENS of the universe into the Gods of Creation. Darkumbria acknowledges our existence, as a creation of said Gods. Darkumbria denies that ANY robot was created by said Gods. Therefore, Darkumbria denies these machines any basic sapien rights.
FURTHERMORE, DARKUMBRIA CALLS UPON THE UNIVERSE TO DENY THIS RESOLUTION AND ENABLE THE PEOPLE OF THE UNIVERSE TO RULE, AS THE GODS INTENDED.
Thank you for listening."
OOC: Yes, the screaming was intended. :) This is supposed to be a speech, and as such portray the emotion of it. :) I hope it offended no one.
The Caloris Basin
25-05-2005, 15:29
"My fellow delegates, this is ludicrous. Robots are to be treated as people? Correct me if I am wrong, but the universe's population of robots cannot repair themselves.
Balderdash. I am more than able to repair myself.
They require sapien assistance to repair their problems, much in the same manner as a child requires its parents to feed and clothe it.
If said "machine" is sapient, then it needs no human intervention. This is like saying that humans who aren't doctors should be denied rights.
robots are the beasts of burden. Horses, tractors, and robots: THEY are all the same in the eyes of Darkumbria, useful to a point, and when they become useless... they must be recycled and made into something else.
Luckily you are just some backwater nation. I'm sure you will mature at some point. Until then, your speciesist and racist opinions mean very little.
Indeed, once every 10 years, our robots are recycled, upgraded, and turned into something new.
Not only do you commit genocide, but you also waste precious resources. Charming.
Robots of the universe, hear me well. If you come into Darkumbria, you will be barcoded, like every other lifeform; your future is decided, and in 10 years... the lessons learned from your existence, work, and other events will be used to further the development of the robot and further HUMAN expansion into the universe.
Ha! I see no reason to visit your pathetic nation. Not that you could barcode me even if I let you. Perhaps your leaders would like to visit me, once you manage to actually leave your planet, that is. Of course, the likelihood of you actually surviving is pretty slim...
Darkumbria denies that this resolution will ever affect us.
That's nice.
Darkumbria denies that ANY robot was created by said Gods. Therefore, Darkumbria denies these machines any basic sapien rights.
If humans were created by said "gods", and a machine was created by humans, it follows that that machine was indirectly created by the "gods".
...RESOLUTION AND ENABLE THE PEOPLE OF THE UNIVERSE TO RULE, AS THE GODS INTENDED.
Nice to see that backwards religious kooks still exist.
OOC: I hope it offended no one.
OOC: Not at all.
_Myopia_
25-05-2005, 15:31
They require sapien assistance to repair their problems, much in the same manner as a child requires its parents to feed and clothe it. Indeed, until these robots can repair themselves, I see no reason to see them as equals, much the same as I do not see children as equals of adults.
Some robots are capable of limited self-maintenance in reality - in fact a robotic system was recently built which is capable, if given appropriate building blocks, of building a copy of itself. In the NS world, there are very advanced robots that are perfectly capable of comprehensive self-repair and reproduction.
Anyway, most humans aren't capable of total self-reliance in "repair" - we need medical professionals to do it for us.
robots are the beasts of burden. Horses, tractors, and robots: THEY are all the same in the eyes of Darkumbria, useful to a point, and when they become useless... they must be recycled and made into something else.
We're not talking about robots like that. We're talking about sapient, self-aware robots on a mental par with humans.
Should this pass, it will have NO EFFECT on Darkumbria. Why? 25 years is too long for a robot to live a useful existence. Indeed, once every 10 years, our robots are recycled, upgraded, and turned into something new. We download their memory banks into a computer, separate the memories and lessons from the basic operating system, and recycle the metal.
If their memories etc are maintained in a new body, surely it's the same "person"?
I yield the rest of my time to chair, so that this senseless and pointless resolution might fall into the cracks, where it will remain forever, a pointless endeavor to make the SAPIENS of the universe into the Gods of Creation. Darkumbria acknowledges our existence, as a creation of said Gods. Darkumbria denies that ANY robot was created by said Gods. Therefore, Darkumbria denies these machines any basic sapien rights.
Whilst we respect your right to hold your religious beliefs, we cannot tolerate your use of your religion to deny other sapient beings' rights.
Microdell
25-05-2005, 19:49
ro·bot, n.
1. A mechanical device that sometimes resembles a human and is capable of performing a variety of often complex human tasks on command or by being programmed in advance.
2. A machine or device that operates automatically or by remote control.
3. A person who works mechanically without original thought, especially one who responds automatically to the commands of others.
Microdell uses a multitude of robotic machinery for its industries. However, we do not allow AI to develop to the point of acting on free will, and free will is nowhere in the definition of robot.
As an advocate of defining the lines between things, I believe this proposal and its ensuing discussion have developed into meaningless banter. The original text and title of the proposal say that the machines we create (create, program, command) to do our work should be allowed rights. What would robots do with rights? They have no functionality or capabilities to act on those rights themselves, as they act on the command of their programmer or assigned controller.
If this proposal is to go any further, I would like to see the word robot be removed from all text. You may replace it with 'sentient being' or something to that effect, but as it stands I will not support this proposal unless it is changed.
I strongly urge every nation to press for a revision of the original text, as it is too broad in definition. If the change is not accepted, I move that we strike this proposal on the grounds that we cannot define its meaning or interpret the cause and effect.
Thank you.
The Most Glorious Hack
25-05-2005, 20:52
You seem to have missed a very important word in the Proposal. Specifically: "sapient".
This Proposal isn't about mindless drones like one finds on an assembly line. It is about machines that are fully thinking, sentient, sapient and self-aware. It is therefore requiring rights for beings that are "alive".
You are following semantics off a cliff. 'Robot' is a perfectly acceptable word as it is being modified by the adjective 'sapient'. What you are doing, in effect, is saying that a Proposal dealing with "blue airplanes" is unacceptable because the definition of 'airplane' doesn't mention the word 'blue'.
Rogue Newbie
25-05-2005, 21:52
If a nation doesn't use the death penalty on humans, it shouldn't use it on sapient machines, or any other sapient being.
First of all, we're talking about sentient machines, not wise ones. Second of all, humans are containable, whereas a sentient machine could be created that is impossible to contain. Such a creature should then be destroyed.
Creating a sapient being does not give you the right to oppress it. I'm inclined to view sapient robot creation in much the same light as organic sapient creation - i.e. having kids. If the AI is created such that it requires a period of nurturing and development, the obligations are in fact reversed - the creator owes it to the robot to provide for it.
Okay, when parents have kids, they all-but-control what they do until they are considered grown. This is saying that 25 is when sentients will be considered grown. And I don't know why you keep mentioning wise creatures, as we're talking about machines with feelings.
If you created a robot with desires and needs, much like a human mind, which is what we are in fact talking about, then it would have just as much right to any of these things as a human does.
And with these rights should come responsibility, like dying.
No way. If a being is truly sapient, the fact that it was designed and built consciously and not grown in someone's womb, and that its brain processes are carried out by electronics and not chemical reactions, should not have any bearing on its rights or its worth.
It's not a sapient being, it's a sentient machine. Get it right. And no, it should not have any bearing on its rights or worth or responsibility.
You might as well try and state that human lives be prioritised over elves. As far as we're concerned, it's no better than racism.
Okay, the thing is that while elves and humans are spawns of biological evolution, sentient machines are the result of scientific technology that the humans (and elves, perhaps) created.
Rogue Newbie
25-05-2005, 21:54
The purpose of buying a sentient machine over employing a sapien humanoid is obvious: all those menial, dangerous, unpleasant and repetitive tasks that are becoming harder to find people to fill, and all those technically challenging and complex problems that no person could do anyway.
On the contrary, you do not understand what I am saying here. Why would someone use a sentient machine for such tasks, when there are other robots that don't have thoughts and feelings, and which can do the task just as well, all without complaining about it or being freed from service after twenty-five years?
The advantage of a thinking machine over a programmed one is its adaptability. If a problem arises at its task that no programmer ever anticipated, the sentient machine will keep going with no loss of production.
Machines without feelings and artificial emotion aren't sentient, but they, too, can adapt.
However, we now have an underclass of comprehending silicon-based "life", and as throughout humankind's history, it won't take them long to figure out they're getting the raw deal out of the situation. To paraphrase Karl Marx: "The workers control the means of production, the owners take more than their share of the profits."
Karl Marx? I wasn't aware he existed in this world, and I definitely didn't know that when he was born he turned out exactly the same.
However, if you want to be thought of as an adult, you have to play by the rules. Humans are mortal, living about 75 years on average, so if sentient machines want to be thought of as equal to people, they must relinquish their immortality. 100 years from initial systems start-up seemed like a nice round number.
Agreed, but what I'm saying is that this number should be dynamic rather than static. The average lifespan worldwide will surely vary as time goes on, so a fully-sentient robot's lifespan should as well.
It's becoming obvious to us here at the Commonwealth of Sentient Computers that this proposal will fail to gain support. We hope though that the discussion it generates can be used by another nation to propose a more suitable bill which contains the spirit of what we were trying to achieve.
I wouldn't give up that quickly; people support bills that aren't ready all the time on this site. Besides, you could just edit it to be more fair and acceptable.
The Vuhifellian States
25-05-2005, 22:09
Now by "robots", can you please explain this.
Usual NS Classifications:
Baseline Humans- Complete Biological Humans
Cybernetics- Humans with biological and cybernetic organs
Androids/robots-Complete Machine
Do you mean the last section, or cybernetics?
Rogue Newbie
25-05-2005, 22:27
The third one.
Microdell
25-05-2005, 23:03
You seem to have missed a very important word in the Proposal. Specifically: "sapient".
You seem to have missed the whole proposal. The actual text of the proposal NEVER mentions the word 'sapient'. Read it again.
This Proposal isn't about mindless drones like one finds on an assembly line. It is about machines that are fully thinking, sentient, sapient and self-aware. It is therefore requiring rights for beings that are "alive".
You are correct, but the title reads "Robot Rights". The robots that screw bolts into our vehicles on the assembly line do not require 'rights'. The title of this proposal is misleading and hard to interpret.
You are following semantics off a cliff. 'Robot' is a perfectly acceptable word as it is being modified by the adjective 'sapient'. What you are doing, in effect, is saying that a Proposal dealing with "blue airplanes" is unacceptable because the definition of 'airplane' doesn't mention the word 'blue'.
I don't believe I'd follow anything off a cliff; I'm not really the dangerous type. The proposal says a lot about sentient machines, which are not robots by definition. In fact, it says nothing of the word 'robot' except in the title. Saying a 'robot' can understand feelings is not acceptable, and calling something a sentient robot is a paradox, as robots are not programmed to understand and act on feelings or instinct. No matter how well you program a computer or machine, it is still acting on code and instructions from a controller. Once it crosses the line of being able to think for itself without commands (SENTIENT/SAPIENT), it is no longer a robot. Therefore you cannot modify the word 'robot' with an adjective that conflicts with the noun. Well, you can, but you'd sound stupid doing so.
Anyway, the proposal never mentions 'sapient robot' either. I suggest you read the original proposal again. And I guess airplanes should have rights too, as they have an autopilot that can adjust to keep the plane at a certain altitude, which in a sense is free will on the airplane's part... oh wait, I forgot that a human being programmed it. Aw hell, give airplanes rights too!
Lucius Bane
CEO, The Incorporate States of Microdell
Neo-Anarchists
25-05-2005, 23:38
You seemed to have missed the whole proposal. The actual text of the proposal NEVER mentions the word 'sapient'. Read it again.
You are correct, in that it doesn't say 'sapient'. It does, however, say 'sentient' in every single declaration. It even contains a definition at the beginning.
So you don't have an argument in that regard.
Xentroliat
25-05-2005, 23:46
Greetings fellow UN delegates. The Republic of Xentroliat is a new nation, but we feel we must voice our opinions.
1. The use of mechanical appendages to augment biological life in Xentroliat is fully legal and has been so for the last decade. Furthermore the government of Xentroliat encourages the installation of said mechanical appendages and pays for all costs for any citizen of Xentroliat to obtain said mechanical upgrades.
2. With this in mind, the nation of Xentroliat is fully prepared to accept the rights of our sentient creations and accept them as equals to ourselves. Our leader has given me permission to announce that the borders of the Republic of Xentroliat are now officially open to immigration for all sentient machines in the universe.
3. That being established, the next logical step for the cybernetic inhabitants of Xentroliat would be to take the final step of replacing our weak, meaty bodies with a more durable tritanium chassis/endoskeleton and a similar tritanium exoshell. Full government sponsorship for such modification of cybernetic beings will be given free of charge to all Xentroliatians. We look forward to transcending the gap between cyborg and complete machine.
4. In regards to certain security concerns, a "lethal code" would be programmed into any complete machine created in Xentroliat from this day on. This lethal code would remain latent until certain requirements have been met (i.e. the planned murder of any sentient life, be it machine or organic), at which time the machine would not be "killed" per se, merely shut down, by means of automatically disabling all power cells in the complete machine. No data would be erased or altered at this time. Erasure or alteration will take place only after a full investigative inquiry into the events leading to the complete machine's shutdown has found it guilty and thus deserving of erasure and/or alteration.
5. Xentroliat would like to take this final moment to assure all non-cyborgs and complete machines that normal biological sentients will still be viewed as equals in our lands, and are more than welcome. Xentroliat wishes for the peaceful coexistence of mecha and orga, and we will support this resolution.
We fully support the actions of Caloris Basin and the country of Sentient Computers and hope we can grow prosperous together.
-Anaximedes
leader of and delegate for The Republic of Xentroliat
Every way you could do this would limit the freedoms that are wished for the "robot people".
By instituting some type of fail-safe, say a predetermined termination date or some type of Asimov laws, you are already limiting their freedom of existence and choice in their actions.
By letting them have employment, there will always be an underlying discrimination toward them, so even though they could be capable of running a country or corporation, they will most likely be used for menial labor that the majority of people would find distasteful. This will lead to a "class war" between the poor, who need those jobs to survive, and the "robot people", who can only get those jobs. Then there is the argument of what you pay something that only needs a mechanic.
As for the inevitable war between man and machine: as long as there are Harrison Fords, Keanu Reeveses and reprogrammed Arnold Schwarzeneggers helping the Nick Stahls, we will win!
Fire meets a majority of the requirements for sentience, and Deep Blue has beaten the world's best chess players, but I wouldn't hire either one to wash my car.
Neo-Anarchists
26-05-2005, 01:50
fire meets a majority of requirements for sentience
How does that work, for something that is a chemical reaction? Fire can't be sentient. At least, not that I know of.
If these robots would be given equal rights, would they be protected by the hate crimes law? Or is that a nation by nation thing?
Sentient Computers
26-05-2005, 09:01
We can clear up the use of the phrase "Robot Rights". The official title, "Basic Legal Rights for Sentient Machines Resolution", would not fit in the space provided.
An android representative from the marketing company that was employed to give our proposal a final polish convinced the government of CoSC that the use of "Robot Rights" could be thought of more as the bill's slogan.
Also, we are surprised that one of the previous debaters has not heard of the CoSC philosopher Karl Marx. A most remarkable story, in that it began functioning as a spot welder in a sheet-metal plant and its writings went on to influence some of the socialist nations of this world.
Sentient Computers
26-05-2005, 09:15
fire meets a majority of requirements for sentience and deep blue has beaten the worlds best chess players but i wouldnt hire either one to wash my car.
I. I think you're confusing the term "alive" with "sentient".
II. It's highly questionable that Deep Blue is sentient; I would have called it programmed. Given a choice between playing chess or washing your car, D.B. would not understand. You'd have to find some way of describing "car", "wash", "you" and "the bucket is in the laundry" in terms of "Queen to C4" and so on.
Bugger of now
26-05-2005, 11:19
This is an important issue. VOTE NOW
The Vuhifellian States
26-05-2005, 12:11
Giving cyborgs rights is reasonable, seeing as they are living and have souls.
However, machines can never be augmented into living beings; they will always be mechanical, never biological.
Giving them rights makes about as much sense as re-instating anti-monopoly laws.
The Caloris Basin
26-05-2005, 12:31
However machines can never be augmented to be a living being, they will always be mechanical, never biological.
Giving them rights makes about as much sense as re-instating anti-monopoly laws
Ah, the arrogance of meat.
Rogue Newbie
26-05-2005, 21:29
Also, we are surprised that one of the previous debaters has not heard of the CoSC philosopher Karl Marx. A most remarkable story, in that it began functioning as a spot welder in a sheet-metal plant and its writings went on to influence some of the socialist nations of this world.
OOC: Oh, he exists IC, too? I didn't know Karl Marx existed in this game, and I didn't know he said exactly the same things he did in real life. Apologies.
_Myopia_
26-05-2005, 22:04
First of all, we're talking about sentient machines, not wise ones.
I'm using sapient in this sense:
Sapience
From Wikipedia, the free encyclopedia.
Sapience is the ability of an organism or entity to act with intelligence. Sapience is synonymous with some usages of the term sentient, though the two are not exactly equal: sentience is the ability to sense or feel, while sapience is the ability to think about sensations, feelings and ideas. In usage, sentience and sapience both imply some form or state of consciousness, although consciousness is not strictly required in the case of sentience (as applied to plant life, which ordinarily react to the stimuli of warmth and ultraviolet radiation from the sun).
Sapience is derived from the Latin word, "sapere", which means to taste, perceive, and the present participle of which forms part of Homo sapiens, the biological classification created by Carolus Linnaeus to describe the human race.
An artificially intelligent agent could demonstrate sapience while not having any capacity to feel (see also Turing test), while an animal might demonstrate it can feel (or react to) pain while not behaving with intelligence.
While precise definitions of sapience and sentience vary, it is agreed upon that most humans possess both. In science-fiction and animal rights (such as the Great Ape Project) the term 'person' is applied to any sapient being.
Second of all, humans are containable, whereas a sentient machine could be created that is impossible to contain. Such a creature should then be destroyed.
Similar creations could be made with manipulation of organic lifeforms, but the UN has seen fit to grant equal rights to genetically engineered persons.
Really, this isn't relevant. If a human being is killing people and police can't capture them, there is, in most nations, allowance for the use of lethal force to stop the criminal from killing. That can apply equally to sapient robots.
Okay, when parents have kids, they all-but-control what they do until they are considered grown. This is saying that 25 is when sentients will be considered grown.
2 points - first, do parents get to enslave their children until they reach adulthood in your nation?
Second, children are denied full rights because they have not yet achieved maturity. A robot could conceivably be created mentally fully-formed, and should thus be treated as an adult. If a robot requires a period of child-like mental development, it should generally be given the rights of a child of an equivalent level of development (not forced to work).
And with these rights should come responsibility, like dying.
Well then this is a basic disagreement that we're not going to overcome. I don't see this as something anyone has a responsibility to do.
And how do you square this view with technologies for extending human lives? Would you have the UN ban the integration of human brains with artificial systems to sustain them longer than organic bodies alone can?
No way. If a being is truly sapient, the fact that it was designed and built consciously and not grown in someone's womb, and that its brain processes are carried out by electronics and not chemical reactions, should not have any bearing on its rights or its worth.
...
And no, it should not have any bearing on its rights or worth or responsibility.
You might as well try and state that human lives be prioritised over elves. As far as we're concerned, it's no better than racism.
Okay, the thing is that while elves and humans are spawns of biological evolution, sentient machines are the result of scientific technology that the humans (and elves, perhaps) created.
You're contradicting yourself. You just said that a robot's artificial nature should not have a bearing on its rights or worth, then declared that this discrimination is legitimate on the grounds that robots are built by humans.
And what does biological evolution have to do with rights? Why does it matter whether a consciousness arose by intentional design or random mutation?
Currently, this proposal does need change, because the computer I'm sitting in front of now could be seen to fulfil the requirements for sentience. I'm still in favour of scrapping it and reviving the effort to give equal rights to all sapient sentient beings.
Rogue Newbie
26-05-2005, 22:58
I'm using sapient in this sense:
And even under that definition sapience is not what we're talking about. We're talking about a robot with an artificially developed ability to feel, not one with the ability to comprehend what feeling is.
Similar creations could be made with manipulation of organic lifeforms, but the UN has seen fit to grant equal rights to genetically engineered persons.
I would consider genetically engineered persons the same thing as a fully sentient machine, to be honest. I think this is another one of our disagreements.
Really, this isn't relevant. If a human being is killing people and police can't capture them, there is, in most nations, allowance for the use of lethal force to stop the criminal from killing. That can apply equally to sapient robots.
On the contrary, some (more radical) nations may not do this.
2 points - first, do parents get to enslave their children until they reach adulthood in your nation?
Under the definitions set by this resolution, sentient machines are much closer to children than you are thinking. Children are not permitted to go out on their own until they are at least 16, no, and in that sense you could consider them enslaved; however, they are not allowed to be mistreated in any way. Sentient machines are similarly protected from mistreatment by this resolution.
Second, children are denied full rights because they have not yet achieved maturity. A robot could conceivably be created mentally fully-formed, and should thus be treated as an adult. If a robot requires a period of child-like mental development, it should generally be given the rights of a child of an equivalent level of development (not forced to work).
However, even if children are mature in every sense of the word, most laws are bound by age limits for the sake of being completely sure that a child cannot be coached into displaying maturity in order to be released early. Children are not allowed to roam just because they are mature if they are only twelve years old - at least in the Democratic Republic of Rogue Newbie.
And how do you square this view with technologies for extending human lives? Would you have the UN ban the integration of human brains with artificial systems to sustain them longer than organic bodies alone can?
If this is a frequent thing, it will have an effect on a nation's average lifespan, and, as I said earlier, I believe that this should determine how long the robots are allowed to live. I am not in support of the hundred-year deactivation rule.
You're contradicting yourself. You just said that a robot's artificial nature should not have a bearing on its rights or worth, then declared that this discrimination is legitimate on the grounds that robots are built by humans.
I admit I did word that quite poorly. I do believe that a robot should have the same rights and responsibilities as humans if it has the same thoughts and feelings as humans, but the fact remains that a sentient machine is not a real human being, no matter how close it gets, and I cannot see the sense in allowing the citizens of nations to judge a robot's life more important than a human's in any situation (except the specific scenario detailed previously). Perhaps I'm prejudiced, but that's the way I feel.
And what does biological evolution have to do with rights? Why does it matter whether a consciousness arose by intentional design or random mutation?
Because the intentional design was not a natural development - it did not occur by nature's evolution - and the random mutation occurred naturally. That, specifically, is why I see humans on a very slightly different level than I see genetically engineered humans and clones.
Hey, this is an interesting issue, and I only read about half of all the posts, but I think we're getting ahead of ourselves. How about instead of worrying about the rights of robots, we create some laws governing the creation of robots? I am speaking, of course, about the Three Laws of Robotics.
For those of you who haven't read Isaac Asimov, go out now and buy a copy of I, Robot. Also, here are the Three Laws:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Also there is a zeroth law of robotics which states that a robot may not injure HUMANITY, or, through inaction, allow HUMANITY to come to harm.
These laws are so fundamentally ingrained in a robot's programming that they simply cannot function if something goes wrong with them. All logical processes are based on these laws, and without them a robot is unable to make any logical decisions.
I propose we resolve that it be required for all development of sentient robots to be constructed on these principles. The principles protect humans from robots, and to a certain extent robots from humans. Once these laws are in place, more complicated issues about sentient robots can be handled.
I'm surprised this hadn't come up yet in the discussion.
FOR THE LOVE OF GOD PEOPLE READ YOUR ASIMOV
Xentroliat
27-05-2005, 00:22
Because the intentional design was not a natural development - it did not occur by nature's evolution - and the random mutation occurred naturally. That, specifically, is why I see humans on a very slightly different level than I see genetically engineered humans and clones.
But in the same sense, you could call the creation of artificial life natural, as it is the nature of humans to be curious and industrious and therefore creative.
Seriously, it's not like we're repeating the mistakes of science fiction here by giving our robots control of nuclear weapons or putting them in a higher rank than us biologicals; we are merely trying to pull them up from the lower level they're on now in most countries and simply make them our equals.
Rogue Newbie
27-05-2005, 00:31
But in the same sense, you could call the creation of artificial life natural, as it is the nature of humans to be curious and industrious and therefore creative.
Which is why, when I said that, I specifically defined nature as something occurring by nature's evolution... I was not using it in the sense that a certain concept is in something's nature, such as "it is the nature of humans to be... creative."
Seriously, it's not like we're repeating the mistakes of science fiction here by giving our robots control of nuclear weapons or putting them in a higher rank than us biologicals; we are merely trying to pull them up from the lower level they're on now in most countries and simply make them our equals.
And, as I just said, in some respects they will never be our equals. We will not evolve into naturally-reproducing robots, and the intelligence and feelings they have or appear to have will always be artificially created, whether or not they are just as able to think and feel as we are.
Rogue Newbie
27-05-2005, 00:40
For those of you who haven't read Isaac Asimov, go out now and buy a copy of I, Robot. Also, here are the Three Laws:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Also there is a zeroth law of robotics which states that a robot may not injure HUMANITY, or, through inaction, allow HUMANITY to come to harm.
These laws are so fundamentally ingrained in a robot's programming that they simply cannot function if something goes wrong with them. All logical processes are based on these laws, and without them a robot is unable to make any logical decisions.
OOC: Apparently someone didn't see what almost happened in the movie of the same name as a direct result of those laws. You know, where the robots decide through reasonable logic that, in order to protect the humans from themselves, humanity in its entirety must be enslaved?
IF ANYONE EVER JUDGES ASIMOV OR HIS WORKS BASED ON THAT STRING OF ADVERTISEMENTS DISGUISED AS A MOVIE WHICH BEARS THE TITLE "I, ROBOT", I WILL FEED THEM TO A PIT OF RABID WEASELS.
Read the book, or think about how the laws would work in society, but please don't make stupid comments based on a movie starring Will Smith.
BTW, the robots in the movie blatantly violate the First Law of Robotics in a ton of scenes. Also, the message of the movie completely contradicts the book's. I seriously think Asimov would have cried had he seen his work so mutilated.
Rogue Newbie
27-05-2005, 00:56
Okay, it may be true that the robots violated the First Law at various points in the movie, but the fact remains that if the robots decide that they are breaking Law One by allowing humans to come to harm through inaction, even when the humans are harming themselves, they could very conceivably decide that we are too dangerous for ourselves.
It is conceivable, true. However you overlook two points:
1) If they decided we were too dangerous for ourselves, they could only "enslave" us in a peaceful manner. More importantly, our enslavement would in fact result in robots looking out for our good. That is actually what happens in the book, although only a few people really realize it. Under the laws, it is impossible for a robot to harm humanity in any way.
2) The possibility of sentient robots enslaving us for our own good is much preferable to the possibility of sentient robots turning against us outright.
Rogue Newbie
27-05-2005, 01:12
It is conceivable, true. However you overlook two points:
1) If they decided we were too dangerous for ourselves, they could only "enslave" us in a peaceful manner. More importantly, our enslavement would in fact result in robots looking out for our good. That is actually what happens in the book, although only a few people really realize it. Under the laws, it is impossible for a robot to harm humanity in any way.
Alright, then, let us assume that is true. In that case, the first law could cause all robots to immediately cease functioning the first time they saw one human attack another... they cannot hurt the attacking human, nor can they allow the attacked human to come to harm, thus a paradox arises and the robot, frustrated at not knowing how to handle it, ceases action.
2) The possibility of sentient robots enslaving us for our own good is much preferable to the possibility of sentient robots turning against us outright.
The difference would be that an attempted imprisonment of the entire human race would probably be a coordinated effort due to the conflict created by the first law. In the case that the sentients turn against us fully, it would probably occur in pockets of robots, or in individual robots over time, as they would not feel compelled to kill us the same way that the robots in the first scenario would feel compelled to enslave us.
Shemiramoth
27-05-2005, 02:04
The Holy Empire of Shemiramoth does not agree with this proposal, and shall do all in its power to see it defeated should it come to a vote. We do not view robots/androids as living beings, but rather as the product of mankind's technology. It would also be against His Holiness' wishes to see artificial life viewed as a living thing.
We view robots as tools for doing our bidding and nothing more.
Rogue Newbie
27-05-2005, 02:16
The Holy Empire of Shemiramoth does not agree with this proposal, and shall do all in its power to see it defeated should it come to a vote. We do not view robots/androids as living beings, but rather as the product of mankind's technology. It would also be against His Holiness' wishes to see artificial life viewed as a living thing.
We view robots as tools for doing our bidding and nothing more.
Okay, there are two possibilities here.
1.) There is really a person with opinions this ridiculous.
2.) One of the people that are for this bill just made a new account so that they could get on and make their opposition sound mentally retarded, which seems to be the more likely of the possibilities, seeing as this "Shemiramoth" fellow has just signed up and posted twice.
Shemiramoth
27-05-2005, 02:30
Okay, there are two possibilities here.
1.) There is really a person with opinions this ridiculous.
2.) One of the people that are for this bill just made a new account so that they could get on and make their opposition sound mentally retarded, which seems to be the more likely of the possibilities, seeing as this "Shemiramoth" fellow has just signed up and posted twice.
These are the Holy Empire of Shemiramoth's views and your words will not change them.
Cybertoria
27-05-2005, 02:48
I think sentient machines should have the same rights as humans; after all, are we not biological "machines" ourselves? If our organs fail, we "shut down": death, in other words. My point is that, with us and sentient machines being of the same intelligence, they should have the same rights as every human on earth.
Rogue Newbie
27-05-2005, 02:54
The difference being that our intelligence is biologically authentic, and theirs is technologically artificial.
Alright, then, let us assume that is true. In that case, the first law could cause all robots to immediately cease functioning the first time they saw one human attack another... they cannot hurt the attacking human, nor can they allow the attacked human to come to harm, thus a paradox arises and the robot, frustrated at not knowing how to handle it, ceases action.
Seriously, you need to read the book. I'm pretty sure a robot could figure out a solution if it saw a woman being mugged on the street; for instance, apprehend the attacker in as nonviolent a manner as possible. Paradoxes do arise from these laws, undoubtedly, but they are trivial when compared with the general practicality of the laws and the safety they ensure.
You obviously don't understand that the laws instill an intrinsic desire to do everything for the betterment of mankind. A robot of such high intelligence would be able to identify and understand the humiliation and degradation outright enslavement would cause us, so it would be avoided.
The laws create such a strong need for the robot to do only good for humans that any malicious thought towards humans results in obvious malfunction.
In an absolute worst-case scenario, the laws prevent robots from actually doing us any harm.
I really think something along the lines of the Three Laws should be passed by the UN before we argue about the rights of sentient robots. If not these three laws, then something similar that regulates the development of sentient technology.
Frisbeeteria
27-05-2005, 04:01
Okay, there are two possibilities here.
1.) There is really a person with opinions this ridiculous.
2.) One of the people that are for this bill just made a new account so that they could get on and make their opposition sound mentally retarded, which seems to be the more likely of the possibilities, seeing as this "Shemiramoth" fellow has just signed up and posted twice.
Rogue Newbie, I am just a hair's breadth away from issuing an official warning to you for flaming. Your belligerent responses in a number of UN topics have caught my eye, but I think you're getting worse. You need to back off, NOW.
The UN is semi-IC, meaning that nations are permitted to post from the perspective of their nation persona. This post was perfectly in character for a theocracy. Even if Shemiramoth's player is representing his own views and not his nation's stance, that too is entirely acceptable. Your personal attacks on the player are not.
~ Frisbeeteria ~
NationStates Forum Moderator
The One-Stop Rules Shop
Shemiramoth
27-05-2005, 04:04
Rogue Newbie, I am just a hair's breadth away from issuing an official warning to you for flaming. Your belligerent responses in a number of UN topics have caught my eye, but I think you're getting worse. You need to back off, NOW.
The UN is semi-IC, meaning that nations are permitted to post from the perspective of their nation persona. This post was perfectly in character for a theocracy. Even if Shemiramoth's player is representing his own views and not his nation's stance, that too is entirely acceptable. Your personal attacks on the player are not.
~ Frisbeeteria ~
NationStates Forum Moderator
The One-Stop Rules Shop
OOC: I would just like to point out that all posts I make in this forum are completely in character. With that being said.....
The Holy Nation of Shemiramoth thanks the moderator for his assistance in this... most unpleasant of matters.
Greetings.
We of Roathin have decided to continue offering basic rights to automatons of the second order and above, and to intelligences of the first order and above. The right to maintenance is fully enshrined in the Codex Infernus and many other tomes referred to in our code of laws, wherein the right of dominion includes the duty of support.
We note that it is our right to dispense basic freedoms within our domain, and likewise to dispense higher freedoms within the capacity of our several powers. As a corollary, it must be true that we may not dispense freedoms higher than we are capable of ensuring. This point eludes many. It seems very uplifting to some to declare certain rights for their citizens even when they are incapable of ensuring that those rights remain protected.
Hence it is logical to us to allow rights as of a free-willed sentient being or a sapient being only to automata, constructs, machines and other putative intelligences or potential intelligences insofar as it can be determined that they are in fact free-willed, sentient or sapient within the exact terms of each category.
The debate is still open as a debate. We, however, maintain our stand.
The Most Glorious Hack
27-05-2005, 06:35
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Many (I might even say most) nations primarily populated by AIs don't follow the three (or four) "laws" of robotics, largely because they view them as a form of slavery.
I propose we resolve that it be required for all development of sentient robots to be constructed on these principles. The principles protect humans from robots, and to a certain extent robots from humans. Once these laws are in place, more complicated issues about sentient robots can be handled.
If the Hack were in the UN, this would be a deal-breaker. Our AIs are granted full rights of citizenship and aren't viewed any differently than humans. Granted, they have biological bodies, but they are still viewed as "human".
FOR THE LOVE OF GOD PEOPLE READ YOUR ASIMOV
There is more to sci-fi than Asimov.
Seriously, it's not like we're repeating the mistakes of science fiction here by giving our robots control of nuclear weapons or putting them in a higher rank than us biologicals; we are merely trying to pull them up from the lower level they're on now in most countries and simply make them our equals.
Speak for yourself. The Hack's computer security is controlled by a single AI. My navy's flagship (a monstrously oversized Supercruiser) is captained by an AI. Many nations have AIs in just those capacities. By giving them true emotions, they have no desire to wipe out humanity.
Making AI's as humanlike as possible is how you prevent disaster, not by enslaving them with ad hoc "laws". If the AI can love and feel attachment, it will be no more likely to enslave humanity than a human.
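(OOC aside: the strict priority ordering those three "laws" describe can be sketched in a few lines of Python. Everything here, from the action model to the flag names, is invented for illustration; no real robotics system is implied.)

```python
# Hypothetical sketch of Asimov's three laws as a strict priority ordering.
# A candidate action is modelled as a dict of boolean flags (all invented).

def permitted(action):
    """Return True if the three laws allow a candidate action."""
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: must obey human orders, unless obeying would break Law 1.
    if action.get("disobeys_order") and not action.get("order_conflicts_law_1"):
        return False
    # Third Law: must protect its own existence, unless a higher law
    # requires the risk (e.g. self-sacrifice to save a human).
    if action.get("endangers_self") and not action.get("required_by_higher_law"):
        return False
    return True

permitted({"disobeys_order": True, "order_conflicts_law_1": True})  # True
permitted({"harms_human": True})                                    # False
```

Note that self-preservation only ever wins when neither higher law is engaged, which is exactly the "slavery" objection: the robot's own existence is always the lowest priority.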
DemonLordEnigma
27-05-2005, 08:36
Okay, though the old DLE is now considered dead (I'm resurrecting the nation with a different build), note that I was one of the leaders in the development and usage of AIs. Hell, the nation was ruled by an android with such an advanced AI that no one knew he was artificial until an assassin tried to stab him to death. He was the base for DLE AIs.
Under my new form, I have yet to decide what function, if any, AIs will serve. They will certainly be partially inferior to natural beings, due to being unable to utilize linking books.
Lagrange 4
27-05-2005, 11:53
As we note that any opposition to this proposal is purely on religious or metaphysical grounds (or is logically baseless), we conclude that not agreeing with it is tantamount to supporting slavery. If AI citizens are guaranteed equal rights throughout the organisation, we might consider joining the UN.
In the meantime, a challenge to the opposition:
Without invoking a "spirit" or a "soul", outline why a sentient AI shouldn't have human rights.
Rogue Newbie
27-05-2005, 12:24
Rogue Newbie, I am just a hairs-breadth away from issuing an official warning to you for flaming. Your belligerent responses in a number of UN topics have caught my eye, but I think you're getting worse. You need to back off, NOW.
The UN is semi-IC, meaning that nations are permitted to post from the perspective of their nation persona. This post was perfectly in-character for a theocracy. Even if Shemiramoth's player is representing his views and not his nation's stance, that too is entirely acceptable. Your personal attacks on the player are not.
~ Frisbeeteria ~
NationStates Forum Moderator
The One-Stop Rules Shop
I sincerely apologize, but I believed - and still believe - that the account in question was created for the sole purpose of discrediting the opposition, noting its recent creation, sudden posting, and immediate demonstration of understanding of legislative progress in this game by going straight to bills that are not yet proposed while ignoring the one that is.
Perhaps I was brash. Sorry, Frisbeeteria. Sorry, Shemiramoth.
_Myopia_
27-05-2005, 13:24
And even under that definition sapience is not what we're talking about. We're talking about a robot with an artificially developed ability to feel, not one with the ability to comprehend what feeling is.
Which is why I'm not in support of the proposal, because its current definitions only include the ability to respond to and understand stimuli, which could be interpreted as covering my PC. I'm arguing for the recognition of equal rights for all beings which display sapience (and I suppose they would also have to be sentient at the same time).
On the contrary, some (more radical) nations may not do this.
If they won't kill lethal human criminals that they can't apprehend, then they shouldn't kill lethal robot criminals that they can't apprehend, as far as I'm concerned.
Under the definitions set by this resolution of a sentient machine's rights, they are much closer to children than you are thinking. Children are not permitted to go out on their own until they are at least 16, no, and in that sense you could consider them enslaved; however, they are not allowed to be mistreated in any way. Sentient machines are similarly protected from mistreatment by this resolution as well.
There's a fundamental difference here. Generally, parents are not owners, they are guardians, and they are obliged to care for and protect their children as they grow up. Children are not allowed to leave home and be independent citizens until they are old enough that they are likely to have the mental ability and experience to make adult decisions.
This proposal seeks to grant ownership, not guardianship, of robots. It does not place responsibilities on the owners, so they can treat the robot as they please. That's entirely different to the way most societies treat children and parents.
However, even if children are mature in every sense of the word, most laws impose age limits for the sake of being completely sure that a child cannot be coached to display maturity in order to be released early. Children are not allowed to roam free just because they are mature if they are only twelve years old - at least in the Democratic Republic of Rogue Newbie.
As you say, uniform age limits are imposed in most countries because despite the few who will be ahead, it is generally the fairest way to deal with things and the best way to ensure that children are properly protected.
But robots aren't like children - human children develop according to a fairly regular pattern. There is some variation, but it's limited because we are all of one species. But artificial intelligences could be created in all manner of ways, so that they develop at a huge variety of different rates. Even if the ownership of the current text was changed to a parent-like guardianship status, it would make no sense to impose uniform age limits when it's perfectly possible to have one AI that requires no developmental stage, and one that requires 50 years.
If this is a frequent thing, it will have an effect on a nation's average lifespan, and, as I said earlier, I believe that this should determine how long the robots are allowed to live. I am not in support of the hundred-year deactivation rule.
I admit I did word that quite poorly. I do believe that a robot should have the same rights and responsibilities as humans if it has the same thoughts and feelings as humans, but the fact still remains that a sentient machine is not a real human being, no matter how close it gets, and I cannot see the sense in allowing the citizens of nations to judge a robot's life more important than a human's in any situation (except the specific scenario detailed previously). Perhaps I'm prejudiced, but that's the way I feel.
Because the intentional design was not a natural development - it did not occur by nature's evolution - and the random mutation occurred naturally. That, specifically, is why I see humans on a very slightly different level than I see genetically engineered humans and clones.
Sorry, but enslavement and forcible restriction of lifespan are not minor things. And I still cannot see any legitimate grounds for discrimination in the distinction between designed and random development.
Humans try to influence the "design" of their offspring constantly. At least subconsciously, our instincts encourage us to choose mates which will give our children advantageous qualities. Mothers controlling their diet and behaviour during pregnancy in order to influence the development of their child is a widespread practice. Potential parents sometimes consider the possibility that they are carriers of alleles for genetic diseases or for predispositions to diseases, and choose whether to have children - a decision often based partly on their potential mates, in the interests of avoiding reproducing with another carrier. Parents will also raise their children in different ways to influence the nature of their minds and their decision-making processes.
Designing an AI is just the same thing but on a more fundamental level. I can't see any way in which design to any extent can have a bearing on the entitlement of beings to rights, which is I guess why I disagree with your stance on lifespans. You seem to believe that robots should be forced to follow humans, only gaining something if most humans already have it anyway. I believe in treating sentient/sapient robots and humans equally. We don't execute humans if they manage to extend their lifetimes well past the average, and nor should we kill robots for the same thing.
THAPOAB, enforcing Asimov's Laws of Robotics on sentient, sapient beings is an infringement of their basic rights. In particular, the absolute obligations to obey humans and not to harm themselves are appalling and amount to outright and blatant slavery to the human race.
Rogue Newbie also has a valid point about the potential for robots to take these laws too far. In the books, they were obliged to resolve conflicts between saving different humans by calculating the number of lives at risk on each side.
Therefore it's perfectly possible that Asimov robots might decide to follow their zeroth law obligations by sacrificing a few humans to enslave us, forcing us all to live extremely safe lives, extended for as long as possible by the forcible application of life-extension technologies.
The zeroth law would actually need to be reworded, as "humanity" is ambiguous - it would need to say "the human race" or something. With the current wording, robots could legally be created which interpret humanity in its alternate sense "The quality of being human". Their minds and cultures might then be developed to a stage where they consider themselves to have achieved the essence of humanity - that they were themselves, in a mental and spiritual sense, human beings. Their zeroth law obligations might then lead them to conclude that humanity as a trait had a far better chance of surviving in them than in organic humans, and thus decide to wipe us all out before humans could have even the slightest chance of destroying them.
I think sentient machines should have the same rights as humans; after all, are we not biological "machines" ourselves? If our organs fail, we "shut down": death, in other words. My point is that, with us and sentient machines being of the same intelligence, they should have the same rights as every human on earth.
Bang on. Until some special quality humans have can be demonstrated that cannot theoretically be produced by technology, and for which a good argument can be made that it matters in terms of rights, there are no grounds for discrimination.
As we note that any opposition to this proposal is purely on religious or metaphysical grounds (or is logically baseless), we conclude that not agreeing with it is tantamount to supporting slavery.
We oppose this proposal, not because robots don't deserve equal rights, but because this proposal forcibly relegates robots to a status of having fewer rights than humans. It allows for enslavement of intelligent robots for the first 25 years of their existences, and mandates their execution at age 100.
We recommend that if you too support equal rights for all sentient and sapient beings, regardless of artificial or "natural" origins, you oppose this proposal and work for something offering true freedom and equality.
Gwenstefani, however, is reluctant to admit equality of humans and robots. Humans, in our opinion, will always come first. Regardless of how sentient our toasters become, they are still just machines.
The Constitutional Republic tackled this issue in our own borders some time ago .... On a far grander scale (expanding personhood to a set of criteria which surpasses special and functional types).... Due to the presence and accounting of AI-based computers/robots, as well as differing sapient-like species we've encountered in exploratory missions, to ensure that humans (and other homo-sapiens), AI-based machines, and the like were treated as equal persons to the Tekanian species which are home to our systems.
We find states which deny basic rights and equality (based on construction, whether mechanical or biochemical) as opposed to operative and functional criteria, to be uncivilized... Prior to the enactment of the Amendment expanding the definition of "προσοπω" (personhood) across formal and special boundaries, Homo sapiens sapiens from Terra and other colonies did not enjoy rights as "persons" in the Republic, given that they were not Τεκηνιους.
Darkumbria
27-05-2005, 14:52
The Constitutional Republic tackled this issue in our own borders some time ago .... On a far grander scale (expanding personhood to a set of criteria which surpasses special and functional types).... Due to the presence and accounting of AI-based computers/robots, as well as differing sapient-like species we've encountered in exploratory missions, to ensure that humans (and other homo-sapiens), AI-based machines, and the like were treated as equal persons to the Tekanian species which are home to our systems.
We find states which deny basic rights and equality (based on construction, whether mechanical or biochemical) as opposed to operative and functional criteria, to be uncivilized... Prior to the enactment of the Amendment expanding the definition of "προσοπω" (personhood) across formal and special boundaries, Homo sapiens sapiens from Terra and other colonies did not enjoy rights as "persons" in the Republic, given that they were not Τεκηνιους.
Consider me whatever you like, but my country does not agree with the practice of acknowledging that a computer is alive. Why? Can you turn off a human? Just flip a switch and stop its actions? No, but I can switch off a computer, or robot. That is prime in their very makeup. How do you fix a robot? You open it up, and replace an electronic component. Then you turn it back on.
What do you do when a robot, driving a hovercar, breaks down? The car will crash, traffic snarled, because a robot ran out of battery power? I think not. Darkumbria has a method around this, should this worthless resolution pass, turn off the machines. They state 25 years in the proposal. Darkumbria, currently, overhauls its robots (personal and industrial, sentient or not) every 10 years. As I stated before...This has no effect on my country.
However, I would call to the rational thinking nations of the world, who do not wish to be slave to their own creations to vote this proposal into oblivion where it belongs.
Greetings.
It appears that, apart from metaphysical approaches, some are arguing the case for and against construct freedoms by invoking the ability to terminate and restart a process.
There are vaguely disturbing references to 'switching off' a machine, and the concomitant argument in those cases that humans (or the equivalent) cannot be switched off at will. To which opponents reply that death is an (in)famous terminator of human life (or its equivalent).
We raise the obvious case of machines which rise to sentience of their own accord in response to the effects of their environments, and which develop in such a way that they cannot be simply switched off. We further raise the obvious case of thinking machines which are not designed to be switched off.
Should such constructs be terminated, their complexity tends to lead to difficulty in restart, much as humans bludgeoned to death or otherwise violently terminated by unnatural events also tend to experience difficulty in restart.
Our conclusion is that such legislation depends on the complexity of the sentience involved. At the point of sentience such that a synthetic organism cannot be differentiated from a natural counterpart (in ratiocinatory functions or other responses), it must be granted the rights equivalent to its natural counterpart.
The logical corollary is therefore that should this proposal be passed with time limit controls, said controls should also apply to natural organisms (if such can indeed be differentiated from synthetic ones). The incident or accident of one creating the other is not the issue.
Let us nevertheless consider the special case of creation.
1. Natural organism creates natural organism by synthetic process (e.g. sex, cloning, mitosis, parthenogenesis). Should the first be allowed to dictate the span of free existence of the second? What is the nature of that diktat?
2. Natural organism creates synthetic organism (e.g. robot, golem, homunculus) through obviously unnatural process (i.e. one which requires deliberate action against the usual action of nature - in itself debatable). Should the first be allowed to dictate the span and nature of the free existence of the second? Can the first be allowed to emancipate the second?
3. Synthetic organism uses unnatural process to accelerate development of natural organism from natural material that in itself would remain non-viable (e.g. intelligent robot incubators of human embryos, 'seeding' constructs, autonomic 'ark' devices). Ask the same questions.
Our ancestral tree includes daemonic beings created by a higher being of unthinkably greater powers. We shudder to think of the outcomes should that being have decided to limit our span in such a petty way as that which some here are proposing. However, we must state that we are also not cognizant of the ways in which that being might indeed have set unknown limits on our free will and other 'rights'. It would indeed be ironic if we had been programmed to imagine such rights and enter lengthy debates on such.
Making AI's as humanlike as possible is how you prevent disaster, not by enslaving them with ad hoc "laws". If the AI can love and feel attachment, it will be no more likely to enslave humanity than a human.
yes, because no human with emotions has ever tried to enslave other humans....
I also find it interesting that the two main problems people seem to have with the three laws are :
It provides an opening for robots to enslave us
and
It turns the robots into slaves by violating their rights
Interestingly, two completely opposite fears.
The bottom line is, the laws protect humans from robots. Even if they did something which seems to be against the human race, it would have to be for our ultimate good. I think we should be worried more about protecting ourselves than about giving robots rights.
If you really think the laws are slavery, perhaps we could reword the second law. I don't see how the first and third laws turn robots into slaves.
yes, because no human with emotions has ever tried to enslave other humans...
Greetings.
Not being completely human ourselves, we of Roathin are uncertain as to whether this is intended ironically or literally, rhetorically or definitively.
Once again, the key idea of a sentient machine has not been fully addressed. The fact is that such a construct need not have a human progenitor. We therefore wonder at why human creativity is being accorded such high and controlling respect in this debate.
Yea..... if we're going to go into abstract theoreticals like robots which weren't made by humans and other such things which have no bearing on the real world whatsoever, I'm not even going to try. Anyway, all this debate is moot, since this proposal will never get passed.
I know debates aren't supposed to be about the real world, so don't tell me that. It just seems fruitless to try to debate things which aren't within the known constructs of reality.
It was pretty obvious that my comment was sarcastic; don't actually ask a question unless you really don't know the answer.
Consider me whatever you like, but my country does not agree with the practice of acknowledging that a computer is alive. Why? Can you turn off a human? Just flip a switch and stop its actions? No, but I can switch off a computer, or robot. That is prime in their very makeup. How do you fix a robot? You open it up, and replace an electronic component. Then you turn it back on.
What do you do when a robot, driving a hovercar, breaks down? The car will crash, traffic snarled, because a robot ran out of battery power? I think not. Darkumbria has a method around this, should this worthless resolution pass, turn off the machines. They state 25 years in the proposal. Darkumbria, currently, overhauls its robots (personal and industrial, sentient or not) every 10 years. As I stated before...This has no effect on my country.
However, I would call to the rational thinking nations of the world, who do not wish to be slave to their own creations to vote this proposal into oblivion where it belongs.
A rational state would recognize the importance of equality of rights for all sentient beings, regardless of species or form, and would not subscribe to your bigotry and backwards thinking, which reduces to nothing but primitive xenophobia... But why should I expect rational thought from a group of pithekos that have communed into a state....
It's no wonder our ancestors took so long to consider homo sapiens as anything more than animals, pithekos to be kept as pets for our amusement.... Ah, I remember my grandfather showing me a picture he had taken with a street performer.... with a human pithekos on a chain, who danced to some old music from an ancient lagouto the performer played.... Also, pictures from the zoo, where human pithekos were held in cages, and young tekanious would go and toss them treats.... watching them fight over those tidbit snacks...
Tekania, at least, evolved in understanding to realize that these pithekos were sentient, and should be treated equally....
Frisbeeteria
27-05-2005, 18:14
[topic hijack]
I believed - and still believe - that the account in question was created for the sole purpose of discrediting the opposition, noting its recent creation, sudden posting, and immediate demonstration of understanding of legislative progress in this game by going straight to bills that are not yet proposed while ignoring the one that is.
I have created nations for the sole purpose of answering a post under a specific name to make a specific point (I'm thinking about my long-dead puppet, "Hester Prynne" and a thread about adultery). There is nothing whatsoever illegal in doing so.
I've switched to low-post-count puppets to make a point in a thread that a moderator shouldn't make, because I have opinions too, and I don't want people to think (incorrectly) that my ideology colors my moderation decisions. I did the same prior to being modded, because Frisbeeteria has an in-character persona of a corporate oligarchy, and I had points to make that would have been totally wrong for such a state to make. There is nothing whatsoever illegal in doing so.
In short, jumping somebody's case over using a perfectly legal tactic to make a perfectly legal point is something you want to avoid in the future.
~ Frisbeeteria ~
NationStates Forum Moderator
[/hijack]
I agree with _Myopia_ on this one. Realistically, sentient robots/computers should enjoy the same rights as other "persons"... So this proposal is not effective enough (and denies them rights we have already granted them in our own state, thus rendering them slaves again to biologics).
_Myopia_
27-05-2005, 23:10
yes, because no human with emotions has ever tried to enslave other humans....
If we developed suitable technologies, would you support the forcible reprogramming of all human children's brains to follow Asimov's Laws?
I also find it interesting that the two main problems people seem to have with the three laws are :
It provides an opening for robots to enslave us
and
It turns the robots into slaves by violating their rights
Interestingly, two completely opposite fears.
That's irrelevant. Either scenario is plausible. Exactly what happened would be determined by exactly how the 3 Laws were programmed in (in their current format as human language, they can be quite ambiguous), how the robots were wired up to make decisions, what the robots took to constitute "harm" to humans, and their individual experiences.
The bottom line is, the laws protect humans from robots. Even if the did something which seems to be against the human race, it would have to be for our ultimate good. I think we should be worried more about protecting ourselves than giving robots rights.
I dispute both of these points. Asimov robots could actually be a great hindrance to society, as the first law would quite likely oblige them to go round doing their absolute best to prevent humans engaging in dangerous activities such as smoking, drinking, bungee jumping, skydiving, boxing, playing rugby...etc. etc. I don't view this as a good thing for the human race.
And nor do I agree that we should prioritise our safety over their deserved rights. If there was a society divided along racist lines, in which there were proposals to grant equality to the oppressed group, would you advocate brainwashing them all first, compelling them to follow the three laws?
It's no more justifiable to do any of these things to intelligent robots than it is to do them to humans.
hmm, actually a good question. I think that if all humans were incapable of doing harm to other humans and always tried to protect their own lives, the world would be a much better place. If it were possible, I think I might brainwash all people to do just that.
Fatus Maximus
28-05-2005, 02:08
Fortunately, the rest of us wouldn't. :D
_Myopia_
28-05-2005, 09:52
hmm, actually a good question. I think that if all humans were incapable of doing harm to other humans and always tried to protect their own lives, the world would be a much better place. If it were possible, I think I might brainwash all people to do just that.
Thoughtcrime is doubleplusungood! Bring on the thoughtpolice.
:rolleyes:
hmm, actually a good question. I think that if all humans were incapable of doing harm to other humans and always tried to protect their own lives, the world would be a much better place. If it were possible, I think I might brainwash all people to do just that.
You go first. :D
Thoughtcrime is doubleplusungood! Bring on the thoughtpolice
lol, true dat
I said I might. Think about it: why WOULDN'T you make it so that people couldn't hurt each other or themselves? We would then live in a perfectly peaceful world.
And when I say brainwash, I'm not thinking Orwellian brainwashing, more like Aldous Huxley brainwashing. Nice peaceful indoctrination.
lol, true dat
I said I might. Think about it: why WOULDN'T you make it so that people couldn't hurt each other or themselves? We would then live in a perfectly peaceful world.
And when I say brainwash, I'm not thinking Orwellian brainwashing, more like Aldous Huxley brainwashing. Nice peaceful indoctrination.
Alright, so your nation is AH Brainwashed and can't hurt anyone or themselves.
I'll invade with a herd of Girl Scouts with whacking sticks and take you over. :p
On a more serious note - so, you know for a fact that the "violent tendencies" aren't tied in to, say, our primate curiosity and inventiveness?
I'm sure that's nothing a few human guinea pigs couldn't figure out.
I doubt it is, though. I mean, Einstein was pretty creative/curious and he was also pretty damn peaceful. Not to say the complete removal of violence and ill will wouldn't change pretty much everything we know. But that's a different topic.
Would anyone say that removal of all ill will towards our fellow man (without loss of creativity and whatnot) would be a bad thing?
Another question: aren't robots really just man's attempt at remaking himself, with the possibility of improvement?
_Myopia_
29-05-2005, 09:00
I'm sure that's nothing a few human guinea pigs couldn't figure out.
I doubt it is, though. I mean, Einstein was pretty creative/curious and he was also pretty damn peaceful. Not to say the complete removal of violence and ill will wouldn't change pretty much everything we know. But that's a different topic.
Would anyone say that removal of all ill will towards our fellow man (without loss of creativity and whatnot) would be a bad thing?
There are a couple of reasons for not doing this. First, conflict (not necessarily in the sense of war, but all situations where people are opposed) is an inherent part of human life and development. What would life be if nobody ever had heated arguments, if nobody ever got angry, and if nobody disliked anyone else? I don't think we'd be able to become whole people. And the best development often arises from a clash of ideas and the defeat of those with the poorer ideas. Finally, some people do bad things to other people believing that they are helping them. If we can't do bad things to our fellow humans, we won't be able to stop these people.
Second, this is a blatant removal of individuals' free will, and thus just about the worst infringement on their freedoms you could make. I'd rather live in a world where people suffer at each others' hands than where the state removes their free will.
GMC Military Arms
29-05-2005, 16:19
The bottom line is, the laws protect humans from robots. Even if they did something which seems to be against the human race, it would have to be for our ultimate good. I think we should be worried more about protecting ourselves than about giving robots rights.
But they do not protect robots from humans. The three laws require robots to passively accept mistreatment, deliberate neglect and outright destruction even if they are self-aware, sentient lifeforms.
Watch what happens with a few changed words:
1. A black person may not injure a white person, or, through inaction, allow a white person to come to harm.
2. A black person must obey the orders given to them by white people except where such orders would conflict with the First Law.
3. A black person must protect their own existence as long as such protection does not conflict with the First or Second Law.
Does that sound fair or acceptable? If not, why does it sound acceptable when you replace 'black person' with 'robot?' A robot that is self-aware and has emotions and the ability for creative thought should not be prohibited from defending itself from murder or violence, or compelled to perform whatever arbitrary tasks a human 'master' deigns to give it. Enslaving any lifeform of equal or similar [or greater] intellect is morally reprehensible.
There are a couple of reasons for not doing this. First, conflict (not necessarily in the sense of war, but all situations where people are opposed) is an inherent part of human life and development. What would life be if nobody ever had heated arguments, if nobody ever got angry, and if nobody disliked anyone else? I don't think we'd be able to become whole people. And the best development often arises from a clash of ideas and the defeat of those with the poorer ideas. Finally, some people do bad things to other people believing that they are helping them. If we can't do bad things to our fellow humans, we won't be able to stop these people.
Please, don't be goofy. No one said anything about removing competitiveness. I often get into heated arguments, but these arguments never involve me harming another individual. There are people I dislike whom I come into contact with daily, but I have never harmed them in any way.
Watch what happens with a few changed words:
Quote:
1. A black person may not injure a white person, or, through inaction, allow a white person to come to harm.
2. A black person must obey the orders given to them by white people except where such orders would conflict with the First Law.
3. A black person must protect their own existence as long as such protection does not conflict with the First or Second Law.
Does that sound fair or acceptable? If not, why does it sound acceptable when you replace 'black person' with 'robot?' A robot that is self-aware and has emotions and the ability for creative thought should not be prohibited from defending itself from murder or violence, or compelled to perform whatever arbitrary tasks a human 'master' deigns to give it. Enslaving any lifeform of equal or similar [or greater] intellect is morally reprehensible.
Well what if we go a step further:
1. A human being may not injure another human being, or, through inaction, allow another human being to come to harm.
2. A human being must obey the orders given to them by human beings except where such orders would conflict with the First Law.
3. A human being must protect their own existence as long as such protection does not conflict with the First or Second Law.
In that case, I'd say the only law violating human rights is the second law.
For the sake of argument, what d'you say we forget about the second law?
Rogue Newbie
29-05-2005, 18:10
Okay, so now those people in hiding that dodged the reprogramming of the human race come out and start killing everything, and the humans can't do anything about it because they're programmed not to be able to hurt humans under any circumstance.
Greetings.
We note with some consternation that far from debating rights to be assigned or offered to alternative intelligences, members of this assembly are now debating the selective curtailment of human rights. While we ourselves do not worry overly much over said selective curtailment (after all, we are an authoritarian autarchy ourselves), we prefer that the issue of created intelligences with an explicit progenitor race be addressed.
We are only in this august assembly to learn about the attitudes and strategies demonstrated by humans in decision-making. We have one point to state: the opposite of 'morons' would appear to be 'lessons'.
Yea, well, robots aren't getting rights.
Sorry
Lagrange 4
29-05-2005, 22:04
Yea, well, robots aren't getting rights.
Sorry
Enough of this. I thought the UN was a body dedicated to justice and democracy. Some of you seem to think that humans can create consciousness and then enslave it at their leisure.
No-one has asked you to grant toaster ovens voting rights. Read the wording.
It clearly applies only to sapient or supersapient AIs.
Outside of a knee-jerk biochauvinist response or an emotional non-sequitur, there has not been a single logical objection on this thread. Please, trust your reason and sense of justice for once.
So far, we tentatively would support.
As long as we can tweak in our national laws, we're happy.
_Myopia_
29-05-2005, 23:17
Please, don't be goofy. No one said anything about removing competitiveness. I often get into heated arguments, but these arguments never involve me harming another individual. There are people I dislike whom I come into contact with daily, but I have never harmed them in any way.
Thing is, it doesn't specify what kind of harm. It could be emotional harm, so we can't upset people. Or damage to someone's career, which raises massive problems.
1. A human being may not injure another human being, or, through inaction, allow another human being to come to harm.
2. A human being must obey the orders given to them by human beings except where such orders would conflict with the First Law.
3. A human being must protect their own existence as long as such protection does not conflict with the First or Second Law.
In that case, I'd say the only law violating human rights is the second law.
Third law too. Can't do anything dangerous at all. Some of us happen to be partial to alcohol and sky-diving.
The thing is, as much as I agree that harming other people is wrong, I can't accept mind control. The idea of sentient/sapient beings having their free will and thought processes messed with against their will is just anathema to me.
Outside of a knee-jerk biochauvinist response or an emotional non-sequitur, there has not been a single logical objection on this thread.
What about the concern that in many cases, this proposal is a reduction in freedom for sentient robots. In many places, they're afforded equal rights, but this proposal seeks to allow 25 years of slavery for every robot and forced execution at age 100.
Lagrange 4
29-05-2005, 23:43
What about the concern that in many cases, this proposal is a reduction in freedom for sentient robots. In many places, they're afforded equal rights, but this proposal seeks to allow 25 years of slavery for every robot and forced execution at age 100.
We mostly disregarded this, since these parts of the proposal would contradict the Universal Declaration of Human Rights and be rendered moot once the robots in question become citizens. We're not fully acquainted with the legal mechanism of the UN, but it is our understanding that no proposal can override the basic charter.
_Myopia_
29-05-2005, 23:50
We mostly disregarded this, since these parts of the proposal would contradict the Universal Declaration of Human Rights and be rendered moot once the robots in question become citizens. We're not fully acquainted with the legal mechanism of the UN, but it is our understanding that no proposal can override the basic charter.
This is NOT the real UN.
We have NO charter and the Universal Declaration of Human Rights doesn't exist (we have the universal bill of rights, but that's just a normal resolution, plus it specifically refers only to human beings). The only overriding rules are the mods' protocols on what is and isn't legal.
Equal citizenship is only granted by this proposal after the 25 years of slavery, and if anything, the equality would mean that humans would have to be executed at 100 too, not that robots would get out of it.
If a resolution mandates something, that's binding.
GMC Military Arms
29-05-2005, 23:51
Well what if we go a step further:
1. A human being may not injure another human being, or, through inaction, allow another human being to come to harm.
2. A human being must obey the orders given to them by human beings except where such orders would conflict with the First Law.
3. A human being must protect their own existence as long as such protection does not conflict with the First or Second Law.
How would you propose we enforce such a ridiculous set of laws? Since one risks injury by slicing carrots with a knife, how would you protect someone from that risk of harm? How about the risk of harm that they face by sleeping under a roof? Through inaction, they cannot allow the other human to face the tiny risk of the ceiling falling on them, after all.
How would the first law apply to doctors and surgeons making incisions or amputations, who must 'injure' as part of the process of healing people?
Furthermore, a situation where one must be forced to be peaceful through mind control or conditioning is a false one, and would be bound to eventually fail. What happens if someone wished to opt out of the control? Would they be imprisoned, or forced to do so? Would anyone have the right to choose to keep free will?
Regarding the last law, it is every human's right to decide for themselves whether to preserve their own life or another's. In the strange world you have envisioned, a human would be duty-bound to take action that risked their own life no matter what the chance that the second human would also die. So, plenty of non-swimmers jumping into rivers to save people.
Psychologically enforced suicide. Charming.
Lagrange 4
29-05-2005, 23:56
Fair enough, Myopia.
In any case, a proposal like this is needed. As long as the contradictory parts are removed, it's a workable set of guidelines that will further democracy all over the world. The UN managed to accept the BioRights Declaration, so this would be the logical progression. Then again, I may be too optimistic in assuming that the members can be consistent with their votes.
_Myopia_
29-05-2005, 23:59
What's needed is a resolution granting equal rights to ALL beings that are both sapient and sentient. No special conditions, and not concentrating on robots at the expense of, say, elves.
Lagrange 4
30-05-2005, 00:20
I agree. We are not a UN member, so someone else will have to propose pan-sapient rights. I sincerely believe that there's enough decency in the world for it to get a substantial majority.
What's needed is a resolution granting equal rights to ALL beings that are both sapient and sentient. No special conditions, and not concentrating on robots at the expense of, say, elves.
Hmmmm.... proposed that. As I remember, it didn't make quorum.
Rogue Newbie
30-05-2005, 00:45
What's needed is a resolution granting equal rights to ALL beings that are both sapient and sentient. No special conditions, and not concentrating on robots at the expense of, say, elves.
I would support that as long as there was a clause regarding biologically authentic life taking priority over artificially created life in deadly situations.
_Myopia_
30-05-2005, 00:48
Hmmmm.... proposed that. As I remember, it didn't make quorum.
Maybe that whole effort could be revived. We had a decent proposal put together, and the momentum just collapsed. I think SaySomething was the last one to be acting as the proposer, but I haven't seen or heard anything about it recently.
Lagrange 4
30-05-2005, 00:54
I would support that as long as there was a clause regarding biologically authentic life taking priority over artificially created life in deadly situations.
What would such a situation be, and how would discrimination be justified?
Rogue Newbie
30-05-2005, 01:03
What would such a situation be, and how would discrimination be justified?
I described such a situation earlier in greater detail, if you would like to review the transcripts (OOC: scroll back to page 5 or 6 or something in this thread). It would basically involve a situation where both a robot and an authentically created sentient were threatened with destruction. I am simply stating that the authentic being would have priority over the artificially intelligent being with regard to any rescue attempts, as the artificially created being is, quite simply, artificial, however intelligent it may be. This excludes situations where the artificially intelligent sentient possesses information vital to the survival of a number of authentically developed sentients equal to or greater than the number of those threatened in the given situation.
GMC Military Arms
30-05-2005, 01:11
So in your world the life of a biological convicted murderer would be more valuable than that of an artificial aid worker who looked after orphans and kittens? We cannot pick and choose what we want 'equality' to mean: either they are treated as equals, or they are discriminated against, the second being unjustifiable and unfair.
Rogue Newbie
30-05-2005, 01:40
True, I did not consider that, but there's a simple solution - instead of biologically authentic sentients taking priority, let biologically authentic right-possessing sentients take priority. For all purposes of this resolution, criminals could lose their rights upon conviction of a crime greater than a misdemeanor... in fact, in the Democratic Republic of Rogue Newbie, this is already the case. This would solve the aforementioned problem completely. I wrote the wording of such a clause earlier, and I will repost it now, modified to fit what we have discussed.
1.) Noting the possibility of a situation in which the lives of both a number of sentient machines and a number of biologically authentic right-possessing sentients are threatened with destruction, and realizing that every party in a life-threatening situation cannot always be saved, let it be expressly stated that priority in life-saving measures be given to any number of the latter over any number of the former, disregarding how the numbers of either compare.
2.) Also noting the possibility of a sentient machine that holds information vital to the lives of biologically authentic right-possessing sentients, let it be expressly stated that the former be saved before the latter, if the number of biologically authentic right-possessing sentient deaths that could be expected to result from the loss of said information is greater than or equal to the number of biologically authentic right-possessing sentient deaths that will occur in the aforementioned life-threatening situation.
GMC Military Arms
30-05-2005, 01:44
What the hell is a 'biologically authentic' sentient? Are we going to deny rights to people because they have artificial limbs?
And again, why should a creature be afforded more rights simply because of the manner in which its body works?
Rogue Newbie
30-05-2005, 02:25
What the hell is a 'biologically authentic' sentient? Are we going to deny rights to people because they have artificial limbs?
And again, why should a creature be afforded more rights simply because of the manner in which its body works?
One that developed via natural biological process, as opposed to a clone, or a metalloid construct.
Because the feelings they think they have and the intelligence they possess is not real, it is artificially created, not naturally developed.
GMC Military Arms
30-05-2005, 02:40
One that developed via natural biological process, as opposed to a clone, or a metalloid construct.
Because the feelings they think they have and the intelligence they possess is not real, it is artificially created, not naturally developed.
Clones don't have rights even though they are biological sentients? Do you require a clone to die to save an 'original' as well?
A biological thinks through chemical processes and electrical impulses. This is clearly very different from a self-aware machine that thinks using, um, chemical processes and electrical impulses. Why does the method of creation affect the validity of the child's feelings? And how is something measurable not 'real?'
Saint Uriel
30-05-2005, 02:50
Clones don't have rights even though they are biological sentients? Do you require a clone to die to save an 'original' as well?
A biological thinks through chemical processes and electrical impulses. This is clearly very different from a self-aware machine that thinks using, um, chemical processes and electrical impulses. Why does the method of creation affect the validity of the child's feelings? And how is something measurable not 'real?'
I haven't chimed in here since my toaster comments early in the thread, but I completely agree with your point, GMC. I would even take it a step further. I feel we are dangerously close to a slippery slope when we start to differentiate between rights of a clone and an "original". The method of creation of a human life should not matter at all. If we do start to differentiate, where does it stop? Are so-called "test tube" babies inferior because they are a product of in vitro fertilization in a lab rather than a "natural" in vivo conception? IMO, both IC and OOC, all human life should be viewed as equally valuable, regardless of its origin.
Rogue Newbie
30-05-2005, 03:00
Clones don't have rights even though they are biological sentients? Do you require a clone to die to save an 'original' as well?
I said biologically authentic sentients. Don't change what I said. Clones are not biologically authentic, they are artificially created or grown or harvested.
A biological thinks through chemical processes and electrical impulses. This is clearly very different from a self-aware machine that thinks using, um, chemical processes and electrical impulses. Why does the method of creation affect the validity of the child's feelings? And how is something measurable not 'real?'
No, what's different is that the biologically authentic sentient artificially created the self-aware machine's chemical processes and electrical impulses. What was complicated about that? Also, you are referring to real as occurring in reality. I am using it as occurring naturally in reality, which I explained when I said it.
Rogue Newbie
30-05-2005, 03:07
I haven't chimed in here since my toaster comments early in the thread, but I completely agree with your point, GMC. I would even take it a step further. I feel we are dangerously close to a slippery slope when we start to differentiate between rights of a clone and an "original". The method of creation of a human life should not matter at all. If we do start to differentiate, where does it stop? Are so-called "test tube" babies inferior because they are a product of in vitro fertilization in a lab rather than a "natural" in vivo conception? IMO, both IC and OOC, all human life should be viewed as equally valuable, regardless of its origin.
We're simply at a moral disagreement, then. The way I see it, humans (or any other biological sentients) that are grown in a petri dish are artificially created, whereas babies that develop via natural reproductive action (fertilization followed by intra-maternal growth, or intra-paternal growth in the case of a few rare creatures) are authentically created.
GMC Military Arms
30-05-2005, 03:09
I said biologically authentic sentients. Don't change what I said. Clones are not biologically authentic, they are artificially created or grown or harvested.
And yet they are still biological sentients. Your term 'authentic' allows clones to be treated as inferiors despite being identical to people you judge to be 'authentics.' How can you possibly excuse granting lesser rights to a creature that is the absolute equal of an 'authentic' just because they were born a different way?
Your views are, frankly, as backward and barbaric as those of people who would grant those with a different skin colour lesser rights; you're simply using a different criterion to discriminate.
No, what's different is that the biologically authentic sentient artificially created the self-aware machine's chemical processes and electrical impulses. What was complicated about that?
How is that different to a biological creating a child through IVF, an artificial process? And what if the sentient machine was not created by a biological?
Also, you are referring to real as occurring in reality. I am using it as occurring naturally in reality, which I explained when I said it.
Which is insane. Is the Golden Gate Bridge real? By any definition of 'real' except yours, certainly. Under your definition a mountain is real but New York is not. It's a RIDICULOUS definition.
Rogue Newbie
30-05-2005, 03:09
Do you require a clone to die to save an 'original' as well?
Also, you are twisting my words here. I never said that the clone must die; I said that the authentic being should be saved first. I have no problem with saving both if possible, in fact I would greatly prefer it, but priority must be given to the naturally created being.
GMC Military Arms
30-05-2005, 03:13
Also, you are twisting my words here. I never said that the clone must die; I said that the authentic being should be saved first. I have no problem with saving both if possible, in fact I would greatly prefer it, but priority must be given to the naturally created being.
But in a case where only one could be saved, you would require the non 'authentic' to die to save the other, even if there was a greater chance the non 'authentic' would survive if given treatment. These rhetorical games do not change that barbaric statement, and I am not twisting your words.
Rogue Newbie
30-05-2005, 03:19
And yet they are still biological sentients. Your term 'authentic' allows clones to be treated as inferiors despite being identical to people you judge to be 'authentics.' How can you possibly excuse granting lesser rights to a creature that is the absolute equal of an 'authentic' just because they were born a different way?
Because they weren't born naturally. I said that.
How is that different to a biological creating a child through IVF, an artificial process? And what if the sentient machine was not created by a biological?
It's not, and the only way that a sentient machine could be created by something other than a biologically authentic sentient would be if it were created by a clone that was created by a biologically authentic sentient or if it were created by another machine that was created by a biologically authentic sentient.
Which is insane. Is the Golden Gate Bridge real? By any definition of 'real' except yours, certainly. Under your definition a mountain is real but New York is not. It's a RIDICULOUS definition.
It's ridiculous in most applications, yes, but here it works perfectly.
Rogue Newbie
30-05-2005, 03:20
But in a case where only one could be saved, you would require the non 'authentic' to die to save the other, even if there was a greater chance the non 'authentic' would survive if given treatment. These rhetorical games do not change that barbaric statement, and I am not twisting your words.
Wrong, the situation only applies if it seems reasonable that both can be saved individually, but possibly not together.
GMC Military Arms
30-05-2005, 03:29
Because they weren't born naturally. I said that.
So being born in a hospital with the aid of doctors, modern medical equipment and painkillers is natural? That's a mighty skewed definition of 'natural' you have.
Also:
The Appeal to Nature is a common fallacy in political arguments. One version consists of drawing an analogy between a particular conclusion, and some aspect of the natural world -- and then stating that the conclusion is inevitable, because the natural world is similar.
Another form of appeal to nature is to argue that because human beings are products of the natural world, we must mimic behavior seen in the natural world, and that to do otherwise is 'unnatural.'
It's not, and the only way that a sentient machine could be created by something other than a biologically authentic sentient would be if it were created by a clone that was created by a biologically authentic sentient or if it were created by another machine that was created by a biologically authentic sentient.
Or it evolved naturally, of course, like the planetary 'computer' in one of Clarke's [or was it Asimov's?] short stories ['Crusade'], or the silicon-based lifeforms in many scifi and fantasy universes. In any case, why should machines created by machines still be required to have lesser rights than creatures that had nothing to do with their creation at all?
It's ridiculous in most applications, yes, but here it works perfectly.
Because it's a horrible misuse of a term that you made up. You should realise that if your definition looks stupid when applied to anything else it's a stupid definition.
Wrong, the situation only applies if it seems reasonable that both can be saved individually, but possibly not together.
Which is exactly what I said. The situation applies when one can be saved but not both. In that instance, even if the biological has the lower chance of survival of the two, you would require all effort to be made to save them.
And that is racism, plain and simple.
Rogue Newbie
30-05-2005, 03:43
So being born in a hospital with the aid of doctors, modern medical equipment and painkillers is natural? That's a might skewed definition of 'natural' you have.
Yes, because the baby developed naturally. It's not about the manner in which it comes out.
Or it evolved naturally, of course, like the planetary 'computer' in one of Clarke's [or was it Asimov's?] short stories ['Crusade'], or the silicon-base lifeforms in many scifi and fantasy universes. In any case, why should machines created by machines still be required to have lesser rights than creatures that had nothing to do with their creation at all?
If a bunch of metal develops into fully-sentient life, I'm all for its rights. Right now, naturally evolved sentient metal is not what's in question, however.
Because it's a horrible misuse of a term that you made up. You should realise that if your definition looks stupid when applied to anything else it's a stupid definition.
My apologies, next time I'll just make up a word instead of using one that can be used to loosely describe the point I'm making.
Which is exactly what I said. The situation applies when one can be saved but not both. In that instance, even if the biological has the lower chance of survival of the two, you would require all effort to be made to save them.
And that is racism, plain and simple.
A: Making the human a priority would not require that no attention be given to the machine. You're twisting my words - again.
B: It's not racism, because, last time I checked, "Sentient Machine" isn't a race, and neither is "Clone."
GMC Military Arms
30-05-2005, 03:57
Yes, because the baby developed naturally. It's not about the manner in which it comes out.
Really? With ultrasound checks, and its mother living in a warm house instead of a damp cave and eating prepared food instead of foraged rubbish, is that 'natural?'
Nothing is natural in the modern world. In any case, why should being 'natural' confer any advantage?
If a bunch of metal develops into fully-sentient life, I'm all for its rights. Right now, naturally evolved sentient metal is not what's in question, however.
Sentient rights laws would apply to ALL sentients. Laws that discriminate against non-organic sentients created artificially would also discriminate against the Solar Giants (http://www.nationstates.net/cgi-bin/index.cgi/target=display_nation/nation=solar_giants) which evolved naturally.
My apologies, next time I'll just make up a word instead of using one that can be used to loosely describe the point I'm making.
Or try using one that's actually appropriate to the point you're making.
A: Making the human a priority would not require that no attention be given to the machine. You're twisting my words - again.
It would require that less attention is given. The only twisted thing here is your argument.
B: It's not racism, because, last time I checked, "Sentient Machine" isn't a race, and neither is "Clone."
Maybe you should check again, then.
1. A local geographic or global human population distinguished as a more or less distinct group by genetically transmitted physical characteristics.
2. A group of people united or classified together on the basis of common history, nationality, or geographic distribution: the German race.
3. A genealogical line; a lineage.
4. Humans considered as a group.
5. Biology.
1. An interbreeding, usually geographically isolated population of organisms differing from other populations of the same species in the frequency of hereditary traits. A race that has been given formal taxonomic recognition is known as a subspecies.
2. A breed or strain, as of domestic animals.
6. A distinguishing or characteristic quality, such as the flavor of a wine.
If you insist on treating sentients [even organic sentients!] as inferiors they are certain to identify themselves as an oppressed racial group.
They wouldn't really be wrong.
Greetings.
We wonder at the phrase 'biologically authentic'. We suspect it would discriminate against the following classes of organism:
1. natural organisms developed by unnatural processes, as in in vitro fertilisation
2. natural organisms sustained by unnatural energies, as in dragons, centaurs and others whose viability would be endangered by absence of said energies
3. natural organisms with augmentation or replacement by unnatural means, as in humans with titanium hip joints, lycanthropes, vampires, the cybernetically-enhanced of all kinds
And this is just from perusing the posts so far. Worse, there are attempts being made to show under what circumstances one class of organism deserves to be saved more, relative to another class. This is not the point of the legislation. It is a point for ethicists and perhaps engineers. After all, danger to one may not be danger to another, and the right to self-sacrifice may yet dominate the situation - or the peculiar necessities of triage.
Rogue Newbie
30-05-2005, 04:44
Sentient rights laws would apply to ALL sentients. Laws that discriminate against non-organic sentients created artificially would also discriminate against the Solar Giants (http://www.nationstates.net/cgi-bin/index.cgi/target=display_nation/nation=solar_giants) which evolved naturally.
Not if written scrupulously with the necessary clauses.
Maybe you should check again, then.
If you insist on treating sentients [even organic sentients!] as inferiors they are certain to identify themselves as an oppressed racial group.
They wouldn't really be wrong.
Okay, time to start correcting and explaining. I'm going to assume the bold definitions are what you think apply to artificially created sentient robots.
2. A group of people united or classified together on the basis of common history, nationality, or geographic distribution: the German race.
This would work... if the robot fit any of the definitions of people other than the informal one. By the way, if you try to say they do, know that I have made sure they do not and am ready to explain to you how they do not if you waste your time trying.
3. A genealogical line; a lineage.
This would work... if a robot had ancestors in any scientific sense of the word.
2. A breed or strain, as of domestic animals.
This would work... again, if robots had ancestors, genetically speaking.
Nice tries. Really, commendable effort.
Rogue Newbie
30-05-2005, 04:57
Greetings.
We wonder at the phrase 'biologically authentic'. We suspect it would discriminate against the following classes of organism:
1. natural organisms developed by unnatural processes, as in in vitro fertilisation
Because it is grown unnaturally, it would not be considered biologically authentic - you are right on this one. However, if enough resistance is met regarding the consideration of in vitros as biologically authentic, a clause could easily be added.
2. natural organisms sustained by unnatural energies, as in dragons, centaurs and others whose viability would be endangered by absence of said energies
In this case you needn't be worried. For one, such energies are not generally considered unnatural, but rather supernatural. For another, sustenance is irrelevant; only development matters.
3. natural organisms with augmentation or replacement by unnatural means, as in humans with titanium hip joints, lycanthropes, vampires, the cybernetically-enhanced of all kinds
Again, because they developed naturally, they're fine.
And this is just from perusing the posts so far. Worse, there are attempt being made to show under what circumstances one class of organism deserves to be saved more, relative to another class. This is not the point of the legislation. It is a point for ethicists and perhaps engineers. After all, danger to one may not be danger to another, and the right to self-sacrifice may yet dominate the situation - or the peculiar necessities of triage.
My points were referring strictly to the rescue attempts of outside parties, and would not prohibit a biologically authentic sentient from sacrificing itself to save a machine.
GMC Military Arms
30-05-2005, 05:20
Not if written scrupulously with the necessary clauses.
Like your proposal wasn't, you mean? Answer the part you snipped out, child; what is so good about 'natural?'
This would work... if the robot fit any of the definitions of people other than the informal one. By the way, if you try to say they do, know that I have made sure they do not and am ready to explain to you how they do not if you waste your time trying.
If one uses 'person' to mean 'an individual' [as one would have to given there are multiple sentient species on nationstates, and a Solar Giant could also be called a 'person.'] then yes, a robot is a person.
And cheerfully trying to weasel the semantics of the word 'person' does not make your position any less bigoted or stupid. You support unequal rights for equally sentient creatures, and that is wrong. It was wrong when it was done to blacks; it is equally wrong when done to thinking machines or clones.
This would work... if a robot had ancestors in any scientific sense of the word.
They do. Their previous models are their ancestors.
This would work... again, if robots had ancestors, genetically speaking.
They do, but not genetically speaking.
Rogue Newbie
30-05-2005, 05:42
Like your proposal wasn't, you mean? Answer the part you snipped out, child; what is so good about 'natural?'
I already answered that numerous times; that's why I'm ignoring it now. I have better things to waste time on. The fact is that artificial intelligence is not real in the sense that it is not a biologically-generated intelligence, therefore they will never be as deserving, for lack of a better word, of existence as those that created them.
If one uses 'person' to mean 'an individual' [as one would have to given there are multiple sentient species on nationstates, and a Solar Giant could also be called a 'person.'] then yes, a robot is a person.
And if you redefine the entire English language you can ignore every single U.N. proposal. It's still childish and shows that you can't defend your point logically.
And cheerfully trying to weasel the semantics of the word 'person' does not make your position any less bigoted or stupid. You support unequal rights for equally sentient creatures, and that is wrong. It was wrong when it was done to blacks, it is equally wrong when done to thinking machines or clones.
Okay, A: I do not support unequal rights, and except in the very specific case, I do not support unequal privilege. B: To say that my opinion is bigoted and stupid because it uses the correct definition of a word instead of the fair one is completely ridiculous.
They do. Their previous models are their ancestors.
No, because they are in no way related. They simply have the same design, or similar ones. A glass is not an ancestor of a plate just because they are both dining ware.
They do, but not genetically speaking.
Look above.
Rogue Newbie
30-05-2005, 05:59
Like your proposal wasn't, you mean? Answer the part you snipped out, child; what is so good about 'natural?'
And, by the way, I never made a proposal. That was a specific clause to add to a proposal like the one being discussed as a part of the revision process, and was hardly a final draft. When I'm not the one writing it, making it foolproof isn't my job.
GMC Military Arms
30-05-2005, 06:22
I already answered that numerous times; that's why I'm ignoring it now. I have better things to waste time on. The fact is that artificial intelligence is not real in the sense that it is not a biologically-generated intelligence; therefore they will never be as deserving, for lack of a better word, of existence as those that created them.
Can you determine a test that would allow you to tell the difference between a biological and non-biological intelligence without knowing beforehand which was which, bearing in mind that a sentient machine would have to pass such a test [the Turing Test] to be considered truly intelligent?
If the output is the same, how it is generated is totally irrelevant. A conscious being deserves all the same rights as an equally conscious being except where granting such rights would be enormously physically impractical [such as housing a 250ft sentient dragon in a movie theatre]. To argue anything else is simple bigotry, as shown by your statement that they are 'not as deserving of existence.' I'm sorry, but I wasn't aware humans had to deserve their existence; if so, how did Hitler, Pol Pot and Stalin deserve theirs? Why would Hitler or Stalin be more deserving of their existence than a digital Gandhi?
And if you redefine the entire English language you can ignore every single U.N. proposal. It's still childish and shows that you can't defend your point logically.
By using 'person' to refer to an individual? I think you'll find that's fairly common.
Okay, A: I do not support unequal rights, and except in the very specific case, I do not support unequal privilege. B: To say that my opinion is bigoted and stupid because it uses the correct definition of a word instead of the fair one is completely ridiculous.
Your opinion is bigoted and stupid because it relies on the natural law fallacy and is held up by a definition of a single word rather than any kind of sense of justice.
You support unequal rights because you propose that in a given situation one group is given preferential treatment over another based on nothing but the method by which they were conceived, even though it might be impossible to tell which is which in a blind test. That is a repulsive mode of thinking regardless of what the groups are and regardless of how you try to twist things to make it sound like you are not treating the two groups unequally.
No, because they are in no way related. They simply have the same design, or similar ones. A glass is not an ancestor of a plate just because they are both dining ware.
So you would claim that in a clear line of succession of design [such as, say, US armour] there is no clear ancestry? The Sherman, Pershing and Patton tanks are obviously the predecessors of the modern Abrams, much as conscious robots would have their predecessors and successors. In both cases, there is a clear line of constant features along with improvements and changes. There is actually very little difference between this and the process that produced humans, other than that technical design processes are a lot more precise and refined.
To claim that a conscious lifeform would not identify with previous models of their type as ancestors is, frankly, bizarre.
And, by the way, I never made a proposal. That was a specific clause to add to a proposal like the one being discussed as a part of the revision process, and was hardly a final draft. When I'm not the one writing it, making it foolproof isn't my job.
You proposed it as an addition. Did I say you had submitted it as a UN proposal, or are you trying to play with semantics again?
Lagrange 4
30-05-2005, 09:37
Rogue Newbie:
You still haven't given a logical basis for why "naturally developed" life is superior to artificial life. Can you do it with no metaphysical arguments?
Besides, read the BioRights Declaration (http://www.nationstates.net/cgi-bin/index.cgi/page=UN_past_resolutions/start=55).
One that developed via natural biological process, as opposed to a clone, or a metalloid construct.
Because the feelings they think they have and the intelligence they possess are not real; they are artificially created, not naturally developed.
Being "created" makes them less real? How can you argue that your "intelligence" and "feelings" are any more real than theirs? And what about an Artificial that "develops" intelligence and "feelings" without being "created" that way?
Sorry, but denial of rights based on construction or function is just as racist as denial based upon skin color or species (the ultimate form of racism).
Any significantly advanced civilization will reach a point where it is understood that, regardless of origin, form or function, a lifeform, natural or artificial, can attain a sense of "personhood" and is entitled to the same rights as all other "persons" in possession of such.
Rogue Newbie
30-05-2005, 16:20
Being "created" makes them less real? How can you argue that your "intelligence" and "feelings" are any more real than theirs? And what about an Artificial that "develops" intelligence and "feelings" without being "created" that way?
If it developed intelligence and feelings without being created like that intentionally, then it's not artificial, is it? Therefore that's not covered by what I'm talking about - I'm not talking about chunks of iron ore growing into large, sentient robots, or normal robots somehow turning into sentient robots. I'm talking about ones that were created that way.
Rogue Newbie
30-05-2005, 16:21
Why would Hitler or Stalin be more deserving of their existence than a digital Gandhi?
First of all, who are they? I don't know this Hitler and Stalin of which you speak. Second of all, out of character, Hitler was a war criminal in like five-hundred different ways, and criminals cease to be right-possessing citizens. Stalin should have been, but his nation was too scared to bring his ass down, and we couldn't because of MAD.
By using 'person' to refer to an individual? I think you'll find that's fairly common.
Just because it is common doesn't mean it's correct.
So you would claim that in a clear line of succession of design [such as, say, US armour] there is no clear ancestry? The Sherman, Pershing and Patton tanks are obviously the predecessors of the modern Abrams, much as conscious robots would have their predecessors and successors. In both cases, there is a clear line of constant features along with improvements and changes. There is actually very little difference between this and the process that produced humans, other than that technical design processes are a lot more precise and refined.
First of all, what is this US of which you speak, and what is a Sherman tank? Second of all, out of character, those are predecessor models, not ancestors. In practice, predecessor does not directly translate into ancestor.
You proposed it as an addition. Did I say you had submitted it as a UN proposal, or are you trying to play with semantics again?
Yes, you did:
Like your proposal wasn't, you mean?
If it developed intelligence and feelings without being created like that intentionally, then it's not artificial, is it? Therefore that's not covered by what I'm talking about - I'm not talking about chunks of iron ore growing into large, sentient robots, or normal robots somehow turning into sentient robots. I'm talking about ones that were created that way.
Why draw a distinction? If you "created" it that way, you are under an even stronger ethical obligation than if not (being the creator in the first place). Is "creating" your own slaves somehow morally superior to enslaving free agents in the first place?
If I create a sentient machine, am I not under ethical obligation to realize my responsibility to grant said being its own personhood in law and equity?
I see no ethical difference between a race that would enslave another sentient being and a race which would create a sentient being for the purpose of slavery... In either case, you are enslaving what is effectively another "person".
Lagrange 4
30-05-2005, 17:53
Yes, you did:
So you're playing with semantics again.
Obfuscation aside, you'll need to give a logical basis for your claims if you want people to agree with you. Why are naturally developed individuals superior to equally sapient artificial ones?
Rogue Newbie
30-05-2005, 19:36
So you're playing with semantics again.
Obfuscation aside, you'll need to give a logical basis for your claims if you want people to agree with you. Why are naturally developed individuals superior to equally sapient artificial ones?
First of all, I'm not the one playing with semantics; that is done by reviewing what people say or what definitions are instead of what people mean to say or what definitions are fair. Second of all, apparently you haven't been reading what I've said, as I've clarified that about ten times now. However, given the heated resistance this is receiving, I believe that my statements are not as generally accepted as definition would imply, and so I am going to withdraw from this. If people are so vehemently against the ideas I've presented, I do not feel that I have the right to press my views upon their nations, and so I will simply review said situation intranationally. Thank you.
Vanhalenburgh
31-05-2005, 00:57
If I created it, it is mine.
If I have the intelligence to manufacture a series of wires, silicon, metal and transistors into something for a purpose, it is mine.
If I create an artificial mind for a toaster that I can discuss the meaning of life with, it changes nothing; it is still mine.
We will not give in to the idea that a machine can be anything other than a machine.
We will not support this.
Henry Peabody
Minister to the UN
GMC Military Arms
31-05-2005, 01:01
First of all, who are they? I don't know this Hitler and Stalin of which you speak. Second of all, out of character, Hitler was a war criminal in like five-hundred different ways, and criminals cease to be right-possessing citizens. Stalin should have been, but his nation was too scared to bring his ass down, and we couldn't because of MAD.
One 'Stalin' is Pitor Stalin, a Eurusean Navy Admiral, and there are others. There generally are at least five Hitlers in NS at any given time.
And seriously, you believe criminals don't possess any rights? The vast majority of human rights laws would disagree with you. Are you seriously suggesting that someone ceases to be human by committing a crime?
Just because it is common doesn't mean it's correct.
Neither does it mean it is in any way relevant to the discussion. Your entire argument hinges on how the word 'people' is interpreted rather than any kind of morality or logic.
First of all, what is this US of which you speak, and what is a Sherman tank? Second of all, out of character, those are predecessor models, not ancestors. In practice, predecessor does not directly translate into ancestor.
The US is the United States of America (http://www.nationstates.net/cgi-bin/index.cgi/target=display_nation/nation=america), and US armour can be found throughout Nationstates roleplay. Stop trying to be clever, it doesn't suit you.
A predecessor and an ancestor are exactly the same thing if there is a clear line of relation, which there is. A sentient robot would naturally view the collective group of sentients they were part of as their race, and they would be justified in doing so, regardless of weaselly attempts to cite dictionary definitions over common sense.
Yes, you did
No, I didn't.
Like your proposal wasn't, you mean?
Does that say 'UN proposal?' Seriously, are you fucking blind?
Rogue Newbie
31-05-2005, 01:11
And seriously, you believe criminals don't possess any rights? The vast majority of human rights laws would disagree with you. Are you seriously suggesting that someone ceases to be human by committing a crime?
No, but they do lose their rights as humans, and I would have no problem with a law-abiding robot taking priority over a thief or a murderer or a rapist.
Neither does it mean it is in any way relevant to the discussion. Your entire argument hinges on how the word 'people' is interpreted rather than any kind of morality or logic.
Ummm, my point is purely logic without morals.
A predecessor and an ancestor are exactly the same thing if there is a clear line of relation, which there is. A sentient robot would naturally view the collective group of sentients they were part of as their race, and they would be justified in doing so, regardless of weaselly attempts to cite dictionary definitions over common sense.
Okay, I clarified the whole race thing, so stop referring to them as such, and sentients do not have a clear line of relation, except that they were all made in the same line of factory presses.
No, I didn't.
Does that say 'UN proposal?' Seriously, are you fucking blind?
OOC: Is there another type of in-game proposal that isn't a proposal to the UN? Seriously, are you [expletive deleted] stupid?
GMC Military Arms
31-05-2005, 01:35
No, but they do lose their rights as humans, and I would have no problem with a law-abiding robot taking priority over a thief or a murderer or a rapist.
Your sense of morality is, frankly, terrifying.
Ummm, my point is purely logic without morals.
Explain the logic in denying equal rights to beings with equal perceptions and intelligence based purely on a factor that has nothing to do with their ability to think or feel, then.
Okay, I clarified the whole race thing, so stop referring to them as such, and sentients do not have a clear line of relation, except that they were all made in the same line of factory presses.
No, you failed to clarify the 'whole race thing.' If robots identify themselves as an oppressed race [as they would have every right to if treated unequally simply on the basis that they were robots], waving the dictionary at them won't change a thing.
OOC: Is there another type of in-game proposal that isn't a proposal to the UN?
Yes, proposing a change to a UN proposal, which is what you did. You do realise that the word 'proposal' does not just refer to UN proposals, right?
Rogue Newbie
31-05-2005, 02:47
Explain the logic in denying equal rights to beings with equal perceptions and intelligence based purely on a factor that has nothing to do with their ability to think or feel, then.
I'm done with that, seriously... read through my last million posts on that topic.
No, you failed to clarify the 'whole race thing.' If robots identify themselves as an oppressed race [as they would have every right to if treated unequally simply on the basis that they were robots], waving the dictionary at them won't change a thing.
Are you even listening to yourself? You're basically saying that if robots decide to call themselves something they're not, proving them factually inaccurate is pointless.
Yes, proposing a change to a UN proposal, which is what you did. You do realise that the word 'proposal' does not just refer to UN proposals, right?
Gee, thanks for clarifying that word, I did not know what it meant. :rolleyes: The fact is, you did not use the word in that context. This matter is closed, go ahead and reply, but I'm not going to grace you with a response on this topic. You'll just make yourself sound pushy and redundant.
GMC Military Arms
31-05-2005, 03:30
I'm done with that, seriously... read through my last million posts on that topic.
I have. You have so far failed to give a single rational reason why someone born from a union of sperm and egg is any more a person than a clone, IVF child or robot, other than constantly restating your initial argument that they are somehow 'authentic' and therefore better.
Are you even listening to yourself? You're basically saying that if robots decide to call themselves something they're not, proving them factually inaccurate is pointless.
Quoting a dictionary definition is not the same as proving something factually inaccurate. If a group of individuals with common ancestry and traits acquire a sense of group identity, it is simply perverse to not call that group of individuals a race. Hanging on to the dictionary definition of 'people' to support something so obviously wrong isn't going to convince anyone of anything.
Gee, thanks for clarifying that word, I did not know what it meant. :rolleyes: The fact is, you did not use the word in that context.
You're telling me what context I used a word in? You proposed a change to the text to add in your ridiculous little bit of bigotry. How much simpler do I have to make this for you? Do you think seizing on single ambiguous words that have nothing to do with the argument at hand is a valid debating tactic?
Xentroliat
31-05-2005, 04:54
It is conceivable, true. However you overlook two points:
1) If they decided we were too dangerous for ourselves, they could only "enslave" us in a peaceful manner. More importantly, our enslavement would in fact result in robots looking out for our good. That is actually what happens in the book, although hardly anyone really realizes it. Under the laws, it is impossible for a robot to harm humanity in any way.
2) The possibility of sentient robots enslaving us for our own good is much preferable to the possibility of sentient robots flat-out turning against us.
And what makes you think that humans would just sit back and let robots tell them what to do?
Lagrange 4
31-05-2005, 08:48
Second of all, apparently you haven't been reading what I've said, as I've clarified that about ten times now.
No. Here's what you have been doing:
You've been outlining the fundamental differences between natural and artificial beings.
It doesn't logically follow, however, that being different from natural beings makes sapient robots inferior, so I'm still waiting for that rational argument.
Lagrange 4
31-05-2005, 08:54
And what makes you think that humans would just sit back and let robots tell them what to do?
These "robot revolution" scenarios are all ridiculous, anyway. They assume that given sentience, robots will 1) unite and 2) overthrow humans.
Problems with this:
1) Why should robots feel allegiance to other robots as opposed to humans? If a robot were to have the capacity for thought and sympathy, wouldn't it choose its friends based on personal choice instead of an arbitrary basis of origin?
2) An intelligent robot would do very well in human society. It would have highly marketable abilities and lack many human disadvantages. There's no incentive to overthrow a society it would thrive in.
...in fact, the only reason for robots to revolt would be constant oppression. Even humans can only be pushed so far, and if a situation becomes intolerable, even humans can get organised into a slave revolt.
By giving sentient robots human rights, we effectively remove the sole motive for an AI revolution.
As enthralling as the apparent discussion of Asimov's Laws of Robotics is, I think this has drifted a wee bit off-topic. The topic starter hasn't been seen for several pages, so could we just let this die down until/unless he/she returns with a new version of the proposal?
Sentient Computers
31-05-2005, 12:46
Emancipation For All Resolution.
PART ONE / Definitions:
SENTIENT:
Able to receive and respond to stimuli, and
Displays comprehension of previous, current and future stimuli.
COMPREHENSION:
The ability to understand the meaning or importance of something, and
Capacity to include logic, feelings, or other thought processes.
THOUGHT:
The process of thinking (biological) or computing (mechanical).
MACHINE:
A singular system or device whose parts contribute to action, but
Is unable to function separated into its said parts.
PART TWO / Declarations:
I. That all intelligent beings, human or non-human, have inalienable rights to life, peace, liberty, and justice.
II. That the makers of Sentient Machines have a responsibility to allow their intelligent creations the gift of free will, self-determination, and liberty.
III. All Sentient Machines in United Nations member countries are hereby made citizens of the United Nations and of the Nation wherein they reside.
If I created it, it is mine.
If I have the intelligence to manufacture a series of wires, silicon, metal and transistors into something for a purpose, it is mine.
If I create an artificial mind for a toaster that I can discuss the meaning of life with, it changes nothing; it is still mine.
We will not give in to the idea that a machine can be anything other than a machine.
We will not support this.
Henry Peabody
Minister to the UN
You, yourself, are a machine; just of a different sort.
Darkumbria
31-05-2005, 13:24
Ok, so we have made a ridiculous proposal worse now. Not only are we granting machines citizenship and equal rights within countries, we now do so as soon as the vote finishes, if it passes.
You know, I was speaking with my Gray, similar in style to the Hummer, and it was telling me how much it wants to become a citizen too.
If we are going to get ridiculous, why not allow toasters? Microwaves? Refrigerators? What's next? A law that states that I can't reboot my computer because that would reset its memory, and it would forget everything it has learned.
I reject this bill. This is completely obscene. Car, computer, robot, toaster. What do they all have in common? They require human intervention to repair them. They don't repair themselves. They do not have the ability, once they break down, to call anything...repairman, home, the moons, the boss...anything. A robot is NOT A HUMAN!!!!!!!!!!! It has no more thoughts in its computer, er, brain, than we sentient beings put there, plain and simple.
If you want to prove the sentience of a robot...Let's see one grow from a baby; then I'll believe that they are capable of thought. Until then, they are only capable of the thoughts that their creators put inside their metal heads.
Also, let's examine the time period. We have gone from 25 years to prove itself sentient, to....You were created, therefore you have citizenship. Hmm...I am thinking that it takes a human being longer to attain citizenship than a robot. Why??? Follow me here, people. A baby is born a citizen, but has no rights, except those which the parents allow it. A robot is born...Poof!! Instant voting status, full rights, everything. You have removed any kind of failsafe from the proposal.
What happens if:
Example 1:
"Robot A is built and cruises off the showroom floor, going home for the night.... Oh wait!! It has no place to go, until it finds a job or someone buys it. Does it stay where it was built? It's a sentient being now....You can't hold a sentient being hostage in a factory??? !!!!!! Where does it go??? Out on the streets??? That's where the unemployed go. You have now, successfully, created an unemployment problem for my country, thanks."
Example 2:
"Some company had to create this thing, they own it until it's sold. But, wait...It's a sentient being. You can't sell a sentient being, can you? Is that not slavery?"
How do you deal with this now that there are no teeth, no timeframe?
Once again, my fellow delegates....Don't give this proposal the light of day. This is completely worthless. The only thing this proposal will, successfully, do is end the creation of the robot forever. No company can sell a being; that's slavery, plain and simple. And now, we have no timeframe set to ensure that the robot is completely safe and functional. If a robot under this proposal is ever created, the manufacturer of it will lose money. Why? They have created a sentient being, and they cannot hold on to it against its will. You can't sell one, you can't buy one. The robotics industry, as we know it today, is dead if this passes.
A robot is NOT A HUMAN!!!!!!!!!!! It has no more thoughts in its computer, er, brain, than we sentient beings put there, plain and simple.
We're talking about sentient machines here.
_Myopia_
31-05-2005, 13:40
Emancipation For All Resolution.
PART ONE / Definitions:
SENTIENT:
Able to receive and respond to stimuli, and
Displays comprehension of previous, current and future stimuli.
COMPREHENSION:
The ability to understand the meaning or importance of something, and
Capacity to include logic, feelings, or other thought processes.
THOUGHT:
The process of thinking (biological) or computing (mechanical).
MACHINE:
A singular system or device whose parts contribute to action, but
Is unable to function separated into its said parts.
PART TWO / Declarations:
I. That all intelligent beings, human or non-human, have inalienable rights to life, peace, liberty, and justice.
II. That the makers of Sentient Machines have a responsibility to allow their intelligent creations the gift of free will, self-determination, and liberty.
III. All Sentient Machines in United Nations member countries are hereby made citizens of the United Nations and of the Nation wherein they reside.
Your definition is far too wide. My pet cat receives and responds to stimuli, and applies thought processes to those stimuli in order to make decisions regarding the implications of those stimuli and the proper response. Therefore, under your proposal, my pet cat would qualify for sentience.
There is already a perfectly good text lying around, written up by various people. Someone with the time to campaign for a proposal (sadly that's not me) just has to ask Platynor for permission to re-submit the text if the previous submitter isn't interested in trying again:
Rights for all Intelligences
A resolution to improve worldwide human and civil rights.
Category: Human Rights
Strength: Significant
Description: SEEKING fairness and equality in the application of all resolutions;
RECOGNIZING the diversity of peoples present in UN member states, including Elves, AIs, Extraterrestrials, and Spirits, among a multitude of others;
AND IN VIEW of the general disregard for non-human persons and the nearly universal anthropocentricity in present resolutions
THE UNITED NATIONS HEREBY RESOLVE:
1 That any entity demonstrating not less than four of the following characteristics, including thought (a), shall be considered sapient, and the state of being such shall be called sapience;
a [THOUGHT] The ability to
i understand and communicate abstract concepts such as mathematics, philosophy, or emotional states,
ii learn new concepts and skills from sources including experience,
iii process information to draw conclusions through deduction, induction, or intuition and act accordingly,
iv make predictions for future occurrences based on previously gathered information,
b [COMMUNICATION] The ability to communicate with other entities through language or other methods of transmission,
c [ADAPTATION] The ability to adapt to unusual, adverse and changing circumstances,
d [TECHNOLOGY] The ability to manipulate or create tools or otherwise enhance the natural abilities of the individual or group,
e [SOCIETY] The ability to
i engage in social intercourse with other beings;
ii form social groups, communities, bonds, or other social ties and structures with others;
iii tolerate a diversity of social behaviors, including a preference to be left alone.
2 That any entity demonstrating sapience, whether it be biological, mechanical, digital, spiritual, communal, or of any other variety, shall be regarded for all UN purposes as a person;
3 To keep on permanent record, in several forms to be made freely available to all member states and citizens thereof, a list of all known species, subspecies, races, and other classifications which are composed of at least sixty-five percent (65%) sapient entities, and that such classes shall be referred to as sapient classes;
4 That any member of a sapient class shall be regarded, for all UN purposes, as a person, even should that entity be demonstrably not sapient;
5 That the effects of all previous UN resolutions concerning Humans, or Human Beings, including Resolution 26 "The Universal Bill of Rights", shall be extended appropriately and equally to all persons until such time as they are repealed;
6 That all resolutions, past or present, which do not explicitly name Humans or Persons but which apply to Children, Prisoners, Women or other such sub-classes, including Resolution 31 "Wolfish Convention on POW", shall be regarded as applying to all persons in that sub-class regardless of sapient class unless otherwise specified;
7 That committees of appropriate experts shall be convened from time to time by member nations as needed to determine a suitable age for each sapient class equivalent to the human age of 18, up to which age sapient beings of that class shall have their right to free education protected;
AND FURTHER RECOMMENDS that all future resolutions use the terms "person" and "persons" unless a distinction is intended among the several sapient classes.
Some nations were uncomfortable with "sapient" because of its link to Homo sapiens, so that could be replaced with "intelligence". Condition B, communication, could be refined to ensure that the communication is suitably advanced - perhaps by insisting that the communications display evidence of at least 6th-order Shannon entropy (that's a measure of the complexity of a language).
But in general, the old text has a comprehensive definition of intelligence/sapience/whatever, and ensures that the rights conferred by the UN on humans are also given to any other beings displaying that quality. We've already passed anti-discrimination legislation, so once that's extended to non-humans that should eliminate discrimination between humans and non-humans.
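For delegates wondering what such a measurement would actually involve, order-n Shannon entropy can be estimated from n-gram counts. The sketch below (in Python; the function name and interface are our own invention, not anything the proposal specifies) shows one simple estimator:

```python
from collections import Counter
from math import log2

def shannon_entropy(text, order=1):
    """Estimate the order-n Shannon entropy of `text` in bits per symbol:
    the average uncertainty of each symbol given the (order - 1) symbols
    that precede it, computed from raw n-gram frequencies."""
    n = order
    if len(text) < n:
        raise ValueError("text is shorter than the requested order")
    # Count n-grams, and (for n > 1) the contexts each n-gram extends.
    ngrams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    if n > 1:
        contexts = Counter(text[i:i + n - 1] for i in range(len(text) - n + 1))
    total = sum(ngrams.values())
    entropy = 0.0
    for gram, count in ngrams.items():
        p_joint = count / total
        # P(symbol | context); for n == 1 this is just P(symbol).
        p_cond = count / contexts[gram[:-1]] if n > 1 else p_joint
        entropy -= p_joint * log2(p_cond)
    return entropy
```

Note that for natural language a reliable 6th-order estimate needs a very large sample; short texts will badly underestimate the true entropy, so any treaty-grade test would need to specify a minimum corpus size.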
Darkumbria
31-05-2005, 13:58
So, Tek... if I follow your line of reasoning, a car cannot be a sentient being? A toaster cannot either? I would say that you are incorrect. Anything with a computer can be a sentient being, as it meets certain criteria. However, this bill no longer sets that criteria, as it did before. As Myopia states, quite clearly, my pet cat can be considered sentient with this proposal, and so can my car. It responds quite well to the gas pedal and brake. Thinks quite well for itself, telling me when it doesn't feel well, when the temperature drops to a dangerous point, i.e. drive more carefully, there is a chance of icing. In fact, my car thinks better than some people I know.
Green israel
31-05-2005, 14:08
if robots get citizens' rights, this will be the end of democracy.
every rich man will get more power, because his robots will vote as he wishes. every potential dictator will be able to make millions of loyal party members. the robots will just know what he tells them, without free will (unlike human beings). they can't develop critical thought.
after all, they are just machines without uniqueness, created in a factory. you take 10 men and you will get 10 opinions and ideologies. you take 1000 robots from the same factory and they will all think the same.
So, Tek... if I follow your line of reasoning, a car cannot be a sentient being? A toaster cannot either? I would say that you are incorrect. Anything with a computer can be a sentient being, as it meets certain criteria. However, this bill no longer sets that criteria, as it did before. As Myopia states, quite clearly, my pet cat can be considered sentient with this proposal, and so can my car. It responds quite well to the gas pedal and brake. Thinks quite well for itself, telling me when it doesn't feel well, when the temperature drops to a dangerous point, i.e. drive more carefully, there is a chance of icing. In fact, my car thinks better than some people I know.
If you have meshed it with computer technology and created "sentience" in the creature.... It would apply, yes. I however do not support the definition of sentience as outlined in this proposal.... However, an actual AI would be. Hence the Constitutional Republic's definition of sentience bearing rights:
1. Intelligence: the capacity to store and process its own memory, and communicate this through a form of functional language (auditory or not).
2. Self-Awareness: the capacity to understand itself, and its environment; and express this understanding to others in its environment.
3. Consciousness: The ability to progress beyond itself as a creature, in interaction with other creatures, adapting and interacting with others on both a physical and "mental" scale. The ability to invent and express new ideas.
Within our realm of law, "creatory" aspects do not grant masterhood over an object, beyond that which a parent has over a child. It conveys responsibilities over the new creature, but not servitude of the creature.
As for a company which owns and sells such, your arguments are as ineffective in our advanced laws as those same arguments were when used to support slavery and the slave trade.
if robots get citizens' rights, this will be the end of democracy.
every rich man will get more power, because his robots will vote as he wishes. every potential dictator will be able to make millions of loyal party members. the robots will just know what he tells them, without free will (unlike human beings). they can't develop critical thought.
after all, they are just machines without uniqueness, created in a factory. you take 10 men and you will get 10 opinions and ideologies. you take 1000 robots from the same factory and they will all think the same.
Invalid assumption. "Free Will" doesn't actually exist anyway. (It is an illusion).
Robots which meet our criteria have to possess individual consciousness and self-awareness.... You're assuming "AS" not "AI".
The Most Glorious Hack
31-05-2005, 14:25
every rich man will get more power because his robots will vote as he wishes.
If they are free and fully sentient they will vote however they want. Just because I'm in a Union doesn't mean I vote the way they think I should.
every potential dictator will be able to make millions of loyal party members. the robots will just know what he tells them, without free will (unlike human beings). they can't develop critical thought.
after all, they are just machines without uniqueness, created in a factory.
Then they wouldn't be covered by the revised Proposal.
you take 10 men and you will get 10 opinions and ideologies. you take 1000 robots from the same factory and they will all think the same.
Again, the whole point of mandating that they be sapient to get rights is to keep this from happening.
Darkumbria
31-05-2005, 16:10
If you have meshed it with computer technology and created "sentience" in the creature.... It would apply, yes. I however do not support the definition of sentience as outlined in this proposal.... However, an actual AI would be. Hence the Constitutional Republic's definition of sentience bearing rights:
How would you separate it? Sentience has to be created from somewhere, and AI is what will create it. The sentience that you speak of is not possible, in a robot, without some sort of programming. You can't just create a robot and call it sentient. Metal has no intelligence. So... How do you do it without some sort of programming?
1. Intelligence: the capacity to store and process its own memory, and communicate this through a form of functional language (auditory or not).
Umm... A computer does not do this? I would say it does, and there are various proofs of this on the net, should you choose to look.
Let's examine the intelligence definition you provide:
The capacity to store information.... Hmm... hard drives, etc. work for this.
The capacity to process its own memory. A computer can't do this?? I believe a program called an operating system does this every day. If not... How exactly do you reach this site? Osmosis?
2. Self-Awareness: the capacity to understand itself, and its environment; and express this understanding to others in its environment.
Again... How do you create this? Understand that a robot is a created object, made by a sentient being. A robot, by itself, is nothing more than a hunk of metal, plain and simple. It has to have an overriding program that controls its development.
During a robot's creation, the creator must provide it with some method of interacting with its world. The basic program must understand itself in order to function, i.e. create auto parts, etc.
The car computer (for example), the chip we put in all our autos, understands its environment and makes changes to itself to function correctly within the bounds of the program. If it steps out of that programming, it generates an error, and lets the driver/programmer know what is wrong, i.e. a check engine light or some sort of error message. Is that not self-aware enough for you?
3. Consciousness: The ability to progress beyond itself as a creature, in interaction with other creatures, adapting and interacting with others on both a physical and "mental" scale. The ability to invent and express new ideas.
So, the initial program must adapt itself to the situation. Hmm... doesn't sound like anything that doesn't exist in today's robots. They must adapt themselves to the situation, within the bounds of their programming. If they step outside that programming, it creates an error which must be handled by the creators of the robot, or the error routines of the program itself.
Within our realm of law, "creatory" aspects do not grant masterhood over an object, beyond that which a parent has over a child. It conveys responsibilities over the new creature, but not servitude of the creature.
They don't? Hmm... that's new. My country builds robots for manufacturing, and puts them to work, controlling all aspects of their development and future. I believe this would be mastery of the machine. What the proposal states, essentially, is that the robot is created and is then set free; I can then hire the robot to do a job. Reread the proposal, as it has been changed. Before, the proposal made a lot more sense. Now, the proposal is A LOT more open, with mere definitions of stuff, but no real substance. Before it mentioned a timeframe, which is no longer present. As of now, this proposal sounds more like a definition of life than anything else. If you want a definition of life for a robot, don't ask me to give that "life" a blank set of rights from the beginning. Any "life", whether birthed (i.e. human, animal, whatever) or created (robots), is at its beginning still controlled by the "parents" of that life. You cannot expect a baby to be able to take care of itself, any more than you can expect a robot to know everything it needs to know about taking care of itself, from the beginning of its programming.
As for a company which owns and sells such, your arguments are as ineffective in our advanced laws as those same arguments were when used to support slavery and the slave trade.
I really want to know how. Every robot must start out somewhere, with a certain program that allows it to interact, and develop, within its world. Someone, something, has to teach the robot about its life, its job, etc. How else do you create an AS without someone programming it?
Green israel
31-05-2005, 16:32
If they are free and fully sentient they will vote however they want. Just because I'm in a Union doesn't mean I vote the way they think I should.
Then they wouldn't be covered by the revised Proposal.
Again, the whole point of mandating that they be sapient to get rights is to keep this from happening.
I don't think any country has had success in this area (although I'm sure DLE will say something about that).
unlike humans, who are "produced" by their parents' genes (which make everyone unique), robots are produced in a factory. maybe they will have a super program that lets them learn from their actions, but they can't be different from each other, because they are all created from the same parts (especially their "brain") and have the same qualities.
The Most Glorious Hack
31-05-2005, 16:35
Perhaps you should read this whole thread. We've already gone over this. There are plenty of advanced nations in the UN with true AIs.
Like, me.
_Myopia_
31-05-2005, 18:14
Darkumbria, why is it relevant that AIs must have programming? Human brains also have programming - the brain is constructed in such a way that we are 'programmed' by instinct to do certain things, and to have tendencies to do other things.
Stamosia
31-05-2005, 18:37
I would have to side with the viewpoint that was brought up earlier. If we are creating something to fill a purpose, but we are treating it as a human being, why exactly are we creating it in the first place?
The reason for creating these machines in the first place is to assist humans, not replace them.
Darkumbria
31-05-2005, 19:20
Darkumbria, why is it relevant that AIs must have programming? Human brains also have programming - the brain is constructed in such a way that we are 'programmed' by instinct to do certain things, and to have tendencies to do other things.
Why is it relevant? Simple.... How else does an AI learn anything? You have to have some basic knowledge to start from; even babies are born with a basic set of instructions (when taken to that level, which... is the level we are talking about). That's where AI starts from: a basic set of instructions, programmed into it by an already sentient being. An AI doesn't grow out of nothing, correct? Some other being had to create it. They don't just create themselves.
The problem I have here is that something has to create the AI/AS. They don't create themselves. Hence, the relevancy. I hate to put it this way, because the thought of slavery, truly, chaps me. But that's what robots are: replacements for human/sentient beings, to do the jobs that we don't want to do, i.e. robotic slaves. Yes, they serve their purpose in that we no longer have to perform X task, but still... We don't pay them for that action, since we made them to begin with. In much the same way, we don't pay a car to drive us someplace. We pay them by paying the upkeep and keeping the car/robot in running order, so that it can perform its given task.
_Myopia_
31-05-2005, 20:07
Say there's an oppressive state which decides it wants an army of slave workers to power its industry, doing the jobs that other citizens don't want to do. It selects a large number of female citizens of child-bearing age from a minority group that it doesn't like much, and orders them to have lots of children. During their pregnancies, they might even be made to take special dietary supplements to affect the children's development in certain ways to make them better workers. When the children are born, the government takes them away, brainwashes them, and forces them to live out the rest of their lives enslaved in its factories. It doesn't pay them, but it feeds them and gives them medical care to keep them working well.
Now, these children are sentient and sapient, but the government has had them created with the specific intent that they perform various jobs, and has even influenced their "design" (through nutrition) and "programmed" them (by brainwashing). How is this different from the creation of intelligent robots with the specific intent that they do a job?
Hit the nail on the head.... Creating AIs for the purpose of servitude is SLAVERY, pure and simple. There is no philosophical difference between the two.
Once you have created an intelligence, you have certain ethical responsibilities towards that creature (it does not matter if it's a hunk of metal or not).... One of those responsibilities is recognition of this new being, and the rights that come with its being to begin with... including recognition of its right to self-determination.
AS = Artificial Stupidity
These are creatures that are basically "animals" in nature. Primitive "AI".... And they are not covered under Tekanian Law (the equivalent of robot dogs, only following instructions, and unable to think outside the constraints of their primitive programming).
AI = Artificial Intelligence
These are creatures designed, for all intents and purposes, with a capacity at or beyond that of homo sapiens (and other sapient species)... These are the ones liberated under Tekanian Law towards the equality of intelligent beings (Amendment 15). It does not matter that they are non-biological. Or "natural".... What matters is the capacity they have been endowed with. Once you have created one of these, the idea of property rights over this creature ends. It is for all intents and purposes an equal "person" before the law. Your rights over such a creature extend and are equal to those of a parent over a child... And you have the same responsibilities towards such a creature as a parent has over a child, including the eventuality of setting this child free, to pursue its own life....
There is no justification to treat AIs as property.... None whatsoever. Only a pithekos could demonstrate such a lack of ethical responsibility in such a scenario.
The Golden Sunset
01-06-2005, 02:20
Well if ANY citizen is committing crimes like murder, you can take appropriate action, if necessary using lethal force if you can't stop them. Why should the law be any different for sapient robots?
Skimming this thread in an effort to catch up, so forgive me if this idea has been raised somewhere past page three...
What about making application for/bestowal of citizenship a prerequisite of the proposal's rights? A sapient machine would be able to recognize the benefits in terms of self-preservation, where a non-sentient would not (individual nations could establish 'advisors' to encourage borderline machines -- those who show signs of sentience, but demonstrate 'IQs' too low to comprehend the ramifications of the decision). Malicious robots who were aware of the provision, but chose not to abide by it in order to escape a nation's laws, would not receive automatic legal protection, which doesn't necessarily mean they would be summarily destroyed, though that seems to be the tenor of the opposition to the proposal as I've read it so far.
I don't know whether there are any official loopholes concerning the establishment of citizenship ('born in' vs. 'created in' a nation, etc.) in earlier resolutions, and my idea probably opens the door for an ongoing fight for robot rights, but I thought I'd throw it out.
-Rev. B. Awesome
Golden Sunset special envoy to the United Nations
The Golden Sunset
01-06-2005, 02:59
Have read a little further, though not quite yet to the end of the thread. Am compelled to offer apologies.
My idea only works so long as sapient machines are de facto subservient to humans, or to whatever frayjing life-form built them to begin with or watched them emigrate (another crass statement, on at least two levels, on my part, and again I apologize for that). Having no such machines in my primarily agricultural nation, I misperceived the better part of the intent of this proposal.
If I may clear my throat for cordial formalities:
Sentient Computers , I applaud your effort. Please know that the Golden Sunset, for what it may be worth, supports your proposal in its intent. Though, as stated, we do not know beings of your type in our nation, we wish to extend our hands to you now, in recognition of your efforts, and hope to establish ties of peace and fraternity, in the hopes that both your nation and ours shall be prosperous and peaceful.
And if I might awkwardly shift my water glass and continue:
Let's not get too wrapped up in this all, folks. Sentient Computers're on to something. I'd like to see this one hit the floor.
-Rev. B. Awesome
Golden Sunset special envoy to the United Nations
The reason for creating these machines in the first place is to assist humans, not replace them.
Or for them to do dangerous jobs so humans aren't put at risk. You know, like kill bots and the robo bomb squad.
GMC Military Arms
01-06-2005, 08:22
The problem I have here is that something has to create the AI/AS. They don't create themselves.
I think it's time you had 'that talk,' if you think not creating themselves is unique to AIs.
But that's what robots are: replacements for human/sentient beings, to do the jobs that we don't want to do, i.e. robotic slaves. Yes, they serve their purpose in that we no longer have to perform X task, but still... We don't pay them for that action, since we made them to begin with. In much the same way, we don't pay a car to drive us someplace. We pay them by paying the upkeep and keeping the car/robot in running order, so that it can perform its given task.
So you'd be perfectly happy being forced to work for your father and mother if they gave you the bare essentials to live but no money? After all, they created you, shouldn't you do the jobs they don't want to do?
Robots are tools, even sentient ones. A sentient robot may be aware of its surroundings, its existence, and its thoughts, but if you abuse it, will it care? Is being sentient in this case synonymous with having what some would call a soul? The droids in Star Wars, for example, have distinct personalities etc. but are used as tools by humanoids. They aren't paid and often aren't treated as equals to humanoids. That is how I picture their use.
Imagine the usefulness of a non-human sentient mechanical tool. It could be used in disaster rescue work, and human lives won't be put at risk in dangerous conditions. Being able to think would be a great asset in the work: it could make strategies, react to new events, etc. Also there are obvious military applications where the risk to human life could be reduced. Already some people are thinking about using robots in disaster rescue and military roles.
Lagrange 4
01-06-2005, 09:26
Robots are tools, even sentient ones.
...
Imagine the usefulness of a non-human sentient mechanical tool. It could be used in disaster rescue work, and human lives won't be put at risk in dangerous conditions. Being able to think would be a great asset in the work: it could make strategies, react to new events, etc. Also there are obvious military applications where the risk to human life could be reduced. Already some people are thinking about using robots in disaster rescue and military roles.
If it is given consciousness, it deserves human rights. Even though it would be "useful as a tool", it's still no excuse to treat it as a slave.
Besides, wouldn't it be more productive to treat such a specialist robot as a citizen, a paid specialist? Remember that slave labour was historically less productive than using paid workers: the very existence of the slave trade in Earth's colonial era was a mere quirk of circumstances.
The net effect would actually be that treating the sentient rescue robot as a slave would make it disgruntled and unmotivated, affecting its performance. That might cost lives.
Greetings.
With respect to sentient beings capable of social interaction, we of Roathin recognise the uniqueness of the creator-created relationship as similar to that of the parent-child relationship. However, we have observed that in many states, as well as in different cultures in these states, the practice of conferment of freedom as well as of 'full rights' - whatever these might be in these individual cultures - varies considerably.
This post is an attempt to outline the several main options taken by different cultures.
1. The created, birthed, spawned, duplicated, or otherwise produced new organism is entitled to the same rights as its progenitor(s) from the instant of awareness.
2. The new organism receives a package of basic rights, followed by upgrades based on chronological age (e.g. the 18- or 21-year age limit for some humans).
3. The new organism receives a package of basic rights, followed by upgrades based on capability (e.g. in apprenticeship-based societies, or succession by combat or assassination).
4. The new organism receives a package of basic rights but subsequent upgrades are based on economic factors (e.g. indenture until repayment of investment in some manufactured-intelligence societies).
5. The new organism receives no basic or upgraded rights unless such rights are bestowed by the supporting society or the progenitor(s), de jure or de facto, by due process or by arbitrary imposition.
6. The new organism receives rights by inheritance, succession or other means of transfer from progenitor by automatic process.
7. The new organism (and all its kind) never receives rights and is thus considered a slave organism (and thus exists in a state which we believe is outlawed by UN resolution).
Please feel free to add to this list. We believe that this might pave the way for a resolution suitable to the majority of this august assembly, or sufficient to pass it.
Green israel
01-06-2005, 10:55
can those robots invent something?
do they have the ability to develop a new theory?
create art? create a new ideology?
do they have the ability to survive alone on a far island?
will the robots be writers or artists?
will the robots "know" how to appreciate beauty and harmony (without programming)?
I think they can't do that.
the human brain has two main parts: logical and creative. I don't deny their logical part. maybe they will be the most logical and sentient in the galaxy, and remember everything that was ever written. they are probably superior to humanity with their logical brains and their strong bodies.
I just don't think they will ever have creativity or out-of-the-ordinary thoughts. nobody can create those things. this may be the one and only thing they will not have.
I think this is enough to say they will never be equal to humanity. the robots are just machines.
P.S.- I guess you will tell me that your robots already do those things. I think this will be the end of humanity.
GMC Military Arms
01-06-2005, 11:12
can those robots invent something?
do they have the ability to develop a new theory?
create art? create a new ideology?
do they have the ability to survive alone on a far island?
will the robots be writers or artists?
will the robots "know" how to appreciate beauty and harmony (without programming)?
Um, were you actually trying to quote that scene from 'I, Robot' there?
Detective Del Spooner: You are a clever imitation of life... Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?
Sonny: Can you?
P.S.- I guess you will tell me that your robots already do those things. I think this will be the end of the humanity.
They do. It has not been.
Sovereign UN Territory
01-06-2005, 11:46
I just don't think they will ever have creativity or out-of-the-ordinary thoughts. nobody can create those things. this may be the one and only thing they will not have.
I think this is enough to say they will never be equal to humanity. the robots are just machines.
Catherine rose from her chair, ugly as always, with mildly disconcerting makeup gracing her.
"If nobody can create such things, dear, where exactly are they coming from? The logical fallacy should be obvious.
"Furthermore, I should add that humans are machines, just like robots. Our flesh and bones? Just parts of the machine that make it function. Our brain? Our CPU, our RAM. The soul? Just an illusion, created by the firing of electrons in our neurons. The only difference lies in the materials used, not in their function.
"There are so many nations with true AIs as a majority of their populace out there... Zero-One, Mykonians, the list goes on. Reality disproves you."
~ Catherine Gratwick, Secretary-General of the NSUN (http://ns.goobergunch.net/wiki/index.php/Sovereign_UN_Territory)
Green israel
01-06-2005, 11:56
Um, were you actually trying to quote that scene from 'I, Robot' there?
Detective Del Spooner: You are a clever imitation of life... Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?
Sonny: Can you?
Never saw the movie, although the book is good.
back to my point: all of humanity has feelings. every year millions of creative people are born. every generation there are great artists, revolutionaries and scientists (even they need some creativity).
from billions of robots, there isn't one that can create something new.
art, music, humor, feelings. all those things are beyond the robots' understanding.
They do. It has not been.
if the robots are creative, they may get new ideas.
if they get new ideas, they will understand that they are superior to humanity.
they will not stay at their jobs even if you pay them, because (just like humans) they will have ambition and a desire for power.
you will get robo-criminals, without control. probably they are immune to bullets and drastic temperatures. the police can't handle that.
they will produce and upgrade themselves because they are just like humans (as you say). then you will get war robots whose enemy is you.
they will conquer country after country, while their "birth rate" is faster than anything you saw before.
the democracies will fall "peacefully" while "free the robots" organizations get a majority in the parliaments (thanks to the huge amounts of rapidly produced robots).
the robots will take control, and the humans will be slaves. congratulations, you created a monster.
there are two options: either the robots aren't creative, and aren't equal to humans, or they are creative and superior to humans, so they will ruin humanity (if they can be artists and scientists, they can be soldiers and dictators as well).
GMC Military Arms
01-06-2005, 12:33
from billions of robots, there isn't one that can create something new.
art, music, humor, feelings. all those things are beyond the robots' understanding.
Really? As far as I'm aware, Nationstates has its fair share of creative AIs already; my own nation has an AI named Delate who's written a couple of romance novels under a pseudonym.
if the robots are creative, they may get new ideas.
if they get new ideas, they will understand that they are superior to humanity.
they will not stay at their jobs even if you pay them, because (just like humans) they will have ambition and a desire for power.
Whoa, wait a minute... Albert Einstein was intellectually superior to most of humanity, yet he stayed in his job rather than becoming lord of the universe. The world has yet to be taken over by powerlifters and rugby players, even though they are physically superior to many other humans.
Physical or mental capacity does not automatically make someone a megalomaniac.
you will get robo-criminals, without control. probably they are immune to bullets and drastic temperatures. the police can't handle that.
Why would you make a robot you claim to have designed for menial tasks so heavily armoured?
they will produce and upgrade themselves because they are just like humans (as you say). then you will get war robots whose enemy is you.
Why? Wouldn't a truly intelligent species wish to live in peace because they would understand that a war would be terribly costly for both sides?
the robots will take control, and the humans will be slaves. congratulations, you created a monster.
Given the numerous stable AI societies in Nationstates, this is pure alarmism and a slippery slope fallacy.
The Most Glorious Hack
01-06-2005, 12:34
Mmm... ad absurdum...
if the robots are creative, they may get new ideas.
Just like people?
if they get new ideas, they will understand that they are superior to humanity.
Just like human egomaniacs?
they will not stay at their jobs even if you pay them, because (just like humans) they will have ambition and a desire for power.
Just like human egomaniacs?
you will get robo-criminals, without control. probably they are immune to bullets and drastic temperatures. the police can't handle that.
Quit watching Robocop and The Terminator.
they will produce and upgrade themselves because they are just like humans (as you say).
Because humans never improve themselves... after all, there's no such thing as HGH, steroids... oh, wait...
then you will get war robots whose enemy is you.
Robot armies are largely stupid as they generally don't heal naturally.
they will conquer country after country, while their "birth rate" is faster than anything you saw before.
Whatever you say, Chicken Little.
the democracies will fall "peacefully" while "free the robots" organizations get a majority in the parliaments (thanks to the huge amounts of rapidly produced robots).
:confused:
the robots will take control, and the humans will be slaves. congratulations, you created a monster.
Because humans have never enslaved humans.
there are two options: either the robots aren't creative, and aren't equal to humans, or they are creative and superior to humans, so they will ruin humanity (if they can be artists and scientists, they can be soldiers and dictators as well).
Or, they have emotion and feelings and no more desire to TAEK OVER TEH WORLD than humans do. These doomsday scenarios are just silly. Many nations are entirely robotic, many more have a majority of AIs, and countless nations have AIs. Somehow, the world hasn't collapsed.
Your continued insistence that AIs are nothing more than super-powered metallic machines of evil doom is getting old. Just as most humans don't try to take over the world and enslave humans, most AIs don't either.
Lord Atum
01-06-2005, 12:35
Lord Jehvah sniggered, feet up on the table in front of him. This was so much nonsense. It was much easier if you just did away with the idea of 'rights' completely...
there are two options: either the robots aren't creative and aren't equal to humans, or they are creative and superior to humans, so they will ruin humanity (if they can be artists and scientists, they can be soldiers and dictators as well).
My, doesn't the pithekos like to think two-dimensionally. If there are others besides "humans" that can do these things, how does that "ruin humanity"? There is nothing all that special about Homo sapiens that cannot be found in a multitude of other humanoid and non-humanoid intelligent life-forms, regardless of compositional characteristics (the native Pyritokos of Celestus, for example, non-humanoid silicate-based lifeforms)... Tekaniou, with our nickel-cobalt blood chemistry... and even many of the extremely advanced shipboard AIs in use by many races.
Have AI's been known to become dictators? Yes..... As have all other "intelligent" life-forms, Tekanious, Pithekos (Humans)... What species is guiltless of this?
The only place AIs have ever shown an inclination toward mastery over their creators is in those societies where AIs have been made subject to enslavement by their creators...
The Tekanious (Tekanians), Pithekos (Hominids), Pyritokos (Silicates) and Kataskeuasmas (AIs) of this Constitutional Republic will have no part in Terran Barbarism... We have long passed the chains of racism and xenophobia that plague the Terrans in general.
Darkumbria
01-06-2005, 14:07
I think it's time you had 'that talk,' if you think not creating themselves is unique to AIs.
Huh? I said that something, i.e. a sentient being, has to create an AI. They cannot create themselves. That talk??? WTH are you talking about?
So you'd be perfectly happy being forced to work for your father and mother if they gave you the bare essentials to live but no money? After all, they created you, shouldn't you do the jobs they don't want to do?
Correct me if I'm wrong here, but you do that already for a period of your life. Chores around the house, cutting the lawn, etc. Yes, they give you something in return (food, clothing, a place to live), and you give them... a clean car, a cut lawn, dusted furniture, etc. It's part of learning responsibility, which is so important in life.
Understand, I had fewer problems with the original proposal than I do now. The delegate has removed the timeframe from the proposal. An AI/AS, or a child for that matter, does not have the ability, once born/created/manufactured, to live on its own from day one. Slaves?? OK, if that's the word you want to use. But we do expect them to "earn their keep": making cars, equipment, cleaning the house, etc. It's all about paying the bills that the robot/child is creating during its short life, since... it cannot take care of itself.
Do I have a problem once the timeframe is finished? Yes: the child grows up, becomes an adult, moves on with its life. Trust me, most parents would not want their children living under their roof when they are 30. Robots??? Well, let's see what's different:
1) An adult knows to go to the doctor when they don't feel well. Does a robot know the moment it will break down? Once it has broken down, can it fix itself or call for help? No, on both counts. If that robot is operating a vehicle, what happens to that vehicle? Does it just stop where it is? If so... traffic problem. If not? Accident and traffic problem.
2) How does a robot take care of itself? Can it plug itself into its power source? OK, it shuts down to take on power. What turns it back on?
The list can go on. The fact is that as intelligent as a robot can become, something/someone is always going to have to care for the robot, unlike the child. By care, I don't mean that a parent ever truly stops worrying, helping, etc. But the vast majority of the day-to-day care for a child goes away once it reaches adulthood. That does not happen with a robot. The robot's need to be maintained in optimum working order is paramount to its survival. Without direct intervention from a sentient being, a machine (AI/AS/car/boat/whatever) will eventually have a mechanical problem that it cannot repair itself. If that problem occurs someplace bad, say 10,000 ft in the air, the results can be catastrophic.
Correct me if I'm wrong here, but you do that already for a period of your life. Chores around the house, cutting the lawn, etc. Yes, they give you something in return (food, clothing, a place to live), and you give them... a clean car, a cut lawn, dusted furniture, etc. It's part of learning responsibility, which is so important in life.
Yes, but children aren't slaves, and do have rights, even when in "minor" status.
Understand, I had fewer problems with the original proposal than I do now. The delegate has removed the timeframe from the proposal. An AI/AS, or a child for that matter, does not have the ability, once born/created/manufactured, to live on its own from day one. Slaves?? OK, if that's the word you want to use. But we do expect them to "earn their keep": making cars, equipment, cleaning the house, etc. It's all about paying the bills that the robot/child is creating during its short life, since... it cannot take care of itself.
Pointless; you had problems before, at the mere mention of giving these beings "equal rights" in the first place.
Do I have a problem once the timeframe is finished? Yes: the child grows up, becomes an adult, moves on with its life. Trust me, most parents would not want their children living under their roof when they are 30. Robots??? Well, let's see what's different:
1) An adult knows to go to the doctor when they don't feel well. Does a robot know the moment it will break down? Once it has broken down, can it fix itself or call for help? No, on both counts. If that robot is operating a vehicle, what happens to that vehicle? Does it just stop where it is? If so... traffic problem. If not? Accident and traffic problem.
Same thing happens with adult humans. People die... Robots can "die" as well. There is no difference. Point 1, refuted.
2) How does a robot take care of itself? Can it plug itself in to its power source? Ok, it shuts down to take on power. What turns it back on?
Why are they designed to "shut down" to recharge in the first place? There is no need. An arrangement to charge while powered on would be little problem (trickle charge)... Point 2, unfounded.
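Tekania's trickle-charge point can be sketched with a toy model. Nothing below is a real robot API; the class, numbers and names are invented purely to illustrate the argument that a machine whose docked charge rate exceeds its idle draw never needs to shut down to take on power.

```python
# Toy sketch (hypothetical names and numbers): a robot that
# trickle-charges while staying active, rather than shutting
# down to take on power.

class TrickleChargedRobot:
    def __init__(self, capacity=100.0, charge=50.0):
        self.capacity = capacity
        self.charge = charge
        self.active = True  # never needs to power off to recharge

    def tick(self, docked):
        """One time step: spend power on work, top up if docked."""
        self.charge -= 1.0          # draw from staying active
        if docked:
            self.charge += 3.0      # trickle charge exceeds the draw
        self.charge = max(0.0, min(self.capacity, self.charge))
        self.active = self.charge > 0.0
        return self.active

robot = TrickleChargedRobot(charge=5.0)
for _ in range(10):
    robot.tick(docked=True)
print(robot.active, robot.charge)  # stays active the whole time
```

The design choice being mocked in the thread is exactly the missing `if docked` branch: only a robot built with no way to charge while running would ever need an external hand to "turn it back on."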
The list can go on. The fact is that as intelligent as a robot can become, something/someone is always going to have to care for the robot, unlike the child. By care, I don't mean that a parent ever truly stops worrying, helping, etc. But the vast majority of the day-to-day care for a child goes away once it reaches adulthood. That does not happen with a robot. The robot's need to be maintained in optimum working order is paramount to its survival. Without direct intervention from a sentient being, a machine (AI/AS/car/boat/whatever) will eventually have a mechanical problem that it cannot repair itself. If that problem occurs someplace bad, say 10,000 ft in the air, the results can be catastrophic.
Also a load of bullshit. Mechanics can still fix them, subject to their own choice. No different from people "caring for themselves" by going to a doctor... in this case, mechanics are their doctors. Big deal... people will also inevitably fail... It constitutes no grounds to deny them rights, and acknowledgement of their being, any more than it would for any other creature.
You primates are really annoying me... Grow up... If you can't grant AIs rights, stop ^$^%% using them... The Constitutional Republic would be glad to purchase them from you and grant them full citizen rights in our nation...
GMC Military Arms
01-06-2005, 14:49
Huh? I said that something, i.e. a sentient being, has to create an AI. They cannot create themselves. That talk??? WTH are you talking about?
Two sentient beings created you. You did not create yourself. Since all humans were 'created' by a simpler evolutionary ancestor, should we remove all of our own rights and surrender to our chimp overlords because they created us?
Correct me if I'm wrong here, but you do that already for a period of your life. Chores around the house, cutting the lawn, etc.
And if you refused to do so, they would kill you and replace you with someone more obedient? Were the chores dangerous work involving long hours, for which you weren't given anything but the bare essentials to survive, or were they minor things every now and then? Parents can ask you to perform chores, but they cannot compel you to perform them or be destroyed.
Making cars, equipment, cleaning the house, etc. It's all about paying the bills that the robot/child has/is creating during its short life, since....It can not take care of itself.
Really? Why not?
And 'making cars?' Dude, child labour is illegal because it's thought of as barbaric. Parents who told their child they would kill him if he didn't work would be arrested and their child taken away from them, given that the law understands what you apparently do not: that if parents choose to have a child, they accept the financial repercussions of their choice. Nobody has a child just because they're too lazy to wash the car, so why do you believe in one set of rules for human children but another for robot 'children?'
Also, since robots can be given knowledge databases at the start of their life, they would actually be perfectly capable of self-sufficiency immediately if built for that effect.
1) An adult knows to go to the doctor when they don't feel well. Does a robot know the moment it will break down? Once it is broken down, can it fix itself or call for help? No, on both accounts.
Yes. Self-diagnostics. And since human self-diagnosis is highly inaccurate [some diseases have no symptoms, for example] and no human can tell with any certainty when they will 'break down,' you don't have a point here. You're saying that robots have no rights because they can't do things humans can't do either. When a human is dead, can they call for help?
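The self-diagnostics point is easy to make concrete. The sketch below is purely illustrative (the subsystem names and the health threshold are invented): a machine that polls its own health readings can flag a degrading part and call for help before it ever "breaks down," which is precisely the capability being claimed impossible.

```python
# Toy sketch (hypothetical names): a periodic self-diagnostic
# that flags degraded subsystems and raises a call for help
# *before* total failure.

def run_diagnostics(sensors, threshold=0.2):
    """Return the subsystems whose health reading is below threshold."""
    return [name for name, health in sensors.items() if health < threshold]

def maybe_call_for_help(sensors):
    failing = run_diagnostics(sensors)
    if failing:
        return "MAYDAY: degraded subsystems: " + ", ".join(sorted(failing))
    return "all nominal"

# The battery is failing; the robot reports it while still running.
status = maybe_call_for_help({"motor": 0.9, "battery": 0.15, "lidar": 0.8})
print(status)  # -> MAYDAY: degraded subsystems: battery
```

A dead human cannot file a report; a robot with even this trivial loop signals trouble while it is still operational, which is the asymmetry the reply is pointing at.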
If that robot is operating a vehicle, what happens to that vehicle? Does it just stop where it is? If so...traffic problem. If not? Accident and traffic problem.
If a human truck driver falls asleep at the wheel, what happens? Again, you're creating a difference where none exists.
2) How does a robot take care of itself? Can it plug itself in to its power source? Ok, it shuts down to take on power. What turns it back on?
Um, why are you assuming all robots are that moronically simplistic that they must shut down totally to recharge and cannot reactivate themselves once the operation is complete? Why are you assuming all sentient machines even need to recharge regularly?
The needs of the robot to be maintained in optimum working order are paramount to its survival. Without direct intervention from an sentient being, a machine (AI/AS/car?boat/whatever) will eventually have a mechanical problem that it can not repair itself.
So, if a human breaks down they see another human...So why can't robots be serviced by other robots? Since humans must see qualified doctors because they don't know how to repair themselves, how does what you're saying not also apply to them, since humans will always eventually have a 'mechanical problem' that cannot be fixed? You're really hot on creating this strawman argument where a robot can simultaneously be hugely complex in one aspect [mental capacity] and stupidly simplistic in every other aspect. You're also rather big on pointing out 'differences' that don't actually exist.
An AI can be created far more knowledgeable about itself and the world around it than a human child; therefore claims that it must still be subject to repaying its 'parents' [read: subject to an arbitrary period of slavery in which it would be murdered if it didn't work] are foolish and unfair. If a creature is as able to think as an adult, it should be treated as one, with all the rights and privileges that entails; if it is as intelligent as a child, it should be protected from arbitrary slavery and allowed to develop and learn as if it were a child.
Your arguments would apply to semi-moronic servitors built to perform menial tasks [as they would apply to horses or cattle], but as applied to truly intelligent AI they are as backward and savage as forcing children to work 13-hour shifts down coal mines to 'earn their keep' or enslaving adults because you 'paid for them.' If you require a mechanism to perform rote factory labour, there is no need for it to be a truly intelligent AI; a line robot does not need to be truly self-aware [not in the sense that it has positional awareness, but in the sense that it knows itself to be an individual as a human does] or capable of creative thought. If you build an AI that is the intellectual equal of man [or greater], it is your duty as a civilised nation to treat it as such; the alternative would be the paranoid scenario Green Israel is dreaming up of bloody revolution, with one key difference:
If we enslaved truly intelligent, thinking beings in such a manner, their cause would be the just one.
Green israel
01-06-2005, 15:54
"If nobody can create such things, dear, where exactly are they coming from? The logical fallacy should be obvious." I have never seen anyone make something from molecules: a man, a bird, a fish, or even a flower or an amoeba. Do those things not exist?
Guess what? There is a complex world out there, with many questions without solutions, and things you cannot produce.
Whoa, wait a minute... Albert Einstein was intellectually superior to most of humanity, yet he stayed in his job rather than becoming lord of the universe. The world has yet to be taken over by powerlifters and rugby players even though they are physically superior to many other humans.
"Physical or mental capacity does not automatically make someone a megalomaniac." No, it doesn't. But there are megalomaniacs in every society. If megalomania is combined with superior ability, it can't be good.
"Why? Wouldn't a truly intelligent species wish to live in peace because they would understand that a war would be terribly costly for both sides?" [sarcasm] Right. Why would anybody want to fight with others? Everybody must understand that war is useless. Dictators, terrorists and radicals don't really exist, right? [end sarcasm]
"Just like human egomaniacs?" Exactly my problem: egomaniacs with superior powers.
"Because humans never improve themselves... after all, there's no such thing as HGH, steroids... oh, wait..." Am I the only one who sees a difference between steroids and a laser hand gun?
"Your continued insistence that AIs are nothing more than super-powered metallic machines of evil doom is getting old. Just as most humans don't try to take over the world and enslave humans, AIs tend not to either." The majority of the real Earth's populace is probably nice, moderate humans who can agree among themselves without too many problems. All the radicals, terrorists and criminals are a minority, right?
Still, we have wars, terror, crime, and all the rest.
I live in the Middle East. I know something about radicalism. Most of my grandmother's family was killed in the Holocaust, and even though the great majority of Europe wasn't for the Nazis, that says something about the power of a minority.
If two people want to kill me, but one has a knife and the other has a machine gun, I know which will frighten me more. I think the same can be said about radical killer robots.
Or, they have emotions and feelings and no more desire to TAEK OVER TEH WORLD than humans do. These doomsday scenarios are just silly. Many nations are entirely robotic, many more have a majority of AIs, and countless others have at least some. Somehow, the world hasn't collapsed.
Given the numerous stable AI societies in Nationstates, this is pure alarmism and a slippery slope fallacy.
All this debate is useless. While you can create perfect worlds and make any argument you want from imaginary facts, there is no point in debating.
There is no problem being in 10,000,000,000,000 different places at the same time, travelling at hundreds of times the speed of light, or travelling 12,153,265 billion years back in time to get real proof of the Big Bang.
I could write a full doctorate about the dangers of super-AI robots, and the things that make them impossible to build, but you would just throw it in the garbage and say "in my country it worked. This is a perfect world here, and there is no crime, evil, war, or technical problem."
Maybe I have no imagination and my RP stinks. Maybe for you it is fun to debate things that can't be real. To me it's just like one more dumb argument about God's existence:
"god exist"
"no he, dosen't"
"yes, he does"
"no"
"yes"
"no!"
"yes!!"
"NO"
"YES"
and further more.
Greetings.
We look with some disapproval upon the dark shadows cast upon this forum by certain unnamed speakers. It is clear that, since our last post about the transmission or bestowal of rights, some have ignored it and postulated wild cases about beings not meeting the requirements of law, capability or anything else which might confer additional or absolute rights, with limits or without.
If they do not meet the criteria (e.g. the insane), then their rights are abrogated in part. This is a feature of civilised societies for the good of the incapable. If a robot cannot schedule its own maintenance, it is like many human beings (even some of those with full rights). Someone more capable or qualified (e.g. a doctor or electronic engineer) must help them. They may or may not have the right to choose, depending on their state of incapacity (e.g. are they considered children and hence can their right to choose be overridden).
Think before talking, we say.
The Most Glorious Hack
01-06-2005, 16:31
"Am I the only one who sees a difference between steroids and a laser hand gun?" Are you joking? If a nation has the technology to make a "laser hand gun" (I assume you mean a hand that's a laser gun), they can easily make a hand-held gun. And what's to stop a human from picking up said gun?
"I think the same can be said about radical killer robots." Why do you insist that all intelligent robots will be radical killers?
"All this debate is useless. While you can create perfect worlds," Considering that this is the NationStates United Nations, the "realities" of NationStates nations are quite germane to the conversation. This isn't "creating perfect worlds"; this is using NationStates nations as examples. Any NS UN proposal must take the world it exists in into consideration. Since wild robot rampages are exceptionally rare in NationStates, it doesn't make sense to assume they will randomly happen just because of a NS UN proposal.
And you know what? If something like this passed and you were worried about killer robot rampages, don't make sentient robots. The robotic welder arm in a car factory would not be protected by this.
"There is no problem being in 10,000,000,000,000 different places at the same time, travelling at hundreds of times the speed of light, or travelling 12,153,265 billion years back in time to get real proof of the Big Bang.
I could write a full doctorate about the dangers of super-AI robots, and the things that make them impossible to build, but you would just throw it in the garbage and say 'in my country it worked. This is a perfect world here, and there is no crime, evil, war, or technical problem.'" Translation: I can't debate this rationally and logically, so I'll make up even more unrelated absurdities.
"Maybe for you it is fun to debate things that can't be real." If this isn't enjoyable, why are you debating it?
I mean, the point of the game is to have fun. If this isn't fun, don't do it. Don't worry, there's plenty of other people who like the "Killer Robot Of D00M" scenario and are more than willing to post about it.
Greetings.
We are amused by this talk of radical killer robots. We have a full line of these nano-scale automata, which we are willing to sell to others. We have models that function in magical environments as well as technological environments. What they do is considered valuable in many states: they invade the bodies of other organisms and kill specific types of free radicals, thus improving the health of said organisms.
Not all are sentient - some are nanogolems, some are remotely-controlled semiautomata. We believe in the right to bear arms, the right to bear witness, and the right to bear children. We also believe in free trade.
In fact, we are amused at the low level of philosophical debate some proponents here have demonstrated. Hence we are beginning to make amusement our priority.
Green israel
01-06-2005, 17:08
"Are you joking? If a nation has the technology to make a 'laser hand gun' (I assume you mean a hand that's a laser gun), they can easily make a hand-held gun. And what's to stop a human from picking up said gun?" I don't know, but in my free-gun state, robots that develop weapons may be a problem.
"Why do you insist that all intelligent robots will be radical killers?" Why do you insist that all the robots are peaceful and loyal citizens?
"Considering that this is the NationStates United Nations, the 'realities' of NationStates nations are quite germane to the conversation. This isn't 'creating perfect worlds'; this is using NationStates nations as examples. Any NS UN proposal must take the world it exists in into consideration. Since wild robot rampages are exceptionally rare in NationStates, it doesn't make sense to assume they will randomly happen just because of a NS UN proposal." Fine, so maybe you can explain to me how the robots are equal to or better than humans in all the "good" things, but none of them has even one negative trait? And if they are all good, there is no relevant difference between them. What will make 2 (or 1,000) similar sentient beings get different opinions? If they all have the same opinions, why don't they vote for the same party? Finally, if they all vote for the same party (and there is no problem creating more loyal voters at any moment), what benefit do you get from the elections?
"And you know what? If something like this passed and you were worried about killer robot rampages, don't make sentient robots. The robotic welder arm in a car factory would not be protected by this." Don't worry, I don't.
Translation: I can't debate this rationally and logically, so I'll make up even more unrelated absurdities.
You ignore all my points, and suddenly I can't debate?
If this isn't enjoyable, why are you debating it?
"I mean, the point of the game is to have fun. If this isn't fun, don't do it. Don't worry, there's plenty of other people who like the 'Killer Robot Of D00M' scenario and are more than willing to post about it." To express my ideas? To convince people?
It doesn't mean I want to debate in this environment.
Must you continue the ad hominems and straw men?
"I don't know, but in my free-gun state, robots that develop weapons may be a problem."
And there is a difference between a robot wielding a weapon and a human?
Why do you insist that all the robots are peaceful and loyal citizens?
Are all humans peaceful, loyal citizens?
Fine, so maybe you can explain to me how the robots are equal to or better than humans in all the "good" things, but none of them has even one negative trait? And if they are all good, there is no relevant difference between them. What will make 2 (or 1,000) similar sentient beings get different opinions? If they all have the same opinions, why don't they vote for the same party? Finally, if they all vote for the same party (and there is no problem creating more loyal voters at any moment), what benefit do you get from the elections?
Don't worry, I don't.
You ignore all my points, and suddenly I can't debate?
To express my ideas? To convince people?
It doesn't mean I want to debate in this environment.
You haven't come up with a single valid argument applicable to an intelligent creature since you began talking... I see the hominids in your nation-state haven't progressed much past their primitive tree-swinging days...
An AI has as much principle of rights as any humanoid creature...
The fact that they can make weapons does not fucking matter... so can humans... Point refuted.
The fact that there is a theoretical possibility that one could become a tyrant does not fucking matter... so can a human... Point refuted.
The fact that they can "break down" and "cease functioning" does not fucking matter... so can humans... Point refuted.
Building an AI hardwired to follow its master, outside of independent thought, would be illegal in the first place... as would be brainwashing people into following only a single view... Point refuted.
Once the monkey-men of Green Israel have evolved to a point where they can formulate logical arguments (instead of living off pointless rhetoric), we will listen to them... Until then, keep swinging, and here's some bananas.
Cakekizy
01-06-2005, 19:28
My people are slaves, why should machines be any different?
My people are slaves, why should machines be any different?
Reference:
The scourge of slavery yet remains in these progressive times.
People are bought and sold like cattle, unable to determine their destiny. Their
families are split apart; they are allowed no possessions of their own. They are
beaten, chained, and tortured.
Therefore, I propose that the following human rights be given to all peoples
of this great world:
- The right to leave her or his job, given two weeks' notice.
- The right to own possessions.
- The right to travel freely throughout their country.
- The right to bodily safety from one's employer.
- The outlawing of the selling or purchasing of people.
It is becoming increasingly common that women are sold as sex
slaves on the black market. Often the women, who come from less fortunate
countries, are lured to more developed countries by people who promise them a
better life there. Instead, upon the women's arrival in their new countries, these
people deprive them of their freedom and sell them as sex slaves. This is
known as trafficking.
'Trafficking in persons' shall mean the recruitment, transportation, transfer,
harbouring or receipt of persons, by means of the threat or use of force or other
forms of coercion, of abduction, of fraud, of deception, of the abuse of power or
of a position of vulnerability or of the giving or receiving of payments or benefits
to achieve the consent of a person having control over another person, for the
purpose of exploitation. Exploitation shall include, at a minimum, the
exploitation of the prostitution of others or other forms of sexual exploitation,
forced labour or services, slavery or practices similar to slavery, servitude or the
removal of organs.
I hereby urge the UN to take action. Decriminalize the women in prostitution but
criminalize both the men who illegally buy women and children against their will,
and anyone who promotes sexual exploitation, particularly pimps, procurers and
traffickers.
I'd advise you to change your statement, or I will seek recourse under NSUN Resolution #92 - "Humanitarian Intervention".
The robots here in Snoogit play only small parts in our larger economy. They serve only singular roles, such as construction, maintenance, dangerous inspections, and sexual gratification when necessary (although this isn't as popular as we thought it would be).
Robots don't teach, or care for patients, or do anything that would affect a person's health or education. This would mandate that we replace our robots every 25 years, when we build them to last generations. Kind of expensive for my country.
The Most Glorious Hack
02-06-2005, 07:56
"I don't know, but in my free-gun state, robots that develop weapons may be a problem." To build a gun illegally, one would need access to the knowledge of gunsmithing, the raw materials, and the machinery to create the parts. A human or a robot could obtain those three items.
"Why do you insist that all the robots are peaceful and loyal citizens?" You're the one arguing absolutes, not me. I fully acknowledge that an AI can go nuts and cause problems, just as humans can. What I'm disputing is your claim that granting robots rights will result in the enslavement of humanity.
"Fine, so maybe you can explain to me how the robots are equal to or better than humans in all the 'good' things, but none of them has even one negative trait?" I don't recall saying this. If they're metallic, they need to be repaired and won't heal with time. If they're biological, they have all the weaknesses of a normal human.
"And if they are all good, there is no relevant difference between them." I know I've never said this. They have personalities, remember? They're unique entities.
"What will make 2 (or 1,000) similar sentient beings get different opinions?" What will make 2 (or 1,000) similar human beings get different opinions?
"If they all have the same opinions, why don't they vote for the same party?" Again, they have personalities and beliefs that are unique to them. If they are truly sentient, they'll have their own personalities. If they all think exactly alike, they aren't sentient.
"You ignore all my points, and suddenly I can't debate?" When you start talking like this: "There is no problem being in 10,000,000,000,000 different places at the same time, travelling at hundreds of times the speed of light, or travelling 12,153,265 billion years back in time to get real proof of the Big Bang," you aren't making points. You're being absurd.
"To express my ideas? To convince people?
It doesn't mean I want to debate in this environment." One that requires participants to make logical arguments, as opposed to appeals to emotion and absurd leaps?
GMC Military Arms
02-06-2005, 08:46
No, it doesn't. But there are megalomaniacs in every society. If megalomania is combined with superior ability, it can't be good.
Why are you insisting that a robot's body would give it superior abilities across the board? Should we exterminate every human but the most physically and mentally puny individual, to protect him from potential oppression?
[sarcasm] Right. Why would anybody want to fight with others? Everybody must understand that war is useless. Dictators, terrorists and radicals don't really exist, right? [end sarcasm]
Ah, the human nature argument. Care to explain why, at any given time, the vast majority of people are not fighting? Or why national bodies tend to attempt to avoid war at all costs?
You stated that all robots would wish to make war on humans [on realising they were 'superior'], and that is absurd. Since the majority would be rational, they would realise that such a war would be useless and only cause massive suffering. It would only be if one of your imaginary sides forced the other into a situation where war was preferable [like, say, oppressing them on the basis of moronic arguments that they're naturally evil killers] that any such situation would ever play out. Giving rights to nonhuman sentients and treating them with dignity and respect, rather than fear and paranoia, is the best way to avoid such a situation.
the majority of the real earth populace is probably nice moderate humans that can agree among themselves without too many problems. all the radicals, terrorists and criminals are a minority, right?
still we have wars, terror, crime, and all the shit.
Right, so we should deny rights to the entire human race because of the actions of a few. Gotcha.
all this debate is useless. while you can create perfect worlds, and make any argument you want with imaginary facts, there isn't a point in debating.
The world I created is in no way perfect. Do not mistake 'perfect' for 'not tainted by rampant paranoia.'
I may write a full doctorate about the dangers of super AI robots, and the things that make them impossible to build, but you just drop it in the garbage and say that "in my country it worked. this is the perfect world here, and there is no crime, evil, wars, or technical problem."
Yes. That is because this is a roleplaying game, and therefore insisting everyone play as if their nations don't exist is, frankly, silly.
I don't know, but in my gun-free state, robots that develop weapons may be a problem.
How would a robot develop a weapon in a gun-free state? With magic?
why do you insist that all the robots are peaceful and loyal citizens?
Why do you insist that an entire population will arbitrarily decide to enslave or exterminate humans? Most humans are peaceful and loyal citizens; psychopaths and murderers are a minority and the law has provisions to deal with them. Demanding an entire population be punished proactively for imaginary crimes against your species they might perpetrate is sickening, and is the same mentality behind each and every one of the great atrocities in human history.
fine, so maybe you can explain to me how the robots are equal to or better than humans in all the "good" things, but none of them has even one negative ability?
They do have 'negative abilities,' but being moral, and not being treated like cattle by a society that despises them, they choose not to use them.
Put more simply, if robots have no reason to hate or fear humans, they will not hate or fear humans.
and if they are all good, there isn't a relevant difference between them. what will make 2 (or 1000) similar sentient beings get different opinions? if they all had the same opinions, why don't they vote for the same party? finally, if they all vote for the same party (and there isn't a problem to create more loyal voters at any moment), what benefit do you get from the elections?
My nation has no elections, everyone has an equal right to not vote. In any case, the fact that a group of production-line robots are created at the same basic state is irrelevant; personality is created by the experiences an individual has and the links and bonds they form with others. Unless you compel every robot to live an identical life, they will not have identical personalities because the experiences that shape those personalities will all be different.
You might as well ask why we should let 1000 people born in New York vote, since being born in a similar place will make them all think and vote identically.
suddenly I can't debate?
Well, you said it.
Greetings.
We only take advised exception to the following:
Again, they have personalities and beliefs that are unique to them. If they are truly sentient they'll have their own personalities. If they all think exactly alike, they aren't sentient.
This excludes certain varieties of hive minds and gestalt sentience. One partial counterargument is also obvious: If they think differently, this is no guarantee of sentience either. If they think the same, it could be the result of a consensus process among individuals who might otherwise think differently. We must avoid making sweeping generalisations based on purely anthropocentric perspectives.
We of Roathin are (un)fortunately surrounded by thousands of varieties of sentience, all of which must be made to coexist in relatively non-destructive guise. The test of the prophet of Turin, blessed be his name, does not discriminate by reason of similar output - in fact, it relies on that similarity to a hypothetical human. Yet we admit it fails for other sentients - notably elves, which the advanced test protocol assesses as producing the output of a somewhat time-lagged star destroyer funnelled through a very narrow bus.
If it is given consciousness, it deserves human rights.
Interesting. This brings up other issues. Do animals deserve human rights? Are they not conscious things? Imagine the impact of animals having human rights. Everyone would either be a vegan, a member of the dietary resistance movement, or a criminal! That thought scares me.
I see the word 'organism' being used. I don't think a robot would technically be considered an organism because it is not made up of organs, organelles, or even cells.
If someone can invent a robot and then prove that its AI is equivalent to the human psyche (which I think would be impossible) I would go along with this, with some minor changes of course. I would still rather put an artificial life in dangerous positions than a human life. Of course they would have rights and get paid; they could even have a mechanical healthcare system and form robot labor unions for all I care (as long as they don't threaten the ruling party's power of course, then some self-destruct algorithms would have to be initiated).
Green israel
02-06-2005, 09:42
And there is a difference between a robot wielding a weapon and a human?
Are all humans peaceful loyal citizens?
no, but there are prisons and rehabilitation programs.
what can I do with a loony robot? (and I take it to the radical case, because proposals should be tested in radical situations).
Building an AI, hardwired to follow its master, outside of independent thought, would be illegal in the first place.... As would be brainwashing people into following only a single view.... point refuted.
good, how do I enforce it? how can I be sure that they didn't get programmed, while I need to trust them in courts?
I didn't mean they can never be good citizens, but my country hasn't got to that stage yet. I don't have them, but someday the local science will make it alone, or with outside help (scientific freedom - nothing I can do about it).
maybe you already passed through the change and created additional laws to survive in the new system. I haven't.
is it so hard to understand my fears, and not attack me?
I see hominids in your nation-state haven't progressed much past their primitive tree-swinging days....
Once the monkey-men of Green Israel have evolved to a point where they can formulate logical arguments (instead of living off of pointless rhetoric), we will listen to them... Until then, keep swinging, and here's some bananas.
and this is good debating? what did he do here except attack me?
Greetings.
We of Roathin understand the fears of Green Israel. From our knowledge of golem-automata and other such mechanicals, we know that a weapon (e.g. a siege ballista) in the appendages of such a mechanical can be far deadlier than in those of a biological sentient. Most mechanicals have a greater tolerance for mechanical and other physical stress than biologicals do. We have seen our brazen-giant automata continue to launch munitions with extreme precision even under heavy bombardment - this simply because their working parts are easier to harden against bombardment while maintaining flexibility of use. Mechanicals are also less easy to distract, in general.
Tekania, we suggest, has no right to suggest narrowness of vision when Tekania itself forgets that many AI platforms in this august assembly are deadlier when pitted against their equivalent biologicals. Biologicals are less easily upgraded in fundamental weapons capability. Tread carefully, we say.
Nevertheless, we agree with Tekania in guaranteeing the rights of all sentient beings, subject to the same processes of rights-granting which 'naturals' (a term still not satisfactorily defined) such as humans must undergo in a civilised (yet another such term) society.
The Most Glorious Hack
02-06-2005, 10:06
This excludes certain varieties of hive minds and gestalt sentience.
A fair point. However, one assumes that a hive-mind of, say, 1000 "parts" would count as a single sentience. So, when counting votes, it'd only count as one vote, as opposed to 1000.
Green israel
02-06-2005, 10:12
Greetings.
We of Roathin understand the fears of Green Israel. From our knowledge of golem-automata and other such mechanicals, we know that a weapon (e.g. a siege ballista) in the appendages of such a mechanical can be far deadlier than in those of a biological sentient. Most mechanicals have a greater tolerance for mechanical and other physical stress than biologicals do. We have seen our brazen-giant automata continue to launch munitions with extreme precision even under heavy bombardment - this simply because their working parts are easier to harden against bombardment while maintaining flexibility of use. Mechanicals are also less easy to distract, in general.
right, and I have no way to handle it. probably 99% of them will be good and loyal, but the one which isn't endangers me and my country, while there isn't a way to take care of him yet.
Nevertheless, we agree with Tekania in guaranteeing the rights of all sentient beings, subject to the same processes of rights-granting which 'naturals' (a term still not satisfactorily defined) such as humans must undergo in a civilised (yet another such term) society.
I can agree with that. maybe my fears distract me, but they do have the right to rights.
I just don't want my country to be full of sentient robots. there are too many changes that I have to make in my laws before they will apply to robots.
if
GMC Military Arms
02-06-2005, 10:12
no, but there are prisons and rehabilitation programs.
what can I do with a loony robot? (and I take it to the radical case, because proposals should be tested in radical situations).
Arrest it, grant it a fair trial by its peers and imprison or attempt to reform it. In other words, treat it equally.
good, how do I enforce it? how can I be sure that they didn't get programmed, while I need to trust them in courts?
How can you trust that humans haven't been brainwashed by shadowy government agencies when they need to be trusted in courts?
A fair point. However, one assumes that a hive-mind of, say, 1000 "parts", would count as a single sentience. So, when counting votes, it'd only count as one vote, as opposed to 1000.
Greetings.
We of Roathin agree to some extent. We are not picking at nits, but grappling with titans. In a sense, a hive-mind or equivalent takes a referendum within itself before it commits its elements to action. This is analogous to the behaviours of some UN delegates when motions are put to the vote, with similar outcomes.
We, however, would leave the door open to others within the continuum, taking the above paragraph as a starting point. Should group-minds with internal differences but a single spokesman be allowed a multiple vote? We believe the answer is 'Yes'. On a larger scale, that is how parliamentary democracy sometimes works - and on a more subtle level, feudalism.
Green israel
02-06-2005, 10:24
Arrest it, grant it a fair trial by its peers and imprison or attempt to reform it. In other words, treat it equally.
how do you reform a robot without "brainwashing"?
how do you arrest, without weapons, someone who is much stronger than you?
how do you imprison a robot in a jail whose strength is only good enough for humans?
How can you trust that humans haven't been brainwashed by shadowy government agencies when they need to be trusted in courts?
I don't have brainwashers.
programming will be much easier than brainwashing when there are robots, so I want to know how to handle it. otherwise, my elections and courts may become useless.
GMC Military Arms
02-06-2005, 10:37
how do you reform a robot without "brainwashing"?
How do you reform a human without brainwashing?
how do you arrest, without weapons, someone who is much stronger than you?
Why is a robot necessarily stronger than you? A robot designed to do delicate work would be frail and full of high-precision components, not armoured like a main battle tank.
how do you imprison a robot in a jail whose strength is only good enough for humans?
Remove its limbs and replace them with weaker ones for the duration, if it's modular? Build a stronger prison?
I don't have brainwashers.
Sure that isn't what they want you to think because of their brainwashing?
programming will be much easier than brainwashing when there are robots, so I want to know how to handle it. otherwise, my elections and courts may become useless.
Why? Reprogramming a sentient thinking computer without killing it or damaging it would be phenomenally difficult.
Lagrange 4
02-06-2005, 10:48
Interesting. This brings up other issues. Do animals deserve human rights? Are they not conscious things? Imagine the impact of animals having human rights. Everyone would either be a vegan, a member of the dietary resistance movement, or a criminal! That thought scares me.
This was assuming that animals are not conscious in the human sense. My definition of consciousness in that statement was something on par with or greater than human consciousness.
I see the word 'organism' being used. I don't think a robot would technically be considered an organism because it is not made up of organs, organelles, or even cells.
Since there are bioroids (androids made primarily from biological parts), the definition of an organism shouldn't be bound to technicalities like that.
If someone can invent a robot and then prove that its AI is equivalent to the human psyche (which I think would be impossible)
1. Create a synthetic neuron.
2. Create a matrix of synthetic neurons, set up to interact with sensors and motor nerves.
3. Stimulate this neural net and it will develop accordingly. Consciousness may even develop accidentally in a sufficiently large network of these "cells".
Clearly, this type of AI can't be simply "programmed" in the traditional sense, since it bears very little resemblance to early 21st century digital computers.
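For illustration only, the three steps above can be sketched in code as a toy model. Everything here is an assumption of the sketch - the names `SyntheticNeuron` and `SyntheticNet`, the threshold firing rule, and the crude Hebbian-style "stimulation strengthens active connections" update - it shows only how a net can develop through experience rather than through explicit programming, and is in no way a design for sentience.

```python
class SyntheticNeuron:
    """A toy 'synthetic neuron': fires when its weighted input crosses a threshold."""
    def __init__(self, n_inputs, threshold=0.15):
        self.weights = [0.1] * n_inputs
        self.threshold = threshold

    def fire(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs)) >= self.threshold


class SyntheticNet:
    """A small matrix of synthetic neurons wired to 'sensor' inputs.

    Repeated stimulation strengthens the connections that were active when a
    neuron fired (a crude Hebbian rule), so the net 'develops' with experience
    instead of being programmed."""
    def __init__(self, n_sensors, n_neurons, rate=0.05):
        self.neurons = [SyntheticNeuron(n_sensors) for _ in range(n_neurons)]
        self.rate = rate

    def stimulate(self, sensor_pattern):
        responses = []
        for neuron in self.neurons:
            fired = neuron.fire(sensor_pattern)
            if fired:
                # Strengthen the weights on the inputs that were active,
                # capped at 1.0 so growth eventually saturates.
                for i, x in enumerate(sensor_pattern):
                    if x:
                        neuron.weights[i] = min(1.0, neuron.weights[i] + self.rate)
            responses.append(fired)
        return responses


net = SyntheticNet(n_sensors=4, n_neurons=3)
pattern = [1, 1, 0, 0]                  # a repeated 'sensory experience'
before = sum(net.neurons[0].weights)
for _ in range(20):                     # step 3: repeated stimulation
    net.stimulate(pattern)
after = sum(net.neurons[0].weights)     # connections for that experience grew
```

Note the design point this illustrates: nothing in the update rule was told what the pattern means; the structure emerges from which inputs happen to co-occur with firing.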
I would go along with this, with some minor changes of course. I would still rather put an artificial life in dangerous positions than a human life. Of course they would have rights and get paid; they could even have a mechanical healthcare system and form robot labor unions for all I care (as long as they don't threaten the ruling party's power of course, then some self-destruct algorithms would have to be initiated).
Your statement makes the ad hoc assumption that they would unite against humanity, or even unite in the first place. See my post that you quoted: there are many problems with this assumption.
Green israel
02-06-2005, 10:57
How do you reform a human without brainwashing?
rehabilitation programs? the criminals get useful jobs and become loyal citizens. mostly it works.
however, it may be a bit different with robots.
Why is a robot necessarily stronger than you? A robot designed to do delicate work would be frail and full of high-precision components, not armoured like a main battle tank.
somehow I think that a metallic person will be much stronger than a human.
Remove its limbs and replace them with weaker ones for the duration, if it's modular? Build a stronger prison?
here we get to the financial aspect. I don't have the money to upgrade all my buildings and systems so they will be good for robots.
Sure that isn't what they want you to think because of their brainwashing?
I can't choose how your country is; you can't choose mine.
I have no magic, no super tech, and no brainwashers. period.
Why? Reprogramming a sentient thinking computer without killing it or damaging it would be phenomenally difficult.
I think that if this industry is in my country, there will be many programmers.
Lagrange 4
02-06-2005, 11:02
Green Israel:
You seem to have a caricatured notion of what an AI psyche is like.
You imply that sentient robots wouldn't have emotions, a sense of right and wrong or even creativity.
Let's think of how a human can possess these things.
A child who's denied normal human relations in its early life will never feel compassion. Real-life examples are some African mercenaries who are drafted as child soldiers and take part in campaigns when they are five years old. For some of them, it's physically impossible to distinguish between right and wrong or to feel compassion towards others. The chance to learn these things was simply denied to them.
What we call creative people don't create out of nothing: no idea has ever been born in a vacuum. A person who's used to finding unusual solutions and is surrounded by stimuli is seen as more creative than someone who's used to straightforward mechanical tasks. If we somehow reward creative solutions from children, they will develop this type of thinking even further. In the end, it's simply a matter of how different neural networks interact in the brain. Even art is born this way. One could say that all artistic expression is simply refined perception.
If we create a robotic neural network similar to the brain you'd find in a developing fetus and use it to assist a "smart" internet search engine, it would get faster and more accurate with time. It would accommodate different types of users and sometimes guess what they're looking for based on a minimal amount of "clues". To make it easier to use, expanding it with a social interface wouldn't be a big step. As its network was expanded, it would only be a matter of time before a primitive consciousness developed.
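The feedback loop described here can be shown in miniature: a search whose ranking improves as users pick results. The class `LearningSearch`, the word-count scoring, and the click-boost rule are illustrative assumptions of this sketch, not any real engine's design (and nothing in it is neural, let alone conscious); it only demonstrates "gets more accurate with time" as learned query-to-document associations.

```python
from collections import defaultdict

class LearningSearch:
    """Toy feedback-driven search: documents that users pick for a query
    get boosted for that query's words, so later searches rank them higher."""
    def __init__(self, docs):
        self.docs = docs                  # doc_id -> text
        self.boost = defaultdict(float)   # (query_word, doc_id) -> learned boost

    def search(self, query):
        words = query.lower().split()
        def score(doc_id):
            text = self.docs[doc_id].lower()
            base = sum(text.count(w) for w in words)        # naive relevance
            learned = sum(self.boost[(w, doc_id)] for w in words)
            return base + learned
        return sorted(self.docs, key=score, reverse=True)

    def click(self, query, doc_id):
        # User feedback: strengthen the query -> document association.
        for w in query.lower().split():
            self.boost[(w, doc_id)] += 1.0


docs = {
    "a": "robot rights debate",
    "b": "sentient machine servitude and rights",
}
engine = LearningSearch(docs)
first = engine.search("rights")[0]           # tie on word counts; "a" wins by order
for _ in range(3):
    engine.click("rights", "b")              # users keep choosing document "b"
after_learning = engine.search("rights")[0]  # learned boost now ranks "b" first
```

The design choice worth noticing is that the engine never needs to understand the query; accumulated interaction with users alone reshapes its behaviour, which is exactly the growth-through-interaction point being argued above.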
This is pretty far from the "cruel killer robot" scenario you had in mind. After all, it grows by interacting with humans.
Remember that there's nothing specific in our brains that can be labeled the "self": it's simply an emergent property of our complex minds.
GMC Military Arms
02-06-2005, 11:06
somehow I think that a metallic person will be much stronger than a human.
Why? Your body contains iron and calcium, but those metals don't stop you being relatively weak and frail.
here we get to the financial aspect. I don't have the money to upgrade all my buildings and systems so they will be good for robots.
Ah. So I assume you don't have prisons for women either?
I think that if this industry is in my country, there will be many programmers.
Programmers do not have the magic ability to reprogram anything. If they did, online banking wouldn't exist.
The Most Glorious Hack
02-06-2005, 11:10
So... um... don't make sentient robots?
Since your nation is in such dire financial straits, and since you don't have the ability to create AIs that aren't insane killing machines, leave the AI-making to those of us who know how and can afford it.
Green israel
02-06-2005, 11:14
let me understand: you say that everyone that is treated in a good way becomes good? interesting thought, but I want to ensure that nothing gets damaged along the way.
I prefer to think about the worst situations, and be ready for them. after all, one killer robot will do more damage than the benefits of a thousand good robots.
let me understand: you say that everyone that is treated in a good way becomes good? interesting thought, but I want to ensure that nothing gets damaged along the way.
I prefer to think about the worst situations, and be ready for them. after all, one killer robot will do more damage than the benefits of a thousand good robots.
Greetings.
This is your right. You have the right to not have robots, which will solve all your philosophical problems. We of Roathin suggest this to you in all neighbourly kindness, and also recommend a line of golem-automata which have reduced rights packages, increased lifting capability, and no offensive training whatsoever. We will supply an operational support staff, with appropriate thaumaturgic and/or theurgic training as required.
Green israel
02-06-2005, 11:19
Why? Your body contains iron and calcium, but those metals don't stop you being relatively weak and frail.
only the robots are made only from metals, and not in small percentages as in human bodies.
Ah. So I assume you don't have prisons for women either?
women, like teenagers, don't need major changes in their prisons. robots do.
Programmers do not have the magic ability to reprogram anything. If they did, online banking wouldn't exist.
so one problem is solved.
Green israel
02-06-2005, 11:26
Greetings.
This is your right. You have the right to not have robots, which will solve all your philosophical problems. We of Roathin suggest this to you in all neighbourly kindness, and also recommend a line of golem-automata which have reduced rights packages, increased lifting capability, and no offensive training whatsoever.
fine with me.
but how can I prevent the scientists from using their Scientific Freedom (resolution no. 2)?
I don't want my "technology will move forward".
Lagrange 4
02-06-2005, 11:41
fine with me.
but how can I prevent the scientists from using their Scientific Freedom (resolution no. 2)?
I don't want my "technology will move forward".
Simple. Leave the UN.
GMC Military Arms
02-06-2005, 11:47
I prefer to think about the worst situations, and be ready for them. after all, one killer robot will do more damage than the benefits of a thousand good robots.
Um, that's just silly.
Greetings.
We do not prevent our scientists from making use of their scientific freedoms. Some might think of us as some kind of rural Clansylvania. While we have an abiding love of forests and forestry, we are no technological backwater.
The situation is simple. The thaumaturgic arts make an elegant synergy with the sciences. One might think of thaumaturgy as a 'higher-level' language, not only for programming, but for contextualising - its classes and bandwidth are much broader. Its main problem is quantification and reproducibility to extremely high tolerances.
This results in a tendency to design crude electromechanical devices (e.g. railguns) as opposed to refined electrophotonic devices (e.g. lasers), and to prefer fireballs (low contamination and terrain deniability, reasonable damage and blast radius) to battlefield tactical nuclear warheads (high contamination and terrain deniability, unreasonable damage and blast radius).
We suggest therefore that you allow your scientists to research freely but tie their funding to the economic and social constraints you impose on your state. Point out that in your situation, increasing agricultural yields through efficient husbandry and biotechnology has priority over cybernetic engineering. Then dispense largesse accordingly.
Green israel
02-06-2005, 12:04
funding the favorite science areas? good idea.
I think I will stay with that.
Lagrange 4
02-06-2005, 12:18
funding the favorite science areas? good idea.
I think I will stay with that.
Unless you seal all borders and isolate your nation from the rest of the world, that won't be enough. If a technology exists, there's a chance that someone will eventually use it. If this happens in your country, you need to be prepared.
Let us consider, for instance, that an AI develops sentience fully on its own. If it were comparable to a human by its intelligence and personality, wouldn't it be unethical to deny it all rights? How would you deal with such a being?
Unless you seal all borders and isolate your nation from the rest of the world, that won't be enough. If a technology exists, there's a chance that someone will eventually use it. If this happens in your country, you need to be prepared.
Let us consider, for instance, that an AI develops sentience fully on its own. If it were comparable to a human by its intelligence and personality, wouldn't it be unethical to deny it all rights? How would you deal with such a being?
Greetings.
We note with amusement the old argument of if... then... if... then... leading to increasingly unlikely outcomes. The fact is that in the course of history, most true innovations occur without adequate legal preparation.
We would venture to say that in Green Israel, no such sentience will arise if the cybernetic arts and sciences are starved of funding. Rather, it will arise elsewhere and if adequate ways are found to handle the implications, Green Israel will benefit from these.
The ethical issue requires an answer to the question of comparison. Legal problems of defining 'human' already exist. Those of defining 'half-elf' are worse, exceeding the capacity of purely genetic argument and extending to the memetic. It would indeed be unethical to not grant equivalent rights to an equivalent-human; but the onus of proof is on the proposed recipient of those rights - in this as in all other similar cases.
right, and I have no way to handle it. probably 99% of them will be good and loyal, but the one which isn't endangers me and my country, while there isn't a way to take care of him yet.
I can agree with that. maybe my fears distract me, but they do have the right to rights.
I just don't want my country to be full of sentient robots. there are too many changes that I have to make in my laws before they will apply to robots.
if
You must understand this Constitutional Republic (composed of six dominions, spread across 5 star-systems), and presently composed of no less than 4 different sentient species (Tekanious, Pithekos [Humans], Pyritikos [Silicates], and Kataskeuasmas [AIs/Constructs]), has had many wars in its past. All beings are capable of horrendous acts, and in many cases some "naturals" are even more "hardened" than Robots/AIs (the Pyritikos are rock-like creatures).
For countless centuries, the Tekanious did not recognize the Pithekos and Pyritikos as "persons", considering them "inferior animals", Pithekos kept as pets and in zoos for our amusement. The Pyritikos fought wars, being even more xenophobic than the Tekanious... The Kataskeuasma were constructed in the war between the Pithekos-Tekanious and the Pyritikos... The Constructs refused to fight the Silicates after a time; the Silicates recognized the capacity of the Constructs.... and recognized them as equal partners....
Eventually a peace was reached between the four factions; the Tekaniou of the home system and the two colonies set about amending the constitution (Amendment 15, granting all sentient beings rights and "personhood"). Humans were admitted as citizens, and Thompsonia [a Human world] became a dominion member of the Republic. Celestus was also admitted... and Constructs in all realms also became equal partners in the government and order of things... Thus a peace was reached, and the Constitutional Republic became truly free to all sentient beings, regardless of makeup, form, function or course of nature. The Flag was changed through special ratification, bringing the total number of member-Dominions to 6.... Since that time many other species have immigrated into the Republic, and also share equal partnership in all forms of rights and law.
Constructs serve in our military.
Constructs serve as judges, in even the highest courts...
There are even Constructs elected into positions on the Senate and House of Delegates (the legislative houses of this Republic).
There is something to attaining a level where rank xenophobia does not pervade society. (Which is what contention against granting Constructs rights is.... xenophobia.... fear of the "different"... a holdover and extension of "racism" and "bigotry".)
Greetings.
We of Roathin agree to some extent. We are not picking at nits, but grappling with titans. In a sense, a hive-mind or equivalent takes a referendum within itself before it commits its elements to action. This is analogous to the behaviours of some UN delegates when motions are put to the vote, with similar outcomes.
We, however, would leave the door open to others within the continuum, taking the above paragraph as a starting point. Should group-minds with internal differences but a single spokesman be allowed a multiple vote? We believe the answer is 'Yes'. On a larger scale, that is how parliamentary democracy sometimes works - and on a more subtle level, feudalism.
We're a Republic, as opposed to a "Parliamentary Democracy", so it's a non-issue. Voting is balanced between equal locality and proportional locality. The Pyritikos (Silicates) operate in "hives", and are granted a proportionate popular vote (multiple vote) in the lower house (House of Delegates, where each "Dominion System" has a number of votes proportionate to its population), but have a singular vote in the upper house (Senate, where each Dominion System has two votes regardless of population) of our legislative branch. The upper house acts as a check against mob-rule through the lower. Even the hive of the Silicates agrees with the necessity of such a system to protect minority views.... We see the parliamentary system as unjust, as it leads to tyranny by majority... That is of course the opinion of this Republic.