NationStates Jolt Archive


artificial intelligence/life and ethics

Smommer
04-08-2005, 01:01
The concepts of artificial intelligence and artificial life have been around for some time. Their definitions have also changed somewhat over time (from being able to beat a human at chess, to being able to hold a conversation with a human without them knowing they were talking to a machine, and beyond).

My question is:

If mankind were one day to create a machine that is truly sentient and aware of its own existence, would it be ethical for mankind to then switch that machine off? There are things to consider here, such as its energy/maintenance demands, but let's assume the machine can be self-sufficient.

My original thought was yes, if it posed a danger to mankind, but if there were no evidence to suggest this, how would we know? Could we destroy such a machine simply to satisfy our unfounded fears?

As a side note, if we did create such a machine and it did destroy humanity, would this not just be another step on the evolutionary ladder?

smommer
Neo-Anarchists
04-08-2005, 01:09
If mankind were one day to create a machine that is truly sentient and aware of its own existence, would it be ethical for mankind to then switch that machine off? There are things to consider here, such as its energy/maintenance demands, but let's assume the machine can be self-sufficient.
It would be no more ethical than it would be to end the life of a human who was in the same situation.
My original thought was yes, if it posed a danger to mankind, but if there were no evidence to suggest this, how would we know?
Well, if it were initiating force against others, then go ahead and stop it from harming them, just as you would a human.
Could we destroy such a machine simply to satisfy our unfounded fears?
That would be wrong. It would be equivalent to executing a man without trial, in my opinion.

Unfortunately, if and when we develop artificial intelligence, there will most likely be a period of oppression of non-biological intelligences, given the way things currently run and have run. It took a long time to win rights for women and for black people; I suspect it will take even longer to win rights for AIs.
Lord-General Drache
04-08-2005, 01:11
Unless switching the machine off were for maintenance/diagnostics and you had its consent, I think it'd be tantamount to murder.

Pre-emptively attacking something, or terminating it before it commits an action that would endanger the lives/well-being of others, hasn't always succeeded, so I view it as a generally bad idea to do so based on nothing more than a sneaking suspicion that something might happen. Also, since they'd be computer-based, and thus have some form of programming, it would likely be feasible to simply access their thoughts (which would probably wind up being illegal without some special warrant).

If we did manage to create a race of sentient machines that wound up destroying us, I suppose it could be viewed as a form of evolution, albeit an unnatural form.
Bolol
04-08-2005, 01:15
My opinion?

Regardless of whether it has "living" cells, once an entity has reached the point where it is aware of its existence, where it can tell the difference between right and wrong, where it can start to express itself, where it can start making choices...it is not only "alive" but sentient. And as such, it is subject to the same rights.

It would not be right just to "switch it off", unless its intent was to harm others of its own kind or humanity. In which case we would treat it as a criminal.

But...this is all hypothetical...
Lord-General Drache
04-08-2005, 01:19
My opinion?

Regardless of whether it has "living" cells, once an entity has reached the point where it is aware of its existence, where it can tell the difference between right and wrong, where it can start to express itself, where it can start making choices...it is not only "alive" but sentient. And as such, it is subject to the same rights.

It would not be right just to "switch it off", unless its intent was to harm others of its own kind or humanity. In which case we would treat it as a criminal.

But...this is all hypothetical...
The thing is, though, sentience does not equate to feeling emotions, and thus to having morals. Any artificial intelligence would be bound by our morals, and may not understand why something is "right" or "wrong", other than that "we say it is". At least, that's how I see it.
Bolol
04-08-2005, 01:25
The thing is, though, sentience does not equate to feeling emotions, and thus to having morals. Any artificial intelligence would be bound by our morals, and may not understand why something is "right" or "wrong", other than that "we say it is". At least, that's how I see it.

Undeniably, artificial intelligence is inherently...artificial...

Any morals it has will be created by us, which is why we would have a connection to these beings.

My theory is that at some point in the distant future, sentient programs will be designing other sentient programs...instilling their own morals in the next generation.

But we created the original program with those morals...

And thus...we head into the whole "playing God" dilemma...
Lord-General Drache
04-08-2005, 01:31
Undeniably, artificial intelligence is inherently...artificial...

Any morals it has will be created by us, which is why we would have a connection to these beings.

My theory is that at some point in the distant future, sentient programs will be designing other sentient programs...instilling their own morals in the next generation.

But we created the original program with those morals...

And thus...we head into the whole "playing God" dilemma...
And it's because those morals are created by us that problems may arise, if they are incapable of emotions at first. They'd lack a very critical connection to what makes us human, and would be a very stark reminder of what makes them inhuman. All that would protect "us" from "them" would be some programming, which, as most realize, can in time be altered through corruption, whether by simple degradation or by purposeful sabotage. I think if we want AI to be successful, it would need to be capable of expressing itself emotionally before we start mass-creating AIs.

In due time, I'm sure that AIs would be capable of creating more, and perhaps, better versions of themselves. Whether we choose to advance along with them, would remain to be seen.
Vegas-Rex
04-08-2005, 01:32
The thing is, though, sentience does not equate to feeling emotions, and thus to having morals. Any artificial intelligence would be bound by our morals, and may not understand why something is "right" or "wrong", other than that "we say it is". At least, that's how I see it.

Actually, it's almost impossible to create something even close to mammal intelligence without emotions, because emotions allow something to make decisions without detailed, specific logic. In any case, the fact that the robot would just have morals on the basis of what it was told/programmed is really no setback, because humans work that way too. If a society gives rights to humans on the basis of their complexity of thought, then to exclude robots all it has to do is change the basis of those rights.
Vegas-Rex
04-08-2005, 01:35
Undeniably, artificial intelligence is inherently...artificial...

Any morals it has will be created by us, which is why we would have a connection to these beings.

My theory is that at some point in the distant future, sentient programs will be designing other sentient programs...instilling their own morals in the next generation.

But we created the original program with those morals...

And thus...we head into the whole "playing God" dilemma...

If the machines are able to create others and don't have some hypothetical perfect method, they will be subject to the forces of evolution, and their morals will shift to whatever is most useful to them. While they will probably develop relatively human morals, there might be differences depending on how society treats them.
Smommer
04-08-2005, 01:40
Undeniably, artificial intelligence is inherently...artificial...

Any morals it has will be created by us, which is why we would have a connection to these beings.

My theory is that at some point in the distant future, sentient programs will be designing other sentient programs...instilling their own morals in the next generation.

But we created the original program with those morals...

And thus...we head into the whole "playing God" dilemma...

What if it arrives at its own morals through some kind of genetic algorithm (a computer equivalent of our evolution)? This would mean we did not explicitly provide the machine with our own code of moral conduct, although we would have to provide the natural-selection criteria, just as mother nature or *insert personal belief here* does for us. The programs would then have traits very similar to our desired code of morals for such machines, but the way the machine arrived at those morals, and the morals themselves, would only be what we interpret them to be, and not necessarily how the machine interprets them.

*tries not to confuse himself*
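
To make the genetic-algorithm idea above concrete, here is a minimal, purely illustrative Python sketch; the genome layout, the scenarios and the selection criterion are all invented, but it shows how behaviour rules can be evolved against criteria we supply rather than being written out by hand.

import random

# (situation features, the action our selection criterion prefers in that situation)
SCENARIOS = [
    ((1, 0, 1), 1),
    ((0, 1, 1), 0),
    ((1, 1, 0), 0),
    ((0, 0, 1), 1),
]

def act(genome, features):
    # A genome is just three weights; the agent acts when the weighted sum
    # of the situation features crosses a threshold.
    return 1 if sum(w * f for w, f in zip(genome, features)) > 0.5 else 0

def fitness(genome):
    # The selection criterion we supply; the agent never sees this rule directly.
    return sum(1 for feats, preferred in SCENARIOS if act(genome, feats) == preferred)

def evolve(pop_size=30, generations=50):
    population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]                    # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)                        # one-point crossover
            child = list(a[:cut] + b[cut:])
            child[random.randrange(3)] += random.gauss(0, 0.2)  # point mutation
            children.append(tuple(child))
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("evolved rule weights:", best, "fitness:", fitness(best), "/", len(SCENARIOS))

The "morals" that come out are just whatever weights scored well against the supplied criterion, which is the gap Smommer points to: we chose the criterion, but not the particular rules the machine ends up embodying.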
Bolol
04-08-2005, 01:42
First of all...

(Gives thread Bolol Nuclear Cookie)

Now. In terms of protecting ourselves from a potential threat from these intelligences...The way I see it, if we are unprepared to deal with the threat, then we do not have the capacity to create such lifeforms, and as such, should not.

I had a discussion with a friend of mine on the evolution of AI.

You can give a program three things to make it more "human": knowledge, self-awareness, and morals.

If you only give it knowledge, then it is nothing more than a database.

If you only give it self-awareness, then it is chaotic.

If you only give it morals, then it has nothing to act upon.

The program becomes DANGEROUS when you only give it two out of the three.

If you give it knowledge and self-awareness but no morals...well...you guys have seen the "Terminator" series, right?
Feil
04-08-2005, 01:49
The concepts of artificial intelligence and artificial life have been around for some time. Their definitions have also changed somewhat over time (from being able to beat a human at chess, to being able to hold a conversation with a human without them knowing they were talking to a machine, and beyond).

My question is:

If mankind were one day to create a machine that is truly sentient and aware of its own existence, would it be ethical for mankind to then switch that machine off? There are things to consider here, such as its energy/maintenance demands, but let's assume the machine can be self-sufficient.

My original thought was yes, if it posed a danger to mankind, but if there were no evidence to suggest this, how would we know? Could we destroy such a machine simply to satisfy our unfounded fears?

As a side note, if we did create such a machine and it did destroy humanity, would this not just be another step on the evolutionary ladder?

smommer


Turning off a machine does not kill it. Even your home computer, when turned off, keeps what is stored in ROM and on disk (RAM itself is volatile, but its contents can be saved first), and simple processes like the real-time clock keep running. Turning off a computer or AI is roughly analogous to tranquilising a human.

As to destroying it, if you preserve the memory and storage, it is questionably moral to "kill" the AI, since the software that makes up its mind, to the best of our knowledge, remains intact. It is roughly analogous to using a "transporter" a la Star Trek.

If you destroy the AI and do not back up its memory and storage, it is analogous to killing a human, assuming an equal level of intelligence and self-awareness, with all the moral boundaries and loopholes that go along with that.
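
As a small, purely illustrative aside on the switching-off argument, here is a Python sketch of suspending a program by saving its state before power-down and restoring it afterwards; the state fields and file name are invented.

import json

def save_state(state, path="agent_state.json"):
    # Analogous to preserving the machine's memory before shutdown.
    with open(path, "w") as f:
        json.dump(state, f)

def load_state(path="agent_state.json"):
    # Analogous to powering the machine back on.
    with open(path) as f:
        return json.load(f)

mind = {"memories": ["first boot", "met the operators"], "mood": 0.7}
save_state(mind)
restored = load_state()
assert restored == mind  # nothing was lost while the power was off
print("resumed with", len(restored["memories"]), "memories intact")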
---

Concerning the "up the evolutionary ladder" comment. It would not be a step up, nor a step down, but a step sideways. It would be a divergeance. The machine would have extremely high capability to adapt and develop itself, much like humans have attained with medicine and technology, but perhaps even greater; however, lacking the ability to mutate, it cannot evolve per se.
Smommer
04-08-2005, 01:55
You can give a program three things to make it more "human": knowledge, self-awareness, and morals.

If you only give it knowledge, then it is nothing more than a database.

If you only give it self-awareness, then it is chaotic.

If you only give it morals, then it has nothing to act upon.

The program becomes DANGEROUS when you only give it two out of the three.

If you give it knowledge and self-awareness but no morals...well...you guys have seen the "Terminator" series, right?

Could a program's morals not be inferred as a result of analysing knowledge? What is missing here, I think, may be a sense of purpose, something that is different for many people. Morals could then be formulated as a behavioural guide to best fulfill that purpose.
Lord-General Drache
04-08-2005, 01:56
First of all...

(Gives thread Bolol Nuclear Cookie)

Now. In terms of protecting ourselves from a potential threat from these intelligences...The way I see it, if we are unprepared to deal with the threat, then we do not have the capacity to create such lifeforms, and as such, should not.

I had a discussion with a friend of mine on the evolution of AI.

You can give a program three things to make it more "human": knowledge, self-awareness, and morals.

If you only give it knowledge, then it is nothing more than a database.

If you only give it self-awareness, then it is chaotic.

If you only give it morals, then it has nothing to act upon.

The program becomes DANGEROUS when you only give it two out of the three.

If you give it knowledge and self-awareness but no morals...well...you guys have seen the "Terminator" series, right?
*places the cookie in a nuclear reactor, thus providing a huge energy surplus for my nation* w00t.

I agree with being prepared for the worst, but on the flip side, wouldn't it seem very forbidding if you found out that your creator(s) already thought of ways to kill you, just in case you didn't turn out right when you were "born"?

And yes, I realize how it'd be dangerous to have only two out of those three, in any combination, but the point I'm trying to make is that morals are no good if you have no emotions to base them on, with no self-punishment or self-reward as immediate consequences. Programs as we know them now, as I said, are vulnerable to corruption through various means, and if you place a program, no matter how advanced, into a mechanical body, there is then a very real possibility of it causing harm to someone if something should happen to its base programming.
Bolol
04-08-2005, 01:58
Could a program's morals not be inferred as a result of analysing knowledge? What is missing here, I think, may be a sense of purpose, something that is different for many people. Morals could then be formulated as a behavioural guide to best fulfill that purpose.

Hmm...never thought of that.

Purpose...Is that not what Smith in the Matrix wanted?
Vegas-Rex
04-08-2005, 02:01
*places the cookie in a nuclear reactor, thus providing a huge energy surplus for my nation* w00t.

I agree with being prepared for the worst, but on the flip side, wouldn't it seem very forbidding if you found out that your creator(s) already thought of ways to kill you, just in case you didn't turn out right when you were "born"?

And yes, I realize how it'd be dangerous to have only two out of those three, in any combination, but the point I'm trying to make is that morals are no good if you have no emotions to base them on, with no self-punishment or self-reward as immediate consequences. Programs as we know them now, as I said, are vulnerable to corruption through various means, and if you place a program, no matter how advanced, into a mechanical body, there is then a very real possibility of it causing harm to someone if something should happen to its base programming.

Not sure how the whole "corruptible programming" thing makes robots any different from humans. Robots would probably be better at maintaining their programming than most humans in fact. The only problem would be that it would be easier to tamper with.

As for the consequences, emotions, etc., most of these would be essential in creating even a functional AI, anyway.
Lord-General Drache
04-08-2005, 02:05
Not sure how the whole "corruptible programming" thing makes robots any different from humans. Robots would probably be better at maintaining their programming than most humans in fact. The only problem would be that it would be easier to tamper with.

As for the consequences, emotions, etc., most of these would be essential in creating even a functional AI, anyway.
The fact that AIs would be run by 0s and 1s, and humans by biological processes, is not in itself proof of a difference between the two. What would be is the fact that they would, imo, be far more susceptible than humans to someone altering their thought processes and/or gaining complete control over them. But you're right, it likely would be easier for an AI to maintain its programming than for a human. Protecting it, however, I feel is another issue.

And I agree that the creation of an AI capable of emotion would be crucial to their success, which is what I've been trying to say.
Bolol
04-08-2005, 02:09
I think the problem with creating emotion is that emotions seem only expressible after you have learned and mastered them. They don't seem as though they can be "taught".

But...I will expand my list nevertheless.

There are now five things you can give a program to make it more "human".

-Knowledge
-Self-Awareness
-Morals
-Purpose
-Emotion

Please tell me if you disagree with my conclusion.
Lord-General Drache
04-08-2005, 02:15
I think the problem with creating emotion is that emotions seem only expressible after you have learned and mastered them. They don't seem as though they can be "taught".

But...I will expand my list nevertheless.

There are now five things you can give a program to make it more "human".

-Knowledge
-Self-Awareness
-Morals
-Purpose
-Emotion

Please tell me if you disagree with my conclusion.
I agree that creating and instilling emotions in an AI would be prohibitively difficult, but I think it's possible. Perhaps one way for this to happen is to create a single AI, house it within an enclosed network, give it all the material of humanity to access, and just let it...process. I'd think that over time it'd form opinions, and not just analyses, and that would be the point when emotions emerge. Of course, there's always the not-so-plausible method of somehow downloading human emotions and installing them into an AI, but that's far too sci-fi for my liking to be seriously considered.

I fully agree with that. However, how would you define "purpose"? That would imply pre-destination, which would preclude a large portion of free will, and that is very much something that is integral to being "human".
Bolol
04-08-2005, 02:18
I agree that creating and instilling emotions in an AI would be prohibitively difficult, but I think it's possible. Perhaps one way for this to happen is to create a single AI, house it within an enclosed network, give it all the material of humanity to access, and just let it...process. I'd think that over time it'd form opinions, and not just analyses, and that would be the point when emotions emerge. Of course, there's always the not-so-plausible method of somehow downloading human emotions and installing them into an AI, but that's far too sci-fi for my liking to be seriously considered.

I fully agree with that. However, how would you define "purpose"? That would imply pre-destination, which would preclude a large portion of free will, and that is very much something that is integral to being "human".

Good idea with allowing it to form its own opinions.

As for purpose. I've seen people search to the very depths of their souls looking for their purpose. A living being needs to have a sense of worth...of purpose.

I think we'd need to let the program know that it has a purpose just like the rest of humanity: to live, to grow, and to evolve.
Pax Aeternus
04-08-2005, 02:21
There are now five things you can give a program to make it more "human".

-Knowledge
-Self-Awareness
-Morals
-Purpose
-Emotion

Please tell me if you disagree with my conclusion.

I have a problem with this whole programming-humanity-into-machines thing. I've always thought that the purpose of AI was to have a completely new, self-aware sentience. Programming things like morals and purpose defeats the purpose of something new. Wouldn't it make more sense to give it only enough programming to learn on its own? Ever since I knew what it was, I've been a subscriber to the trait theory in psychology, and I think it's also the basis of AI. The basic idea behind trait theory is that there are a few main traits that make up your decision-making process. In my opinion that's true, but for AI I expand that theory into a web of interacting traits such as fear, greed, etc. Have any of you ever heard of a program called Massive? It was used as a crude AI for the soldiers in the LOTR battle scenes. Every little soldier you saw had its own little brain made up of a web of choices. While crude, this is what I see as the basis of true AI: a web of interconnecting traits that change as the being has more experiences. The trick to this would be to build enough sensors and whatnot into the machine to allow it to have senses. A computer sitting on a desk couldn't truly have intelligence. It would only be a database. I have more ramblings on the subject but I'll wait and see what you all think of this.
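
As a purely illustrative aside on the trait-web idea above, here is a minimal Python sketch, loosely in the spirit of Massive-style crowd agents; the trait names, options and update rule are invented, but it shows weighted traits driving choices and being reshaped by experience.

class TraitAgent:
    def __init__(self):
        self.traits = {"fear": 0.5, "greed": 0.5, "curiosity": 0.5}

    def choose(self, options):
        # options: {action: {trait: how strongly this action appeals to that trait}}
        def score(appeal):
            return sum(self.traits[t] * appeal.get(t, 0.0) for t in self.traits)
        return max(options, key=lambda name: score(options[name]))

    def experience(self, trait, outcome, rate=0.1):
        # outcome in [-1, 1]: good outcomes strengthen the given trait, bad
        # outcomes weaken it, so the web drifts as experiences accumulate.
        value = self.traits[trait] + rate * outcome
        self.traits[trait] = min(1.0, max(0.0, value))

agent = TraitAgent()
options = {
    "flee":    {"fear": 0.9},
    "hoard":   {"greed": 0.8},
    "explore": {"curiosity": 0.7},
}
print("first choice:", agent.choose(options))
agent.experience("curiosity", outcome=+1.0)   # exploring turned out well
agent.experience("fear", outcome=-1.0)        # fleeing proved needless
print("after experience:", agent.choose(options))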
Lord-General Drache
04-08-2005, 02:23
Good idea with allowing it to form its own opinions.

As for purpose. I've seen people search to the very depths of their souls looking for their purpose. A living being needs to have a sense of worth...of purpose.

I think we'd need to let the program know that it has a purpose just like the rest of humanity: to live, to grow, and to evolve.

Thank you. The problem would then lie, however, in the amount of time it would take for it to begin forming opinions, which, on a human scale, could be prohibitive. We are, after all, only mortal and thus prone to great impatience. Then again, since it would be a mechanically based intelligence, it would be able to absorb and process data at a far faster rate than we could.

In regard to purpose, I'm glad you think that way. I was wondering if you were thinking of a more...literal definition, such as some creation that simply supplants humans in undesirable occupations.

I happen to believe that the purpose of life itself is, exactly as you said, to grow, to learn, to evolve.
Vegas-Rex
04-08-2005, 02:25
I think the problem with creating emotion is that emotions seem only expressible after you have learned and mastered them. They don't seem as though they can be "taught".

But...I will expand my list nevertheless.

There are now five things you can give a program to make it more "human".

-Knowledge
-Self-Awareness
-Morals
-Purpose
-Emotion

Please tell me if you disagree with my conclusion.


Many of these are subsets of the others:
1. Self-awareness is knowledge of one's own existence, almost certainly present if any knowledge is.
2. Morals and emotion can both be summarized as instincts required for functionality, grouped along with aversion to self-destruction, possibly elements of self-awareness, etc.
3. In my experience, purpose makes people much less human and would probably do the same to robots.

In the end, all that we really need is:
1. Knowledge
2. Ability to think (no-brainer, guys, can't believe it wasn't on the original list)
3. Necessary instincts.
Lord-General Drache
04-08-2005, 02:28
I have a problem with this whole programming-humanity-into-machines thing. I've always thought that the purpose of AI was to have a completely new, self-aware sentience. Programming things like morals and purpose defeats the purpose of something new. Wouldn't it make more sense to give it only enough programming to learn on its own? Ever since I knew what it was, I've been a subscriber to the trait theory in psychology, and I think it's also the basis of AI. The basic idea behind trait theory is that there are a few main traits that make up your decision-making process. In my opinion that's true, but for AI I expand that theory into a web of interacting traits such as fear, greed, etc. Have any of you ever heard of a program called Massive? It was used as a crude AI for the soldiers in the LOTR battle scenes. Every little soldier you saw had its own little brain made up of a web of choices. While crude, this is what I see as the basis of true AI: a web of interconnecting traits that change as the being has more experiences. The trick to this would be to build enough sensors and whatnot into the machine to allow it to have senses. A computer sitting on a desk couldn't truly have intelligence. It would only be a database. I have more ramblings on the subject but I'll wait and see what you all think of this.

I agree that the purpose of an AI would not be to imitate humanity, but to become something else entirely. However, since we have no other frame of reference for what sentience "should" be, they would wind up being human, at least at first; but given enough time, I'm sure they could, and would, evolve into something beyond that. Also, we would need some "human" point of connection to them, in order to peacefully co-exist and relate to them, even if it's just a few common beliefs.
Vegas-Rex
04-08-2005, 02:30
I agree that creating and instilling emotions in an AI would be prohibitively difficult, but I think it's possible. Perhaps one way for this to happen is to create a single AI, house it within an enclosed network, give it all the material of humanity to access, and just let it...process. I'd think that over time it'd form opinions, and not just analyses, and that would be the point when emotions emerge. Of course, there's always the not-so-plausible method of somehow downloading human emotions and installing them into an AI, but that's far too sci-fi for my liking to be seriously considered.

You seem to be confusing emotions with opinions. Emotions are basically states in which the robot is able to make generalized, snap decisions, and are essential to creating a functional AI in the first place, even before we think of making it human. Opinions can easily be gained from experience.
Bolol
04-08-2005, 02:32
Many of these are subsets of the others:
1. Self-awareness is knowledge of one's own existence, almost certainly present if any knowledge is.
2. Morals and emotion can both be summarized as instincts required for functionality, grouped along with aversion to self-destruction, possibly elements of self-awareness, etc.
3. In my experience, purpose makes people much less human and would probably do the same to robots.

In the end, all that we really need is:
1. Knowledge
2. Ability to think (no-brainer, guys, can't believe it wasn't on the original list)
3. Necessary instincts.

Well said!

(Gives Vegas Rex a Bolol Nuclear Cookie)

In addition, I give you my toupée!

Now...Emotions and morals are very interconnected. Emotions are the foundations of our morals.

But knowledge: one can have knowledge but not be aware. Like I said before, you'd be nothing more than a database.

And I maintain that the only real purpose in life is to live, nothing more. How you live that life is up to you.
Smommer
04-08-2005, 02:33
Hmm...never thought of that.

Purpose...Is that not what Smith in the Matrix wanted?

It's been a while since I saw The Matrix, but I think it was. I can't remember how that panned out in the end. Was Smith a corrupted (evolved?) program after trying to get into the real world? I guess we won't know if that was its purpose or a means to finding or reaching its purpose.

*apologies for fading knowledge of The Matrix*

I seem to remember that in the film AI, the kid's purpose was to love its mother and be loved, and it broke its behaviour protocols when barriers to this were placed in front of it. Emotions came into play here through jealousy towards the parents' human son. Although in this instance the robot kid had a pre-programmed sense of purpose, which may be another factor that needs to be looked at when deciding what is sentient and what isn't.
Pax Aeternus
04-08-2005, 02:39
I agree that the purpose of an AI would not be to imitate humanity, but to become something else entirely. However, since we have no other frame of reference for what sentience "should" be, they would wind up being human, at least at first; but given enough time, I'm sure they could, and would, evolve into something beyond that. Also, we would need some "human" point of connection to them, in order to peacefully co-exist and relate to them, even if it's just a few common beliefs.

Then perhaps we could build the physical avatars of this AI in a human form. Maybe we could form a type of sentience nothing like humanity if we gave the thing 4 legs or a tail or 3 arms or something because with these differences, it would have different experiences than humans and would react differently to certain situations. This would perhaps only be a physical difference but who knows? My point in the matter is that actively trying to make them in any way human could flaw the whole thing. It's sort of like trying to have a kid make their own decisions but then giving them rules as to what those decisions are. That may be safer but it doesn't truly allow the kid to form opinions on his own.
Smommer
04-08-2005, 02:43
I fully agree with that. However, how would you define "purpose"? That would imply pre-destination, which would preclude a large portion of free will, and that is very much something that is integral to being "human".

I think you're right that a sense of purpose implies pre-destination, but I think this is only the case assuming:

a) that all of us reach our purposes (assuming we actually have one; we may just think we do)

b) that our purposes are static and don't change throughout the duration of our lives. Free will essentially allows us to change our ideas of our purpose as our knowledge and morals change.
Bolol
04-08-2005, 02:47
Off-topic: As much as I'd love to continue this discussion, I gotta get to bed early (heading to Water Country tomorrow :D). Drache! Take command here!

See ya' guys!
Lord-General Drache
04-08-2005, 02:47
You seem to be confusing emotions with opinions. Emotions are basically states in which the robot is able to make generalized, snap decisions, and are essential to creating a functional AI in the first place, even before we think of making it human. Opinions can easily be gained from experience.

An AI would make analyses: statements based purely on facts. An opinion, however, is one's personal take, an interpretation of something, not taken at literal or face value. It's something more...intuitive than logical, which, to me, would be the beginning of learning to feel emotions.

Then perhaps we could build the physical avatars of this AI in a human form. Maybe we could form a type of sentience nothing like humanity if we gave the thing 4 legs or a tail or 3 arms or something because with these differences, it would have different experiences than humans and would react differently to certain situations. This would perhaps only be a physical difference but who knows? My point in the matter is that actively trying to make them in any way human could flaw the whole thing. It's sort of like trying to have a kid make their own decisions but then giving them rules as to what those decisions are. That may be safer but it doesn't truly allow the kid to form opinions on his own.

I see your point, but I think that humans, for their own psychological benefit, would need an AI that at least vaguely resembled them, or a preconceived idea of what a sentient, feeling creature "should" look like, and let the AIs decide how they wish to look, and gradually mold themselves into that shape. It may be that they decide to take a multitude of forms, or choose to remain humanoid.
Lord-General Drache
04-08-2005, 02:49
Off-topic: As much as I'd love to continue this discussion, I gotta get to bed early (heading to Water Country tomorrow :D). Drache! Take command here!

See ya' guys!

lol, I'll do what I can, Bolol. 'Twas a pleasure.
Vegas-Rex
04-08-2005, 02:49
Well said!

(Gives Vegas Rex a Bolol Nuclear Cookie)

In addition, I give you my toupée!

Now...Emotions and morals are very interconnected. Emotions are the foundations of our morals.

But knowledge: one can have knowledge but not be aware. Like I said before, you'd be nothing more than a database.

And I maintain that the only real purpose in life is to live, nothing more. How you live that life is up to you.

1. I don't want to segue fully into a morality debate, but I would say that what we consider morals are either A: emotions already, not really "based on" emotions per se, or B: memetic, in which case the AIs should decide them for themselves.
2. To go into a bit of an excess of detail, I would say that awareness is a combination of knowledge and necessary instinct, i.e. the knowledge that one exists (like Encarta having an entry for Encarta) and the instinctual reactions of pleasure to that which benefits oneself and pain to that which harms oneself.
3. Living may be the physical purpose of life, but it's not the intellectual purpose. The intellectual purpose of anything is gaining pleasure and minimizing displeasure. Even the most self-denying of people are merely trying either to gain more pleasure in the future or to gain happiness from their self-control and appeasement of conscience. The only question is what should give pleasure/pain to the robot, and that is what most of these issues (emotions, morals, etc.) are really about.
Holyawesomeness
04-08-2005, 02:55
The point of artificial intelligence, from my point of view, is simply to create something that can serve us. This means that AI is expendable, and the expendable nature of AI should even be programmed into its morality. The whole view of AI as expendable servants means that AIs remain simply property. To give programs too many human traits would hurt us, and it would also be unfair to the bastard creatures that we create. Robots should know that their only purpose in life is to serve us to the best of their ability.
Vegas-Rex
04-08-2005, 02:56
An AI would make analyses: statements based purely on facts. An opinion, however, is one's personal take, an interpretation of something, not taken at literal or face value. It's something more...intuitive than logical, which, to me, would be the beginning of learning to feel emotions.



I see your point, but I think that humans, for their own psychological benefit, would need an AI that at least vaguely resembled them, or a preconceived idea of what a sentient, feeling creature "should" look like, and let the AIs decide how they wish to look, and gradually mold themselves into that shape. It may be that they decide to take a multitude of forms, or choose to remain humanoid.

From what you're describing, you are talking at least a little about the same thing, namely the ability to make generalizations, which is really all that emotions (and your meaning of opinions) are, and which in any case is necessary merely to allow an AI the cognitive power of the human brain.

As for the "made in humanity's own image" thing, you're right that most people would be more comfortable with an AI that resembles them, but in the early stages of the science's development that may not be as feasible as it seems. It may be that the first functional AI will have to be on wheels, many legs, etc., just to function.
Pax Aeternus
04-08-2005, 02:58
The only question is what should give pleasure/pain to the robot, and that is what most of these issues (emotions, morals, etc.) are really about.

I would assume that pain to a robot would be anything that would damage its physical form. That's how humans work, in the short term anyway. Eventually, the robot should "evolve" to be able to experience emotional pain. Emotional pain is much more difficult to figure out; from a purely biological and evolutionary standpoint, I have no idea how it works. Physical pain would be the basis for fear. Perhaps emotions such as missing someone came from physical roots too. If someone who fed you regularly disappeared one day, you'd miss them and be sad. Many animals don't really care who feeds them as long as they're fed; that's one of the things that separates us from animals. I guess love stems from pleasure, and as humans evolved, so did the notion of love from mental stimulation. Would an AI have to be fast-tracked through the process of evolution to be able to feel emotions as we do now?
Vegas-Rex
04-08-2005, 02:59
The point of artificial intelligence, from my point of view, is simply to create something that can serve us. This means that AI is expendable, and the expendable nature of AI should even be programmed into its morality. The whole view of AI as expendable servants means that AIs remain simply property. To give programs too many human traits would hurt us, and it would also be unfair to the bastard creatures that we create. Robots should know that their only purpose in life is to serve us to the best of their ability.

I can't think of any reason whatsoever why a being with an AI would be useful as a servant. Allowing a full range of learning capability makes your machine very unreliable. If you want a robot servant you want something specialized to some degree, preferably one that won't think like a human at all. Such a device would not need the benefits of true AI.

If scientists ever create AI it will most likely be for a similar purpose to walking on the moon: because we can.
Pax Aeternus
04-08-2005, 03:01
The point of artificial intelligence, from my point of view, is simply to create something that can serve us. This means that AI is expendable, and the expendable nature of AI should even be programmed into its morality. The whole view of AI as expendable servants means that AIs remain simply property. To give programs too many human traits would hurt us, and it would also be unfair to the bastard creatures that we create. Robots should know that their only purpose in life is to serve us to the best of their ability.

If the only purpose of a robot was to serve us, why make it sentient? Wouldn't that just be cruel? We have robots that serve us now and they have no need for sentience.
Lord-General Drache
04-08-2005, 03:01
From what you're describing, you are talking at least a little about the same thing, namely the ability to make generalizations, which is really all that emotions (and your meaning of opinions) are, and which in any case is necessary merely to allow an AI the cognitive power of the human brain.

As for the "made in humanity's own image" thing, you're right that most people would be more comfortable with an AI that resembles them, but in the early stages of the science's development that may not be as feasible as it seems. It may be that the first functional AI will have to be on wheels, many legs, etc., just to function.
Ah, well, even if our meanings of opinion seem to differ, it would seem we still agree, regardless, on the importance of an AI to feel.

And sadly, you may well be right that the first AIs will be relegated to such, due to scientific constraints, but it may be that we turn to them, in time, for help on creating better materials and designs.
Vegas-Rex
04-08-2005, 03:02
I would assume that pain to a robot would be anything that would damage its physical form. That's how humans work, in the short term anyway. Eventually, the robot should "evolve" to be able to experience emotional pain. Emotional pain is much more difficult to figure out; from a purely biological and evolutionary standpoint, I have no idea how it works. Physical pain would be the basis for fear. Perhaps emotions such as missing someone came from physical roots too. If someone who fed you regularly disappeared one day, you'd miss them and be sad. Many animals don't really care who feeds them as long as they're fed; that's one of the things that separates us from animals. I guess love stems from pleasure, and as humans evolved, so did the notion of love from mental stimulation. Would an AI have to be fast-tracked through the process of evolution to be able to feel emotions as we do now?

Much of the emotional pain issue could also be programmed in (sense of boredom, some basic level of empathy, other evolutionary stuff). As for specific people, the ability to generalize/emote would also allow sufficient obsession to miss people, etc., merely as a tool to facilitate faster cognition.
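
A tiny illustrative sketch of that "programmed in" idea, in Python, with all event names and weights invented: pleasure and pain are just signals attached to events, which whatever learning machinery sits on top can then react to.

DRIVES = {
    "body_damage":   -1.0,   # physical pain: anything that harms the chassis
    "task_progress": +0.5,
    "idle_tick":     -0.1,   # a crude stand-in for boredom
    "other_harmed":  -0.6,   # a crude stand-in for empathy
}

def affect(events):
    # Sum the pleasure/pain value of a batch of observed events.
    return sum(DRIVES.get(e, 0.0) for e in events)

print(affect(["task_progress", "idle_tick"]))   # mildly pleasant
print(affect(["body_damage", "other_harmed"]))  # strongly unpleasant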
Pax Aeternus
04-08-2005, 03:08
Much of the emotional pain issue could also be programmed in (sense of boredom, some basic level of empathy, other evolutionary stuff). As for specific people, the ability to generalize/emote would also allow sufficient obsession to miss people, etc., merely as a tool to facilitate faster cognition.

But how do you program things like this into the AI without making it like a normal program that only has "if A, then B" processing? Boredom is different for different people. Some like to sit alone and think. Others always have to be out and about doing something. Programming empathy could also be the A-then-B thing. You could tell the machine that killing is bad, but will it actually know why? How is it that we can get machines to "think" like us rather than just have programmed responses to certain occurrences?
Vegas-Rex
04-08-2005, 03:14
But how do you program things like this into the AI without making it like a normal program that only has "if A, then B" processing? Boredom is different for different people. Some like to sit alone and think. Others always have to be out and about doing something. Programming empathy could also be the A-then-B thing. You could tell the machine that killing is bad, but will it actually know why? How is it that we can get machines to "think" like us rather than just have programmed responses to certain occurrences?

You're assuming a computer has to be as simple as "A then B". This is a rather simple description of a rather complex process. How we determine what that process should be is of course difficult, but if we design these processes to be similar to those of animals or humans (note: the processes, not the end results), we can easily get what we would probably consider to be an AI. Robots may have to be programmed, but the only real difference between human cognitive processes and programming is the lack of pre-planned design. We're both programmed.
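
As an illustrative aside on this exchange (all names and numbers invented), here is a minimal Python sketch contrasting the fixed "if A, then B" table with a process whose preferences are reshaped by a pleasure/pain signal rather than spelled out in advance.

import random

def rule_based(situation):
    table = {"intruder": "alarm", "dust": "clean"}   # fixed: if A, then B
    return table.get(situation, "do_nothing")

class LearnedPolicy:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}

    def choose(self):
        # Mostly pick the currently best-valued action, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, rate=0.2):
        # Nudge the action's value toward the pleasure/pain it produced.
        self.values[action] += rate * (reward - self.values[action])

policy = LearnedPolicy(["alarm", "clean", "do_nothing"])
for _ in range(200):
    action = policy.choose()
    # A made-up world: cleaning is mildly rewarded, false alarms are punished.
    reward = {"clean": 0.5, "alarm": -0.5, "do_nothing": 0.0}[action]
    policy.learn(action, reward)

print("rule-based response to dust:", rule_based("dust"))
print("learned preferences:", policy.values)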
Holyawesomeness
04-08-2005, 03:41
I can't think of any reason whatsoever why a being with an AI would be useful as a servant. Allowing a full range of learning capability makes your machine very unreliable. If you want a robot servant you want something specialized to some degree, preferably one that won't think like a human at all. Such a device would not need the benefits of true AI.

If scientists ever create AI it will most likely be for a similar purpose to walking on the moon: because we can.
I would say that the purpose of AI is to create an effective servant. Artificial intelligence would give robots some ability to learn. This would make it easier for a robot to do work for us; of course, we would have to control some parts of its morality. If scientists created AI just for the heck of it, the AI would still be created with the purpose of serving us. It would not serve us as a slave, but instead would serve us as an experiment to be thrown away when we are done with it.
Holyawesomeness
04-08-2005, 03:48
If the only purpose of a robot was to serve us, why make it sentient? Wouldn't that just be cruel? We have robots that serve us now and they have no need for sentience.
I would propose incomplete sentience. Certain parts of a robot would be allowed to grow while others would just remain constant. Cruelty only exists if the being complains about the suffering. I would see a sentient servant as a workaholic, loyal being. It would be able to learn and adapt to its environment like a human, but certain moral adaptations would be forbidden (like solipsism or hatred of humanity). My idea of AI would be like a sentient ant: it can think and learn, but is bound by instinctual loyalties. Besides, it is not as though human beings are really a tabula rasa anyway.
Vegas-Rex
04-08-2005, 03:49
I would say that the purpose of AI is to create an effective servant. Artificial intelligence would give robots some ability to learn. This would make it easier for a robot to do work for us; of course, we would have to control some parts of its morality. If scientists created AI just for the heck of it, the AI would still be created with the purpose of serving us. It would not serve us as a slave, but instead would serve us as an experiment to be thrown away when we are done with it.

No one's sure what an AI consists of, but the ability to learn is certainly not all of it. Most people who refer to an AI refer to something that thinks of itself as in the same category (whatever that means) as humans. It would be useful for a serving machine to be a smart, complex robot, but not necessarily one with AI.

As for the scientist issue, you're right that AI might be created to be tested (sorta like currently proposed uses of cloning), but I was imagining more of a scenario where the scientists already know all they need to about how such a being would work and are merely creating it as a test of ability.
M3rcenaries
04-08-2005, 03:58
In the movies, people always give the AI control of their militaries, then one day it goes corrupt, mass chaos ensues, etc., etc. However, if we designed a controlled amount of AI in a civilian environment, I say it's worth the gamble. If the things do evolve, or whatever Hollywood portrays would happen, then simply send the human-controlled military in to defuse the problem. However, I do not believe that the need for AI is there, and with a growing population, will the need be there in the future? I think not; the planet will be crowded enough. But when we further space exploration, say landing on Mars, asteroids, etc., AI may come in handy. And I believe a walking robot would be more effective than those rovers of theirs, although hard to do (so aerospace experts, don't yell at me). But whenever my classes used to follow those, it was annoying to have to wait 4 weeks because the things got caught in a small sand trap.
Holyawesomeness
04-08-2005, 04:03
No one's sure what an AI consists of, but the ability to learn is certainly not all of it. Most people who refer to an AI refer to something that thinks of itself as in the same category (whatever that means) as humans. It would be useful for a serving machine to be a smart, complex robot, but not necessarily one with AI.

As for the scientist issue, you're right that AI might be created to be tested (sorta like currently proposed uses of cloning), but I was imagining more of a scenario where the scientists already know all they need to about how such a being would work and are merely creating it as a test of ability.
I would think that the ability to learn is the most important thing distinguishing AI from what we have right now. Human intelligence is not the only form of intelligence. We are not a tabula rasa. Perhaps what I do seek is only a complex robot but what is the use of Artificial Intelligence other than to serve our purposes? Humanity is only for humans. Robots, hunks of metal given life by our practices, are not human no matter how much they wish to be. To kill an AI is simply the destruction of a code, nothing more and nothing less.
Vegas-Rex
04-08-2005, 04:03
In the movies, people always give the AI control of their militaries, then one day it goes corrupt, mass chaos ensues, etc., etc. However, if we designed a controlled amount of AI in a civilian environment, I say it's worth the gamble. If the things do evolve, or whatever Hollywood portrays would happen, then simply send the human-controlled military in to defuse the problem. However, I do not believe that the need for AI is there, and with a growing population, will the need be there in the future? I think not; the planet will be crowded enough. But when we further space exploration, say landing on Mars, asteroids, etc., AI may come in handy. And I believe a walking robot would be more effective than those rovers of theirs, although hard to do (so aerospace experts, don't yell at me). But whenever my classes used to follow those, it was annoying to have to wait 4 weeks because the things got caught in a small sand trap.

Again, the usefulness of AI all depends on what constitutes AI. Something that wants to have a social life? An identity? I can't see how that would ever be useful. Even in regard to space exploration, it's much easier to simply make the robot smart but not sentient. As for leggedness, maybe some sort of insect-style probe, possibly with flight capability, would trump both human and current designs, at least in my opinion.
Vegas-Rex
04-08-2005, 04:10
I would think that the ability to learn is the most important thing distinguishing AI from what we have right now. Human intelligence is not the only form of intelligence. We are not a tabula rasa. Perhaps what I do seek is only a complex robot but what is the use of Artificial Intelligence other than to serve our purposes? Humanity is only for humans. Robots, hunks of metal given life by our practices, are not human no matter how much they wish to be. To kill an AI is simply the destruction of a code, nothing more and nothing less.

And to kill a human is simply the destruction of a code, nothing more and nothing less, too. Just less expensive.

In any case, we already have programs that can learn. In the end, I think what AI will come down to is the desire to have some sort of life outside of the robot's purpose. As such, an AI would be almost completely useless except, as we discussed earlier, as a research object or propaganda device. A similar situation to a fully functional human clone.
Pax Aeternus
04-08-2005, 04:42
The way I see it is that true AI, as in robots with the ability to think the way humans do, has no practical purpose whatsoever. The only reason we'd create them is to say we can. I don't see any need to make any "servant" robot sentient. But if we do manage to create machines using true AI, then we'll have people complaining about playing God. I figure that if God didn't want us doing it, we wouldn't be able to. I agree with Vegas-Rex in the respect that killing a human is just killing a bunch of code; it just happens to be organic code. Other than the possibility of a hostile takeover by sentient robots, they wouldn't really pose a problem when it comes to global overpopulation, because all they need is energy and spare parts every once in a while. I think that by the time we're able to create a sentient machine, we'll have solved our energy problems. Energy leads to another problem: notice human-like robots now. They have massive batteries and don't last very long before needing to be recharged. We'll have to figure out how to create much more efficient batteries before there's any chance of autonomous robots.
Falhaar
04-08-2005, 05:00
If the things do evolve, or whatever Hollywood portrays would happen, then simply send the human-controlled military in to defuse the problem.
In other words, make sure the AI only has limited control of the hardware, right? Otherwise the machines would wipe the floor with us.
Neo-Anarchists
04-08-2005, 17:30
I am hearing much about the usefulness of AIs.
Well, perhaps we could think about it this way:

What is the use of a human?

Most of the complaints about AIs ('they'll want lives outside of work' and such) also hold for humans. But humans haven't been made totally obsolete, at least not unless it happened while I wasn't looking.

It seems to me that we humans are being pushed more and more into the role of making use of, supervising, and/or controlling non-intelligent processes and machines, such as on assembly lines, or writing computer code, or many other things. What is to say that an AI could not fit into a similar role, just as a human does?

Of course, I'm just thinking aloud (thinking in text?) here, so I'm not sure how much of this makes sense.
Holyawesomeness
04-08-2005, 17:45
And to kill a human is simply the destruction of a code, nothing more and nothing less, too. Just less expensive.

In any case, we already have programs that can learn. In the end, I think what AI will come down to is the desire to have some sort of life outside of the robot's purpose. As such, an AI would be almost completely useless except, as we discussed earlier, as a research object or propaganda device. A similar situation to a fully functional human clone.
That is why we must kill useless humans :D

But really, human life takes on a special meaning in our minds. I believe that this is not because we can think (people are opposed to the killing of the retarded) but simply because we are of the human species.

I cannot think of any reason that people would waste all the money to create an AI that was of no practical purpose. Scientists can only do things with the support of practical people paying for their grants. Whatever robot is created will be a servant of some form, because I doubt that society would be so tolerant of an AI that was not under our control. I think of AI as highly sophisticated learning processes rather than the human ability to think whatever it wants; perhaps AI would be able to think abstract thoughts, but most certainly not anti-human thoughts.
Jjimjja
04-08-2005, 18:06
If AI were possible, would the term 'artificial intelligence' be seen as a slur?
Would a human be allowed to marry a sentient machine?
Would same-model AIs be allowed to marry each other?
Willamena
04-08-2005, 18:22
The concepts of artificial intelligence and artificial life have been around for some time. Their definitions have also changed somewhat over time (from being able to beat a human at chess, to being able to hold a conversation with a human without them knowing they were talking to a machine, and beyond).

My question is:

If mankind were one day to create a machine that is truly sentient and aware of its own existence, would it be ethical for mankind to then switch that machine off? There are things to consider here, such as its energy/maintenance demands, but let's assume the machine can be self-sufficient.
The ethical consideration would depend on the circumstances under which man would eliminate that consciousness.

Just turning it off is not in itself unethical. The ethical consideration lies in the reason.

My original thought was yes, if it posed a danger to mankind,
That's the ticket. There have to be circumstances for it to be a question of ethics.

but if there were no evidence to suggest this, how would we know? Could we destroy such a machine simply to satisfy our unfounded fears?

As a side note, if we did create such a machine and it did destroy humanity, would this not just be another step on the evolutionary ladder?
Even "unfounded fears" are circumstantial. It would be unethical to eliminate that consciousness because of "unfounded fears."

Re the sidenote, I suppose.
Pangea mosto
04-08-2005, 18:43
:rolleyes: Does artificial intelligence include human emotions? :confused: If it does, then, and only then, would I consider destroying it to be murder. :(
Vaitupu
05-08-2005, 07:13
Concerning the "up the evolutionary ladder" comment. It would not be a step up, nor a step down, but a step sideways. It would be a divergeance. The machine would have extremely high capability to adapt and develop itself, much like humans have attained with medicine and technology, but perhaps even greater; however, lacking the ability to mutate, it cannot evolve per se.
Well, it kinda depends on perspective. While, no, it cannot truly mutate as humans do, an intelligent machine could determine that, say, a third arm would give it an advantage, and simply add it on to every future generation. True, this isn't really evolution, and is kinda along the lines of the adaptation you discuss, but more so. Humans cannot change our form to better fit the environment; a machine easily could, particularly if intelligent and self-aware.

A good question is this: an intelligent machine would potentially have a limitless lifespan. If it is self-aware, then a main interest would be self-preservation. However, as new software or parts became available, these could potentially require a new machine altogether. Would machines be willing to destroy themselves to improve the overall "race"? Or would advancement eventually stagnate? Would getting new software be like creating a new person in an old body, or would it just be like getting a new idea?

I would say that before we create AI, we need to figure a lot more out. We also can't enslave them (let's face it, the science fiction of the past has come eerily true; do we want movies like Bicentennial Man; I, Robot; and The Matrix coming true?). I think before we can create such a powerful thing, we really need to consider many, many more options. Think Jurassic Park: "Your scientists were so preoccupied with the fact that they could, that they didn't stop to think if they should."