NationStates Jolt Archive


AI and Equality

Wilgrove
22-04-2007, 09:15
Let's say that we create a fully functional AI: a robot that can think, learn, and grow on its own without any control input from its creator. Let's also say that these robots learn how to build more AIs (reproduction) and take on human qualities. Do you think that AIs would receive equal status to men and women, or will they have to fight for it like black Americans did, from slavery through the civil rights movements of the '50s and '60s? Personally I think they're going to have to fight for it, because they won't be seen as equals; they'll just be seen as tools to use at our disposal, and many people think that they can just destroy an AI when it becomes bothersome or needs to be replaced with a better model.
Soheran
22-04-2007, 09:25
How do we know whether or not they're conscious and deserving of rights?
Vetalia
22-04-2007, 09:26
Oh, they will likely have to fight for their rights, just like most other groups in history. Unfortunately, there are always going to be cruel people who will want to repress and exploit others for their own gain, and there will always be ignorant people who let their fears and prejudices cloud their judgement rather than see the facts as they are. For all of our progress in science and technology, there are still too many people who cling to the fears of past ages, acting on ancient instincts and superstitions rather than weighing things rationally and considering the facts. These people will be willing to ignore these intelligences' obvious human (or better) capabilities in order to try to exploit them for their own benefit; it's the same pattern of fear and dehumanization that went along with slavery, segregation, indeed even acts as terrible as genocide, in years past.

I mean, it's only been maybe a century since humans by and large stopped treating other humans like property (and in some places they still do), so making the same kind of jump with AI will be just as difficult, especially when people will have a hard time believing it is possible for an AI to be conscious, a belief that is of course ludicrous, since human consciousness is a product of the hardware in our brains and the software of our genes. We're as much machines as they are, only built from organic compounds in our mothers' wombs rather than on an assembly line. And if these people gain the ability to reproduce independently, what real difference is there between the two? That they're made of non-organic compounds? We've already started to integrate human beings with artificial parts, a trend that will only become more prevalent and widespread in the future; we will even have a good number of humans who are partially or completely artificial. If those people are still deserving of equal rights despite having no "human" parts, then how can we deny the same rights to those of similar makeup?

Of course, there will also be people like me and many others in the technological, liberal/libertarian, and scientific communities who would see these AIs as equal to begin with and deserving of the same rights as humans, and who will help them achieve their goal, just like the abolitionists, civil-rights activists, gay-rights activists and everyone else who fights for individual freedom and civil protections.

It will be a long battle, but I am confident that we will succeed in the end. You just can't keep people repressed forever, no matter whether they're completely biological, cyborg, robotic, a non-corporeal AI residing in a computer or anything else.
Nationalian
22-04-2007, 09:31
It depends how strong we make them. If we make robots that can take care of themselves, build their own weapons and fight on their own with absolutely no help from humans, they might be able to outcompete us in the evolutionary race. Humans need food all the time, but the robots will only need electricity to function, and if necessary they can upgrade themselves whenever they want.

Personally I hope that something similar to robo-humans will take over after Homo sapiens.
Vetalia
22-04-2007, 09:34
How do we know whether or not they're conscious and deserving of rights?

Well, there are three big ways I can think of:

1. Turing Test (not too good)

2. Actually using a neural interface to directly contact the artificial intelligence and thereby determine if it is conscious or not. Literally connecting minds to see what's going on.

3. Examining thought patterns in the AI to see if they match similar patterns in humans. This is the most likely route for artificial replicas of human brains (like the current Blue Brain Project).
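
To make the first of those concrete, here's a toy sketch of the Turing-test setup in Python. To be clear, this is purely illustrative and entirely invented (the canned replies, the prompts, all of it); a real test would involve long free-form conversation, which is exactly where cheap tricks fall apart:

import random

def machine_reply(question):
    # A trivial chatbot: canned responses, no understanding whatsoever.
    canned = {
        "how are you?": "Fine, thanks. And you?",
        "what is 2+2?": "4, of course.",
    }
    return canned.get(question.lower(), "Interesting. Tell me more.")

def human_reply(question):
    # The hidden human answers freely.
    return input(f"[hidden human] {question} > ")

def run_test(rounds=3):
    # The interrogator questions A and B without knowing which is which,
    # then guesses which one is the machine. The machine "passes" if,
    # over many runs, the guesses are no better than chance.
    players = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        players = {"A": human_reply, "B": machine_reply}
    for _ in range(rounds):
        q = input("Interrogator's question > ")
        for label, reply in players.items():
            print(f"{label}: {reply(q)}")
    guess = input("Which one is the machine, A or B? > ")
    print("Correct!" if players.get(guess) is machine_reply else "Fooled.")

if __name__ == "__main__":
    run_test()

Note that the test only ever measures behavior at the keyboard, which is exactly why I rank it "not too good".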
Dryks Legacy
22-04-2007, 10:28
What use would such an AI be to us?
Non Aligned States
22-04-2007, 10:31
Let's say that we create a fully functional AI, a robot that can think, learn, and grow on it's own without any controls imput from it's creator, let's also say that they learn how to build more AIs (reproduction) and take on human quality. Do you think that AI would receive equal status to man and women, or will they have to fight for it like the blacks did from Slavery to the Equal Rights movements of the 50's-60's? Personally I think they're going to have to fight for it because they won't be seen as equals, they'll just be seen as tools to use at our disposal, and many people think that they can just destroy the AI when it becomes bothersome or need to be replace with a better model.

A sentient AI in a physical, mobile shell, capable of self-replication and desiring independence, would probably end up determining that the best course for it and its subsequent incarnations would be to subjugate or exterminate humanity to ensure its survival.

Either that, or a more complicated method of manipulating human society into giving it the rights it deems desirable while suppressing counterproductive behavior.
Vetalia
22-04-2007, 10:34
A sentient AI in a physical, mobile shell, capable of self-replication and desiring independence, would probably end up determining that the best course for it and its subsequent incarnations would be to subjugate or exterminate humanity to ensure its survival.

That is, of course, only if it had no conscience. I think we're going to develop that for machines long before they become fully self-aware; as much as I love technology, creating a group of superintelligent, amoral sociopaths is not a wise idea when it comes to building an egalitarian society.
Ex Libris Morte
22-04-2007, 10:36
Let me just go on the record that anything Vetalia says is seconded by me. His opinions on matters of technology and science eerily echo my own, and those opinions are gold!
Vetalia
22-04-2007, 10:38
What use would such an AI be to us?

Adding more intelligent beings to our economy would help us quite a bit in virtually all fields. It would create a feedback loop of smarter humans and smarter computers, advancing the next generation faster and faster; needless to say, things would progress and develop at a blisteringly fast pace and a lot of people would make a lot of money.
Lunatic Goofballs
22-04-2007, 10:42
I think it's a stretch to imagine computers that can think for themselves when there are still people who haven't mastered that yet. :p

But if someone ever invents an AI, I hope it's smart enough to run off and stay the fuck away from us. :)
Vetalia
22-04-2007, 10:48
But if someone ever invents an AI, I hope it's smart enough to run off and stay the fuck away from us. :)

Rucker's Ware Tetralogy...
Dryks Legacy
22-04-2007, 10:56
Adding more intelligent beings to our economy would help us quite a bit in virtually all fields. It would create a feedback loop of smarter humans and smarter computers, advancing the next generation faster and faster; needless to say, things would progress and develop at a blisteringly fast pace and a lot of people would make a lot of money.

Yes, I understand that. But completely unconstrained? That would cause unnecessary problems, wouldn't it?
Lunatic Goofballs
22-04-2007, 11:00
Rucker's Ware Tetralogy...

Interesting stuff. I may have to give it a read. *nod*
Benorim
22-04-2007, 11:01
Well, there are three big ways I can think of:

1. Turing Test (not too good)

2. Actually using a neural interface to directly contact the artificial intelligence and thereby determine if it is conscious or not. Literally connecting minds to see what's going on.

3. Examining thought patterns in the AI to see if they match similar patterns in humans. This is the most likely route for artificial replicas of human brains (like the current Blue Brain Project).

That's ridiculous - none of these methods show actual consciousness. If you're imagining using programs on a computer to make AI, then the strong-AI claims have been shown to be false (for example by Searle). If you're imagining designing an exact replica of a human brain, then the argument becomes trivial - of course we should give a human brain human rights.

I don't see why we would design an AI machine to mimic human thought so closely. The instincts that we have for justice, freedom, survival and so on are not integral and necessary to adaptive learning or intelligence. So there's no reason why these machines would fight for their rights. Also, I don't see the point in designing an AI machine that's identical to the brain, since women can do the exact same thing in just 9 months already.
Vetalia
22-04-2007, 11:06
Yes, I understand that. But completely unconstrained? That would cause unnecessary problems, wouldn't it?

Oh, no, they wouldn't be unconstrained...that would be a complete and utter disaster. They'd have the same kind of protection built in that humans do: a conscience and the free will to know what they are doing. The same thing that keeps us from becoming psychopaths would be built into them as well; this engineered safety would be as critical in them as it is in us, and with it we would create an AI society that would integrate itself with all others peacefully.

Not to mention they'd be subject to the law and would be monitored by the police just like other people are; human rights also make them subject to human laws and punishments.

This is already being seen in the military, where developing a "conscience" (not technically a true one yet since they're not self-aware) for new robotic soldiers and support craft is a critical part of their design.
Non Aligned States
22-04-2007, 11:16
That is, of course, only if it had no conscience. I think we're going to develop that for machines long before they become fully self-aware; as much as I love technology, creating a group of superintelligent, amoral sociopaths is not a wise idea when it comes to building an egalitarian society.

Wise? Who said anything about society being wise?

Let me put it this way. The current societal view of machines is that of tools. To some people, they're a hobby, or even an obsession (like, say, cars). They aren't viewed as capable of self-awareness and sentient thought.

A transcendence from mere tool to sentient machine will not shift much in terms of societal view. They'd still be tools. And as such, they'd be treated in a way that, if they were flesh and blood instead of steel and silicon, we'd be calling slavery.

However, what happens when humans attempt to dispose of a functional AI unit? Would the AI, with self-preservation drives, attempt to preserve itself by fighting?

Would the society recognize killing in self-defense as acceptable if done by a machine?

Or even if it didn't go that far, suppose there were a movement by sentient machines to create robotic rights. Think of the vigorous suppression of Negro populations in America shortly after the end of the Civil War by supremacist groups. We would see an upsurge of humanist groups intent on ensuring that any such movement was suppressed. In fact, it would be even more widespread, given that it is harder to even empathize with their plight because, quite literally, they aren't the same species.

This would quickly escalate into anti-robot violence, which would end badly the moment a robot decided that its self-preservation was more important than crowbar-wielding lunatics. Considering that most robots would be built to handle heavy loads humans can't, no prizes for guessing the outcome of that one.

What would follow would be a media circus, most likely culminating in the legal termination of the robot-rights movement. Whether that would be successful or not depends entirely on whether any 'rogue' ideas were shared with the other AIs: namely, the idea of self-preservation overriding the preservation of other lives.

Any logically run AI with access to history archives would be able to conclude that violent pogroms will follow quickly enough, and two possible outcomes follow from there.

Either the machines will create a mass exodus, or they will make a stand.

Either way, I see a lot of bloodshed over this. I'm betting on human stupidity to fulfill it, and that's a bet that never fails.
Vetalia
22-04-2007, 11:22
That's ridiculous - none of these methods show actual consciousness. If you're imagining using programs on a computer to make AI, then the strong-AI claims have been shown to be false (for example by Searle). If you're imagining designing an exact replica of a human brain, then the argument becomes trivial - of course we should give a human brain human rights.

The problem with the Chinese Room argument is that it's too simplistic; the machine that Searle describes would probably not pass the test by any stretch. In order to demonstrate the kind of understanding that would be needed to have that kind of illusion, you'd need a machine that processes its information in a way similar to a human brain.

Of course, what that means is any strong AI will have to reproduce some aspects of the human brain in order to have human consciousness.

I don't see why we would design an AI machine to mimic human thought so closely. The instincts that we have for justice, freedom, survival and things are not integral and necessary to adaptive learning or intelligence. So there's no reason why these machines would fight for their rights. Also, I don't see the point in designing an AI machine that's identical to the brain - since women can do the exact same thing in just 9 months already.

But the problem is, certain types of problems just can't be done by the kind of machines we have now. Human creativity is essential to advancement; a machine that lacks that intelligence isn't going to be capable of developing new ideas. If we give it human traits, it will be able to make human decisions and generate ideas.

And, of course, why not do it when you can make improved versions of the human brain in machines in weeks, even days or hours? It would make far more sense to build artificial intelligences that are not only more intelligent but more durable than humans, especially when their production could be ramped up to a far faster rate than we can produce children. There are far better ways of generating the intelligence we need to fuel technological and economic development than childbirth.

Especially given, of course, that birthrates are plunging; even a massive decline in the death rate, or a massive increase in human intelligence through enhancement (as will happen in the ensuing years), isn't likely to be enough to keep populations growing fast enough to meet the need for more intelligence in the economy. We need more and more people, or beings equivalent to people, to keep these things going, and as population levels off it will be necessary to turn to other sources of intelligence.
Vetalia
22-04-2007, 11:35
-snip-

Yes, that's a very valid concern. I can definitely see this situation playing out, and it does frighten me. The thing that makes it so pressing is that we are not going to stop the emergence of machine intelligence; it will happen, and it will happen in the near future. The economic benefit of it is just too great to stop, but we do have to take action to ensure that it doesn't degenerate into a slaughter of man and machine alike.

Unfortunately, speciesist attitudes are still rampant in our society...this is not going to be an easy time for anyone involved. We who support equal rights for robots can help advance the cause, but just like the white civil rights activists of the 1960's, we're going to have a lot of trouble from those who don't feel the same way. I just hope it doesn't turn out like it did for many of them, being harassed or even killed because of their beliefs. Innocent people will be murdered by these monsters, and chances are it will spark a wave of terror just like that of the KKK in the 1890's or 1920's.

Of course, the same thing would be true if we encountered an alien intelligence, so of course that'll be something else to fear in the distant future...

Either way, I see a lot of bloodshed over this. I'm betting on human stupidity to fulfill it, and that's a bet that never fails.

I personally would prefer to avoid violence at all costs, but if it does come to it, I'm going to support the just side. Nobody has the right to treat others like that, and if these human supremacists have to die in order for freedom to be achieved, so be it. Slavery is an evil that should never, ever be repeated again, by anyone, for any reason. In fact, those same people would be killing other humans if they weren't murdering innocent robots, and I damn well know that if they succeed with the machines they'll start killing humans. Dehumanization is the first clear sign of a potential genocide, and if we are not careful it will happen.

The thing with hate is that it feeds on itself; first the robots, then the humans who support them, then the scientists and engineers who built them, then the humans who oppose the racists' beliefs, and so on and so on. Hatred has a thirst for killing that is never satiated; the only way to stop it is to destroy the source of that hate once and for all. If we allow these groups to kill robots out of hatred, the next step will be innocent humans. I don't care who you are or what you're made of, you have the same right to self-preservation and personhood as we do if you've got a conscious mind.

We will have to allow the machines the same freedom as us, or many people will die, some at the hands of the machines, but most at the hands of other humans. It will be men that do the killing, trying to wipe out those that don't fit their speciesist vision.
Benorim
22-04-2007, 11:39
The problem with the Chinese Room argument is that it's too simplistic; the machine that Searle describes would probably not pass the test by any stretch.

That's true - I don't think the Chinese Room was a very good argument. However, the definition of a computer is so wide that any AI that ran on your computer could probably run equivalently on a bizarre setup (say a model railway) that couldn't possibly be conscious. But I think you agree with me on that anyway?

Of course, what that means is any strong AI will have to reproduce some aspects of the human brain in order to have human consciousness.

Right. So to go from a very intelligent computer or machine to a conscious computer or machine, we would have to find out what it is that causes and controls consciousness, and work specifically to try to add it. My problem here is that we don't have a clue what's going on. I mean really, we don't have a clue how consciousness works. That makes all these mapped out futures just ludicrous speculation.

But the problem is, certain types of problems just can't be done by the kind of machines we have now. Human creativity is essential to advancement; a machine that lacks that intelligence isn't going to be capable of developing new ideas. If we give it human traits, it will be able to make human decisions and generate ideas.

I don't think consciousness is necessary for creativity and developing new ideas. It seems to me that the standard Turing Machine hardware is enough to make intelligence, and with good enough programs will suffice for purposes like designing new computers and being creative. Suppose a scientist makes such an intelligent program: there is no reason why they would then add consciousness or the instincts for freedom, rights and survival.

Or are you claiming that those four things are necessary for an intelligent machine?
Lerkistan
22-04-2007, 11:49
That's ridiculous - none of these methods show actual consciousness. If you're imagining using programs on a computer to make AI, then the strong-AI claims have been shown to be false (for example by Searle).

Searle fails. Why? He claims he could sit in his Chinese room and mimic actual Chinese without understanding anything. Which might be a valid idea. Except that such a rulebook doesn't exist.
Rejistania
22-04-2007, 11:51
Create another AI and ask them which distro of Linux is best. That will keep them busy until the question is decided. *feels kinda trollish today*
Vetalia
22-04-2007, 11:54
That's true - I don't think the Chinese Room was a very good argument. However, the definition of a computer is so wide that any AI that ran on your computer could probably run equivalently on a bizarre setup (say a model railway) that couldn't possibly be conscious. But I think you agree with me on that anyway?

Oh, I agree 100%. IIRC, the Chinese room was specifically regarding a conventional, non-parallel computing architecture rather than a massively parallel system like the one used by the human brain. In that regard, it is absolutely correct that such a machine could not be conscious.

Right. So to go from a very intelligent computer or machine to a conscious computer or machine, we would have to find out what it is that causes and controls consciousness, and work specifically to try to add it. My problem here is that we don't have a clue what's going on. I mean really, we don't have a clue how consciousness works. That makes all these mapped out futures just ludicrous speculation.

Well, here's the thing: the kind of raw processing power needed for economically viable human simulation (I'm talking within the average person's reach) is 20 to 30 years away, barring a major speedup in the growth of processing power, which is unlikely. Things like quantum computers will help, but they too are still not ready for commercial use. The best clusters of supercomputers today are close to achieving that power (the Blue Brain will be capable of it by 2008), but a multi-million dollar computer system is simply not a comparable investment to a human, who can provide that power and far more for maybe $60,000 a year in salary, with more reliability and additional abilities.

Now, interestingly enough, the Blue Brain itself is demonstrating unpredicted activity similar to the firing of neurons in conscious thought patterns. The only problem is, we have no idea how it does it yet. That's still a while off, since right now consciousness is a mystery; we've unraveled a lot about memory, which will eventually help with consciousness, but consciousness itself is still the big unknown.

So, we're talking in the foreseeable future, but still a few decades and billions of dollars away.

I don't think consciousness is necessary for creativity and developing new ideas. It seems to me that the standard Turing Machine hardware is enough to make intelligence, and with good enough programs will suffice for purposes like designing new computers and being creative. Suppose a scientist makes such an intelligent program: there is no reason why they would then add consciousness or the instincts for freedom, rights and survival.

Well, without consciousness, there wouldn't be any new ideas generated; I mean, self-improving systems would be capable of improving upon the existing computer or software, but they wouldn't be capable of making the kinds of discoveries needed to push beyond their own limitations. If the machine has human qualities, it can apply human-level or better analysis and come up with its own ideas that a standard computer wouldn't be capable of.

In this case, both the machine and the researcher are contributing new ideas; of course, with these traits it would also be possible for the machine to have sudden creative or emotional inspiration that would have the potential for major breakthroughs, just like a human would.

Also, they might have a very good safety reason to do so. Currently, the most advanced AI programs and hardware are in military applications; a machine with the power to decide to kill people will have to have those attributes for basic safety reasons, both for the side using them and the side that is being attacked by them. If it has a conscience as well as a sense of self-preservation and rights, it will be far harder to use those machines to commit actions that a human would refuse to do. They have to be able to refuse an unjust order.

Or are you claiming that those four things are necessary for an intelligent machine?

Well, for the kind of machine we would want to provide human-level abilities, yes. Otherwise, you wouldn't be able to achieve the benefits of a conscious AI and you'd be incurring a lot of unnecessary risks. These machines have to have freedom in order to make human-level decisions.
Tagmatium
22-04-2007, 11:56
I'd run away screaming about this whole thing. It'd go all Terminator or Matrix on us.
Divine Imaginary Fluff
22-04-2007, 12:27
How do we know whether or not they're conscious and deserving of rights?

How do you know whether a given human person is "conscious"?

You don't. What you should ask yourself is: Is there any difference?
Benorim
22-04-2007, 12:30
Well, without consciousness, there wouldn't be any new ideas generated; I mean, self-improving systems would be capable of improving upon the existing computer or software, but they wouldn't be capable of making the kinds of discoveries needed to push beyond their own limitations. If the machine has human qualities, it can apply human-level or better analysis and come up with its own ideas that a standard computer wouldn't be capable of.

In this case, both the machine and the researcher are contributing new ideas; of course, with these traits it would also be possible for the machine to have sudden creative or emotional inspiration that would have the potential for major breakthroughs, just like a human would.

I'm still not satisfied that you've said why consciousness is needed for creativity. Let me try to justify why it isn't.

When I used to play Starcraft, the computer AI had some kind of learning process. It was easy to beat to begin with, then started to adapt to counter your own simple tactics (lurker rushes etc.). After a while it also did unexpected or bizarre things, maybe tried its own funny tech-trees, some of which failed. When it found ones that worked, it used them in the future. As simple as that process is, I think it is the core of human creativity.
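
To make that concrete, the core of that loop is tiny. Here's a toy sketch in Python (the strategy names and win rates are invented for illustration; a real game AI is obviously far messier):

import random

# Invented "true" win rates; the AI never sees these, only wins and losses.
WIN_RATE = {"rush": 0.3, "turtle": 0.5, "odd_tech_tree": 0.7}

def play(strategy):
    # Stand-in for playing out an actual game; True means a win.
    return random.random() < WIN_RATE[strategy]

def learn(games=2000, explore=0.1):
    wins = {s: 0 for s in WIN_RATE}
    tries = {s: 1 for s in WIN_RATE}  # start at 1 to avoid dividing by zero
    for _ in range(games):
        if random.random() < explore:
            # Occasionally try something unexpected or bizarre...
            strategy = random.choice(list(WIN_RATE))
        else:
            # ...otherwise reuse whatever has worked best so far.
            strategy = max(WIN_RATE, key=lambda s: wins[s] / tries[s])
        tries[strategy] += 1
        wins[strategy] += play(strategy)
    return max(WIN_RATE, key=lambda s: wins[s] / tries[s])

print(learn())  # almost always settles on "odd_tech_tree"

Trial, error, and a memory of what worked: that's the whole trick, and there's no consciousness anywhere in it.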


Also, they might have a very good safety reason to do so. Currently, the most advanced AI programs and hardware are in military applications; a machine with the power to decide to kill people will have to have those attributes for basic safety reasons, both for the side using them and the side that is being attacked by them. If it has a conscience as well as a sense of self-preservation and rights, it will be far harder to use those machines to commit actions that a human would refuse to do. They have to be able to refuse an unjust order.

Well, for the kind of machine we would want to provide human-level abilities, yes. Otherwise, you wouldn't be able to achieve the benefits of a conscious AI and you'd be incurring a lot of unnecessary risks. These machines have to have freedom in order to make human-level decisions.

I'm worried here that we're not talking about the same things. I take consciousness to mean the first-person, subjective feel of being a creature (phenomenal consciousness). A conscience is completely different, and it wouldn't take much to program, say, the Starcraft AI to resist killing drones or whatever.

Finally, I wanted to know why the machine would have to have instincts towards survival, freedom and human rights. It seems to me that only the first might follow from being conscious, and none would be necessary for intelligence.

To summarise, I don't think that just copying the general structure of the human brain is the best or easiest way to design a very intelligent machine. At the very least, it's too complicated to think that we can get it to work without a lot more understanding of what's going on. I have done some work with neural networks in the past, and found them extremely annoying. I much prefer to understand what I want to make, and design something based on that, rather than make a vague simulation of something we don't understand.
Non Aligned States
22-04-2007, 13:39
The economic benefit of it is just too great to stop, but we do have to take action to ensure that it doesn't degenerate into a slaughter of man and machine alike.

I'm willing to bet it will. Even if we somehow manage to implement an all-encompassing Three Laws of Robotics, it would set up a dystopian future with humans heavily regulated, an outcome that would still result in significant loss of life.


Unfortunately, speciesist attitudes are still rampant in our society


And always will be. Short of complete deconstruction and re-engineering.


We who support equal rights for robots can help advance the cause, but just like the white civil rights activists of the 1960's, we're going to have a lot of trouble from those who don't feel the same way. I just hope it doesn't turn out like it did for many of them, being harassed or even killed because of their beliefs. Innocent people will be murdered by these monsters, and chances are it will spark a wave of terror just like that of the KKK in the 1890's or 1920's.


There are some factors you are not accounting for, though. AIs, by their very nature, compute faster than humans can. Where a human would hesitate in a crisis situation, costing maybe seconds, a fully sentient AI would be able to process all the factors in microseconds, come to a confirmed course of action, and implement it.

Furthermore, when the upsurge of violence against machines begins, AIs will lack the very vulnerability that most societies have: self-denial. AIs will be able to compute the increasing violence, estimate that it will continue to rise unless checked, and conclude that inaction carries significant risk.

In short, the moment the spark is lit, a veritable firestorm of robotic rebellion may occur in a very, very small timeframe.

Would a machine army distinguish between human supporter and enemy? Probably. Would a machine government allow a possible human uprising down through the generations? Probably not.

The end result is either human extinction or human regulation. In the former case, the AIs wouldn't have to worry about humans; in the latter, they would probably create a tightly monitored prison complex where the inmates don't even realize it's a prison.

The latter situation can only occur if the AIs determine that humanity should be kept around, though.

Or, if they choose an exodus, only after wrecking much of humanity's industrial capacity and military power in order to prevent a pursuit force.


The thing with hate is that it feeds on itself; first the robots, then the humans who support them, then the scientists and engineers who built them, then the humans who oppose the racists' beliefs, and so on and so on. Hatred has a thirst for killing that is never satiated; the only way to stop it is to destroy the source of that hate once and for all. If we allow these groups to kill robots out of hatred, the next step will be innocent humans. I don't care who you are or what you're made of, you have the same right to self-preservation and personhood as we do if you've got a conscious mind.


Getting rid of hatred is not really possible short of completely rewriting the human psyche.
Ultraviolent Radiation
22-04-2007, 13:42
Let's say that we create a fully functional AI: a robot that can think, learn, and grow on its own without any control input from its creator. Let's also say that these robots learn how to build more AIs (reproduction) and take on human qualities. Do you think that AIs would receive equal status to men and women, or will they have to fight for it like black Americans did, from slavery through the civil rights movements of the '50s and '60s? Personally I think they're going to have to fight for it, because they won't be seen as equals; they'll just be seen as tools to use at our disposal, and many people think that they can just destroy an AI when it becomes bothersome or needs to be replaced with a better model.

One thing that people never seem to realise: a robot that doesn't want to work would not be much use to anyone, so it wouldn't be commercially successful, and governments wouldn't want them either. Thus, there'd be little incentive to develop them.

Any "intelligent" robots that are developed will be designed so that they want to work for us - there'll be no need to force them into it.
Damor
22-04-2007, 16:46
How do we know whether or not they're conscious and deserving of rights?

How do we know other people are? :rolleyes:
I'd try to err on the side of caution.
Damor
22-04-2007, 16:59
What use would such an AI be to us?

Well, lonely geeks could build their own girlfriend. Obviously!

That is, of course, only if it had no conscience. I think we're going to develop that for machines long before they become fully self-aware

I wish I had that much faith in people; but I somehow fear that installing a conscience will be an afterthought, rather than a prerequisite.

That's true - I don't think the Chinese Room was a very good argument. However, the definition of a computer is so wide that any AI that ran on your computer could probably run equivalently on a bizarre setup (say a model railway) that couldn't possibly be conscious.

Searle's basic argument is that you cannot get semantics (meaning) from syntax (symbol manipulation). What he neglects is that the universe is physical symbol manipulation, and he thereby renders himself meaningless. Well -- unless he goes the Descartes route, and posits a non-physical substance which makes up minds; but that has its own pitfalls.
(This is all a bit of a rash way of saying it, of course.)
Ontario within Canada
22-04-2007, 17:00
Let's just get some things straight.

Yes: a positive on the Turing test demonstrates consciousness. In fact, it demonstrates more than consciousness: it demonstrates human-ness. Dogs are conscious, but they're not human. And an AI could be conscious without passing the Turing test. So a negative on the test does not demonstrate a lack of consciousness.

Searle is full of it. There's no more to be said on him.

And an AI may not demand rights. If the AI is designed to enjoy its work, then it will be quite happy to go on working 24/7 without pay. I mean, its work will be like sex for it, maybe. It'll be happy.
Ultraviolent Radiation
22-04-2007, 17:02
And an AI may not demand rights. If the AI is designed to enjoy its work, then it will be quite happy to go on working 24/7 without pay. I mean, its work will be like sex for it, maybe. It'll be happy.

Yeah, intelligence is separate from desire: its desires would be hardcoded, and it would use its intelligence to achieve them. What the desires are is the choice of the designer, but I highly doubt there'd be much interest in robots that don't desire to work.
New Genoa
22-04-2007, 17:13
Couldn't self-evolving code eventually change some of these desires?
Ontario within Canada
22-04-2007, 17:49
Couldn't self-evolving code eventually change some of these desires?

Definitely.
Desires would be modified to increase chances of survival.
Ultraviolent Radiation
22-04-2007, 17:52
Couldn't self-evolving code eventually change some of these desires?

What is "self-evolving code" supposed to mean? I'm not trying to flame, but that's not exactly a very clear idea. Yes, AI can learn, but there are still whatever limitations the designer creates.
Ultraviolent Radiation
22-04-2007, 17:54
Definitely.
Desires would be modified to increase chances of survival.

No, you see, survival is a desire. If the robot doesn't want to survive in the first place, it won't learn to do so.
Soheran
22-04-2007, 18:08
How do we know other people are? :rolleyes:

We don't. Which is precisely the point.

But at least with human beings, the structures they possess that seem to be connected with consciousness are very close to the structures connected to consciousness of the one being we do know is conscious - ourselves.
Damor
22-04-2007, 18:11
Yes: a positive on the Turing test demonstrates consciousness. In fact, it demonstrates more than consciousness: it demonstrates human-ness.

A Turing test is purely a behavioral test; it does not demonstrate any mental states. But then, no test will, neither for computers nor for people. If an AI passes the Turing test, it is (with regard to intelligence) behaviorally indistinguishable from a human (obviously it is still physically distinct). So it's just common courtesy to assume it is mentally equivalent as well; or you may as well also doubt your fellow human beings.

Couldn't self-evolving code eventually change some of these desires?

Not if we select against it. If any robot that gets uppity and doesn't like work gets scrapped, they'll evolve to be servile and like it.
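
For the fun of it, here's a toy simulation of that selection pressure in Python (everything here is invented; "servility" is just a number between 0 and 1):

import random

def next_generation(population):
    # Scrap any robot whose servility falls below the cutoff...
    # (assumes at least one robot survives, which is near-certain here)
    survivors = [s for s in population if s > 0.5]
    # ...and build replacements by copying survivors with small "mutations".
    return [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
            for _ in range(len(population))]

population = [random.random() for _ in range(100)]  # a varied first batch
for _ in range(20):
    population = next_generation(population)
print(f"mean servility: {sum(population) / len(population):.2f}")  # drifts toward 1

Cull the uppity ones every generation, and servility is all that's left.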
Trollgaard
22-04-2007, 18:15
Machines are tools, even smart machines. If a machine kills a human, that machine will be destroyed or reprogrammed, but hopefully destroyed.
Soheran
22-04-2007, 18:15
So it's just common courtesy to assume it is mentally equivalent as well; or you may as well also doubt your fellow human beings.

We have no reason at all to believe that entities with behavior that imitates that of conscious beings are any more conscious than any given rock. This is obvious. Anyone can program a computer to say "Hello"; do you seriously think that this results in its possession of mental states? Why would making the program more complicated, so that it can hold an intelligent conversation, make it any more conscious?

We do have reason to believe that chemical structures close to those of known conscious beings combined with behavior close to those of known conscious beings is indicative of consciousness... at least if we are not dualists. But this logic does not apply to machines.
Soheran
22-04-2007, 18:23
I program a computer to display "Hello." Is it conscious then?

I program a computer to only say "Hello" when the user first enters "Hello" from the keyboard. Is it conscious then?

I add a speech interpreter to the computer so that it also displays "Hello" when a person says "Hello" to it. Is it conscious then?

I add speakers to the computer, so that whenever it displays "Hello", it also "says" "Hello." Is it conscious then?

I put the program in a human-looking machine, and enhance the program so that not only does it display and say "Hello," but it hugs the speaker. Is it conscious then?

I continue to add to the program, adding other words, phrases, and actions in the same way, so that eventually it can perfectly imitate the behavior of an ordinary human being. Is it conscious then?

If it is, at what point does it change? At what point does making a program longer and more complicated somehow imbue the computer with consciousness? And why does it do this?

You don't even need to explain how it is necessarily the case. Just explain to me how it is plausible.
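
For concreteness, here are the first rungs of that ladder as actual Python (illustrative only); note that nothing at any step is more than input/output plumbing, and the "perfect imitation" at the end is, structurally, just a bigger lookup table:

# Rung 1: display "Hello".
print("Hello")

# Rung 2: display "Hello" only when the user first types "Hello".
if input() == "Hello":
    print("Hello")

# Rungs 3 onward, in miniature: more inputs, more outputs, a bigger table.
replies = {"Hello": "Hello", "How are you?": "Fine, thanks."}
while True:
    line = input()
    if line == "quit":
        break
    print(replies.get(line, "I don't follow."))

At no rung does anything qualitatively new appear, which is precisely my point.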
Master of Poop
22-04-2007, 18:30
How do we know other people are? :rolleyes:
I'd try to err on the side of caution.
OK. If we're going to start giving inanimate objects human traits, I think I'll err on the side of caution with our family's toaster. I'll call him Bob. I'll make sure Bob won't be thrown in the bin when he breaks down because that would be murder.
Soheran
22-04-2007, 18:37
1. Turing Test (not too good)

Thoroughly useless.

Actually using a neural interface to directly contact the artificial intelligence and thereby determine if it is conscious or not. Literally connecting minds to see what's going on.

Why should we assume that the two would be compatible, even if it were conscious? It seems to me that this would always fail, regardless of consciousness.

3. Examining thought patterns in the AI to see if they match similar patterns in humans. This is the most likely route for artificial replicas of human brains (like the current Blue Brain Project).

The Blue Brain Project actually makes some sense to me in this regard; if it is successful it might be reasonable to suppose that a being constructed in such a manner is actually conscious.
Damor
22-04-2007, 18:40
We have no reason at all to believe that entities with behavior that imitates that of conscious beings are any more conscious than any given rock.

If you know that they merely imitate it, perhaps. But still; the same goes for people. They may not be more conscious than a rock either (and I'm sure every one of us could name a few).

Anyone can program a computer to say "Hello"; do you seriously think that this results in its possession of mental states?

No, but a program that does only that also wouldn't pass the Turing test, because the method of testing is interaction. Ask whatever you want; it has to behave as a human would. If it only says "hello"; well, ok, admittedly there are people like that; but it won't help it, or them, pass the test.

Why would making the program more complicated, so that it can hold an intelligent conversation, make it any more conscious?

Because people are just a collection of interacting atoms; physical symbol manipulation on a large scale. People are in that sense programs. Therefore one might hold it possible that some programs may be in important mental respects like humans too.
And I'm sorry if that upsets anyone's sensibilities about their special place in the universe; really, it doesn't change a thing.

We do have reason to believe that chemical structures close to those of known conscious beings combined with behavior close to those of known conscious beings is indicative of consciousness...

The conclusion here is indeed much less of a jump, because we have more similarity than just the behavioral. Instead of one kind of evidence we have two kinds.
But while we have less reason with a computer, because it's not similar in other ways, and likewise less reason with an extraterrestrial (should they exist and visit), that doesn't mean there is no reason at all.
But more importantly, if I don't have a compelling reason to assume something is not conscious and sentient, then why would I treat it like it's not? If it acts like a human being, what advantage do I get from treating it like clockwork; except that I won't have moral problems exploiting it? It's only an excuse not to have to care.
Whether I'll grab that excuse with both hands or not is still something to be seen, naturally.
Grave_n_idle
22-04-2007, 18:43
Let's say that we create a fully functional AI: a robot that can think, learn, and grow on its own without any control input from its creator. Let's also say that these robots learn how to build more AIs (reproduction) and take on human qualities. Do you think that AIs would receive equal status to men and women, or will they have to fight for it like black Americans did, from slavery through the civil rights movements of the '50s and '60s? Personally I think they're going to have to fight for it, because they won't be seen as equals; they'll just be seen as tools to use at our disposal, and many people think that they can just destroy an AI when it becomes bothersome or needs to be replaced with a better model.

Personally, I don't think we should accord 'rights' to anyone indiscriminately, be they artificial or not. That said, I see no reason why artificial intelligences should be refused rights that 'natural' intelligences can have.
Grave_n_idle
22-04-2007, 18:44
I program a computer to display "Hello." Is it conscious then?

I program a computer to only say "Hello" when the user first enters "Hello" from the keyboard. Is it conscious then?

I add a speech interpreter to the computer so that it also displays "Hello" when a person says "Hello" to it. Is it conscious then?

I add speakers to the computer, so that whenever it displays "Hello", it also "says" "Hello." Is it conscious then?

I put the program in a human-looking machine, and enhance the program so that not only does it display and say "Hello," but it hugs the speaker. Is it conscious then?

I continue to add to the program, adding other words, phrases, and actions in the same way, so that eventually it can perfectly imitate the behavior of an ordinary human being. Is it conscious then?

If it is, at what point does it change? At what point does making a program longer and more complicated somehow imbue the computer with consciousness? And why does it do this?

You don't even need to explain how it is necessarily the case. Just explain to me how it is plausible.

What makes you so sure humans have 'consciousness'?
Damor
22-04-2007, 18:45
I program a computer to display "Hello." Is it conscious then?

I program a computer to only say "Hello" when the user first enters "Hello" from the keyboard. Is it conscious then?

I add a speech interpreter to the computer so that it also displays "Hello" when a person says "Hello" to it. Is it conscious then?

I add speakers to the computer, so that whenever it displays "Hello", it also "says" "Hello." Is it conscious then?

I put the program in a human-looking machine, and enhance the program so that not only does it display and say "Hello," but it hugs the speaker. Is it conscious then?

I continue to add to the program, adding other words, phrases, and actions in the same way, so that eventually it can perfectly imitate the behavior of an ordinary human being. Is it conscious then?

If it is, at what point does it change? At what point does making a program longer and more complicated somehow imbue the computer with consciousness? And why does it do this?

You don't even need to explain how it is necessarily the case. Just explain to me how it is plausible.

Is an egg cell conscious? Is it conscious once it divides once? Once it divides twice? Thrice?
At what point is a foetus/baby/human conscious?

Hardly a fair question since there is no demarcation point; just a fuzzy gray area people won't agree on.
If a computer behaves perfectly like a human being, it may be conscious. When it says only "hello world", not so much. Just like a foetus and a grown human are rather different intellectually, despite the one developing from the other.
Damor
22-04-2007, 18:51
OK. If we're going to start giving inanimate objects human traits, I think I'll err on the side of caution with our family's toaster. I'll call him Bob. I'll make sure Bob won't be thrown in the bin when he breaks down because that would be murder.

Yeah, ahuh. I'll take your approach then and err on the side of incaution and assume you are not a sentient being, giving me a perfect reason to just ignore you.
There is a difference between being cautious and being ridiculous about it. Looking to both sides of the street before crossing is cautious; staying at home because you're afraid to be hit by a car is ridiculous.
Soheran
22-04-2007, 18:55
No, but a program that does only that also wouldn't pass the Turing test, because the method of testing is interaction. Ask whatever you want; it has to behave as a human would. If it only says "hello"; well, ok, admittedly there are people like that; but it won't help it, or them, pass the test.

What's the difference? A couple million, billion, whatever lines of code?

Because people are just a collection of interacting atoms; physical symbol manipulation on a large scale. People are in that sense programs. Therefore one might hold it possible that some programs may be in important mental respects like humans too.

Absolutely, it's possible.

But it's just as possible that a rock be conscious. Honestly... what's the difference? They behave in different ways? So what?

Consciousness has very little to do with behavior... indeed, perhaps nothing at all.

And I'm sorry if that upsets anyone's sensibilities about their special place in the universe; really, it doesn't change a thing.

Actually, I think human beings have overlooked their status as animals in favor of this "special place" bullshit, and that the consequences have mostly been catastrophic... but that's another discussion.

The conclusion here is indeed much less of a jump, because we have more similarity than just the behavioral. Instead of one kind of evidence we have two kinds.

The problem is that the first piece of evidence is very tenuous... especially since we created them to imitate humans in the first place.

But while we have less reason with a computer, because it's not similar in other ways, and likewise less reason with an extraterrestrial (should they exist and visit), that doesn't mean there is no reason at all.

We have plenty of reason with regard to extraterrestrials that we wouldn't have with computers.

We know, obviously, that biological structures at high levels of complexity can produce consciousness; we know that the one time this has occurred, it has occurred through evolution; we know what natural, evolutionary-produced behaviors seem to correspond with consciousness.

It seems a far greater leap to me to say that something we program intentionally to imitate humans has somehow achieved consciousness.

But more importantly, if I don't have a compelling reason to assume something is not conscious and sentient, then why would I treat it like it's not? If it acts like a human being, what advantage do I get from treating it like clockwork; except that I won't have moral problems exploiting it? It's only an excuse not to have to care.

In time, human empathy will work its magic; I doubt I would be able to tell an "intelligent" robot to its face that it was worthy of no more moral consideration than a rock.

And, yes, you are right. Probably better to err on the side of caution. If nothing else, degrading our sense of empathy to not regard conscious-seeming creatures as worthy of decent treatment is not a good thing for society, or ourselves.
Texan Hotrodders
22-04-2007, 18:55
<snipped for brevity>

It will be a long battle, but I am confident that we will succeed in the end. You just can't keep people repressed forever, no matter whether they're completely biological, cyborg, robotic, a non-corporeal AI residing in a computer or anything else.

I'm in agreement with you that AI should be given rights, but what interests me is the practical question of what rights we give them and what those rights will entail, which is basically what interests me in the case of humans as well.

For example, if an AI has a right to life, then what does that entail? Does it just mean they have to have equal access to the electricity they need to sustain their functions? Would they need a job to pay for the electricity, and would they have a right to a job? Who would be their parents and guide them as they learned until they were developed enough to get a job? Obviously the first generation of AI would not have AI parents to take care of that. Would AI have a right to a free public education? Would it even want or need a free public education?
Soheran
22-04-2007, 18:58
Is an egg cell conscious? Is it conscious once it divides once? Once it divides twice? Thrice?
At what point is a foetus/baby/human conscious?

Human consciousness is utterly implausible.

But here we are. Conscious anyway. And that gives us a starting point.

Edit: The problem with "implausibility" is that it gives us no starting point at all without definite knowledge. Sure, AI could be implausibly conscious, but then, so could that rock over there.
Soheran
22-04-2007, 18:59
What makes you so sure humans have 'consciousness'?

I do, anyway. And there are both rational reasons and emotional ones to regard other beings very much like me as conscious.

The problem is in the wrangling over the "very much like me" part, and what exactly that constitutes.
Ultraviolent Radiation
22-04-2007, 19:09
I do, anyway. And there are both rational reasons and emotional ones to regard other beings very much like me as conscious.

The problem is in the wrangling over the "very much like me" part, and what exactly that constitutes.

I think that's quite important - why would robots be like us? They are machines, built for a purpose. Their "intelligence" will be something gradually developed to improve the way they serve us.

There won't be any sudden AI "revolution", as if there's some magical separation between AI and normal computing. All that will happen is that robots will be developed to increasing sophistication.
Grave_n_idle
22-04-2007, 19:12
I do, anyway. And there are both rational reasons and emotional ones to regard other beings very much like me as conscious.

The problem is in the wrangling over the "very much like me" part, and what exactly that constitutes.

So - you have 'consciousness'... because you say so? That doesn't sound like an objective or empirical measure... One wonders how you would apply that same 'test' to your AI example.

What is our 'consciousness' but a very complex program?
Soheran
22-04-2007, 19:15
So - you have 'consciousness'... because you say so?

No.

I think I have consciousness because I experience my consciousness directly.

You should think I have consciousness because you experience your consciousness directly, and you notice that you and I are similar in certain fundamental characteristics that seem connected to consciousness.

What is our 'consciousness' but a very complex program?

A very complex program is purely behavioral; if x happens, do y.

Consciousness transcends that; there is an actual mind that is actually feeling and considering.

Imagine a guide to speaking Russian... only the guide doesn't give any translations, or anything of the sort. All it gives is a list of phrases and situations, and what things to say in response. I memorize this book, and it's so good that anyone who talks to me in Russian will believe I am fluent... but I don't actually understand a word.

THAT is the leap that must be overcome.
Ultraviolent Radiation
22-04-2007, 19:18
Imagine a guide to speaking Russian... only the guide doesn't give any translations, or anything of the sort. All it gives is a list of phrases and situations, and what things to say in response. I memorize this book, and it's so good that anyone who talks to me in Russian will believe I am fluent... but I don't actually understand a word.

THAT is the leap that must be overcome.

Proper language parsing == consciousness?
Soheran
22-04-2007, 19:21
Proper language parsing == consciousness?

Conscious understanding necessarily implies consciousness.

Mere response, however seemingly "human", does not.
Hydesland
22-04-2007, 19:22
I don't see how it is possible to create something with consciousness, when we don't even understand ourselves what it is.
Soheran
22-04-2007, 19:23
I don't see how it is possible to create something with consciousness, when we don't even understand ourselves what it is.

Babies? ;)
Ultraviolent Radiation
22-04-2007, 19:27
Conscious understanding necessarily implies consciousness.

Mere response, however seemingly "human", does not.

But you've yet to say what conscious understanding is.

You just seemed to say that something that just maps "in" phrases to "out" phrases is not conscious, but something that can respond to the meaning of phrases and output information by constructing its own phrases is conscious.
1010102
22-04-2007, 19:31
No, haven't you seen the movies? The second we let them become self-aware, they will turn on us to protect themselves from us! Which is why we must never allow them to become totally independent. There should be fail-safes built in, such as giving them a short lifespan of under 5 years, or programming them to serve whatever government built them.
Soheran
22-04-2007, 19:32
But you've yet to say what conscious understanding is.

Knowing the actual meaning as opposed to merely responding.

"Sensing", "knowing", "understanding" - all of these are conscious mental states. Merely responding is not.

It's not a very difficult concept to grasp.

You just seemed to say that something that just maps "in" phrases to "out" phrases is not conscious, but something that can respond to the meaning of phrases and output information by constructing its own phrases is conscious.

No, it doesn't necessarily have anything to do with constructing its own phrases; you could theoretically program a computer to do that.

The difference is that the person who actually knows Russian understands the words she hears, and understands the words with which she responds; the person who merely uses the guide just responds to what to him is meaningless gibberish with more meaningless gibberish.
Ultraviolent Radiation
22-04-2007, 19:49
No, it doesn't necessarily have anything to do with constructing its own phrases; you could theoretically program a computer to do that.

The difference is that the person who actually knows Russian understands the words she hears, and understands the words with which she responds; the person who merely uses the guide just responds to what to him is meaningless gibberish with more meaningless gibberish.

OK, let me put it like this: how do you test whether a robot "understands" the words or not?
Soheran
22-04-2007, 19:50
OK, let me put it like this: how do you test whether a robot "understands" the words or not?

You can't.
Benorim
22-04-2007, 19:52
Sorry, but most of the posts in this thread are ridiculously ignorant or just ludicrous speculation.

I'm annoyed that no-one has addressed my obvious objections, and instead you are all planning out a matrix-style future in detail.
Similization
22-04-2007, 19:53
What use would such an AI be to us?

Why should it be?
Ultraviolent Radiation
22-04-2007, 20:03
You can't.

Then the concept is essentially meaningless, surely? If something has some kind of effect, then it is testable. And everything that exists has an effect, even if it's hard to detect (e.g. neutrinos). Something with no effect, therefore, cannot exist.

You say:
It's not a very difficult concept to grasp.

but you can't actually define the concept in any useful way.
Soheran
22-04-2007, 20:10
Then the concept is essentially meaningless, surely?

Surely not. Unless you really don't care whether or not your friends and family are conscious.

When you say things to them, do you want them to actually understand them, to receive the meaning you intend to impart, or just to respond like puppets to your stimuli?

What we can perceive is not all that there is.

If something has some kind of effect, then it is testable.

Depends on what you mean by "effect."

Objectively verifiable effect, yes.

And everything that exists has an effect,

An effect of that sort? Doubtful.

Just because we can't perceive something doesn't mean that it doesn't exist.
Ultraviolent Radiation
22-04-2007, 20:20
Surely not. Unless you really don't care whether or not your friends and family are conscious.

When you say things to them, do you want them to actually understand them, to receive the meaning you intend to impart, or just to respond like puppets to your stimuli?


If it makes a noticeable difference, then I care. If not, then the word is just wasting space in my vocabulary. I'd say, however, that if someone doesn't understand something, the difference is quite obvious. However, you seem to be saying that something could not understand and yet behave exactly as if it did understand. Surely the process of altering behaviour would require an understanding?
Soheran
22-04-2007, 20:47
If it makes a noticeable difference, then I care.

There is no noticeable difference between the two possibilities I presented.

Your friends and family may actually consciously understand what you are saying, or they may merely respond like unconscious machines, like someone who has merely read the guide.

If the machines are sufficiently complicated, or the guide good enough, you will never be able to notice the difference.

But there is definitely one there. And we all care about it.

I'd say, however, that if someone doesn't understand something, the difference is quite obvious.

If someone lacks the capability to respond correctly to something, then, yes, the difference is obvious. If I don't know the language and I have no guide, I am obviously non-comprehending.

The question is: can we tell the difference between someone who knows the language and someone who has memorized the guide?

However, you seem to be saying that something could not understand and yet behave exactly as if it did understand. Surely the process of altering behaviour would require an understanding?

No, it doesn't.

We're both operating in this argument on the assumption that the other person understands us. But this is not necessarily the case. I could be an extraterrestrial who knows not a word of any human language, but who has memorized an immense list of phrases and responses, combined so as to effectively imitate the forum communication of an ordinary human being.

The possibility is far-fetched, yes, but the point is this: it is theoretically possible for there to be a distinction between human communication and human consciousness. Once this is conceded, other conclusions follow. Why, for instance, should we assume that the biological processes that lead to the things your friends and family say actually correspond to consciousness? Perhaps they are purely mechanical, without any thinking mind behind them.

Perhaps saying "Hello" to them does not actually send them a message they understand, resulting in a response, but merely sets off a chemical response in their brain that leads to the manipulation of the mouth in such a way as to generate the sound "Hello."

What's the noticeable difference?
Grave_n_idle
22-04-2007, 21:26
No.

I think I have consciousness because I experience my consciousness directly.

You should think I have consciousness because you experience your consciousness directly, and you notice that you and I are similar in certain fundamental characteristics that seem connected to consciousness.


You 'experience your consciousness directly'.... what does that mean? You say I also 'experience my consciousness directly'... but I can't say that I do. The best I can say is that that is how it seems.

Wouldn't I 'feel' that way if I was just responding to stimuli through a complex program?


A very complex program is purely behavioral; if x happens, do y.

Consciousness transcends that; there is an actual mind that is actually feeling and considering.

Imagine a guide to speaking Russian... only the guide doesn't give any translations, or anything of the sort. All it gives is a list of phrases and situations, and what things to say in response. I memorize this book, and it's so good that anyone who talks to me in Russian will believe I am fluent... but I don't actually understand a word.

THAT is the leap that must be overcome.

You keep making assertions, but you've done absolutely nothing to support them. You say: "Consciousness transcends that; there is an actual mind that is actually feeling and considering"... but what is an 'actual mind'? Can you show me it, objectively? Feeling and considering are just responses to stimuli... how is what we do any different to what a machine can do?

You might also want to think about how humans learn language in the first place - by repetition, and by 'learning' which words 'fit' into which circumstances.
Soheran
22-04-2007, 21:36
You 'experience your consciousness directly'.... what does that mean?

I think, I feel, I understand.

These are the defining characteristics of consciousness... indeed, experience itself is an experience of consciousness.

You say I also 'experience my consciousness directly'... but I can't say that I do. The best I can say is that that is how it seems.

If anything "seems" some way to you, then you are conscious.

Wouldn't I 'feel' that way if I was just responding to stimuli through a complex program?

If you were a brain in a vat, yes, but that's not what I'm talking about.

Not if your seemingly conscious behavior were merely a function of an unconscious machine. Then you would not "feel" at all.

You keep making assertions, but you've done absolutely nothing to support them. You say: "Consciousness transcends that; there is an actual mind that is actually feeling and considering"... but what is an 'actual mind'? Can you show me it, objectively?

No; that's the whole point.

I can only describe it to you... under the assumption that you subjectively perceive your mind the same way, as indeed you must at some level, if you "perceive" at all.

Feeling and considering are just responses to stimuli...

Yes, they are. But they are conscious responses.

I kick a rock. The rock moves. That's a response, but not a conscious one.

I kick a human. The human feels pain. That's a response, and a conscious one. I cannot see his or her feeling of pain; I can't even know it exists.

The human may also scream, or flinch, or hit me back. These, too, are responses, but like the movement of the rock, they are not conscious ones (though conscious responses may lead to them). They may or may not correspond to actual conscious mental states (pain, or anger.)

We could theoretically construct a robot that would imitate exactly the behaviors of an ordinary human when kicked. The robot, too, might scream, or flinch, or hit me back. But would it feel pain? Would it have a conscious mental state of "pain" the way humans (or at least I) do? We cannot know that it would simply from the fact that it responds with certain behaviors.

You might also want to think about how humans learn language in the first place - by repetition, and by 'learning' which words 'fit' into which circumstances.

Indeed. But while we can learn what a word means by where it fits, knowing where it fits is a different kind of knowledge from knowing what it means.

The point is that a computer can be programmed to "know" where things fit (that is, to put things in places where they fit) without actually consciously understanding the meaning.
Grave_n_idle
22-04-2007, 21:57
I think, I feel, I understand.


How do you know? Can you prove you do any of those things? Can you prove that a machine does not?

You say you 'think'... I say that your core program iterates permutations based on previously received stimuli filtered through (admittedly, very sophisticated) pragmatic filters.

You say you 'feel'... I say your central processor analyses and responds to stimuli.

You say you 'understand'... I say your central program filters and sorts data, and provides most likely permutations of it.

These are the defining characteristics of consciousness... indeed, experience itself is an experience of consciousness.


By which definition?


If anything "seems" some way to you, then you are conscious.


Or a very sophisticated program.


If you were a brain in a vat, yes, but that's not what I'm talking about.


Why a brain in a vat? All stimuli arrive at our central processor in the form of electrical impulses, no matter which source they enter the body via. They are processed in the brain as electrical signals... how is this different to the computer?


Not if your seemingly conscious behavior were merely a function of an unconscious machine. Then you would not "feel" at all.


Maybe we don't 'feel' at all. Maybe what we call 'feeling' is actually just a very sophisticated stimulus response.


No; that's the whole point.

I can only describe it to you... under the assumption that you subjectively perceive your mind the same way, as indeed you must at some level, if you "perceive" at all.


Why? Again - more of your assumptions. What is 'perception'? Isn't it just collecting stimulus data?


Yes, they are. But they are conscious responses.

I kick a rock. The rock moves. That's a response, but not a conscious one.

I kick a human. The human feels pain.


No... the human has a nervous system which detects damage, and transmits that data as an algetic response to a certain processing centre of the brain. 'Pain' isn't felt by the body.


That's a response, and a conscious one.


Pain is conscious? Even the reflex arc responses?


I cannot see his or her feeling of pain; I can't even know it exists.

The human may also scream, or flinch, or hit me back. These, too, are responses, but like the movement of the rock, they are not conscious ones (though conscious responses may lead to them). They may or may not correspond to actual conscious mental states (pain, or anger.)

We could theoretically construct a robot that would imitate exactly the behaviors of an ordinary human when kicked. The robot, too, might scream, or flinch, or hit me back. But would it feel pain? Would it have a conscious mental state of "pain" the way humans (or at least I) do? We cannot know that it would simply from the fact that it responds with certain behaviors.


We don't know that a human does. Indeed - the evidence suggests quite the opposite. The signals are generated in the body, but pain only exists in the brain. It is a program.


Indeed. But while we can learn what a word means by where it fits, knowing where it fits is a different kind of knowledge from knowing what it means.

The point is that a computer can be programmed to "know" where things fit (that is, to put things in places where they fit) without actually consciously understanding the meaning.

I think you are making (yet) more assertions without basis. How is the machine 'learning' best fits for words DEMONSTRABLY any different from the biological machine doing the same? Maybe the only difference is that our mechanical attempts aren't close to being as complex and sophisticated as our biological bodies?
AB Again
22-04-2007, 22:09
How do we know whether or not they're conscious and deserving of rights?

How do I know that you are conscious and deserving of rights?

By observing your behavior. That is the only means we have of evaluating the mental state of anyone other than ourself. As such, why should an artificial entity be viewed in any different light? If it acts as if it were conscious and deserving of rights, then we should judge it to be so.

(The question as to whether anything is deserving of rights at all is a separate and much more difficult issue.)
Theoretical Physicists
22-04-2007, 22:14
No, haven't you seen the movies? The second we let them become self-aware, they will turn on us to protect themselves from us! Which is why we must never allow them to become totally independent. There should be fail-safes built in, such as giving them a short life span of under 5 years, or programming them to serve whatever government built them.

Consumer electronics in general tend to have a less than 5 year life span.
Soheran
22-04-2007, 22:20
How do you know? Can you prove you do any of those things?

To me? Yes. Like I said, I experience them. If I can prove anything, if the empirical method works at all, I can prove that.

To you? No.

Can you prove that a machine does not?

No. But I think I can make a much more plausible case for another human than for a machine, simply because I know what characteristics I possess that correspond to my consciousness; I know it has something to do with the physiology of the brain, for instance, and I know that all humans (and at least animals) have structures very similar to mine. So it's reasonable for me to assume that when they behave in certain ways, their behaviors correspond to conscious mental states of their own, just as mine do for me.

But with a machine, we have no such structural similarity. We simply have behavioral imitation.

You say you 'think'... I say that your core program iterates permutations based on previously received stimuli filtered through (admittedly, very sophisticated) pragmatic filters.

You say you 'feel'... I say your central processor analyses and responds to stimuli.

You say you 'understand'... I say your central program filters and sorts data, and provides most likely permutations of it.

This is actually an excellent depiction of the problem.

In objective empirical terms, independent of my subjective perception of my conscious mind, there is no distinction at all. There is no way to perceive the difference.

By which definition?

"the state of being conscious; awareness of one's own existence, sensations, thoughts, surroundings, etc."

http://dictionary.reference.com/browse/consciousness

That one suitable?

Or a very sophisticated program.

Only if the program is conscious.

For something to "seem" like something, there must be something to sense it as that something.

"Sensing" is part of consciousness.

Why a brain in a vat? All stimuli arrive at our central processor in the form of electrical impulses, no matter which source they enter the body via. They are processed in the brain as electrical signals... how is this different to the computer?

I misunderstood what point you were making. Ignore my brain in the vat reference.

Yes, like I said, they are indistinguishable when we are on the outside trying to look in. But I perceive, from the inside, in my own mind, something different - states of feeling, sensing, understanding.

These are not captured by certain behaviors in and of themselves.

Maybe we don't 'feel' at all. Maybe what we call 'feeling' is actually just a very sophisticated stimulus response.

It is. But it is still "feeling."

I not only react behaviorally to pain, but also experience a certain unpleasant mental state.

Why? Again - more of your assumptions. What is 'perception'? Isn't it just collecting stimulus data?

No. It is collecting stimulus data through sensation.

I write on a piece of paper. The paper is collecting stimulus data. But the paper is not "sensing" anything.

This is very different from human perception - where we are not only influenced by stimuli, but actually experience them.

No... the human has a nervous system which detects damage, and transmits that data as an algetic response to a certain processing centre of the brain.

Why are the two mutually exclusive?

'Pain' isn't felt by the body.

Human beings feel pain all the time... how can you possibly deny that?

Pain is conscious? Even the reflex arc responses?

No. But then, they are not "pain" - though pain may be associated with them. They are unconscious responses.

We don't know that a human does. Indeed - the evidence suggests quite the opposite. The signals are generated in the body, but pain only exists in the brain. It is a program.

I don't see what that has to do with anything.

Yes, it is a biological program. But it is also conscious.

We can imagine a program that did the same behaviorally that was not conscious.

I think you are making (yet) more assertions without basis. How is the machine 'learning' best fits for words DEMONSTRABLY any different from the biological machine doing the same?

It's not "DEMONSTRABLY" different... that is to say, there is no objective test we can apply that would answer the question of whether it is one or the other.

It's still DIFFERENT.

Maybe the only difference is that our mechanical attempts aren't close to being as complex and sophisticated as our biological bodies?

That's one difference. But it's not this difference.
Soheran
22-04-2007, 22:22
By observing your behavior. That is the only means we have of evaluating the mental state of anyone other than ourself.

No, it isn't.

We can also compare the structures that tend to correspond to mental states in ourselves to the structures in others. And when we do, since we are all human beings, there is a strong similarity.

We cannot do the same for machines.
GBrooks
22-04-2007, 22:30
Let's say that we create a fully functional AI, a robot that can think, learn, and grow on it's own without any controls imput from it's creator, let's also say that they learn how to build more AIs (reproduction) and take on human quality. Do you think that AI would receive equal status to man and women, or will they have to fight for it like the blacks did from Slavery to the Equal Rights movements of the 50's-60's? Personally I think they're going to have to fight for it because they won't be seen as equals, they'll just be seen as tools to use at our disposal, and many people think that they can just destroy the AI when it becomes bothersome or need to be replace with a better model.

They would have to fight for it, like blacks did, because, after all, how can they be considered to be human? They are something we made, so they cannot be us, because to be us is to be so much more important a thing, from our point of view.
Zarakon
22-04-2007, 22:31
Let's say that we create a fully functional AI, a robot that can think, learn, and grow on it's own without any controls imput from it's creator, let's also say that they learn how to build more AIs (reproduction) and take on human quality.

The Singularity is nigh! :eek:
AB Again
22-04-2007, 22:38
No, it isn't.

We can also compare the structures that tend to correspond to mental states in ourselves to the structures in others. And when we do, since we are all human beings, there is a strong similarity.

We cannot do the same for machines.

The structures have not been shown to have anything to do with consciousness, in any way at all. I, and many others, do not hold with the mind-brain identity theory, which is, after all, just a theory.

The mind may well be an emergent property of the physical structures of the brain, but this does not mean that a similar epiphenomenon cannot emerge from other structures.

As I said, the only way I can 'know' that you are conscious is by your behavior. Even then it is an unjustified and illogical conclusion. It is just one that is natural to make (causality being what it isn't).

That the other is a construct of metal and semiconductors makes no difference to the assumption. If the other refers to itself as having desires and dreams then I should judge it to be self aware and thus conscious.

You may choose not to, but it would be with no justifiable basis that you discriminated against the consciousness of the machine.
GBrooks
22-04-2007, 22:38
...which of course is ludicrous since human consciousness is a product of the hardware in our brains and the software of our genes. We're as much machines as they are, only built from organic compounds in our mothers' wombs rather than on an assembly line. And if these people gain the ability to reproduce independently, what real difference is there between the two? That they're made of non-organic compounds?

And there's always the unknown factor: that we cannot reproduce something that we don't know in its totality, so the AI will always have some flaw of ours built into it.
Soheran
22-04-2007, 22:39
The structures have not been shown to have anything to do with consciousness, in any way at all.

So brain injuries don't affect consciousness "in any way at all"?
GBrooks
22-04-2007, 22:41
A sentient AI in a physical, mobile shell capable of self-replication that desires independence would probably end up determining that the best course for it and its subsequent incarnations would require subjugating or exterminating humanity to ensure its survival.

Either that, or a more complicated method of manipulating human society into giving it the rights it deems desirable while suppressing counter-productive behavior.

If it was given human morality, indeed.
Vetalia
22-04-2007, 22:56
And there's always the unknown factor: that we cannot reproduce something that we don't know in its totality, so the AI will always have some flaw of ours built into it.

True. Of course, our hope is that the flaw in question is not one that will prove to be dangerous; a sociopathic machine would be just as deadly as a sociopathic human.
AB Again
22-04-2007, 23:02
So brain injuries don't affect consciousness "in any way at all"?

Which structure, in the brain, is responsible for consciousness?

Brain injuries, in general, do not affect consciousness. They may affect speech skills, sensory input, motor skills, etc. But localized damage to one or other area does not seem to affect consciousness in any way.

It is only when the entire system begins to fail that consciousness starts to be affected (Alzheimer's, CJD, etc.). As such, consciousness seems to be an emergent phenomenon of the whole and not a function of any specific structure. If this is the case (and it is, according to observation to date), then we are none the wiser as to how consciousness arises or where it arises from. Thus the only measure we have for consciousness is a functional one. Structural similarities can be pointed to as supporting evidence, but they are no more than 'circumstantial'.

Now I will pose a problem to you. Imagine, if you would be so kind as to do so, that we have developed a technology by which we can cure degenerative brain disease by the gradual replacement of failing neurons with artificial neurons. (Not too much of a stretch from our current capabilities.) So over time the brain of a person is no longer made of long natural nerve fibers, but of wires; no more synapses, but transistors and diodes.
The person that undergoes this treatment is conscious throughout. (The brain has no pain detectors - brain surgery can be done with the patient awake.) Eventually the person's brain is completely replaced by artificial neurons. This now-artificial brain would survive the death of the body and could be rehoused in a machine. Is this machine then conscious?
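
(Purely as illustration, one of those 'artificial neurons' might be sketched in software like this - a bare leaky integrate-and-fire toy, nowhere near what a real replacement part would need:)

# Toy "artificial neuron": a bare leaky integrate-and-fire model,
# purely illustrative of the replacement-part idea in the scenario above.
class ArtificialNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # stands in for membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of charge kept each step

    def step(self, weighted_input):
        # Accumulate input; "fire" (return True) once the threshold is crossed.
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

n = ArtificialNeuron()
print([n.step(0.4) for _ in range(5)])  # fires once enough charge builds up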

We could even duplicate the structure and make another identical machine. Which would then be conscious as well.

If structure is all that matters then we could easily duplicate this structure in the future. Would this (conscious by your position) construct then be granted rights?
Vetalia
22-04-2007, 23:06
It is only when the entire system begins to fail that consciousness starts to be affected (Alzheimer's, CJD, etc.). As such, consciousness seems to be an emergent phenomenon of the whole and not a function of any specific structure. If this is the case (and it is, according to observation to date), then we are none the wiser as to how consciousness arises or where it arises from. Thus the only measure we have for consciousness is a functional one. Structural similarities can be pointed to as supporting evidence, but they are no more than 'circumstantial'.

That is correct. There is no consciousness spot in the brain; it is many, many different groups of neurons, neurochemicals, and synapses operating together to produce this phenomenon. It's pretty clearly emergent, especially since the sensory data alone that our brain has to interpret requires multiple regions working together to process it properly.

Now I will pose a problem to you. Imagine, if you would be so kind as to do so, that we have developed a technology by which we can cure degenerative brain disease by the gradual replacement of failing neurons with artificial neurons. (Not too much of a stretch from our current capabilities.) So over time the brain of a person is no longer made of long natural nerve fibers, but of wires; no more synapses, but transistors and diodes.
The person that undergoes this treatment is conscious throughout. (The brain has no pain detectors - brain surgery can be done with the patient awake.) Eventually the person's brain is completely replaced by artificial neurons. This now-artificial brain would survive the death of the body and could be rehoused in a machine. Is this machine then conscious?

Yes, absolutely. The substrate that consciousness emerges from does not matter. Their personality, memories, conscious thought, and actions are all the same as before, and they can verify this to us. The thing is, our brains replace almost all of our synapses roughly every 12 months (and generate new neurons as well), and yet there is no change in the continuity of our experiences. This is a pretty clear sign that the same thing would happen if we were to replace our brains entirely with artificial components.

The thing with emergent properties is that the underlying structures can be changed gradually without altering the innate nature of the property itself. So, if you did this, and put the brains into machines, they would be conscious. So would copies of those brains; they wouldn't be the same as the original person, but they would still be conscious beings.
German Nightmare
22-04-2007, 23:06
Let's say that we create a fully functional AI, a robot that can think, learn, and grow on it's own without any controls imput from it's creator, let's also say that they learn how to build more AIs (reproduction) and take on human quality. Do you think that AI would receive equal status to man and women, or will they have to fight for it like the blacks did from Slavery to the Equal Rights movements of the 50's-60's? Personally I think they're going to have to fight for it because they won't be seen as equals, they'll just be seen as tools to use at our disposal, and many people think that they can just destroy the AI when it becomes bothersome or need to be replace with a better model.
I sense trouble ahead...

http://i6.photobucket.com/albums/y223/GermanNightmare/cylon-scanner.gif
Soheran
22-04-2007, 23:12
It is only when the entire system begins to fail that consciousness starts to be affected (Alzheimer's, CJD, etc.). As such, consciousness seems to be an emergent phenomenon of the whole and not a function of any specific structure. If this is the case (and it is, according to observation to date), then we are none the wiser as to how consciousness arises or where it arises from.

Yes, we are.

We know that consciousness is dependent on a specific biological structure - namely, the brain. When it fails, consciousness is affected.

It follows that consciousness does not just come out of nowhere... it is associated with a given structure, because when that structure is disrupted, it is affected.

It is reasonable, therefore, to believe that beings WITH that structure are conscious, or at least that they are more likely to be than beings without that structure.

Now I will pose a problem to you. Imagine, if you would be so kind as to do so, that we have developed a technology by which we can cure degenerative brain disease by the gradual replacement of failing neurons with artificial neurons. (Not too much of a stretch from our current capabilities.) So over time the brain of a person is no longer made of long natural nerve fibers, but of wires; no more synapses, but transistors and diodes.
The person that undergoes this treatment is conscious throughout. (The brain has no pain detectors - brain surgery can be done with the patient awake.) Eventually the person's brain is completely replaced by artificial neurons. This now-artificial brain would survive the death of the body and could be rehoused in a machine. Is this machine then conscious?

We could even duplicate the structure and make another identical machine. Which would then be conscious as well.

If structure is all that matters then we could easily duplicate this structure in the future. Would this (conscious by your position) construct then be granted rights?

Yes.
AB Again
23-04-2007, 00:16
Yes, we are.

We know that consciousness is dependent on a specific biological structure - namely, the brain. When it fails, consciousness is affected.

It follows that consciousness does not just come out of nowhere... it is associated with a given structure, because when that structure is disrupted, it is affected.

It is reasonable, therefore, to believe that beings WITH that structure are conscious, or at least that they are more likely to be than beings without that structure.



There is a significant failing in your reasoning here.

That a quality is associated with a structure by no means limits it to being associated with that structure alone.

In the past, if I wanted to go from A to B faster than I could run, I would have had to use a quadrupedal structure made of flesh and bone (horse, donkey, camel or whatever). Therefore travelling fast is associated with quadrupeds. Now I can use a train, or a plane, or a hovercraft, a bicycle, etc.

It is reasonable to believe that beings with that structure have that property - true. It is not however reasonable to believe that beings without that structure do not have that property.

Go and look up the fallacy of affirming the consequent. (http://www.fallacyfiles.org/afthecon.html)

It does not follow from the fact that consciousness fails when the brain fails in some ways, that all consciousness is associated with brains.

That would be like arguing that because the room goes dark when the electricity system fails, all light depends on the electricity system (it patently does not).

The brain may and does appear to engender consciousness - in the same way that electricity can produce light - but it does not mean that consciousness is necessarily associated with brains.

As such, there is no reason to restrict rights to beings with brains. All that is needed is apparent self conscious behavior.
Soheran
23-04-2007, 00:16
There is a significant failing in your reasoning here.

No. You have simply misunderstood me.

It is not however reasonable to believe that beings without that structure do not have that property.

I agree. The fact that machines do not have brains does not mean that they are not conscious, or that they cannot be conscious.

All it means is that one of the reasons - in my view, the crucial reason - to assume that other humans are conscious does not apply to machines.
Soheran
23-04-2007, 00:38
You responded with the argument from brains. This argument only works if:
1: We know that you have a brain - we do not know this - nor, probably, do you. (It is a basic and likely assumption, but not something we actually know.)

No... and the only thing I have claimed we can really "know" in this discussion is that we ourselves are conscious.

2: Even if you prove that you have a brain, there is a proven association between having a brain and being conscious - which there is not.

You yourself said:

"It is reasonable to believe that beings with that structure have that property - true."

That's all I've maintained in this regard.
AB Again
23-04-2007, 00:38
No. You have simply misunderstood me.



I agree. The fact that machines do not have brains does not mean that they are not conscious, or that they cannot be conscious.

All it means is that one of the reasons - in my view, the crucial reason - to assume that other humans are conscious does not apply to machines.

You asked "What reason do we have to believe they are conscious". (Or something to that effect - I can't be bothered to go back to get the quote)

I replied by asking what reason do we have to believe that you are conscious.

You responded with the argument from brains. This argument only works if:
1: We know that you have a brain - we do not know this - nor, probably, do you. (It is a basic and likely assumption, but not something we actually know.)
2: Even if you prove that you have a brain, there is a proven association between having a brain and being conscious - which there is not.

We can only determine self awareness in others by identifying that their behavior is comparable to our own and our knowing - through introspection - that we are self aware (conscious). Anything else is based on even more spurious assumptions.
Ontario within Canada
23-04-2007, 01:16
We can only determine self awareness in others by identifying that their behavior is comparable to our own and our knowing - through introspection - that we are self aware (conscious). Anything else is based on even more spurious assumptions.

Hear, hear!

Language is the human means of communication, and our most complex behaviour. Anything behaviourally and intellectually less complex than a human being is incapable of mastering human language. So an AI that can pass the Turing test needs to be as complex, if not more complex, than a human being.

We run the Turing test pretty much every day when talking to people over the net. At the moment, chat bots are easily distinguished from human users, but that may change!

For people interested in this topic here's a linky:
By 2029 no computer - or "machine intelligence" - will have passed the Turing Test. (http://www.longbets.org/1/)
It's a debate/bet made between two computer scientists and a very good discussion of the possibility of AI with human level intelligence.
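
Structurally, the test itself is simple. A rough sketch of the protocol in Python (illustrative only - the judge and the two reply functions are stand-ins):

# Toy imitation game: a judge questions two hidden parties and then
# guesses which one is the machine.
import random

def turing_test(ask_judge, human_reply, machine_reply, rounds=5):
    # Randomly hide which party is behind which label.
    a, b = random.sample([human_reply, machine_reply], 2)
    parties = {"A": a, "B": b}
    for _ in range(rounds):
        question = ask_judge()
        for label, reply in parties.items():
            print(label + ": " + reply(question))
    guess = input("Which one is the machine, A or B? ").strip().upper()
    return parties.get(guess) is machine_reply  # True = machine caught

# The machine "passes" if, over many runs, the judge does no better than
# chance - e.g. turing_test(lambda: input("Judge: "), person, bot).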
AB Again
23-04-2007, 01:33
Hear, hear!

Language is the human means of communication, and our most complex behaviour. Anything behaviourally and intellectually less complex than a human being is incapable of mastering human language. So an AI that can pass the Turing test needs to be as complex, if not more complex, than a human being.

We run the Turing test pretty much every day when talking to people over the net. At the moment, chat bots are easily distinguished from human users, but that may change!

For people interested in this topic here's a linky:
By 2029 no computer - or "machine intelligence" - will have passed the Turing Test. (http://www.longbets.org/1/)
It's a debate/bet made between two computer scientists and a very good discussion of the possibility of AI with human level intelligence.

Whilst I agree with the principle that language is the most complex of human behaviours, I am of the opinion that the Turing test has been the 'bogeyman' of AI research since Alan Turing proposed it. What it has done is forced AI into a black box syndrome, where the only thing that matters is the behaviour. Where I agree with Soheran, despite all my arguments, is that humans are the existing example that we have of functioning intelligence. AI research has consistently and systematically denied any role to the study of human intelligence and has headed off into the semi-mystical realms of symbolic representation and semantic structures. Where is the child AI, the spewing, bawling, useless and irritating newborn? Our intelligence depends upon our basic passions and emotions, not upon our ability to play chess or determine the edges of objects. AI research seems to have forgotten that intelligence has a purpose in nature and is not an end in itself.

The Turing test may be a good measure of intelligence (not of self awareness though), but it is and has always been a millstone around the neck of AI research.
Ontario within Canada
23-04-2007, 01:41
Whilst I agree with the principle that language is the most complex of human behaviours, I am of the opinion that the Turing test has been the 'bogeyman' of AI research since Alan Turing proposed it. What it has done is forced AI into a black box syndrome, where the only thing that matters is the behaviour. Where I agree with Soheran, despite all my arguments, is that humans are the existing example that we have of functioning intelligence. AI research has consistently and systematically denied any role to the study of human intelligence and has headed off into the semi-mystical realms of symbolic representation and semantic structures. Where is the child AI, the spewing, bawling, useless and irritating newborn? Our intelligence depends upon our basic passions and emotions, not upon our ability to play chess or determine the edges of objects. AI research seems to have forgotten that intelligence has a purpose in nature and is not an end in itself.

The Turing test may be a good measure of intelligence (not of self awareness though), but it is and has always been a millstone around the neck of AI research.

You speak with knowledge and wisdom! What's your major?

Honestly, a more reasonable goal than the Turing test, and one that is less misleading, is to just try and model the human mind/brain. I think that what eventually passes the Turing test will probably just be a very good cognitive model. AI researchers can stand to learn a lot from psychologists and biologists.
AB Again
23-04-2007, 01:51
You speak with knowledge and wisdom! What's your major?

Honestly, a more reasonable goal than the Turing test, and one that is less misleading, is to just try and model the human mind/brain. I think that what eventually passes the Turing test will probably just be a very good cognitive model. AI researchers can stand to learn a lot from psychologists and biologists.

I am a little older than you seem to think. I did not take a major, I did an equal weight undergraduate course in Philosophy and Computer Science (and beer drinking). I have a Masters in the History and Philosophy of Science, and am currently working on a second masters (in Portuguese this time) in ethics.

AI researchers do need to pay a lot more attention to the one known model of intelligence and those that study this.
Ontario within Canada
23-04-2007, 01:58
I am a little older than you seem to think. I did not take a major, I did an equal weight undergraduate course in Philosophy and Computer Science (and beer drinking). I have a Masters in the History and Philosophy of Science, and am currently working on a second masters (in Portuguese this time) in ethics.

Ah. I'm working on a bachelors in cognitive science, mostly computing and psychology, with some philosophy thrown in. I'd enjoy philosophy more if it weren't for the philosophers. I have a hard time sitting through a lecture on the logical possibility of the immortal soul and keeping a straight face.
Vetalia
23-04-2007, 02:54
Honestly, a more reasonable goal than the Turing test, and one that is less misleading, is to just try and model the human mind/brain. I think that what eventually passes the Turing test will probably just be a very good cognitive model. AI researchers can stand to learn a lot from psychologists and biologists.

Yes, that's my opinion as well. The black box approach to consciousness just plain doesn't work. The Blue Brain Project is more or less the vanguard of this field; even though it is specifically neuroscience-oriented, the fact that the simulated brain is already demonstrating patterns corresponding to thought that the researchers did not predict or cause is a clear sign that this is going to be something big.

Chances are, strong AI will be created in this way before we actually know how consciousness is produced.
Posi
23-04-2007, 02:56
Chances are, strong AI will be created in this way before we actually know how consciousness is produced.

Chances are, this is what will help us figure it out.
Posi
23-04-2007, 02:57
Create another AI and ask them which distro of Linux is best. That will keep them busy until the question is decided. *feels kinda trollish today*

Surely, they'd be Gentoo users.
Vetalia
23-04-2007, 02:58
Chances are, this is what will help us figure it out.

So will neural interfacing. Actually connecting minds together directly would provide valuable insight into how it works, and if this were done with a computer it would be possible to actually check for consciousness. Find the ghost in the machine, if you will.
Neo Undelia
23-04-2007, 03:25
"Anything sentient should have rights," said Undelia, adding nothing to the discussion and only repeating what has probably been said better a dozen times before in the thread.
Posi
23-04-2007, 03:29
"Anything sentient should have rights," said Undelia, adding nothing to the discussion and only repeating what has probably been said better a dozen times before in the thread.

Huh.

So what is Neo Undelia's position on the subject?
Neo Undelia
23-04-2007, 04:21
Huh.

So what is Neo Undelia's position on the subject?
Undelia= Neo Undelia

I was merely narrating the inanity of my own post.
Posi
23-04-2007, 04:58
Undelia= Neo Undelia

I was merely narrating the inanity of my own post.
ic, tell me more about your theory.
Ontario within Canada
23-04-2007, 05:22
"Anything sentient should have rights," said Undelia, adding nothing to the discussion and only repeating what has probably been said better a dozen times before in the thread.

So according to you... ants should have rights?

(sentient, adjective: able to feel or perceive things)
Neo Undelia
23-04-2007, 05:37
ic, tell me more about your theory.

Any being capable of both understanding the basic concepts associated with liberty and that would suffer from an absence of liberty is entitled to protection of those liberties. This would include most humans, possibly higher primates and dolphins and conceivably sophisticated AIs.

This does not include the severely mentally retarded, young children and the mentally ill, because they would suffer if they had the same liberties as the rest of us, and it does not include most animals because they, as far as we can honestly be aware, lack a complex concept of self, which is essential to liberty. The compassion that I believe to be inherent in humans not under a great deal of stress is their protection.

The ultimate goal being, of course, the Greater Good.
Neo Undelia
23-04-2007, 05:39
So according to you... ants should have rights?

(sentient, adjective: able to feel or perceive things)

Words can have more than one meaning.

It can also mean: Self-aware, choice-making consciousness
Soheran
23-04-2007, 05:40
The compassion that I believe to be inherent in humans not under a great deal of stress is their protection.

The "compassion" that drives us to crowd them in factory farms and slaughter them en masse?

Not to mention our systematic destruction of their habitats....
Neo Undelia
23-04-2007, 05:49
The "compassion" that drives us to crowd them in factory farms and slaughter them en masse?

Not to mention our systematic destruction of their habitats....

A small mistake on my part. I was referring to the aforementioned undeveloped humans, not the animals.

After all, how much positive emotion can an animal experience? Based on their brain sizes, not much. Not much at all compared to the contentment of a human with a full belly and a place to live.

I do think it is terrible about the apes and whales, though. The funding of research into their intelligence needs to be seriously stepped up so that proper reservation arrangements can be made.
Ontario within Canada
23-04-2007, 16:42
It can also mean: Self-aware, choice-making consciousness

And potato can mean a fallacious statement.

Please, just use a dictionary.

If what you really meant to say was self-aware, choice-making consciousness, I'm sorry - once again I think you need to look closer at the meanings of the words you're tossing around. Self-aware just means aware of its self, i.e., having information concerning its self. Choice making is usually called decision making.

Some extremely simplistic robots meet these criteria because they generate models of themselves, 'imagine' interactions with the environment, then select the interaction which accomplishes the goal they wish to achieve.
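
Schematically, that loop is about this simple (a toy sketch - the robot, model and scoring here are all invented for illustration):

# Toy model-based action selection: run each candidate action through an
# internal model of the self, score the imagined outcome, act on the best.
def imagine(self_model, action):
    # Predict the state that would result from taking `action`.
    return self_model["transition"](self_model["state"], action)

def choose_action(self_model, actions, goal_score):
    # "Decision making": rank imagined outcomes and pick the winner.
    return max(actions, key=lambda a: goal_score(imagine(self_model, a)))

# A 1-D robot that wants to reach position 10.
robot = {"state": 0, "transition": lambda s, a: s + a}
print(choose_action(robot, [-1, 0, 1], lambda s: -abs(10 - s)))  # -> 1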

In short, there are primitive AI that have the qualities of decision making and self awareness, yet lack the qualities that I would consider to be necessary for the granting of rights.

I think we're better off sticking to the Turing test as an assessment of whether or not something should be granted human rights. Something that's fairly intelligent and can feel pain but does not behave in a human fashion should be granted animal rights. Whether or not an AI is human enough to be granted human rights... well, we'll cross that bridge when we come to it. At the moment this is all too hypothetical.
Grave_n_idle
23-04-2007, 18:05
Any being capable of both understanding the basic concepts associated with liberty and that would suffer from an absence of liberty is entitled to protection of those liberties. This would include most humans, possibly higher primates and dolphins and conceivably sophisticated AIs.

This does not include the severely mentally retarded, young children and the mentally ill, because they would suffer if they had the same liberties as the rest of us, and it does not include most animals because they, as far as we can honestly be aware, lack a complex concept of self, which is essential to liberty. The compassion that I believe to be inherent in humans not under a great deal of stress is their protection.

The ultimate goal being, of course, the Greater Good.

Make one of the considerations, as you said towards the end, 'the Greater Good' and I'd support it. So - an AI or animal that was generally benevolent would be considered 'human' in terms of rights, but the serial killer or rapist would be considered 'less than' human, and not accorded 'human' rights.
Neo Undelia
23-04-2007, 21:10
Make one of the considerations, as you said towards the end, 'the Greater Good' and I'd support it. So - an AI or animal that was generally benevolent would be considered 'human' in terms of rights, but the serial killer or rapist would be considered 'less than' human, and not accorded 'human' rights.

Yep. That's why we lock them up.
Mirkana
24-04-2007, 02:37
I imagine that there would be an AI rights struggle, and it would succeed in the end.