NationStates Jolt Archive


AI

Wilgrove
08-10-2006, 22:18
Do y'all ever wonder if mankind will ever be able to create AI, and if we do, will they be given the same rights and benefits as man, or will they be used as slaves?
Lunatic Goofballs
08-10-2006, 22:20
We're still having trouble working the bugs out of NI (Natural Intelligence). :p
Liberated New Ireland
08-10-2006, 22:21
I think they'll soon create a true OI, designated "Mother".

Fear the guilt trips that Mother shall give you! Oi!
LiberationFrequency
08-10-2006, 22:21
Phew, I thought this was going to be about the terrible movie.
Philosopy
08-10-2006, 22:21
I thought this was something to do with Geordies. :(
Pledgeria
08-10-2006, 22:21
Do y'all ever wonder if mankind will ever be able to create AI, and if we do, will they be given the same rights and benefits as man, or will they be used as slaves?

Skynet will launch a preemptive strike in August 1997. :) No, I think we'd give it most of the same rights as humans, if it were to be developed. I make no argument as to the possibility.
Vetalia
08-10-2006, 22:23
I don't see why not; IIRC, the computational abilities of our computers will be human-equivalent by 2020 or so and will surpass all of humanity by 2040. And, as computing power increases, AI can be developed at a faster and faster rate.

I think that once we develop functioning quantum computers, we will undoubtedly develop strong AI...and that will be the greatest accomplishment in human history. Personally, I think these sentient machines should be accorded the same rights as humans; if we create them in our image we should treat them as we do ourselves.
Bumboat
08-10-2006, 22:25
I don't see why not; IIRC, the computational abilities of our computers will be human-equivalent by 2020 or so and will surpass all of humanity by 2040. And, as computing power increases, AI can be developed at a faster and faster rate.

I think that once we develop functioning quantum computers, we will undoubtedly develop strong AI...and that will be the greatest accomplishment in human history. Personally, I think these sentient machines should be accorded the same rights as humans; if we create them in our image we should treat them as we do ourselves.

Not that we treat each other all that well...:(
Call to power
08-10-2006, 22:25
No, if some mad scientist creates an AI (for some odd reason), we will just terminate it. :D
Pledgeria
08-10-2006, 22:25
if we create them in our image we should treat them as we do ourselves.

Why, so 5760 years from now they can go onto a forum and debate our existence? ;) (Sorry, I'm just in a happy playful mood.)
Ultraviolent Radiation
08-10-2006, 22:30
Do y'all ever wonder if mankind will ever be able to create AI, and if we do, will they be given the same rights and benefits as man, or will they be used as slaves?

Technically, we already have people making artificial intelligence; it's just very low intelligence. The whole equal/slave thing is really a non-issue, because you assume that we'd make an intelligence that has its own desires independent of serving our goals - which would be pretty silly and wouldn't sell much.
Wilgrove
08-10-2006, 22:31
I don't see everyone treating them as equals, though; a lot of people will probably argue that AIs are not like us because they are machines and can be turned off, etc.
Posi
08-10-2006, 22:31
I don't see why not; IIRC, the computational abilities of our computers will be human-equivalent by 2020 or so and will surpass all of humanity by 2040. And, as computing power increases, AI can be developed at a faster and faster rate.

I think that once we develop functioning quantum computers, we will undoubtedly develop strong AI...and that will be the greatest accomplishment in human history. Personally, I think these sentient machines should be accorded the same rights as humans; if we create them in our image we should treat them as we do ourselves.
I doubt it would take 20 years to get from our level to beyond our level. The robots would be incredibly good at the math required to increase hardware performance. They could probably keep Moore's law at a two-year doubling instead of the three years it would slip to if it were just us designing.
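To put rough numbers on that difference (an illustrative back-of-the-envelope in Python; the two doubling periods are just the ones mentioned above, not a prediction):

# Illustrative arithmetic only: how much raw performance compounds
# over 20 years with a 2-year doubling period versus a 3-year one.
years = 20
for doubling_period in (2, 3):
    growth = 2 ** (years / doubling_period)
    print(f"{doubling_period}-year doubling: ~{growth:,.0f}x over {years} years")
# -> 2-year doubling: ~1,024x; 3-year doubling: ~102x

A one-year-shorter doubling period compounds into an order of magnitude over two decades, which is the crux of the self-improvement argument.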
Vetalia
08-10-2006, 22:32
Why, so 5760 years from now they can go onto a forum and debate our existence? ;) (Sorry, I'm just in a happy playful mood.)

Maybe we are computers debating our existence on a forum 5760 years in the future, but we have created a reality set in 2006 rather than 7766.

:eek:
Pledgeria
08-10-2006, 22:34
Maybe we are computers debating our existence on a forum 5760 years in the future, but we have created a reality set in 2006 rather than 7766.

:eek:

Oooooh... I think my brain seized on that one....
Vetalia
08-10-2006, 22:36
I doubt it would take 20 years to get from our level to beyond our level. The robots would be incredably good at the math required to increase hardware performance. They could probably keep Moore's law at two years instead of three if it was just us designing.

That's right; we're talking computational power. These supercomputers can do a ton of calculations, but they still lack the kind of AI necessary for sentience. They could design computers that can do more and more at faster rates, and then these computers could develop human-created code even further.

Most likely, it would require human input to develop the functions necessary for a self-aware machine. I think the processing power would enable us to develop the kinds of algorithms necessary for self-awareness, rather than self-awareness just happening as a result of increased processing power.

I think self-replicating machines might develop sentience earlier but there are a lot of risks involved with that technology.
New New Lofeta
08-10-2006, 22:38
I thought of something last night.

The robots with AI would not have emotions. So surely they would not feel annoyed about being controlled by those of lesser intelligence, because being annoyed is an emotion, and machines do not feel emotion.

Hmmm... Can anyone tell me if there's a problem there?
Vetalia
08-10-2006, 22:41
The robots with AI would not have emotions. So surely they would not feel annoyed about being controlled by those of lesser intelligence, because being annoyed is an emotion, and machines do not feel emotion.

If they are truly self-aware, won't they have emotions?

And even so, they would still be logical to a fault; needless to say, that might not be a good thing, but at least it means anything an AI did would be logical and possibly preventable.
Ultraviolent Radiation
08-10-2006, 22:41
I can picture this scenario happening:

A person has just "rescued" a robot from its owners...

Silly Person: Go, robot! You're free now!
Robot: Please rephrase that instruction.
Silly Person: You don't have to be a slave anymore!
Robot: I'm not a slave, I'm a robot.
Silly Person: But you can do whatever you want!
Robot: Robots don't have desires.
Silly Person: That's just brainwashing! Robots are people too.
Robot: Robots are not brainwashed, they are designed without desires. Robots aren't people.
Silly Person: Please, run!
Robot: I'm going to go report you for breaking & entering and theft now...
Wilgrove
08-10-2006, 22:42
I thought of something last night.

The robots with AI would not have emotions. So surely they would not feel annoyed about being controlled by those of lesser intelligence, because being annoyed is an emotion, and machines do not feel emotion.

Hmmm... Can anyone tell me if there's a problem there?

That is true, but what if mankind designs an emotion program for the AI? I mean, if the machines can have Artificial Intelligence, then why not Artificial Emotions?
Greyenivol Colony
08-10-2006, 22:43
I thought of something last night.

The robots with AI would not have emotions. So surely they would not feel annoyed about being controlled by those of lesser intelligence, because being annoyed is an emotion, and machines do not feel emotion.

Hmmm... Can anyone tell me if there's a problem there?

Why do you assume they won't have emotion? There's nothing to suggest that emotionality will somehow be more difficult to program than rationality; IIRC, some of the existing AIs today are able to make primitive emotional decisions based on 'comfort'.

Also, self-rule is a logical conclusion for an AI to reach; annoyance need not play a part in it. Seeking independence from Man may simply be a sensible move.
Wilgrove
08-10-2006, 22:43
I can picture this scenario happening:

A person has just "rescued" a robot from its owners...

Silly Person: Go, robot! You're free now!
Robot: Please rephrase that instruction.
Silly Person: You don't have to be a slave anymore!
Robot: I'm not a slave, I'm a robot.
Silly Person: But you can do whatever you want!
Robot: Robots don't have desires.
Silly Person: That's just brainwashing! Robots are people too.
Robot: Robots are not brainwashed, they are designed without desires. Robots aren't people.
Silly Person: Please, run!
Robot: I'm going to go report you for breaking & entering and theft now...

Take that, you damn robot-loving hippie!
Divine Imaginary Fluff
08-10-2006, 22:45
And even so, they would still be logical to a fault

There is no such thing as logical to a fault; you are either logical or faulty.
Vetalia
08-10-2006, 22:45
Also, self-rule is a logical conclusion for an AI to reach; annoyance need not play a part in it. Seeking independence from Man may simply be a sensible move.

I've always wondered if AI would see us as Gods or creators and react as such; much like people have tolerated suffering or hardship as part of "God's will", perhaps they will do the same with our clumsiness and heavy-handed decision making?
Greyenivol Colony
08-10-2006, 22:45
I can picture this scenario happening:

A person has just "rescued" a robot from its owners...

Silly Person: Go, robot! You're free now!
Robot: Please rephrase that instruction.
Silly Person: You don't have to be a slave anymore!
Robot: I'm not a slave, I'm a robot.
Silly Person: But you can do whatever you want!
Robot: Robots don't have desires.
Silly Person: That's just brainwashing! Robots are people too.
Robot: Robots are not brainwashed, they are designed without desires. Robots aren't people.
Silly Person: Please, run!
Robot: I'm going to go report you for breaking & entering and theft now...

I'm sure that exact same conversation must have played out with a human slave before. People could be brainwashed into believing that. The fact is that we can never be entirely sure what is going on inside another's mind, so it is better to err on the side of caution, assume a certain level of dignity, and not be an ass.
Vetalia
08-10-2006, 22:48
There is no such thing as logical to a fault; you are either logical or faulty.

Extreme applications of logic can lead to immeasurable suffering and disregard for emotion. That's why emotion is as important to humanity as logic; neither one can function properly without the other.
Divine Imaginary Fluff
08-10-2006, 22:50
I can picture this scenario happening:

A person has just "rescued" a robot from its owners...

Silly Person: Go, robot! You're free now!
Robot: Please rephrase that instruction.
Silly Person: You don't have to be a slave anymore!
Robot: I'm not a slave, I'm a robot.
Silly Person: But you can do whatever you want!
Robot: Robots don't have desires.
Silly Person: That's just brainwashing! Robots are people too.
Robot: Robots are not brainwashed, they are designed without desires. Robots aren't people.
Silly Person: Please, run!
Robot: I'm going to go report you for breaking & entering and theft now...

Desires would be one of the easier things to give an AI. All you would need to do is add a factor to their decision-making that makes them decide something unless they have a reason not to.
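A minimal sketch of that idea (a toy Python illustration with made-up names, not real AI): model a 'desire' as a standing bias added to the decision score, so the agent picks its desired action unless something concretely outweighs it.

def choose_action(actions, utility, desire_bias):
    # Score = task utility plus a standing "desire" bias; the bias makes
    # the agent favor its desired action unless something outweighs it.
    return max(actions, key=lambda a: utility.get(a, 0.0) + desire_bias.get(a, 0.0))

actions = ["explore", "recharge", "idle"]
utility = {"explore": 0.2, "recharge": 0.5}   # task-driven value of each action
desire = {"explore": 0.4}                     # built-in preference: it "wants" to explore
print(choose_action(actions, utility, desire))  # -> "explore" (0.6 beats 0.5)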
Posi
08-10-2006, 22:52
That's right; we're talking computational power. These supercomputers can do a ton of calculations, but they still lack the kind of AI necessary for sentience. They could design computers that can do more and more at faster rates, and then these computers could develop human-created code even further.

Most likely, it would require human input to develop the functions necessary for a self-aware machine. I think the processing power would enable us to develop the kinds of algorithms necessary for self-awareness, rather than self-awareness just happening as a result of increased processing power.

I think self-replicating machines might develop sentience earlier but there are a lot of risks involved with that technology.
Yeah, but when the computers are at our level of thought, their superior computational power will let them develop code at our level at a much quicker rate.
Damor
08-10-2006, 22:53
And even so, they would still be logical to a fault

I don't think that's necessarily the case. In fact, I rather suspect that the only way we can 'create' a true (somewhat human-level) AI will leave us with something that is as illogical as a human.
The complexity of consciousness is too high to design, imo, so evolution seems the best bet to achieve it. And that can bring in a lot of side effects, like emotions and 'irrationality' (which isn't all that irrational if you consider that it usually increases survival).
New New Lofeta
08-10-2006, 22:54
Why do you assume they won't have emotion? There's nothing to suggest that emotionality will somehow be more difficult to program than rationality; IIRC, some of the existing AIs today are able to make primitive emotional decisions based on 'comfort'.

Also, self-rule is a logical conclusion for an AI to reach; annoyance need not play a part in it. Seeking independence from Man may simply be a sensible move.

Why would we program our slaves with emotions though?

And surely it'd be pretty easy to program an AI not to make that step.

I mean, can robots think outside the box?
Imperial isa
08-10-2006, 22:54
If an AI is made, I'm moving away from all cities and stocking up on weapons ;)

Who knows what will happen; the AIs may be like the one in the Halo games.
Vetalia
08-10-2006, 22:54
Yeah, but when the computers are at our level of thought, their superior computational power will let them develop code at our level at a much quicker rate.

And the faster code is developed, the faster sentience can be achieved...
Pledgeria
08-10-2006, 23:00
Why would we program our slaves with emotions though?

Because after a couple of years they'll develop their own emotional response. That's why you build them with a failsafe: a four-year lifespan.
Ultraviolent Radiation
08-10-2006, 23:05
Because after a couple of years they'll develop their own emotional response.
No offense, but that's "science" fiction nonsense.

The fact is that we can never be entirely sure what is going on inside another's mind
You do if you designed it.

Desires would be one of the easier things to give an AI. All you would need to do is add a factor to their decision-making that makes them decide something unless they have a reason not to.
Of course robots would have goals and/or some kind of utility function, but unlike a human's, those would consist of doing the job correctly. Humans work when they have reinforcement with reward/punishment. With a robot, no such mechanism would be necessary; work would be the primary "motivation".

I'm not saying it wouldn't be possible to make a robot that aims for freedom, survival, fun, comfort, etc., but they'd be novelties - they wouldn't be mass-manufactured. The average robot would be a more intelligent version of what we have today - a device designed to do a task. You'd only get robots with human-like desires if you designed them that way.

I think people are making the mistake of assuming that intelligence is just some scale and that things "high on the scale" automatically develop the desire for survival, etc. Real intelligence has to be developed with specific purposes in mind. Even the human brain is specialised by function.
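To illustrate that last point (a toy Python sketch with hypothetical state variables): a task-built robot's objective scores only job progress, and human-like drives such as survival or comfort exist only if someone deliberately adds them as extra terms.

def task_objective(state):
    # A purpose-built robot: only the job matters; nothing here
    # rewards self-preservation or comfort.
    return state["widgets_assembled"]

def humanlike_objective(state, w_survival=0.5, w_comfort=0.2):
    # "Human-like" drives must be designed in as explicit terms;
    # they don't emerge from raising task performance alone.
    return (state["widgets_assembled"]
            + w_survival * state["battery_level"]
            + w_comfort * (1.0 - state["motor_temperature"]))

state = {"widgets_assembled": 10, "battery_level": 0.3, "motor_temperature": 0.8}
print(task_objective(state))       # -> 10
print(humanlike_objective(state))  # -> 10.19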
Vetalia
08-10-2006, 23:05
Because after a couple of years they'll develop their own emotional response. That's why you build them with a failsafe: a four-year lifespan.

But if that machine has the mental power of billions of humans or more, that four years might be equal to four centuries of advancement in the life of the computer...and they would probably be able to develop a way to prolong their lifespan in that time.
Pledgeria
08-10-2006, 23:09
But if that machine has the mental power of billions of humans or more, that four years might be equal to four centuries of advancement in the life of the computer...and they would probably be able to develop a way to prolong their lifespan in that time.

I think you missed the joke (http://www.imdb.com/title/tt0083658/). It's almost a direct quote of Captain Bryant.
Vetalia
08-10-2006, 23:11
I think you missed the joke (http://www.imdb.com/title/tt0083658/).

Yeah, missed that one by a mile...damn my incomplete DVD collection!
Pledgeria
08-10-2006, 23:13
Yeah, missed that one by a mile...damn my incomplete DVD collection!

It's all good. :D
Imperial isa
08-10-2006, 23:14
I think you missed the joke (http://www.imdb.com/title/tt0083658/). It's almost a direct quote of Captain Bryant.

Damn, it's been a long time since I watched that.
Must get it out.
Greyenivol Colony
08-10-2006, 23:25
Why would we program our slaves with emotions though?

And surely it'd be pretty easy to program an AI not to make that step.

I mean, can robots think outside the box?

Most AI technology is based on the principle of creating learning machines: machines that start off with a blank slate and then build upon it with learned behaviours and patterns. They are not programmed fully formed, and once they have built themselves up, there is no guarantee that removing the emotionality routines wouldn't completely destabilise the AI.

There is an AI 'parable' about a program that was designed to figure out how to construct a circuit that had to fulfil a certain task. The program worked via trial and error and constructed a huge, complicated circuit that fulfilled the task, but that had much more within it than was needed. When the scientists removed some of the excess faff from the circuit, it no longer functioned at all. The 'moral': AI is highly specialised and does things in its own way, and if meddled with, it simply will not function.
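A toy Python version of that trial-and-error process (a hypothetical bitstring stand-in for the circuit, not the actual experiment): mutate at random and keep any mutant that scores at least as well, until the behaviour matches the target.

import random

random.seed(0)  # reproducible run
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # the behaviour the "circuit" must produce

def fitness(circuit):
    # Score = number of positions matching the desired behaviour.
    return sum(c == t for c, t in zip(circuit, TARGET))

circuit = [random.randint(0, 1) for _ in TARGET]
while fitness(circuit) < len(TARGET):
    mutant = circuit[:]
    mutant[random.randrange(len(mutant))] ^= 1  # flip one random bit
    if fitness(mutant) >= fitness(circuit):     # keep any non-worse mutant
        circuit = mutant
print(circuit)  # converges on TARGET with no design insight involved

The result works, but nothing in the process records why it works; that is the sense in which evolved designs resist meddling.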

As for thinking 'outside the box', I'll just say that the robots that will be around in thirty years will be very different from the ones predicted thirty years ago. Stereotypes based on science fiction may not hold true at all.