NationStates Jolt Archive


HAL is 28 years too late

Lunatic Goofballs
18-02-2008, 20:26
Or there will be humans as dumb as computers. :p


Edit: This thread is mine. *charges rent*
New Limacon
18-02-2008, 20:28
According to Ray Kurzweil (http://en.wikipedia.org/wiki/Ray_Kurzweil), by 2029, there will be machines as smart as humans.
Link (http://news.bbc.co.uk/2/hi/americas/7248875.stm)
Link to nifty chart I found that's sort of related (http://en.wikipedia.org/wiki/Image:ParadigmShiftsFrr15Events.jpg)
Now, assuming Kurzweil is not completely nuts (quite an assumption, I grant you), do you think his prediction is accurate? If it is, what will the effects be? Discuss.
Vojvodina-Nihon
18-02-2008, 20:31
Well, given the intelligence of the average human, I'm surprised they haven't made a computer smarter than humans by now.

I wonder, will it also be capable of self-sustaining and self-replication?
Ryadn
18-02-2008, 21:04
After reading about the logical reasoning and memorization abilities of various animals in the latest National Geographic, I'm hard-pressed to rule anything of the sort impossible.

To quote Heinlein: "Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can't see it matters whether paths are protein or platinum."
Fall of Empire
18-02-2008, 21:32
According to Ray Kurzweil (http://en.wikipedia.org/wiki/Ray_Kurzweil), by 2029, there will be machines as smart as humans.
Link (http://news.bbc.co.uk/2/hi/americas/7248875.stm)
Link to nifty chart I found that's sort of related (http://en.wikipedia.org/wiki/Image:ParadigmShiftsFrr15Events.jpg)
Now, assuming Kurzweil is not completely nuts (quite an assumption, I grant you), do you think his prediction is accurate? If it is, what will the effects be? Discuss.

A scary thought, certainly... but I have ammunition in my basement. ;)
Damor
18-02-2008, 21:40
Whenever anyone has asked a futurologist, over the past five decades, when machine intelligence would rival human intelligence, the answer has been "in twenty years".
I wouldn't hold my breath, personally. Not for twenty years, certainly.

Admittedly, if we ever accomplish it, one of them will inevitably be right if they keep it up.
Lord Tothe
18-02-2008, 21:42
The assumption is that processing power = intelligence. The average insect brain is far faster and far more complex than a Cray supercomputer. It is thought that the human brain contains more electrical connections than the electrical systems of every electronic device on the planet combined. I doubt self-aware computers will ever exist - and if they do, it will be long after I'm dead.
Fall of Empire
18-02-2008, 21:42
Well, given the intelligence of the average human, I'm surprised they haven't made a computer smarter than humans by now.

I wonder, will it also be capable of self-sustaining and self-replication?

No, it almost certainly wouldn't be. To reproduce, a society of robots would require factories and large-scale industrialization, which is a weakness. To be self-sustaining, they would require large numbers of power plants and the like. Robot society would be small and weak. It would be more realistic to have robots functioning within computers, keeping humans around to do the physical things.

And regardless, computers function by serial processing, as opposed to the parallel processing in the human brain. They will forever be "smarter" than us, but we have our consciousness, something they will forever lack. Or at least lack for a long time.
[NS]Rolling squid
18-02-2008, 21:43
Meh. AI doesn't bother me that much; all we have to do is remember to include an inhibitor chip, and we have no problems.
Sel Appa
18-02-2008, 21:45
According to Ray Kurzweil (http://en.wikipedia.org/wiki/Ray_Kurzweil), by 2029, there will be machines as smart as humans.
Link (http://news.bbc.co.uk/2/hi/americas/7248875.stm)
Link to nifty chart I found that's sort of related (http://en.wikipedia.org/wiki/Image:ParadigmShiftsFrr15Events.jpg)
Now, assuming Kurzweil is not completely nuts (quite an assumption, I grant you), do you think his prediction is accurate? If it is, what will the effects be? Discuss.

That chart is not even remotely related.

It is interesting though. I'm a bit scared, but I have faith that we won't have problems on the scale of The Matrix or I, Robot--the movies, that is.
Ruby City
18-02-2008, 21:57
We used to have humans sitting in front of switchboards to connect phone calls to the right destinations, but now we have machines that route communication much better than humans could. That change has happened in countless areas and will continue to happen in more, while humans move on to new jobs. Even if machines are as intelligent as humans by 2029 (and, going by Moore's law, I guess twice as intelligent by 2031), it won't change anything. We will still have machines doing more and more of what humans used to do, and humans moving on to new jobs. Eventually machines will be both better than humans are today at everything and cheaper to mass-produce, but by that time we will be good enough at genetics and cybernetics to upgrade humans to keep up with the machines.

I don't think there will be an us-vs-them race war between humans and machines like in the movies. If both humans and machines are productive, then both are useful, so coexistence is beneficial. Maybe there won't even be two separate races: maybe everyone will have cybernetic implants and be half machine, or maybe we will have partly DNA-based computers powered by photosynthesis that are half biological.
Reasonstanople
18-02-2008, 22:12
According to Ray Kurzweil (http://en.wikipedia.org/wiki/Ray_Kurzweil), by 2029, there will be machines as smart as humans.
Link (http://news.bbc.co.uk/2/hi/americas/7248875.stm)
Link to nifty chart I found that's sort of related (http://en.wikipedia.org/wiki/Image:ParadigmShiftsFrr15Events.jpg)
Now, assuming Kurzweil is not completely nuts (quite an assumption, I grant you), do you think his prediction is accurate? If it is, what will the effects be? Discuss.

Ray isn't nuts--don't forget, this is the guy who invented the Kurzweil synthesizer. He's a certifiable genius inventor. He also backs up his arguments for his view of the future rather logically. However, his claims are incredibly optimistic and extraordinary, so we shouldn't accept them outright. We can't dismiss him either, though, since there seems to be at least a chance that he is correct, or at least partially correct.
Damor
18-02-2008, 22:15
The assumption is that processing power = intelligence. The average insect brain is far faster and far more complex than a Cray supercomputer.

They're complex in different ways; I don't really think they're comparable. Insect brains can't play chess, and computers suck at navigating the real world. But the latter is more a problem of our inability to make a robust program than of insufficient computing power.
The last DARPA challenge didn't fare too badly, mind you. Some robots even made it to the finish this time.

It is thought that the human brain contains more electrical connections than the electrical systems of every electronic device on the planet combined.

If memory serves me right, the human brain has some 20 billion neurons with an average of ten thousand connections each, i.e. about 2x10^14 connections. A modern CPU has around 10 million transistors, which supposedly are all connected; that works out to the equivalent of roughly 20 million computers, which seems doable.
Also consider that the 'clock-speed' of the brain is maybe 1 ms, while for a computer it's 1 ns, a million times faster.
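
Taking the figures quoted above at face value (they are the post's own rough assumptions, not measurements), the arithmetic can be checked with a few lines of Python:

# Back-of-the-envelope comparison using the figures quoted above.
neurons = 20e9                # ~20 billion neurons (as quoted)
connections_per_neuron = 1e4  # ~10,000 connections each (as quoted)
transistors_per_cpu = 1e7     # ~10 million transistors per CPU (as quoted)

brain_connections = neurons * connections_per_neuron    # 2e14 connections
cpus_needed = brain_connections / transistors_per_cpu   # 2e7, i.e. ~20 million CPUs

# 'Clock speed': ~1 ms per neural event vs ~1 ns per transistor switch.
speed_ratio = 1e-3 / 1e-9                               # silicon is ~1,000,000x faster

print(f"{brain_connections:.0e} connections ~ {cpus_needed:.0e} CPUs")
print(f"each switching about {speed_ratio:.0e} times faster than a neuron")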
Cannot think of a name
18-02-2008, 22:23
How do we measure 'as smart as humans'? They already beat us at chess, a professor at UC Santa Cruz created a program that can 'compose' like Mozart. Asimo can navigate a crowded room. What gauge do we use to measure it? What is 'as smart as humans'?
[NS]Rolling squid
18-02-2008, 22:34
How do we measure 'as smart as humans'? They already beat us at chess, a professor at UC Santa Cruz created a program that can 'compose' like Mozart. Asimo can navigate a crowded room. What gauge do we use to measure it? What is 'as smart as humans'?

Turing test (http://en.wikipedia.org/wiki/Turing_test)
Cannot think of a name
18-02-2008, 22:53
Turing test (http://en.wikipedia.org/wiki/Turing_test)

I'm easily distracted; now I'm fascinated by the fact that there is an opera called 'The Turing Test'.
New Limacon
18-02-2008, 23:15
That chart is not even remotely related.

It's nifty, all the same. :)
Posi
18-02-2008, 23:21
Frankly, I think this guy is quite full of shit. Making something as smart as a human is quite difficult. Hell, even for something with a basic level of intelligence, we actually fake it by programming vast amounts of knowledge into it. He seems to think that by throwing more silicon at the problem, it will get more intelligent. It won't. It'll be able to retrieve its pre-programmed knowledge faster and store more of it, but neither of those will make it more intelligent than us.

In reality, we haven't made much progress on machine intelligence. It's not hard to see why. A computer does math really, really fast. If we can turn something into math, a computer can do it. An intelligent brain, however, is basically a pattern-recognition engine: it takes similar situations and looks for patterns. The trick is in putting patterns into immutable math.
New Limacon
18-02-2008, 23:41
In reality, we haven't made much progress on machine intelligence. It's not hard to see why. A computer does math really, really fast. If we can turn something into math, a computer can do it. An intelligent brain, however, is basically a pattern-recognition engine: it takes similar situations and looks for patterns. The trick is in putting patterns into immutable math.

That's basically what math is, though: the study of patterns. Some patterns, such as linear regression, are already quite easy for computers. Others, such as identifying a chair, are much more difficult, but not theoretically impossible.
What may end up happening is that it is possible to create smarter-than-human machines, and the only stumbling block is that we're not smart enough to create them.
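
To make the "linear regression is easy for a computer" point concrete, here is a minimal least-squares fit in plain Python; the data points are invented purely for illustration:

# Fit y = slope*x + intercept by ordinary least squares -- the kind of
# pattern a computer extracts trivially, unlike recognising a chair.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical inputs
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # hypothetical noisy outputs

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"y ~ {slope:.2f}x + {intercept:.2f}")   # roughly y ~ 2x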
Posi
18-02-2008, 23:46
How do we measure 'as smart as humans'? They already beat us at chess, a professor at UC Santa Cruz created a program that can 'compose' like Mozart. Asimo can navigate a crowded room. What gauge do we use to measure it? What is 'as smart as humans'?
It can do something without us having to give it exact step-by-step instructions on how to do so. We figure out how to navigate a crowded room on our own; Asimo has extensive programming dictating exactly how to do it. The same applies to playing chess or composing music. We can learn beyond our instincts; a computer cannot (a computer's instincts would be its programming).
Posi
19-02-2008, 00:00
That's basically what math is, though: the study of patterns. Some patterns, such as linear regression, are already quite easy for computers. Others, such as identifying a chair, are much more difficult, but not theoretically impossible.
What may end up happening is that it is possible to create smarter-than-human machines, and the only stumbling block is that we're not smart enough to create them.

Can it form the pattern for a chair or a linear regression itself, or did we have to give it that pattern ourselves?
Damor
19-02-2008, 00:17
We can learn beyond our instincts; a computer cannot (a computer's instincts would be its programming).

?
There are tons of machine learning algorithms; granted, nothing as advanced as what nature equipped us with, but then, nature had a few billion years of a head start.

Can it form the pattern for a chair or a linear regression itself, or did we have to give it that pattern ourselves?

You'd have to give it some examples in the right format, for example bitmap images, just as we need to see some examples before we can learn what a chair looks like. It'll then make an abstraction of the pattern "chair" by itself.
Of course there's the problem that in the case of things like chairs, you'd have to tell the algorithm "this is an example of a chair". Supervised learning isn't really that impressive (not that it's easy, mind you; I doubt a chair-recognition task would work well).
But a simple search/avoid task would be quite doable. Bumping into something bad results in negative feedback (pain); bumping into something good (e.g. a battery charger) gives positive feedback (pleasure). Based on just that, a computer with the right learning algorithm will learn to avoid the bad things in 'life' and seek out the good things, just as animals tend to do.
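
A minimal sketch of the search/avoid learner described above, driven only by reward feedback; the "world", its feedback values, and the learning rate are invented for illustration:

import random

# Hypothetical things the agent can bump into, with feedback it must discover.
world = {"battery_charger": +1.0, "wall": -1.0, "hot_stove": -2.0}

values = {thing: 0.0 for thing in world}   # learned estimates, initially neutral
alpha, epsilon = 0.2, 0.1                  # learning rate, exploration probability

for step in range(500):
    if random.random() < epsilon:
        choice = random.choice(list(world))      # occasionally explore at random
    else:
        choice = max(values, key=values.get)     # otherwise go for the best estimate
    feedback = world[choice]                     # 'pain' or 'pleasure'
    values[choice] += alpha * (feedback - values[choice])  # nudge estimate toward feedback

print(values)   # the battery charger ends up with the highest learned value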
Posi
19-02-2008, 00:18
?
There are tons of machine learning algorithms; granted, nothing as advanced as what nature equipped us with, but then, nature had a few billion years of a head start.
As I understand it, this thread is about some guy claiming we will have those algorithms approaching the complexity of our brains by 2029.

I just don't think that will happen.
Trotskylvania
19-02-2008, 00:42
The chances that this guy is full of shit are extremely high. It's a hazard that comes with the futurist field.

And I'm going to have to agree with Steve Wozniak on this one: "Never trust a computer you can't throw out a window".
Maineiacs
19-02-2008, 00:44
Computers as smart as humans? I'm sorry, Dave. I can't let you do that.


http://img149.imageshack.us/img149/9105/halye1.png (http://imageshack.us)
Conserative Morality
19-02-2008, 00:52
The question is... If a computer is that smart, could it reprogram itself? Like that game Darwinia? Just a thought.
Non Aligned States
19-02-2008, 03:01
Can it form the pattern for a chair or a linear regression itself, or did we have to give it that pattern ourselves?

I imagine what we would need is an initial, basic pattern/input-recognition algorithm with the ability to write data out for future reference and build on it, plus some sensors slapped on.

The answer, I suspect, isn't in programming in all the knowledge and information, but rather in programming a learning kernel and giving it external input (senses) and tasks.
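
A rough skeleton of that "learning kernel plus senses and tasks" idea; the class, method names, and toy policy below are purely hypothetical, sketched only to make the proposal concrete:

class LearningKernel:
    """Stores experience and adapts behaviour from feedback instead of
    shipping with task-specific knowledge baked in."""

    def __init__(self):
        self.memory = []   # accumulated (observation, action, feedback) triples

    def decide(self, observation):
        # Toy policy: reuse the best-rated past action for this observation, else explore.
        seen = [m for m in self.memory if m[0] == observation]
        if seen:
            return max(seen, key=lambda m: m[2])[1]
        return "explore"

    def learn(self, observation, action, feedback):
        self.memory.append((observation, action, feedback))

# External senses supply observations; the task supplies feedback.
kernel = LearningKernel()
obs = "obstacle_ahead"                    # hypothetical sensor reading
act = kernel.decide(obs)
kernel.learn(obs, act, feedback=-1.0)     # it hurt; remember that for next time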
The Scandinvans
19-02-2008, 03:21
Well, for something to develop human intelligence, it would have to be able to process many terabytes of memory and not be confined by basic programming. Otherwise, the machine will be very limited and only able to understand commands, not the motives behind them or the desired human results.
Barringtonia
19-02-2008, 03:40
I essentially agree with Posi and I think the best example is computer issues with handwriting programs.

A human can instantly read a large variety of handwriting, whereas a computer simply can't; there are simply too many variations to program.

An even simpler example is this:

Tkae tihs stenecne for emplxae, no cutomper can raed it - humans can.
Non Aligned States
19-02-2008, 03:42
Tkae tihs stenecne for emplxae, no cutomper can raed it - humans can.

Can what? Create gibberish?
New Limacon
19-02-2008, 03:59
Can it form the pattern for a chair or a linear regression itself, or did we have to give it that pattern ourselves?

Well, no, which is why so far no one is claiming computers are as intelligent as humans.
I agree that in 2029, computers will not be as intelligent as any human. But there is nothing mystical about the human brain, and I see no reason why it can't be replicated artificially. It isn't a question of if, but when. Of course, in the situation where "when" can be something like "between zero and one million years," the question may as well be "if."
VietnamSounds
19-02-2008, 04:08
The first speech I heard at college was by an alumnus who was convinced that robots will be as intelligent as humans in 20 years, and that, since the military funds so much of the research, the robots will want to kill us.

To fight this problem, he builds toy robots for a living.
Barringtonia
19-02-2008, 04:10
Can what? Create gibberish?

No, read gibberish.
Vetalia
19-02-2008, 04:11
28 years? Might be quite a bit earlier if the people working on it now keep going on their present path. The sheer potential of human-level, and beyond, AI is not to be underestimated. It may sound horribly clichéd, but that same potential can be used to destroy just as easily as it can be used to create.

The most important thing to realize is that barring existential disaster, it will happen.
Non Aligned States
19-02-2008, 04:26
No, read gibberish.

That was readable?
G3N13
19-02-2008, 04:31
That was readable?

Vrey mcuh so.
Kyronea
19-02-2008, 04:44
That was readable?

Er...yes. It said "take this sentence for example, no computer can read it." Easily readable, actually.
Non Aligned States
19-02-2008, 04:45
Vrey mcuh so.

?
Non Aligned States
19-02-2008, 04:45
Er...yes. It said "take this sentence for example, no computer can read it." Easily readable, actually.

Ah. I see. I shall make a note of that.
Posi
19-02-2008, 06:52
28 years? Might be quite a bit earlier if the people working on it now keep going on their present path. The sheer potential of human-level, and beyond, AI is not to be underestimated. It may sound horribly cliche, but that same potential can be used to destroy just as easily as it can be used to create.

The most important thing to realize is that barring existential disaster, it will happen.
21 years, but HAL was supposed to exist in 2001. This would make him 28 years late.
Ryadn
19-02-2008, 07:54
How do we measure 'as smart as humans'? They already beat us at chess, a professor at UC Santa Cruz created a program that can 'compose' like Mozart. Asimo can navigate a crowded room. What gauge do we use to measure it? What is 'as smart as humans'?

*throws up the sign of the banana slug*
Ryadn
19-02-2008, 07:59
Er...yes. It said "take this sentence for example, no computer can read it." Easily readable, actually.

Well, I guess that proves one thing--we've already created computers with the same decoding abilities as NAS! ;)
Anthil
19-02-2008, 10:55
Now, assuming Kurzweil is not completely nuts (quite an assumption, I grant you)...

Read the book, think again:
www.amazon.co.uk/Age-Spiritual-Machines-Intelligent-Machines/dp/1587991225
Not too well written, but the ideas are ok.
Damor
19-02-2008, 11:32
As I understand it, this thread is about some guy claiming we will have those algorithms approaching the complexity of our brains by 2029.

I just don't think that will happen.

In my professional opinion, I quite agree with you.

I essentially agree with Posi and I think the best example is computer issues with handwriting programs.

A human can instantly read a large variety of handwriting, whereas a computer simply can't; there are simply too many variations to program.

There is no reason why one should need to program in all the variations, though (even aside from the fact that it makes more sense to have a computer learn them). All those handwritings have something in common, something which makes them equivalent to us. If we can find that something and put it in a computer, it wouldn't need to learn the variations any more.

An even simpler example is this:

Tkae tihs stenecne for emplxae, no cutomper can raed it - humans can.

The question, though, is whether it is a problem in principle. Just because we haven't made a program that can do it now doesn't mean one can't be made.
The last example is very much a problem people haven't tried to solve with computers. And the biggest problem there is not that the words are scrambled; the problem is that computers can't understand sentences in the first place (yet). If they could do that, it would be a piece of cake to unscramble the words using the context. But they'd need an understanding of the world for that (i.e. to know what sentences make sense).
And that is also the major obstacle with handwriting recognition. Try reading cursive* handwriting in a language you're unfamiliar with; that is essentially the problem computers are being asked to solve.

*) Non-cursive handwriting recognition, and even online cursive recognition (where the computer knows how the pen moved), is fairly good, even without the computer understanding anything of what is written.
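
Incidentally, the scrambled-sentence example above turns out to be tractable without any real "understanding": because the first and last letters stay put, matching the sorted interior letters against a word list recovers the text, which is consistent with the point that the scrambling itself is not the hard part. A minimal sketch (the tiny word list stands in for a real dictionary):

# Unscramble words of the "Tkae tihs stenecne" kind: first and last letters
# are in place, only the interior letters are shuffled.
wordlist = ["take", "this", "sentence", "for", "example", "no",
            "computer", "can", "read", "it", "humans"]   # stand-in dictionary

def signature(word):
    w = word.lower()
    return (w[0], w[-1], "".join(sorted(w[1:-1]))) if len(w) > 2 else w

lookup = {signature(w): w for w in wordlist}

def unscramble(text):
    out = []
    for token in text.split():
        core = "".join(c for c in token if c.isalpha())
        out.append(lookup.get(signature(core), token))
    return " ".join(out)

print(unscramble("Tkae tihs stenecne for emplxae, no cutomper can raed it"))
# -> take this sentence for example no computer can read it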
Barringtonia
19-02-2008, 11:39
*snip*

Indeed - the two problems are related, which is why I used these two examples: they are at the heart of why AI is so difficult to achieve. They're not a question of recognition; they're a question of comprehension.
St Edmund
20-02-2008, 12:42
Well, I guess that proves one thing--we've already created computers with the same decoding abilities as NAS! ;)

Or NAS is a computer... ;)
Non Aligned States
20-02-2008, 13:16
Or NAS is a computer... ;)

I'm sorry Edmund, I can't let you come to that conclusion. *cycles airlock*
Sikun
20-02-2008, 13:47
I, for one, welcome our new machine overlords.

Seriously, though, I think AIs will happen, just not as fast as anyone's predicted. Or, if they do, they won't be like anything we're imagining now. Case in point: some 50 years ago, when people thought of future computers, they imagined big shiny things in big aseptic rooms, with lots and lots of blinkenlights. Proof? See "Logan's Run" or any other movie set "in the future". Certainly nobody envisioned iPods.
Longhaul
20-02-2008, 16:52
As I understand it, this thread is about some guy claiming we will have those algorithms approaching the complexity of our brains by 2029.

I just don't think that will happen.
A lot of people agree with you. I'm still not sure. I was sure, for a while, but a number of theories which seem to indicate that it is impossible (Roger Penrose's, for example) have given me reason to move back onto the fence and I'm now firmly in the 'maybe' zone.

It's a machine learning problem, at the root of which is the fact that we don't properly understand how we (people) learn, and are therefore unable to replicate the process in an artificial construct. Research, as ever, continues apace. I read an article (http://www.sciencedaily.com/releases/2008/01/080129215316.htm) a couple of weeks back suggesting that our learning process - for language, at least - is very much akin to the processes that are used for carrying out data mining on computers, i.e. it's just pattern-seeking and the forging of cognitive links from unsorted stimuli. If that's correct, we may be on the right track for building a truly aware AI.

If it's accepted that intelligence/consciousness/insert-label-here is a property that emerges from a complex system, then it is only a matter of time until an AI is created. Whether 'we' will have those necessary algorithms, rather than them being evolved by other, lesser computers that have been set the goal of building ever larger and more complex neural nets, is another question. I'm a little surprised that someone has stuck their neck out and put an ETA on it, but perhaps I shouldn't be - Kurzweil has never been a shrinking violet, after all.

Some people, of course, simply don't accept that such independent properties can emerge from a wholly artificial system, and think that any research in such a direction represents wasted effort. Time will tell.
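
As an illustration of the "pattern-seeking from unsorted stimuli" idea mentioned a few posts above, here is a minimal sketch of statistical word segmentation: boundaries are guessed wherever the transition between adjacent syllables is rare. The toy "language", the stream, and the 0.5 threshold are all invented for illustration:

import random
from collections import Counter

# A toy stream built from three hidden 'words'; the learner sees only syllables.
words = ["tu pi ro", "go la bu", "bi da ku"]
random.seed(0)
stream = " ".join(random.choice(words) for _ in range(200)).split()

pair_counts = Counter(zip(stream, stream[1:]))   # syllable -> next-syllable counts
first_counts = Counter(stream[:-1])

def transition_prob(a, b):
    return pair_counts[(a, b)] / first_counts[a]

# Guess a word boundary wherever the forward transition probability drops.
segmented, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if transition_prob(a, b) < 0.5:
        segmented.append("".join(current))
        current = []
    current.append(b)
segmented.append("".join(current))

print(segmented[:6])   # chunks like 'tupiro', 'golabu', 'bidaku' emerge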