We're one step closer to AI
Wilgrove
03-06-2007, 06:59
The event saw the organisation proudly showing off a 33-kilogramme effigy that can make facial expressions, react to its surroundings by blinking, and stand up with assistance, a set of skills and abilities that its maker claims allows CB2 (Child-Robot with Biometric Body) to emulate the physical abilities of a 1- or 2-year-old toddler.
http://www.wordpress.tokyotimes.org/?p=1591
If you watch the video, it really doesn't seem that impressive, but the robot is huge for the 1-2 year old it's supposed to be simulating.
The not-so-little fella’s freakishly real – and at the same time not real – features could quite possibly put people off having kids forever.
I've already reached that point.
Regressica
03-06-2007, 07:05
How is that AI?
How is that AI?
Robots have AI.
The Potato Factory
03-06-2007, 07:11
Real AI would be terrible. Imagine the video games!
"Player moves pawn to C1"
"Computer launches nuclear missiles towards Russia"
Regressica
03-06-2007, 07:13
Robots have AI.
Not necessarily.
And I can't watch the video for quota-cheapness reasons, but judging from the text of the OP, it has little to do with AI.
Not necessarily.
And I can't watch the video for quota-cheapness reasons, but judging from the text of the OP, it has little to do with AI.
Um, yeah. Its internal computer acts without direct commands.
Lacadaemon
03-06-2007, 07:23
Real AI would be terrible. Imagine the video games!
"Player moves pawn to C1"
"Computer launches nuclear missiles towards Russia"
Yah, well you'd deserve that for such a crap impossible move.
Anyway, watched the video, and it seems that yet again we are no closer to real AI. (Though it may be some form of pseudo baby which will assuage the Japanese chicks of their need to reproduce, thus allowing the rest of us to examine their naughty parts with impunity, so I can't say it is all bad).
Dryks Legacy
03-06-2007, 08:13
Um, yeah. Its internal computer acts without direct commands.
Come back when someone's built a dynamic system; your static in-out with the processor remaining unchanged is old news. :rolleyes:
South Lorenya
03-06-2007, 08:26
Yah, well you'd deserve that for such a crap impossible move.
Actually it's a fully legal move...
...if you're black and have a pawn on c2. You will, however, need to state what you're promoting it into.
http://www.wordpress.tokyotimes.org/?p=1591
If you watch the video, it really doesn't seem that impressive, but the robot is huge for the 1-2 year old it's supposed to be simulating.
I've already reached that point.
Well, it's interactive. That's something. I'm going to be simultaneously really happy and really sad when full A.I. on the sentience level of humans comes into true being. Really happy because it means a new sentient species, but really sad because it'll mean many years of fighting for A.I. rights.
Real AI would be terrible. Imagine the video games!
"Player moves pawn to C1"
"Computer launches nuclear missiles towards Russia"
Only if it was Company of Heroes! :D
Okay, seriously though, that's not something we should be afraid of. Any A.I. even close to sentience is going to be programmed with morality and emotions so they don't decide things through pure logic.
...actually, if anything, when one thinks about it, morality and emotions are a required part of sentience.
Well, shrink it by half (or maybe by two-thirds) and it could pass as a baby (I'll admit, it's still cute though)...however, it still seems like it's reacting to a set of programming (for example, the presenter touched CB2 on the cheek and CB2's head moved). I have doubts we'll ever get to *full* AI, simply because a computer can't ever get that smart. It's still got to be programmed, and while you can program it to deal with every kind of situation imaginable, it can't get close to our brains: we can at least comprehend and manage something we have never seen before; a computer can't, because it's not programmed into its system.
Okay, seriously though, that's not something we should be afraid of. Any A.I. even close to sentience is going to be programmed with morality and emotions so they don't decide things through pure logic.
...actually, if anything, when one thinks about it, morality and emotions are a required part of sentience.
Exactly; moral awareness and conscience are two of the big, big must-haves in order to create a true self-aware AI and to mitigate the risks that would come of such a development. In fact, I consider that the truest test of personhood there is.
Now, you all know me and how much I love technology, but I am completely unwavering in my belief that human-equivalent (or better, if possible) morals and a conscience are absolutely, unequivocally necessary in any artificial intelligence that is developed in the (fairly near) future. There is no option; it has to be done before we allow AI to control critical decision-making, and it is a major prerequisite before we can grant them human-level rights and responsibilities. The risks of failing to do so would be greater than those of anything we have ever done in our entire history, including the development of weapons of mass destruction.
We as a society cannot tolerate the risk of allowing the emergence of what are really superintelligent sociopaths, and we can't allow them to be capable of making existential decisions about themselves or human beings without the human-level moral awareness that is needed to make such a decision.
Lunatic Goofballs
03-06-2007, 10:11
I've seen these movies. I know how they end.
If anybody needs me, I'll be cowering in an underground bunker. :eek:
Chumblywumbly
03-06-2007, 10:14
The event saw the organisation proudly showing off a 33-kilogramme effigy that can make facial expressions, react to its surroundings by blinking, and stand up with assistance, a set of skills and abilities that its maker claims allows CB2 (Child-Robot with Biometric Body) to emulate the physical abilities of a 1- or 2-year-old toddler.
“Emulate” being the operative word.
I’m yet to be convinced that true AI is possible.
The semantics-syntax divide is a large one.
I’m yet to be convinced that true AI is possible.
I personally believe true AI will require an artificial brain rather than something that is programmed; I don't rule out the potential for programmed AI, but it is far more likely (and in many ways easier) to build an artificial brain than to try and emulate intelligence through programming.
I mean, the Blue Brain Project has already shown patterns associated with brain activity in real rats that were not predicted or produced by the programmers of the simulation itself; this strongly suggests to me that a functioning artificial brain will produce artificial consciousness.
Big Jim P
03-06-2007, 10:19
I've seen these movies. I know how they end.
If anybody needs me, I'll be cowering in an underground bunker. :eek:
Dude, I'm with you. Considering that I've seen what natural intelligence has done to the world, I don't want to be around to witness how bad artificial intelligence can screw it up, especially considering the species that is doing the design work.
Chumblywumbly
03-06-2007, 10:23
<snip>
Surely an artificial brain would be programmed?
Or do you mean an artificial organic brain?
Dude, I’m with you. Considering that I’ve seen what natural intelligence has done to the world, I don’t want to be around to witness how bad artificial intelligence can screw it up, especially considering the species that is doing the design work.
*initiates Skynet*
Lunatic Goofballs
03-06-2007, 10:24
Dude, I'm with you. Considering that I've seen what natural intelligence has done to the world, I don't want to be around to witness how bad artificial intelligence can screw it up, especially considering the species that is doing the design work.
Exactly. God made us (if you believe in such a thing) and look how we turned out. Now we're going to make sentient life? I just want to know when so I can evacuate the planet with the dolphins, mice and anything else smart enough not to stick around. *nod*
If there is a God or not, we're not qualified to play Him. ;)
Big Jim P
03-06-2007, 10:27
Exactly. God made us (if you believe in such a thing) and look how we turned out. Now we're going to make sentient life? I just want to know when so I can evacuate the planet with the dolphins, mice and anything else smart enough not to stick around. *nod*
If there is a God or not, we're not qualified to play Him. ;)
But I am.:D:cool:
Surely an artificial brain would be programmed?
Or do you mean an artificial organic brain?
Artificial organic brain. In other words, a simulation of an organic brain, using either a computer model or physically artificial neurons. The main advantage to this would be that advances in human neuroscience could be applied to the computer model as well, continually improving the model and enhancing its capabilities.
Artificial neurons, of course, could also be used in humans, especially to reverse the effects of degenerative diseases or losses from the aging process. That would be a key advantage of that model over computer simulation.
Lunatic Goofballs
03-06-2007, 10:33
But I am.:D:cool:
That's not what she said. ;)
Chumblywumbly
03-06-2007, 10:34
Artificial organic brain. In other words, a simulation of an organic brain, using either a computer model or physically artificial neurons
Kewel! Quite GITS-esque.
Though we’d need to fully map the human brain and all its functions and processes before this was possible.
Quite a formidable, if not impossible, task; albeit an exciting one.
But a simulation of an organic brain still leaves open the question of whether such a construct would be capable of semantic thought.
Rubiconic Crossings
03-06-2007, 10:36
All I remember about AI was some neural network stuff I was taught in Uni.
Back then I thought it far too complicated to have a strong commercial presence. Well, in some ways I was right (it is not ubiquitous) and in others wrong ('fuzzy' logic in washing machines).
AI though is not only about the straight tech. It is also (as mentioned above) about moral structure.
I'm with the Loony Goofball and the Satanist on this one.
That's not what she said. ;)
That's not what they said. ;)
Kewel!
Though we’d need to fully map the human brain and all its functions and processes before this was possible.
Quite a formidable, if not impossible, task; albeit an exciting one.
It's tough, time-consuming and very expensive, but it is possible. Advances in neuroimaging have been occurring at a fast pace, enabling scientists to image the brain at higher levels of resolution than ever before; this in turn enables them to create more accurate models of the brain, which are then put into the simulations.
Nanotechnology is also providing a big boost to the field, since nanodevices will likely be used to map the brain from the inside out, giving a far better view of the way things work than we can obtain from imaging; another advantage is that they can map it in real time, something conventional imaging technologies still cannot do. Other good news comes in the form of improvements in supercomputing technology, which keep reducing the cost per neuron of simulation, enabling bigger and more accurate models for the same computing cost.
The Blue Brain Project is building the brain from the bottom up, adding more and more parts as the simulation expands; once they've got the neuron level simulated, they will begin to work on the genetic and molecular levels to refine the simulation further.
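To give a rough flavour of what "simulating at the neuron level" means, here's a toy Python sketch of a single leaky integrate-and-fire cell (all constants are made up; the actual Blue Brain models are enormously more detailed, down to ion channels and full cell morphology):

# One leaky integrate-and-fire neuron: membrane voltage leaks toward
# rest, integrates injected current, and fires when it crosses threshold.
dt, tau = 1.0, 20.0                               # timestep (ms), time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # voltages (mV)

v = v_rest
for t in range(100):
    current = 20.0 if 10 <= t < 60 else 0.0   # made-up input current
    v += dt * (-(v - v_rest) + current) / tau  # leaky integration
    if v >= v_thresh:
        print(f"spike at t={t} ms")            # the neuron fires...
        v = v_reset                            # ...and resets

Scale that loop up to thousands of far more detailed cells wired together and you have the general shape of what those simulations are doing.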
Chumblywumbly
03-06-2007, 10:50
<snip>
All very, very interesting, don’t get me wrong.
But all of what you have mentioned are syntactical simulations of a brain; would such a thing be capable of semantic thought?
It might be able to simulate the exact brain state of the human emotion of love, but would it truly know the feeling of being in love?
Rubiconic Crossings
03-06-2007, 10:54
All very, very interesting, don’t get me wrong.
But all of what you have mentioned are syntactical simulations of a brain; would such a thing be capable of semantic thought?
It might be able to simulate the exact brain state of the human emotion of love, but would it truly know the feeling of being in love?
I would think it's highly difficult, given that we are not only talking about the responses of the brain in relation to the mind; you also need to take into account the other physiological interactions. Like how do you recreate that 'gut feeling'? Or other responses that are seemingly instinctive.
All very, very interesting, don’t get me wrong.
But all of what you have mentioned are syntactical simulations of a brain; would such a thing be capable of semantic thought?
It might be able to simulate the exact brain state of the human emotion of love, but would it truly know the feeling of being in love?
Well, in reality none of us can know that for sure, even between human beings. The qualia of human emotion are part of our subjective experience and can't really be explained or objectively measured; we can measure the things that produce them, but a diagram of neurochemicals or brainwaves says nothing about the actual experience itself.
The only way you could know is through their actions, in much the same way we know whether other people are in love with us. We don't know if other people experience things the same way we do; we can only tell through their actions.
Really, as I think about it more deeply, the only way you could really know subjective experience is if you were to somehow merge the conscious mind of a human being with another human being, or in this case a machine, in order to experience their subjective mind as one with your own. That is assuming, of course, that such a thing is possible; it may be that a neural interface directly between two brains would allow mental communication, but nothing more.
Chumblywumbly
03-06-2007, 11:01
I would think it’s highly difficult, given that we are not only talking about the responses of the brain in relation to the mind; you also need to take into account the other physiological interactions. Like how do you recreate that ‘gut feeling’? Or other responses that are seemingly instinctive.
Or the semantic, “gawd I fancy the pants off of you”, feeling, as opposed to the syntactical:
IF eyecontact=15 secs+
AND smilesignal received 5 times+
AND bodycontact=10 secs+
AND lipstouch=yes
THEN initiate love brainstate
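Or, in runnable form - a toy Python sketch where every signal name and threshold is made up:

# The "syntactical" account of love: a hard-coded rule over observed
# signals. Nothing here knows anything; it just matches a pattern.
def love_brainstate(eye_contact_secs, smile_signals,
                    body_contact_secs, lips_touched):
    return (eye_contact_secs >= 15
            and smile_signals >= 5
            and body_contact_secs >= 10
            and lips_touched)

print(love_brainstate(20, 6, 12, True))   # True: "in love"
print(love_brainstate(20, 6, 12, False))  # False: no kiss, no love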
Well, in reality none of us can know that for sure, even between human beings. The qualia of human emotion are part of our subjective experience and can't really be explained or objectively measured; we can measure the things that produce them, but a diagram of neurochemicals or brainwaves says nothing about the actual experience itself.
Yes, we cannot know if our subjective experience is the same as others', but IMO we can know that we are having an experience of love, and that that experience is more than just a response to stimuli.
Love is too irrational to be simply a behavioural mechanism.
Imperial isa
03-06-2007, 11:05
*initiates Skynet*
Better start stockpiling weapons.
Ruby City
03-06-2007, 11:36
A robot doll isn't a big step towards AI.
As Vetalia said, we can't judge whether a creature has feelings or not. To an outside observer, human feelings seem to be nothing more than electrochemical signals, but we still treat other humans as if they do have feelings because they behave like they do. We'll have to go by their behavior when guessing whether an AI has feelings.
AI can't be trusted; we know that from experience with billions of intelligent human beings. If a computer program gets imagination and creativity, it will come up with ways to break the rules, just like we humans do with our imagination. In fact, if it can't break the rules, then I'd say it is not truly intelligent.
Our civilization has long experience of keeping humans under control but no experience of keeping intelligent computer programs under control. We can give the AI a bank account and tell it that if it wants something it must pay for it. Banks have plenty of security against physical robberies, but have they made sure an AI can't take over their computers once one is invented?
Nobel Hobos
03-06-2007, 11:46
I suspect that consciousness is learned; that it is a method of thinking imparted by being taught by other conscious beings.
Animals, I find, have a degree of consciousness. People, I find, vary in their degree of consciousness (for instance: the ability to respond to a wide range of different stimuli, to respond in more varied ways, and to make more 'realizations' and responses in a given time). Of course, I don't know that others have differing degrees of consciousness; I can judge only by their external responses.
Big Jim P
03-06-2007, 15:24
That's not what she said. ;)
That's not what they said. ;)
Perhaps, but just remember: When you leave them speechless, then you don't have to hear them nag.:D
Lunatic Goofballs
03-06-2007, 15:28
Perhaps, but just remember: When you leave them speechless, then you don't have to hear them nag.:D
YAY! :D
Bodies Without Organs
03-06-2007, 15:31
It might be able to simulate the exact brain state of the human emotion of love, but would it truly know the feeling of being in love?
What is it in your brain that 'knows' the feeling of being in love?
About the creepy thing, I've read that people have the easiest time interacting with extremely inhuman or realistic robots. Semi-human robots creep people out.
I've seen these movies. I know how they end.
If anybody needs me, I'll be cowering in an underground bunker. :eek:
This robot will give the phrase "Baby Killer" a WAY different meaning.
Deus Malum
03-06-2007, 17:43
Well, shrink it by half (or maybe by two-thirds) and it could pass as a baby (I'll admit, it's still cute though)...however, it still seems like it's reacting to a set of programming (for example, the presenter touched CB2 on the cheek and CB2's head moved). I have doubts we'll ever get to *full* AI, simply because a computer can't ever get that smart. It's still got to be programmed, and while you can program it to deal with every kind of situation imaginable, it can't get close to our brains: we can at least comprehend and manage something we have never seen before; a computer can't, because it's not programmed into its system.
That's not entirely true. With the right neural-network algorithm you could get an AI that not only reacted to a set of programmed stimuli but actually learned from things it hadn't been programmed for.
A very crude example of this is the 20Q neural network, which is programmed to not only be able to identify words already in the neural net, but also to add new objects to its network through interaction with its environment (a user).
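Roughly the idea, as a toy Python sketch (the questions, objects and growth rule are invented for illustration; this is not 20Q's actual code):

# A guessing game that grows its own decision tree: when it guesses
# wrong, it asks the user for the right answer and a distinguishing
# question, then splices a new node into the tree.
class Node:
    def __init__(self, question=None, guess=None):
        self.question, self.guess = question, guess
        self.yes, self.no = None, None

def ask(prompt):
    return input(prompt + " (y/n) ").strip().lower().startswith("y")

def play(node):
    if node.guess is not None:            # reached a leaf: make the guess
        if ask(f"Is it a {node.guess}?"):
            print("Got it!")
            return
        new_obj = input("What was it? ")
        new_q = input(f"Give me a question that is true for {new_obj}: ")
        old_guess = node.guess                     # learn: turn the leaf
        node.question, node.guess = new_q, None    # into a question node
        node.yes, node.no = Node(guess=new_obj), Node(guess=old_guess)
        return
    play(node.yes if ask(node.question) else node.no)

root = Node(question="Does it live in water?")
root.yes, root.no = Node(guess="fish"), Node(guess="dog")
play(root)   # each round can add a new object to the network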
We already have artificial intelligence.
It's just not very intelligent yet.
Greater Trostia
03-06-2007, 18:20
I for one welcome our robot overlords.
Big Jim P
03-06-2007, 19:59
We already have artificial intelligence.
It's just not very intelligent yet.
"We already have natural intelligence.
It's just not very intelligent yet."
There, I fixed it for you.
"We already have natural intelligence.
It's just not very intelligent yet."
There, I fixed it for you.
Yet implies that it might become intelligent eventually.......
Divine Imaginary Fluff
03-06-2007, 21:10
Okay, seriously though, that's not something we should be afraid of. Any A.I. even close to sentience is going to be programmed with morality and emotions so they don't decide things through pure logic.
Great; you want to taint the first pure intelligence with human stupidity? *insert mad computer scientist grin here*
That said, emotions and a "conscience" - both of which I agree would make good parts of a sentient AI - are not mutually exclusive with perfect rationality. Emotions "clouding judgement" is a flaw - an error/bug - in most people. It is not necessary, and can be avoided - even human people are capable of doing so with sufficient training.
To avoid an AI acting sociopathically, all that would be needed is to set the goals according to which it operates. Though emotions could of course be added on top of that.
We as a society cannot tolerate the risk of allowing the emergence of what are really superintelligent sociopaths, and we can't allow them to be capable of making existential decisions about themselves or human beings without the human-level moral awareness that is needed to make such a decision.
I don't support the idea of "human-level" "moral awareness", though. Why simulate something ridiculously flawed and unreliable when you can build something better? If you model AI behavior according to human nature, expect it to be just as twisted, stupid, buggy, irrational, untrustworthy and suboptimal as that of humanity. Look at our history and the way we work: no rational being of significant intelligence with a choice in the matter would trust us. Nor should they, I'd say. Rather than focusing single-mindedly on making computers more human-like, we should focus just as much on making humans more computer-like. We need it, or we are pretty much screwed. And so should we be, lest we improve.
...actually, if anything, when one thinks about it, morality and emotions are a required part of sentience.
How so? Emotions are not all there is to experience and awareness; they are just one form of it, and even when removed from a human person, all of the rest remains. I can say that from personal experience, having previously become completely emotionally numb for a period of time. As for morality, are you claiming amoral people are non-sentient?
I have doubts we'll ever get to *full* AI, simply because a computer can't ever get that smart. It's still got to be programmed, and while you can program it to deal with every kind of situation imaginable, it can't get close to our brains: we can at least comprehend and manage something we have never seen before; a computer can't, because it's not programmed into its system.
Learning AIs are far from new, though, relatively speaking, they are still very primitive and limited in capability. The only thing needed is more raw processing power in order to get a sufficiently sophisticated system running speedily. (It should be perfectly possible to make a very advanced and intelligent AI with today's computer technology, but it'd run ridiculously slowly, to a point far, far beyond complete uselessness.)
I personally believe true AI will require an artificial brain rather than something that is programmed.
Artificial brains (the neural network systems in themselves) are something programmed, in the same way that a brain artificially put together using real neurons would be constructed. What you mean by "something that is programmed" should rather be described as "something algorithmic".
Though I suppose you could create a wholly hardware-implemented system, just like you could hardwire complex computer programs of today. Neither would make much sense, though, and unless we are dealing with something paranormal, the abstraction level of an artificial brain implementation wouldn't make a difference with regard to sentience.
Hydesland
03-06-2007, 21:14
Emulation =/= AI
Colonel Krapp
03-06-2007, 21:27
NS is the new AI and we are all its slaves!!!!
Deus Malum
03-06-2007, 21:42
NS is the new AI and we are all its slaves!!!!
I for one welcome our mod overlords......wait. :eek:
I for one welcome our mod overlords......wait. :eek:
http://mods.willbedefeated.com/
>.>
<.<
The Loyal Opposition
03-06-2007, 23:34
I’m yet to be convinced that true AI is possible.
Of course it's possible. The operation of your own brain basically boils down to electro-chemical reactions. It is fundamentally a physical machine that operates according to natural processes. Recreating these processes in man-made applications is simply a matter of understanding the physical machine.
AI-skeptics simply hang onto obtuse notions of the "soul" or other religious ideas, clinging to the antiquated belief that homo sapiens sapiens is somehow special or somehow not natural. It's silly.
That's not entirely true. With the right neural-network algorithm you could get an AI that not only reacted to a set of programmed stimuli but actually learned from things it hadn't been programmed for.
A very crude example of this is the 20Q neural network, which is programmed to not only be able to identify words already in the neural net, but also to add new objects to its network through interaction with its environment (a user).
20Q simply involves mathematics - very complex mathematics, but mathematics. It is programmed so that Answer A leads to Answer B, which leads to a Guess. It is also designed to accept user input, so that if it makes an incorrect guess, it logs it into its memory and uses that information to compute future guesses. It appears to be learning when in fact the input is just adding to its code. It's not like it's "experiencing" something it's learning about (like, say, visiting a place to learn more about it) - it's just taking someone else's word that what they are saying about a particular entity is in fact true. I could, for example, state in my answers that Cuba is in the eastern hemisphere and is the capital of Nepal. Of course, that's not true, but if you put that into 20Q, it'll think it's correct. 20Q doesn't challenge the user on their answer - it just takes their word and "updates" its coding accordingly, even if that coding is extremely wrong.
I know 20Q is primitive, but it's far from the framework of artificial intelligence. The way it is set up, for 20Q to be really intelligent it would have to "stop" user input when it feels it has learned enough about a particular object to comprehend it. It also has to be able to evaluate a statement contradictory to what it may have been taught before (such as "Cuba is the capital of Nepal") and come to a conclusion about whether or not to change what it already knows. It cannot simply rely on a user to change that for it - and that just isn't possible. A computer can't arbitrarily decide when it has "learned" (i.e., comprehended) something, and we can - thus, it cannot be deemed "intelligent".
Learning AIs are far from new, though, relatively speaking, they are still very primitive and limited in capability. The only thing needed is more raw processing power in order to get a sufficiently sophisticated system running speedily. (It should be perfectly possible to make a very advanced and intelligent AI with today's computer technology, but it'd run ridiculously slowly, to a point far, far beyond complete uselessness.)
Processing power isn't the question here - it's the ability to evaluate independently, and that involves deciphering abstract thoughts. A computer can compute a million instructions per second, but if it can't understand the ramifications of those instructions (beyond producing a specific result) then it's really not learning anything. I covered some of this in the 20Q point, but to reiterate: all a computer can do is understand that Command A leads to Action B, which produces a result - it cannot, say, evaluate Command A and judge for itself whether or not that command would be a worthwhile one to carry out.
To illustrate: say there's an android and me, and we're both going to Vancouver. We both have no limitations on what we can do to get there - we can fly, take a train, walk, drive, etc. We're both commanded to walk. I, however, do not think walking is the best option, so I fly. The android's programming says it has to walk, so it walks, even if it knows about flying. You could program the android to "know" (by either inputting that information into its programming or by giving it the apparatus to collect that information) that flying would mean a quicker arrival in Vancouver than walking, but it's still going on what its master tells it to do - it cannot arrive at that conclusion by itself. Furthermore, we also have to deal with the concept of "the better option" - because "better" is a value judgement (often arrived at independently of others), it's different for different people and thus goes beyond the simple X+Y=Z thinking of a computer. A computer cannot decide on its own "what is better" - it requires someone to tell it that, because then it can compute the necessary equations. In the case of travelling to Vancouver, the android is incapable of "knowing" that flight is "better" than walking unless it's specifically told that "arriving in the city as fast as you can" is the "better" option. I can come to that decision on my own because I'm able to comprehend the idea of "better" and translate it to my current objective - a robot, unless it's told what to look for, will not arrive at that conclusion.
Now, I did say before that I believe we can get close to A.I. - it's possible to make a massive computer program that can detail every entity known to humankind and make a robot consider each entity before it arrives at a decision, but once it finds something it doesn't know, it's stuck. I'd be able to comprehend something new and translate that into a requisite action, but unless a robot's programming covers what it's supposed to do, there's nothing the robot will do.
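To put the Vancouver example in code - a toy Python sketch with made-up options and numbers - the machine can rank choices, but only against an objective someone hands it:

# The android can pick the "best" option, but "best" has to be given
# to it as an objective function; it cannot originate one itself.
options = {
    "walk":  {"hours": 400, "cost": 0},
    "train": {"hours": 48,  "cost": 120},
    "fly":   {"hours": 5,   "cost": 450},
}

def best(objective):
    return max(options, key=lambda name: objective(options[name]))

print(best(lambda o: -o["hours"]))  # told speed matters: "fly"
print(best(lambda o: -o["cost"]))   # told cost matters: "walk"
# With no objective supplied, best() has nothing to maximise at all.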
Barringtonia
06-06-2007, 10:22
Now, I did say before that I believe we can get close to A.I. - it's possible to make a massive computer program that can detail every entity known to humankind and make a robot consider each entity before it arrives at a decision, but once it finds something it doesn't know, it's stuck. I'd be able to comprehend something new and translate that into a requisite action, but unless a robot's programming covers what it's supposed to do, there's nothing the robot will do.
Disregarding the rest of your post, I think the above is a rather old way of thinking about A.I.
The brain doesn't work like this; it can use a lot less processing power by using means similar to the 20Q program - that is, taking previous experience (or data input) and then making a prediction.
If you tell a child that Cuba is the capital of Nepal, it has no way of telling whether that is correct or not until it's had enough data input to override that fact. Whether it's reading, going there or having a more trusted source, it's all data input.
So what you need is a lot of memory space, though not as much as you might think, and the means to take in multiple inputs as opposed to a lot of processing power.
Emotions are a reward/punishment system for our predictions, on which we base our actions.
So to create A.I., which I think we're getting closer to, we need a fairly complex algorithm to process input and sort it correctly into memory. It also needs to cross-reference memory as much as possible and have a reward/punishment system in order to accurately quantify that information.
The cross-referencing allows it to make predictions from data that is only partially similar. I don't need separate data to know that if a bicycle hits me and it hurts, then any fast, metallic object will hurt me. It cuts down enormously on the processing power I need to make a prediction.
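As a toy Python sketch of that scheme (the features, weights and update rule are all invented for illustration):

# Store experiences, predict outcomes for new input by similarity to
# memory, and let a reward/punishment signal reweight what's stored.
memory = []   # entries of [features, outcome, weight]

def similarity(a, b):
    return len(a & b) / max(len(a | b), 1)   # shared / total features

def remember(features, outcome):
    memory.append([features, outcome, 1.0])

def predict(features):
    scores = {}
    for feats, outcome, weight in memory:
        s = similarity(features, feats) * weight
        scores[outcome] = scores.get(outcome, 0.0) + s
    return max(scores, key=scores.get) if scores else None

def reinforce(features, outcome, reward):
    # reward/punishment: reweight the memories behind a prediction
    for entry in memory:
        if entry[1] == outcome and similarity(features, entry[0]) >= 0.5:
            entry[2] *= 1.5 if reward else 0.5

remember({"fast", "metallic", "bicycle"}, "hurts")
# No separate "car" memory needed; the overlap carries the prediction.
print(predict({"fast", "metallic", "car"}))   # -> hurts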
That's all for now.
The Infinite Dunes
06-06-2007, 10:26
That's got nothing to do with AI; it's just robotics.
A potentially huge recent breakthrough for AI was this - http://inventorspot.com/robot_demonstrates_self_awareness
The robot was supposedly able to demonstrate self-awareness by recognising its own reflection - something that very few animals are able to do. I haven't read up fully on it, but it does sound amazing. However, there are critics who have described it as a simple parlour trick rather than any sort of AI. But then that all depends on how you define consciousness and... well, it's all very fascinating. :)
James_xenoland
06-06-2007, 10:38
Real AI would be terrible. Imagine the video games!
"Player moves pawn to C1"
"Computer launches nuclear missiles towards Russia"
Win! lol
The robot was supposedly able to demonstrate self-awareness by recognising its own reflection - something that very few animals are able to do. I haven't read up fully on it, but it does sound amazing. However, there are critics who have described it as a simple parlour trick rather than any sort of AI. But then that all depends on how you define consciousness and... well, it's all very fascinating. :)
Even if it is a trick, being able to simulate the appearance of consciousness is only a stone's throw away from actually being conscious... frankly, it's a good sign either way.