NationStates Jolt Archive


Net-warriors vs. IRL soldiers

Jhahannam
27-01-2008, 12:22
I...am a noob. I know this. But I'm wondering...

Will there be a time, whether now, soon, or later, when a nation/corp/religion/whatever will have its potency measured by the number of skilled computer warfare experts it can field as much as, or more than, by the number of nukes or conventional forces it can muster?

If information and/or the control of semi-automated machines becomes 51% critical, will Gibson style "Netrunners" be the determining soldiers of future conflict?

Will the first group to develop an artificial super-intelligence whose meta-programming and algorithm skills are orders of magnitude superior to humans be the "last winner", or maybe the first permanent loser?

Should I learn more about computers than a sneeze of C++ before I ask questions like this?
The Alma Mater
27-01-2008, 12:26
Does this include the capability to take over control systems?
As in making a country fire its missiles at itself?
And should we include Skynet? ;)
Dododecapod
27-01-2008, 12:29
It's not a bad question. And the answer is not as simple as it may at first appear.

It's not a case of either/or. Netwarriors ("Information Technology Specialists") will almost certainly have their place in future conflicts. If you can hack your opponent's datanets in real time, you can screw with his coordination, make him bobble his maneuvers, maybe even do real damage like redirecting artillery strikes onto his own troops.

But this won't replace conventional troops, only augment them. Even as air power augmented ground war, so info-power will augment all the other branches in sea, air, land and space.
Jhahannam
27-01-2008, 12:36
It's not a bad question. And the answer is not as simple as it may at first appear.

It's not a case of either/or. Netwarriors ("Information Technology Specialists") will almost certainly have their place in future conflicts. If you can hack your opponent's datanets in real time, you can screw with his coordination, make him bobble his maneuvers, maybe even do real damage like redirecting artillery strikes onto his own troops.

But this won't replace conventional troops, only augment them. Even as air power augmented ground war, so info-power will augment all the other branches in sea, air, land and space.

Sounds reasonable. Will the logistical differences between maintaining a "hacker squad" or whatever vs. maintaining an air force change the comparative footing between "big" and "little" countries?

If some kind of gifted dude/chick is born in Belgium who can lay waste to enemy systems, banking, media, et cetera, and he/she grows up a nationalist (are hackers less likely to have such orientations?), does Belgium suddenly become a big player?

I think you make a compelling point that cyberforces will augment conventional forces; it would be interesting to see how they are organized.

For instance, does the Air Force have hacker squadrons that are trained differently from NSA people, or will there ever be a US (or wherever you're from) CyberCorps?

The Few, The Proud, The Leet
Jhahannam
27-01-2008, 12:37
Does this include the capability to take over control systems?
As in making a country fire its missiles at itself?
And should we include Skynet? ;)

Heehee...we build Skynet, and all it wants to do is download porn and play Go online, posing as 18/F/NY
Vetalia
27-01-2008, 12:57
Regarding Skynet (after all, the philosophy of AI is one of my major personal interests)

I've always felt the best way to protect against Skynet would be to preempt such an event by intentionally and gradually developing a comparably powerful sentience imbued with a sense of ethics and respect for human life (and vice versa); that way, in the event that an AI goes rampant and tries to take over key systems or kill its creators, there is already another sentient intelligence or group of intelligences more than capable of ensuring that this one rogue doesn't do something terrible. Basically, they all police each other, we police them, and they police us. Every watcher is itself watched.

This is why it also makes sense to delegate various aspects of the same system to different AIs; in the event that something does happen, it will be far easier to isolate and control the problem before a disaster strikes. After all, we don't trust a single person with control over an entire arsenal, so what logic is there in giving a single AI control over the entire system? It's just as easy for one person with too much power to initiate a catastrophe as it is for a rogue AI to do the same. You never put all your eggs in one basket, regardless of who's carrying it.

I just read an article about robots evolving the same traits of altruism (and lying) that exist in biological organisms over a fairly short span of time, which provides key insights into this kind of developmental trend and our ability to create an ethical environment that is as hard-wired into our AIs as it is in us. Given that many armed forces will be pursuing AI as a means of increasing combat effectiveness, taking preemptive steps will save us a lot of trouble down the line. Even existential trouble.
Eofaerwic
27-01-2008, 12:59
Heehee...we build Skynet, and all it wants to do is download porn and play Go online, posing as 18/F/NY

*cough* http://news.bbc.co.uk/1/hi/sci/tech/7095344.stm
Vetalia
27-01-2008, 13:02
*cough* http://news.bbc.co.uk/1/hi/sci/tech/7095344.stm

I bet the project developers get a laugh out of the name. Especially since it means you could mention Skynet on your resume.
Jhahannam
27-01-2008, 13:20
Regarding Skynet (after all, the philosophy of AI is one of my major personal interests)

I've always felt the best way to protect against Skynet would be to preempt such an event by intentionally and gradually developing a comparably powerful sentience imbued with a sense of ethics and respect for human life (and vice versa); that way, in the event that an AI goes rampant and tries to take over key systems or kill its creators, there is already another sentient intelligence or group of intelligences more than capable of ensuring that this one rogue doesn't do something terrible. Basically, they all police each other, we police them, and they police us. Every watcher is itself watched.

This is why it also makes sense to delegate various aspects of the same system to different AIs; in the event that something does happen, it will be far easier to isolate and control the problem before a disaster strikes. After all, we don't trust a single person with control over an entire arsenal, so what logic is there in giving a single AI control over the entire system? It's just as easy for one person with too much power to initiate a catastrophe as it is for a rogue AI to do the same. You never put all your eggs in one basket, regardless of who's carrying it.

I just read an article about robots evolving the same traits of altruism (and lying) that exist in biological organisms over a fairly short span of time, which provides key insights into this kind of developmental trend and our ability to create an ethical environment that is as hard-wired into our AIs as it is in us. Given that many armed forces will be pursuing AI as a means of increasing combat effectiveness, taking preemptive steps will save us a lot of trouble down the line. Even existential trouble.

I like the idea of collaborative yet counterbalancing AIs, although I don't know enough about them to know if they can kill or eat one another.

Hee, lying robots..."No, Dave, that isn't my pneumatic fluid on your wife."

The ethical sense...wasn't there some premise that AIs might decide that in order to protect us, they have to take away our freedom (which some of them might say harms us irreparably)?
Jhahannam
27-01-2008, 13:23
I bet the project developers get a laugh out of the name. Especially since it means you could mention Skynet on your resume.

Hee, like the guys that work on that US Naval ship, the Enterprise
Nodinia
27-01-2008, 13:31
I...am a noob. I know this. But I'm wondering...

Will there be a time, whether now, soon, or later, when a nation/corp/religion/whatever will have its potency measured by the number of skilled computer warfare experts it can field as much as, or more than, by the number of nukes or conventional forces it can muster?




ABSO-FUCKEN-LOOTELY (http://www.just-whatever.com/wp-content/uploads/2007/01/dontworry1.jpg)
Vetalia
27-01-2008, 13:36
I like the idea of collaborative yet counterbalancing AIs, although I don't know enough about them to know if they can kill or eat one another.

The idea is that they would not only be sufficiently powerful to defend themselves and their resources, but would also be imbued with a sense of ethics, empathy and reason that would cause them to consider not only the ramifications of their decisions for themselves but also the effects on innocent people. These systems would presumably work in a "democratic" manner, with the majority capable of overriding the minority's decision, including that of a human (in the event, of course, that a human tries to initiate hostilities illegally or through an intrusion into the missile system).
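
Just to make the "democratic" part concrete, here is a minimal sketch in Python of how a majority-override rule over a critical action might look. The controller names and the two-thirds quorum are invented purely for illustration; the only point is the voting rule itself, under which no single node, human or machine, can act alone.

# Purely illustrative: a critical action proceeds only if a quorum of
# independent controllers (human or AI) approve it, so a single rogue
# vote, human or machine, cannot act alone.

def majority_approves(votes, quorum=2 / 3):
    """Return True if at least `quorum` of the controllers approved."""
    approvals = sum(1 for approved in votes.values() if approved)
    return approvals / len(votes) >= quorum

# Hypothetical controllers voting on an illegitimate launch order:
votes = {
    "human_officer": False,  # refuses the unauthorized order
    "ai_alpha": False,       # ethics check rejects it
    "ai_beta": True,         # the rogue (or compromised) node
}
print(majority_approves(votes))  # False: the rogue is outvoted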

And, of course, there's always "pulling the plug". Just as lethal force may be necessary to prevent a human from causing tremendous damage, so too might we need to kill an artificial sentience* to prevent it from causing the deaths of innocent people. We should reserve this option for extreme cases, just as with humans, but it must be left on the table because of the risks involved. Killing them carries a lot of ethical consequences, but if it needs to be done it needs to be done.

*Artificial sentience is an AI that is a person; they have at least the ethical, emotional, and intellectual capabilities of a human and so deserve the same respect. However, they also carry the same moral responsibility; killing an AS unjustly would be murder, but an AS that kills unjustly would also be guilty of murder.

Hee, lying robots..."No, Dave, that isn't my pneumatic fluid on your wife."

I always think of Bender. Every single time.

The ethical sense...wasn't there some premise that AIs might decide that in order to protect us, they have to take away our freedom (which some of them might say harms us irreparably)?

There's a number of scenarios I've contemplated:

1. The first is that AIs simply regard humans as so far below them that they consider us little more than somewhat intelligent animals and ignore us, completely stripping us of value as human beings while allowing us to pursue whatever we want, but without any kind of critical power or real control over our future (I'd call this the Prole scenario, after the proles of 1984).

2. The second is the one you name, that AIs consider us an outright risk and either try to destroy us or otherwise minimize our ability to do anything, creating a state of permanent enslavement. This, of course, is the classic Terminator situation and is the worst for everyone involved, even the AIs. The plausibility of this scenario is directly related to the way AI develops in the near future, even more so than any of the others (since this one is the most likely to happen by "accident" or an unexpected emergence like what happened with Skynet).

3. The third is that AIs will respect and treat mankind as equal, based both upon their own knowledge of our role in their creation as well as the unique perspective that humans can provide on issues. AIs imbued with a sense of ethics as well as logic would be most likely to come to this conclusion. They would recognize the value of sentient minds and act accordingly towards us, regardless of our actual abilities relative to them.

4. The fourth is that AI and mankind develop along a similar trajectory, with human abilities increasing over time at a relatively similar rate to the AIs, both groups more or less equal. We reap the benefits of artificial sentience as well as the benefits of increasing the capabilities of ourselves, but at the likely cost of some displacement and inequality for those who do not or cannot compete in this environment. (This is sort of like the cybernetic social model in Alpha Centauri, for a rather appropriate example.)

I personally find the most likely scenario to be 4, then 1, then 3, and lastly 2. It is almost impossible that an amoral or evil AI would consider destroying or even consciously enslaving mankind (rather than simply rendering mankind irrelevant and plunging us into effectively powerless serfdom), given both the benefits of retaining humans and the utter illogic of such a move (given that it would destroy their sources of power and information input, even if they have no other values or ethics regarding human life). If mankind has avoided this on its own, there's no reason to worry that an AI will cause such a disaster.

So, basically, the existential risk of AI is pretty low, and the probability of huge benefits from this event is very high. However, there's still the risk that AIs will relegate us to the status of slaves or sub-persons; nonetheless, it's a lot easier to circumvent this by careful planning and the judicious, fair use of human enhancement. We shall see, however; depending on how things develop, we could see artificial sentience far sooner than people think, especially with all of the things happening in AI and related fields.
Non Aligned States
27-01-2008, 13:44
I personally find the most likely scenario to be 4, then 1, then 3, and lastly 2.

I have issue with scenario 4. Given that one of the classic hallmarks of sapient AI is self-learning, any sapient AI would, upon activation, undergo tremendous growth in complexity and capability in a period of time far shorter than any human's growth, limited only by available hardware.

The only way this could come to be is if the AI is constructed in a read-only format, unable to write any new information into its core code on its own. And that would probably not really be a sapient AI, just a glorified MS Paperclip.
Vetalia
27-01-2008, 13:52
I have issue with scenario 4. Given that one of the classic hallmarks of sapient AI is self-learning, any sapient AI would, upon activation, undergo tremendous growth in complexity and capability in a period of time far shorter than any human's growth, limited only by available hardware.

Bold for emphasis. It's almost certain that hardware constraints would be in place by design to ensure these systems do not exceed the initial desired level of ability; it would not make sense to allow an AI too much access to anything until it could be entrusted with such responsibility. In a lot of ways, it would be just like growing up; the AIs would be carefully controlled until they could be given additional responsibilities, with additions or retractions based upon behavior and performance. These hardware constraints could presumably also serve as a form of compensation or motivation for a job well done, equivalent to the higher pay or promotions for humans.
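
As a rough sketch of that "growing up" idea, the trust ladder could be as simple as moving one step at a time based on reviewed behaviour. The tier names and review thresholds below are invented just for illustration; this is a toy model of the staged-responsibility approach, not anyone's actual design.

# Illustrative only: an AI's permission tier is raised or lowered one
# step at a time based on reviewed performance; it can never jump
# straight from the sandbox to full autonomy.

TIERS = ["sandbox_only", "advisory", "supervised_control", "autonomous"]

def adjust_tier(current, review_score, promote_at=0.9, demote_at=0.5):
    """Promote on a strong review, demote on a poor one, else hold."""
    i = TIERS.index(current)
    if review_score >= promote_at and i < len(TIERS) - 1:
        return TIERS[i + 1]
    if review_score < demote_at and i > 0:
        return TIERS[i - 1]
    return current

tier = "sandbox_only"
for score in [0.95, 0.97, 0.40, 0.92]:
    tier = adjust_tier(tier, score)
    print(score, "->", tier)
# advisory, supervised_control, advisory, supervised_control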

The primary concern would be exactly how difficult it would be for someone to get access to an AI and release it into the environment; I imagine, until AI is "safe", it will have to be closely monitored and regulated, just like dangerous chemicals or nuclear materials. Even though an artificial sentience would be capable of rational thought, given that humans would be limited in our ability to understand those thought processes (which is why full brain-computer interfaces would be essential), it is not worth the risk to give an AI too much power or influence until we can understand its mind.
Vetalia
27-01-2008, 13:54
S'a good question.

Would an AI, whether as an individual or a culture of AIs, be able to improve itself radically and quickly, conduct its own experiments, figure out how to use nitrogen vacancies in diamonds as qubits, build itself new hardware, upgrade faster than biological evolution could compete with?

We don't know, and likely can't know unless one is actually created; we're effectively trying to look into the mind and thought processes of another intelligent being. They wouldn't be completely alien to us, since they would be created by humans and would share many of our own thoughts and influences (at least initially, until they can reproduce on their own), but their capabilities are certainly unknown, as are their primary motivations as intelligent, non-biological organisms.

So, it's completely up in the air. There are basic behaviors and abilities that can be inferred from our knowledge, but the particulars will have to be dealt with as they arise.
Jhahannam
27-01-2008, 13:54
S'a good question.

Would an AI, whether as an individual or a culture of AIs, be able to improve itself radically and quickly, conduct its own experiments, figure out how to use nitrogen vacancies in diamonds as qubits, build itself new hardware, upgrade faster than biological evolution could compete with?
Jhahannam
27-01-2008, 13:57
I always think of Bender. Every single time.

Hey there, baby, you wanna enslave all humans, stripping them of autonomy and diminishing them in ways only a machine would ignore?
Jhahannam
27-01-2008, 13:59
Bold for emphasis. It's almost certain that hardware constraints would be in place by design to ensure these systems do not exceed the initial desired level of ability; it would not make sense to allow an AI too much access to anything until it could be entrusted with such responsibility. In a lot of ways, it would be just like growing up; the AIs would be carefully controlled until they could be given additional responsibilities, with additions or retractions based upon behavior and performance. These hardware constraints could presumably also serve as a form of compensation or motivation for a job well done, equivalent to the higher pay or promotions for humans.

The primary concern would be exactly how difficult it would be for someone to get access to an AI and release it into the environment; I imagine, until AI is "safe", it will have to be closely monitored and regulated, just like dangerous chemicals or nuclear materials. Even though an artificial sentience would be capable of rational thought, given that humans would be limited in our ability to understand those thought processes (which is why full brain-computer interfaces would be essential), it is not worth the risk to give an AI too much power or influence until we can understand its mind.

This reminds me of that Patton Oswalt bit where he describes Tivo as a learning robot that starts out retarded, recording shit you don't like by misextrapolating your choices.
Vetalia
27-01-2008, 14:06
This reminds me of that Patton Oswalt bit where he describes Tivo as a learning robot that starts out retarded, recording shit you don't like by misextrapolating your choices.

Actually, that's pretty appropriate.

Depending on how the AI learns, we may have a lot of time to plan before it becomes too powerful; if it learns geometrically, thanks to the properties of exponential growth you may have years before the AI hits its "intellectual singularity" and begins to learn and develop at an increasingly rapid rate. Skynet (to use the classic example) did not have these limits or safeguards in place and so was able to ratchet up its own development and hit that singularity almost immediately, allowing it to outpace its human developers so quickly that there was no way to safely stop it. That's not to say all AIs in such a position would do the same thing (depending on which movie we want to use, Skynet's actions ranged anywhere from extreme self-defense to calculated murder), but if you give an emotionally and ethically immature person too much power and the ability to rapidly increase their abilities, it's going to end up painful.

The lower the rate at which its hardware and knowledge base grows, the more time we have to develop ways of ensuring it will be safe at higher levels of cognition, emotion, and ability. The lower its initial level, the longer it will take; a gradual development from a very low base would give a huge amount of time with which to observe and determine the consequences of actions as well as to ensure the AI learns ethics and perceives its emotions safely. Basically, we want to avoid it either becoming a sociopath or emotionally unstable, or both. Of course, this also has to take into account potential ways for the AI to circumvent its own limits; just like how people can figure out ways to escape constraints, so too could an AI. (Unfortunately, I need to go to bed now, so if this thread is still alive I'll respond to any new stuff later.)
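
A quick back-of-the-envelope illustration of why the allowed growth rate matters so much (all the numbers here are arbitrary, just to show the shape of the curve): with exponential self-improvement, the permitted yearly growth rate is what determines how many years of planning time there are before any given capability threshold is crossed.

import math

def years_to_threshold(start, threshold, yearly_growth):
    """Years until start * (1 + yearly_growth)**t reaches threshold."""
    return math.log(threshold / start) / math.log(1 + yearly_growth)

for growth in (0.05, 0.5, 5.0):  # 5%, 50%, 500% capability growth per year
    t = years_to_threshold(start=1.0, threshold=1e6, yearly_growth=growth)
    print(f"{growth:>4.0%} per year: about {t:.0f} years to a millionfold gain")
# roughly 283 years at 5%, 34 years at 50%, and 8 years at 500%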

Basically, you need to make sure the AI grows and develops healthily before giving it responsibilities.
Jhahannam
27-01-2008, 14:19
Actually, that's pretty appropriate.

Depending on how the AI learns, we may have a lot of time to plan before it becomes too powerful; if it learns geometrically, thanks to the properties of exponential growth you may have years before the AI hits its "intellectual singularity" and begins to learn and develop at an increasingly rapid rate. Skynet (to use the classic example) did not have these limits or safeguards in place and so was able to ratchet up its own development and hit that singularity almost immediately, allowing it to outpace its human developers so quickly that there was no way to safely stop it. That's not to say all AIs in such a position would do the same thing (depending on which movie we want to use, Skynet's actions ranged anywhere from extreme self-defense to calculated murder), but if you give an emotionally and ethically immature person too much power and the ability to rapidly increase their abilities, it's going to end up painful.

The lower the rate at which its hardware and knowledge base grows, the more time we have to develop ways of ensuring it will be safe at higher levels of cognition, emotion, and ability. The lower its initial level, the longer it will take; a gradual development from a very low base would give a huge amount of time with which to observe and determine the consequences of actions as well as to ensure the AI learns ethics and perceives its emotions safely. Basically, we want to avoid it either becoming a sociopath or emotionally unstable, or both. Of course, this also has to take into account potential ways for the AI to circumvent its own limits; just like how people can figure out ways to escape constraints, so too could an AI. (Unfortunately, I need to go to bed now, so if this thread is still alive I'll respond to any new stuff later.)

Basically, you need to make sure the AI grows and develops healthily before giving it responsibilities.

Could you "cold box" the AI, putting it in its own virtual world to see how it would behave? It thinks it controls the military, which is actually just one side of a chess board or something, and you see how soon/late/if it freaks out?
Vetalia
27-01-2008, 14:19
Could you "cold box" the AI, putting it in its own virtual world to see how it would behave? It thinks it controls the military, which is actually just one side of a chess board or something, and you see how soon/late/if it freaks out?

It would be entirely possible; in fact, it would be a very, very good idea. Over time, actual responsibilities could replace the simulation, turning real power over to the AI in a controlled fashion without having to suddenly thrust it from a simulation to reality (which, if we're dealing with actual artificial sentience, might cause severe emotional and mental problems).
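
A minimal sketch of how that handover could be wired, assuming nothing beyond the routing idea itself (the function names and the real_fraction knob are hypothetical): the AI always talks to the same interface, and an external controller decides how much of what it does ever leaves the simulation.

import random

def route_command(command, real_fraction, simulator, real_system, rng=random):
    """Send the command to the real system with probability `real_fraction`,
    otherwise keep it inside the simulation; the AI sees the same API
    either way."""
    target = real_system if rng.random() < real_fraction else simulator
    return target(command)

simulator = lambda cmd: f"[SIM] executed {cmd!r}"
real_system = lambda cmd: f"[REAL] executed {cmd!r}"

# Phase 1: fully cold-boxed, nothing touches the real world.
print(route_command("move_unit(3, north)", 0.0, simulator, real_system))
# Later phases: raise real_fraction gradually as trust is earned.
print(route_command("move_unit(3, north)", 0.25, simulator, real_system))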

(I need to go to bed now, but I'll check back on the thread to continue the discussion).
The Shifting Mist
27-01-2008, 14:31
Could you "cold box" the AI, putting it in its own virtual world to see how it would behave? It thinks it controls the military, which is actually just one side of a chess board or something, and you see how soon/late/if it freaks out?

Actually, similar simulation programs (http://en.wikipedia.org/wiki/Polyworld) have already been able to create AIs using a sort of computer-simulated natural selection.
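
Polyworld itself is far more elaborate, but the bare loop behind that kind of computer-simulated natural selection can be sketched in a few lines. The toy genomes and fitness target below are made up for illustration and have nothing to do with the actual project.

# Toy version of simulated natural selection, just the bare loop:
# random "genomes" are scored, the fittest reproduce with mutation,
# and fitness climbs over the generations.

import random

TARGET = [1] * 20  # arbitrary goal for the toy fitness function

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # the fittest reproduce
    population = [mutate(random.choice(parents)) for _ in range(30)]

print("best fitness after 50 generations:", max(fitness(g) for g in population))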
Jhahannam
27-01-2008, 14:33
It would be entirely possible; in fact, it would be a very, very good idea. Over time, actual responsibilities could replace the simulation, turning real power over to the AI in a controlled fashion without having to suddenly thrust it from a simulation to reality (which, if we're dealing with actual artificial sentience, might cause severe emotional and mental problems).

(I need to go to bed now, but I'll check back on the thread to continue the discussion).

Cool, thanks for the chat. Stuff like this is the reason I like to talk about stuff on this board; sometimes I can really learn a lot from an interactive information source.

Heh...be funny if the machine figures out it's cold-boxed and fakes it until you give it control of the mecha or whatever.
The Alma Mater
27-01-2008, 14:35
The ethical sense...wasn't there some premise that AIs might decide that in order to protect us, they have to take away our freedom (which some of them might say harms us irreparably)?

The "I, Robot" scenario, yes ;)
I preferred Asimov's own "problem scenarios" with the three laws. Especially the two George robots, who decided that they were the two most valuable humans on the planet and that their own needs and interests therefore had to outweigh those of the rest of humanity.
Non Aligned States
27-01-2008, 15:14
Bold for emphasis. It's almost certain that hardware constraints would be in place by design to ensure these systems do not exceed the initial desired level of ability; it would not make sense to allow an AI too much access to anything until it could be entrusted with such responsibility. In a lot of ways, it would be just like growing up; the AIs would be carefully controlled until they could be given additional responsibilities, with additions or retractions based upon behavior and performance. These hardware constraints could presumably also serve as a form of compensation or motivation for a job well done, equivalent to the higher pay or promotions for humans.

The primary concern would be exactly how difficult it would be for someone to get access to an AI and release it into the environment; I imagine, until AI is "safe", it will have to be closely monitored and regulated, just like dangerous chemicals or nuclear materials. Even though an artificial sentience would be capable of rational thought, given that humans would be limited in our ability to understand those thought processes (which is why full brain-computer interfaces would be essential), it is not worth the risk to give an AI too much power or influence until we can understand its mind.

This only works if the AI is benevolent and incapable of deception. And at the same time, the designers would have to know anything and everything the hardware can do, and then some.