NationStates Jolt Archive


Robots in Warfare.

Neu Leonstein
09-02-2006, 01:40
http://www.rheinmetall-detec.de/index.php?lang=3&fid=802
http://www.army-technology.com/projects/taifun/
Just a few days ago I read about a new UAV the company Rheinmetall is developing. Unlike the American Predator drone, this one is not remote-controlled but largely autonomous, and built specifically to attack.

It flies about on its own, using its various sensors to spot people and vehicles; it identifies them, ranks them in order of importance and then asks ground control for confirmation. Once it gets it, it blows holes in people.

For all intents and purposes, that's a robot. A robot that kills people.

Isn't that against the three laws of robotics?
A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Now, hopefully we have a few engineers or otherwise educated people here who can explain to me what these laws are. Do they matter? Does anyone care?

And is it right for a robot to kill a person?
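
For the engineers in the audience, the way I've always pictured the laws is as a strict priority ordering. A toy sketch in Python (made-up names; obviously nothing a real UAV runs):

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # First Law concern
    disobeys_order: bool   # Second Law concern
    endangers_self: bool   # Third Law concern

def permitted(a: Action) -> bool:
    # The laws are checked strictly in priority order:
    # earlier laws always override later ones.
    if a.harms_human:      # First Law: never harm a human being
        return False
    if a.disobeys_order:   # Second Law: obey human orders
        return False
    if a.endangers_self:   # Third Law: protect your own existence
        return False
    return True

# An attack UAV's core task fails the very first check:
print(permitted(Action(harms_human=True, disobeys_order=False, endangers_self=False)))
```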
Franberry
09-02-2006, 01:42
And is it right for a robot to kill a person?

Isaac Asimov is spinning in his grave right now
Stone Bridges
09-02-2006, 01:44
Isaac Asimov is spinning in his grave right now

So?
Neu Leonstein
09-02-2006, 01:48
So?
So is it right to ignore these laws (which I always thought were a good idea) for short-term gain?
Vetalia
09-02-2006, 01:52
Many of the world's revolutionary technologies were originally developed for war, so it wouldn't be particularly unusual for robotics to follow the same path. Not to mention these same technologies eventually revolutionized civilian life for massive benefit. If fielding armies of war robots is necessary for that technology to become established as a life-changing aspect of civilian life, it's worth it.
Kossackja
09-02-2006, 01:52
Isn't that against the three laws of robotics?
Those three laws are from fiction, not scientific laws. You could just as well ask if this violates the Prime Directive from Star Trek.

Also, Asimov shows in his stories that the three laws, when followed, inadvertently lead to a very bad outcome.
Reformedra
09-02-2006, 01:59
Look, the "three laws of robotics" are fictional. FICTIONAL. They're a science-fiction writer's plot device to make things interesting. Nothing more. They're not based in real life in any way.

Personally, I think it's fine. What could possibly go wrong? It's just another weapon. If it goes berserk on its own side's military units they can shoot it down or probably turn it off. If it malfunctions they can (probably) turn it off. It's not like it's a supercomputer; it has programming to fly where it needs to and possibly kill things. It can't "evolve" intelligence and spawn a race of machines to take over the world and lock us into capsules and link us up to a virtual reality while we're used as a power source. (I've been watching The Matrix way too much.)
Neu Leonstein
09-02-2006, 01:59
Also, Asimov shows in his stories that the three laws, when followed, inadvertently lead to a very bad outcome.
Fair enough; as you can tell, I never read 'em.

But let's just ignore where the laws come from and look at their merit. Is it ultimately right for a robot to kill a person? Isn't a person worth infinitely more than a robot?
Vetalia
09-02-2006, 02:03
But let's just ignore where the laws come from and look at their merit. Is it ultimately right for a robot to kill a person? Isn't a person worth infinitely more than a robot?

At present, yes. These robots are nothing more than computers, really, running predetermined or at most self-educating algorithms. They are not sentient. But what happens when we achieve a synthetic robot consciousness or a sentient computer? That would be the greatest achievement to date, and should be a goal of our technological development, but what of its "humanity"?

Ultimately, the value of a person comes from their sentience, doesn't it? It's not the physical appearance that makes us human, so what is to say that a sentient robot would not merit the same protections?
Kossackja
09-02-2006, 02:07
Fair enough; as you can tell, I never read 'em.
Maybe you can at least get around to watching the movie?

The thing is, if you rigorously apply "a robot must not allow a human being to come to harm", then the robots would have to prevent humans by force from doing anything possibly dangerous, like smoking, eating fatty stuff, doing dangerous sports, driving cars... The robots would turn into some kind of super-foodnazi-nannies that lock humans up in protective custody for their own good.

If the UCAV destroys an enemy tank and kills the crew, it saves a whole regiment that would have been assaulted by that tank, so on net the robot saves lives. It is all good.
Deep Kimchi
09-02-2006, 02:11
And is it right for a robot to kill a person?

Boeing is making an unmanned autonomous ground attack aircraft.

Think about it - the current PAC-3 missile system is fully autonomous once turned on.

If you don't have an IFF transponder that it thinks is OK, it's going to shoot at you until you die.

It will automatically detect that the first missile is about to miss, and will ripple-fire at the next probable intercept point.

It even did this successfully against incoming tactical missiles during the invasion of Iraq.

And, like you said, its accuracy is perfect - but the judgment is not. It shot down an RAF Tornado - all automatically.

No human was involved in the decision-making; the system is designed to eliminate the human element of delay, since delay is deadly against incoming enemy aircraft or missiles.

It also makes the time between the target knowing that it's being seen by radar and the time the missile arrives very, very short.
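
Roughly speaking, the engagement logic amounts to something like this (a crude sketch with invented names, not the actual fire-control code):

```python
import random

class Track:
    def __init__(self, iff_ok: bool):
        self.iff_ok = iff_ok        # did the target answer with a valid IFF code?

def predicted_miss() -> bool:
    # stand-in for the radar's miss-distance prediction
    return random.random() < 0.3

def engage(track: Track, missiles: int) -> str:
    if track.iff_ok:
        return "hold fire"          # valid transponder: treated as friendly
    fired = 1                       # first shot goes out with zero human delay
    # ripple fire: keep launching at the next probable intercept point
    # as long as the system predicts the previous shot will miss
    while fired < missiles and predicted_miss():
        fired += 1
    return f"fired {fired} missile(s)"

print(engage(Track(iff_ok=False), missiles=4))
```

The Tornado shoot-down is what happens when that very first branch gets the wrong answer.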

The Army is working on a variety of autonomous ground vehicles that will kill on their own.

There will be a variety.

I, for one, think that the judgment thing is a matter of time.

Imagine trying to be an insurgent fighting against machines that don't sleep, have fantastic reaction times, and rarely miss their targets.

Oooh. Shades of the Terminator...

Plus, if the insurgents kill one, it's not like they can get any satisfaction by cutting its throat on al-Jazeera.
Neu Leonstein
09-02-2006, 02:11
Maybe you can at least get around to watching the movie?
Generally I don't watch movies based on books, unless I read the book first. Plus, most of what Hollywood churns out is crap, so I usually don't bother anymore.
Begoned
09-02-2006, 02:15
Fair enough; as you can tell, I never read 'em.

But let's just ignore where the laws come from and look at their merit. Is it ultimately right for a robot to kill a person? Isn't a person worth infinitely more than a robot?

Well, yeah, but you can always arrest a robot and charge it with murder. Of course, you have to give it a fair trial before you sentence it. After it's convicted, let it think of what it did wrong for the rest of its natural life. Or give it the death penalty, and kill it by lethal injection.
Vetalia
09-02-2006, 02:18
Well, yeah, but you can always arrest a robot and charge it with murder. Of course, you have to give it a fair trial before you sentence it. After it's convicted, let it think of what it did wrong for the rest of its natural life. Or give it the death penalty, and kill it by lethal injection.

Wouldn't it be decommissioned or memory wiped? I don't think you can poison a machine...:p
Begoned
09-02-2006, 02:22
Is it ultimately right for a robot to kill a person?

Well, if such robots can be developed cost-efficiently, they would bring warfare to a new level. Ultimately, it could lead to the elimination of human fighters and result in all-out robot warfare (in a couple of decades, maybe). This would most likely save human lives. However, there may also be the possibility of robots that are programmed to kill indiscriminately. Possibly in the future, terrorists may build robots for the sole purpose of killing as many people as possible. It depends on how autonomous the robots are capable of being and on how they can be stopped.
Lacadaemon
09-02-2006, 02:22
The 'three laws of robotics' were just shit that Asimov made up for a bunch of short stories as a plot device. Nothing more. And they have absolutely nothing to do with actual 'robots'. (Unless the positronic brain was invented while I wasn't looking.) You may as well start talking about 'robot psychologists', because that's just as applicable.

Honestly, the ignorance of engineering/science these days is simply appalling.
Begoned
09-02-2006, 02:24
I don't think you can poison a machine...:p

I guess you're right. We could always try hanging it, then. :)
Vetalia
09-02-2006, 02:26
I guess you're right. We could always try hanging it, then. :)

Yeah, I realized that it would not be easy to execute a machine...
Lacadaemon
09-02-2006, 02:26
Fair enough; as you can tell, I never read 'em.

But let's just ignore where the laws come from and look at their merit. Is it ultimately right for a robot to kill a person? Isn't a person worth infinitely more than a robot?

You may as well ask if it is right for a bomb to kill a person. A robot is just a computer with power actuation. It has no free will, no self-awareness, no innate sense of right and wrong. Ultimately, the responsibility for any deaths doesn't lie with the machine, but with the people who built it and the people who used it. In that, it is no different from any other weapon.

If a respirator in a hospital failed, you wouldn't bother to wonder if it should be charged with negligence, would you?
Lt_Cody
09-02-2006, 02:27
Maybe you can at least get around to watching the movie?

About the only similarity between that...thing and Asimov's book is that they share the same name. Stick with quality and just read the book.
Vetalia
09-02-2006, 02:28
The 'three laws of robotics' were just shit that Asimov made up for a bunch of short stories as a plot device. Nothing more. And they have absolutely nothing to do with actual 'robots'. (Unless the positronic brain was invented while I wasn't looking.) You may as well start talking about 'robot psychologists', because that's just as applicable.

Ultimately, however, it is very possible that we will design a presentient or even sentient computer in the relatively near future; quantum computing is already in the pipeline, and that will offer untold opportunities in artificial intelligence.

When that comes along, perhaps we will need robot psychologists.

Honestly, the ignorance of engineering/science these days is simply appalling.

And that is why we have to get the ball rolling on math and science education; you can't program computers or robots without a strong math and engineering base in the field.
Lacadaemon
09-02-2006, 02:29
Oh yeah, and in the books, the three laws end up leading to a very bad outcome for the human race. (Not to mention the extermination of all other sentient life in the galaxy).
Kossackja
09-02-2006, 02:29
Yeah, I realized that it would not be easy to execute a machine...
There is an educational movie on this, called Terminator 2.
Iztatepopotla
09-02-2006, 02:32
Isn't a person worth infinitely more than a robot?
Not really. At current market prices the components of a person are worth like $3.50. A bit more if you have gold fillings, but not that much. It's mostly water, and that's very cheap.
Lacadaemon
09-02-2006, 02:37
And that is why we have to get the ball rolling on math and science education; you can't program computers or robots without a strong math and engineering base in the field.

Speaking as an engineer, the problem isn't the number of people with science/engineering degrees (there are plenty of them) but the fact that it is too easy for people who choose not to pursue science as a career to wiggle out of studying it as part of the core curriculum. This has led to a general anti-intellectual ignorance in the population at large.

I think the clock needs to be wound back, and things like calculus and basic college level science (e.g. physics) need to become part of the core curriculum again, instead of the rubbish, like 'physics for poets' that is currently offered.
Lacadaemon
09-02-2006, 02:39
Not really. At current market prices the components of a person are worth like $3.50. A bit more if you have gold fillings, but not that much. It's mostly water, and that's very cheap.

Ah well, the basic elements that make up a person are cheap. But if you were going to purchase the chemical compounds, it would be considerably more expensive.
Forfania Gottesleugner
09-02-2006, 02:39
Ultimately, however, it is very possible that we will design a presentient or even sentient computer in the relatively near future; quantum computing is already in the pipeline, and that will offer untold opportunities in artificial intelligence.

When that comes along, perhaps we will need robot psychologists.



And that is why we have to get the ball rolling on math and science education; you can't program computers or robots without a strong math and engineering base in the field.

...Maybe you should follow your own advice. Our "artificial intelligence" is nothing more than algorithms responding to certain conditions. I guess you could argue that human intelligence is nothing more than responses to certain conditions, but I wouldn't listen to you or care. We are not close to making true artificial intelligence in the least. In fact, many of the people who originally helped develop the idea have now realized (with the more powerful computers of today) that we are further from true AI than ever before. Quantum computers aren't going to change this.

Biocomputers show some interesting promise for the possibility of AI because they can run chaotically, like the human brain does, to solve problems. These, however, are extremely primitive; last I checked, researchers were still experimenting on getting anything to work with a leech's brain (which obviously is very simple). Even with these, the only reason they could possibly work is that you are basically manipulating a biological set of nerves (a brain).
Vetalia
09-02-2006, 02:42
Speaking as an engineer, the problem isn't the number of people with science/engineering degrees (there are plenty of them) but the fact that it is too easy for people who choose not to pursue science as a career to wiggle out of studying it as part of the core curriculum. This has led to a general anti-intellectual ignorance in the population at large.

I think the clock needs to be wound back, and things like calculus and basic college level science (e.g. physics) need to become part of the core curriculum again, instead of the rubbish, like 'physics for poets' that is currently offered.

I agree 100%. In US schools, students are required to take 4 years of English, but only 3 years of math and science. I'm not denigrating English education by any means, but ultimately math and science education are vital because they help relieve one of the biggest impediments to scientific progress, namely, the ignorance of the population in regard to what is actually happening in these fields. English education is important, but a lack of education in that field doesn't hurt the study of literature as directly as scientific ignorance damages scientific progress.

At Mentor High School, there is no regular Calculus track; you have to take Advanced Placement Calculus, and they don't even offer the opportunity to take the BC exam because they don't teach the concepts. Even worse, students don't have to take calculus or physics at all in many cases.
Neu Leonstein
09-02-2006, 02:45
I think the clock needs to be wound back, and things like calculus and basic college level science (e.g. physics) need to become part of the core curriculum again, instead of the rubbish, like 'physics for poets' that is currently offered.
Oh, I did both. I never did robotics though, and I had for some reason been under the impression that Asimov's ideas had resonated somewhat with some people.

And besides, the discussion is not so much about the law as it is about robots killing people.
Iztatepopotla
09-02-2006, 02:48
Ah well, the basic elements that make up a person are cheap. But if you were going to purchase the chemical compounds, it would be considerably more expensive.
Hey, I won't pay more than $3.50 for that corpse. If you don't like it you can take it somewhere else!
Vetalia
09-02-2006, 02:48
...Maybe you should follow your own advice. Our "artificial intelligence" is nothing more than algorithms responding to certain conditions. I guess you could argue that human intelligence is nothing more than responses to certain conditions, but I wouldn't listen to you or care. We are not close to making true artificial intelligence in the least. In fact, many of the people who originally helped develop the idea have now realized (with the more powerful computers of today) that we are further from true AI than ever before. Quantum computers aren't going to change this.

I know that quantum computers wouldn't bring about true AI. They would greatly enhance what we could work with in the field, however. Quantum computers would increase the amount of work we could do and would dramatically increase the complexity of the mathematical processes involved; they would be able to run programs and perform tasks that present-day technology cannot possibly undertake.

Biocomputers show some interesting promise for the possibility of AI because they can run chaotically, like the human brain does, to solve problems. These, however, are extremely primitive; last I checked, researchers were still experimenting on getting anything to work with a leech's brain (which obviously is very simple). Even with these, the only reason they could possibly work is that you are basically manipulating a biological set of nerves (a brain).

I agree with this.
Forfania Gottesleugner
09-02-2006, 02:50
I know that quantum computers wouldn't bring about true AI. They would greatly enhance what we could work with in the field, however. Quantum computers would increase the amount of work we could do and would dramatically increase the complexity of the mathematical processes involved; they would be able to run programs and perform tasks that present-day technology cannot possibly undertake.



I agree with this.

Alright, score one for us.
Novoga
09-02-2006, 02:57
And is it right for a robot to kill a person?

Is it any worse than a person murdering another person? If the robots were used in war then they would be killing the enemy, not murdering people.
Neu Leonstein
09-02-2006, 03:01
Is it any worse than a person murdering another person? If the robots were used in war then they would be killing the enemy, not murdering people.
Personally, I hold that even in war, the only justification for killing someone is that you ultimately do it in self-defence, to protect your own life.

A robot doesn't have a life to defend, it doesn't have a right to defend itself; ergo, I don't think a robot can rightly kill a human being.
Vetalia
09-02-2006, 03:04
A robot doesn't have a life to defend, it doesn't have a right to defend itself; ergo, I don't think a robot can rightly kill a human being.

But what about automatic devices currently in place, like motion-activated weapons or computer-guided munitions? Many models of these weapons are not controlled by humans, so they would also fall under this category.
Megaloria
09-02-2006, 03:09
bitchin'.
Neu Leonstein
09-02-2006, 03:10
But what about automatic devices currently in place, like motion-activated weapons or computer-guided munitions? Many models of these weapons are not controlled by humans, so they would also fall under this category.
They would, although they are still a level below these new things. For all intents and purposes, a motion-sensor activated gun is just a fancy version of a trap made with a piece of string; there is no choice-making involved.

Modern cruise missiles and the like are a bit above that, and I guess one could argue about the ethical use of those.

But this new generation of robots is taking a step towards a machine that makes choices and uses quite complex reasoning, which cannot always be traced back to the engineer or programmer directly, in order to kill a human being.
Vetalia
09-02-2006, 03:14
But this new generation of robots is taking a step towards a machine that makes choices and uses quite complex reasoning, which cannot always be traced back to the engineer or programmer directly, in order to kill a human being.

This is an interesting ethical dilemma, and one that will only become more pressing as robotics advances and its destructive capacity increases seemingly exponentially.

Unrelated:

Of course, that raises the question of whether or not it would be wrong to order a fully sentient robot into combat; a fully sentient robot would be human in its cognitive and even emotional capacities, which of course would bring us full circle to the debate over the morality of self-defense in warfare.
Lacadaemon
09-02-2006, 03:31
But this new generation of robots is taking a step towards a machine that makes choices and uses quite complex reasoning, which cannot always be traced back to the engineer or programmer directly, in order to kill a human being.

But they aren't thinking. That's the point. At best they are very complex expert systems with heuristic algorithms. So while they may display some emergent properties (though I've never heard of that ever happening), they are not conscious of their choices, and are incapable of comprehending any action.

Even machines with 'learning' capabilities are extremely limited. (Actually, the learning would be better described as training, i.e. refining capabilities that are already inherent in their design.) New concepts, for example, cannot be self-incorporated.

At the end of the day, they are just machines, albeit very complex ones. They are no more or less subject to moral code than a bullet.

And certainly, even though they may be programmed with discretionary functions with respect to making target choices, they have no concept of what the target really is. As far as the software is concerned, it is just a set of input variables that match a certain profile.
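
To make that concrete, "a set of input variables that match a certain profile" looks something like this (features and thresholds invented for illustration):

```python
# A stored profile is just a set of acceptable ranges, one per sensor reading.
PROFILE_TANK = {
    "length_m": (6.0, 10.0),
    "speed_kmh": (0.0, 70.0),
    "ir_signature": (0.7, 1.0),
}

def matches(readings: dict, profile: dict) -> bool:
    # The software never "knows" it is looking at a tank; it only checks
    # whether every reading falls inside the stored range.
    return all(lo <= readings[key] <= hi for key, (lo, hi) in profile.items())

print(matches({"length_m": 7.5, "speed_kmh": 40.0, "ir_signature": 0.9}, PROFILE_TANK))
```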
Sumamba Buwhan
09-02-2006, 03:39
I'd actually rather see robots in warfare on both sides. That'd be sweet.

At first when I read the thread title I saw "Robots on Welfare".
Mahria
09-02-2006, 03:49
Those three laws are from fiction, not scientific laws. You could just as well ask if this violates the Prime Directive from Star Trek.

Also, Asimov shows in his stories that the three laws, when followed, inadvertently lead to a very bad outcome.

They certainly aren't scientific laws. I guess the idea was just to come up with some ethics that might work (or advance plotlines) for robotic sentience.

However, they can't apply in this case because these machines aren't actually intelligent. As has already been said, they're just like any other weapon, just a little more complex.

I don't interpret the actions of computerized weapons as "choices," really, any more than the computer chooses to recognize the input from my keyboard: choice implies consciousness, which robots currently lack (and probably always will).
Lacadaemon
09-02-2006, 03:51
I'd actually rather see robots in warfare on both sides. That'd be sweet.

At first when I read the thread title I saw "Robots on Welfare".

I wouldn't waste them on warfare. I have tonnes of shit I need doing first.
Novoga
09-02-2006, 05:30
Personally, I hold that even in war, the only justification for killing someone is that you ultimately do it in self-defence, to protect your own life.

A robot doesn't have a life to defend, it doesn't have a right to defend itself; ergo, I don't think a robot can rightly kill a human being.

In this case the robot can be said to be defending the lives of the soldiers who would otherwise have to fight in its place. But I wouldn't worry about it for a while; most robots are human-controlled (at least the ones that carry weapons).

Only kill in war to protect yourself? Highly unusual view of warfare.
Neu Leonstein
09-02-2006, 05:33
Only kill in war to protect yourself? Highly unusual view of warfare.
It's pretty much what my father taught me: joining the military is wrong in the first place, but if you're there already or you don't get a choice, don't make it any worse than it already is.
That's why he never liked snipers either; he felt that they were more offensive than defensive in nature on a micro level: generally a sniper doesn't kill to save his own life.
Novoga
09-02-2006, 05:55
It's pretty much what my father taught me: joining the military is wrong in the first place, but if you're there already or you don't get a choice, don't make it any worse than it already is.
That's why he never liked snipers either; he felt that they were more offensive than defensive in nature on a micro level: generally a sniper doesn't kill to save his own life.

Well, it is certainly an unusual view of warfare. I think a quote from The Big Red One will cover my view:

"The hell it is, Griff. You don't murder animals; you kill 'em."

Watch the reconstructed version of The Big Red One, not the original.
Reformedra
10-02-2006, 07:42
Personally, I hold that even in war, the only justification for killing someone is that you ultimately do it in self-defence, to protect your own life.

A robot doesn't have a life to defend, it doesn't have a right to defend itself; ergo, I don't think a robot can rightly kill a human being.

But it's not a being. It's a weapon. You might just as well ask whether an M16 has any right to kill anyone.

It's not sentient. It's designed, manufactured, programmed, and used by people. It can make choices, sure, but only on the most basic level. The ultimate, basic bottom line is that PEOPLE activate it and tell it what to do, and it goes and does it. Just like pulling a trigger "tells" the gun to fire.
Jerusalas
10-02-2006, 07:51
We make wars less and less bloody for ourselves and then act shocked and horrified when the true cost of war comes barreling in at us. All it takes is a single $400 EMP device and all of this technology is for naught. Then we're right down to the level of the insurgents, only we're used to fighting with technology to give us an edge.

Oh, and I thought that when the SAM batteries in Kuwait shot down a pair of Tornadoes, we would have learned our lesson about letting machines run war machines: it doesn't work. Or all the times that UAVs have destroyed their targets, only for it to be revealed later that the target was poorly selected.
JiangGuo
10-02-2006, 09:22
Frankly, I think keeping humans in the loop is an essential part of robotic warfare, lest some Terminator scenario occur.

Besides, if it reduces warfare to unmanned robots fighting each other - so much the better.
Strathdonia
10-02-2006, 10:55
The system mentioned in the first post makes no actual attack decisions; it merely calculates probabilities and then requires a go/no-go from a human operator before it can commence its attack run, during which it greatly increases its telemetry output to give the human a superior image of what is happening and allow them to make a judgement call and abort right up until the point of final weapons release.

In this way it is no different from the targeting systems of modern air-to-air fighters, whose computers select and track many different enemy aircraft, cue a missile onto each and then merely await a fire command.

Most robotic or remote weapons systems will, for the foreseeable future, remain "man in the loop" systems, mainly because it will be a long time before computers can be relied upon to make the kind of judgement calls a trained human can make (well, it's more of a trust thing, really). The only time you don't have a man in the loop is in some sort of high-speed, restricted-scenario system like the Patriot (whose decisions boil down to "is it an incoming aircraft/missile?" and "does it have an IFF code?").
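
The control flow of such a man-in-the-loop system boils down to something like this (hypothetical names, nothing to do with the actual Taifun software):

```python
class Operator:
    def confirm(self, score: float) -> bool:
        return score > 0.8             # the human gives the go/no-go
    def abort_requested(self) -> bool:
        return False                   # can flip to True right up to release

class Drone:
    def __init__(self):
        self.steps_to_release = 3
    def rank_target(self, target) -> float:
        return 0.9                     # the machine only scores the target
    def stream_telemetry(self):
        pass                           # richer picture for the human during the run
    def weapon_released(self) -> bool:
        self.steps_to_release -= 1
        return self.steps_to_release <= 0
    def break_off(self):
        pass

def attack_run(target, op: Operator, drone: Drone) -> str:
    if not op.confirm(drone.rank_target(target)):
        return "no go"                 # no human confirmation, no attack
    while not drone.weapon_released():
        drone.stream_telemetry()
        if op.abort_requested():       # abort possible until final weapons release
            drone.break_off()
            return "aborted"
    return "weapon released"

print(attack_run("vehicle column", Operator(), Drone()))
```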
Moto the Wise
10-02-2006, 11:13
All it takes is a single $400 EMP device and all of this technology is for naught.

Umm, except that the only way at the moment to make an EMP blast is with a nuke. Which is more expensive than $400.

Have any of you heard of genetic programming? It's the process of replicating evolution within programs to optimise them for a certain task, and it's also used as an approach to AI. We cannot predict what such programs will do as the code gets more and more complex, until they are like humans, the top product of evolution in a certain environment. It is thought to be the way that all programming is going to go, and the place from whence AI shall sprout.
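
A bare-bones illustration of the idea (this toy evolves bit strings toward a trivial goal; real genetic programming evolves program trees, which is far more involved):

```python
import random

LENGTH, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(individual):
    return sum(individual)            # toy goal: as many 1 bits as possible

def mutate(individual):
    return [bit ^ (random.random() < 0.05) for bit in individual]

def crossover(a, b):
    cut = random.randrange(LENGTH)    # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(ind) for ind in population))    # climbs toward the maximum of 20
```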
Jerusalas
10-02-2006, 11:16
Umm, except that the only way at the moment to make an EMP blast is with a nuke. Which is more expensive than $400.

Actually, there are explosive devices designed specifically to cause an EMP effect, without using radiological materials or setting off a nuclear explosion. And, in raw materials, one costs a whole $400. Somehow, though, the government seems determined to spend a couple billion on developing the technology.