Technology

Robot Wars 362

EyesWideOpen writes "According to this New York Times article (free reg. req.) the Office of Naval Research is coordinating an effort to determine what it will take to build a system that will make it possible for autonomous vehicles (in the air and on the ground), or A.V.'s, to serve as soldiers on the battlefield. The project, called Multimedia Intelligent Network of Unattended Mobile Agents, or Minuteman, would consist of a network in which the highest-flying of the A.V.'s 'will communicate with headquarters, transmitting data and receiving commands. The commands will be passed along to a team of lower-flying A.V.'s that will relay them in turn to single drones serving as liaisons for squadrons of A.V.'s.' The article also mentions that the A.V.'s will have the ability to send high-resolution color video as well as still photographs using MPEG-4 compression. Pretty interesting stuff."
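The tiered command chain described in the summary is essentially a tree of relays: HQ talks to the high fliers, which pass commands down through lower tiers to squadron liaisons. A toy sketch of that idea in Python (every class, name, and message here is invented; the article gives no protocol details):

    # Hypothetical sketch of the tiered relay the article describes:
    # HQ -> high-altitude A.V.'s -> lower-flying relays -> squadron liaisons.
    class Node:
        def __init__(self, name):
            self.name = name
            self.children = []          # nodes one tier below this one

        def relay(self, command):
            print(f"{self.name}: forwarding {command!r}")
            for child in self.children:
                child.relay(command)

    # One high flier, two lower-tier relays, each fronting a squadron liaison.
    uplink = Node("high-altitude AV")
    for i in range(2):
        mid = Node(f"low-altitude relay {i}")
        mid.children.append(Node(f"squadron liaison {i}"))
        uplink.children.append(mid)

    uplink.relay("move to waypoint 7")  # a command propagates down the tiers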
  • by quark2universe ( 38132 ) on Friday July 12, 2002 @03:34PM (#3873249) Homepage
    Won't these people learn? Didn't they see the Terminator? Don't they know if they build this it will come back and bite them in the ass? Haven't I asked enough questions for one post?
    • Won't these people learn? Didn't they see the Terminator? Don't they know if they build this it will come back and bite them in the ass? Haven't I asked enough questions for one post?

      Hopefully, time travel will have been invented in time for the war against the machines, or else we will be in for some real problems [whitepages.com]

    • Obligatory Bob The Angry Flower [angryflower.com] reference: "Terminator 5: Skynet Triumphant!" [angryflower.com]

      (OK, I admit it, it's time to get a grip on my total fixation with robots [angryflower.com])

    • by jejones ( 115979 ) on Friday July 12, 2002 @03:43PM (#3873332) Journal
      Won't these people learn? Didn't they see the Terminator?

      Haven't you read the Bolo stories? If I remember Laumer's timeline, we're way overdue for GM to start on the Mark I. :)

      <serious>I share Asimov's disgust with the pessimism and "there are things man was not meant to know" attitude, a disgust which pushed him to write his robot stories. There are good and evil humans (I see the Bill Gates Borg icon as I type....)--what is it about AI that makes people think it will automatically be evil?</serious>

      For the honor of the regiment,
      jejones

      • What is it about AI that makes people think it will automatically be evil

        Because it's a) not human (and therefore to be distrusted; humans are instinctively xenophobic) and b) not alive (and therefore has no soul, no pity, no remorse, etc.). As irrational as that sounds, I believe those two points are the major basis.
        • by sab39 ( 10510 )
          I'm not opposed to AI myself, but there is one major reason why we (as humans) should potentially distrust it: It's not human, and therefore its loyalty is to its own kind, not to us. Judging from our own behaviour (an "intelligent" species) towards animals, it's clear that no matter how enlightened we may be and sympathetic to the plight of poor little furry things, we don't hesitate to choose our lives over theirs on numerous occasions. It's clear (and, in fact, perfectly ethical from the A.I.'s point of view) that if the situation ever came up where an A.I. had to choose between the life of an A.I. and the life of a human, it would choose the A.I.

          From the human's point of view, that's "evil". From the A.I.'s point of view, it's a regrettable necessity. From Darwin's point of view, it's survival of the fittest.

          Either way, it's inevitable: if A.I. becomes smarter than us, we'll live or die as a species at its sole discretion. Most humans don't seem too ready to deal with that reality, but there you go...
          • What makes you think that an AI can distinguish between or even value a human and/or another AI?

            What defines ethics to an AI anyhow?

            Either way, it's inevitable: if A.I. becomes smarter than us, we'll live or die as a species at its sole discretion.

            Does this keep you up at night?

            Are you Bill Joy?

      • "what is it about AI that makes people think it will automatically be evil?"

        Perhaps it's not the AI in general. Perhaps it's the fact that the program's acronym also happens to be the name of the USAF's ICBM of choice. I've gotten to the point where the word "minuteman" makes me immediately think of a rocket instead of a militia member.
      • There are good and evil humans (I see the Bill Gates Borg icon as I type....)--what is it about AI that makes people think it will automatically be evil?

        Interesting point. However I think it's more likely that AI (if it was smart enough and objective enough) would think that humans are evil (because on the whole we are selfish, etc). Why should AIs value human life, especially if we refuse to value theirs?

        Just a thought.
  • Question... (Score:3, Funny)

    by T3kno ( 51315 ) on Friday July 12, 2002 @03:34PM (#3873252) Homepage
    In order to be politically correct are they going to build a female version called MinuteMaid?
  • does anyone else hear the soundtrack of Terminator 2 [amazon.com] when they read about this?

  • Autonomous (Score:2, Interesting)

    Autonomous sounds scary.

    At times when armies do the "Wrong Thing", there are deserters. With robots, especially autonomous ones, that check disappears, which sounds rather scary.

    I think Terminator's (the movie) vision was a bit too far-fetched, but it brings up a good point. It's a *really* cool idea, but we'd best make sure someone has tight control over it.
  • Sounds like the Bolo, Mark XX to me.

    http://www.iislands.com/hermit/bolo.html

  • "...Multimedia Intelligent Network of Unattended Mobile Agents, or Minuteman..."

    Tell you what...ditch the robots, get someone who can make cool acronyms and go from there.

    For example: B.A.D.A.B.O.O.M.

    Ballistic Aeronautic Destructive Assault Bullet [which has a tendency to be] Overly-Optimistic [in its] Massacre.
  • Future war (Score:3, Insightful)

    by Graspee_Leemoor ( 302316 ) on Friday July 12, 2002 @03:36PM (#3873271) Homepage Journal
    So what they're going to do is basically conduct future wars like in certain RTS games: as the FMV footage in those games shows, "you" are some guy controlling remote units via some terminal in some concrete bunker.

    This of course has been predicted by many SF authors for years, and even surpassed where we have the case of AIs continuing to generate units and attack each other long after all the humans are dead.

    Karma will now be dispensed, yea! I say, dispensed to those posters who can cite authors and works as examples of this.

    graspee

    • Blizzard Entertainment announced its entry into the military control software market.

      Our advanced unit control interface will allow the easy, dynamic control of a large number of military units of various types. Unit divisions can be formed on-the-fly allowing for easy regrouping of units.

      Our revolutionary interface provides not only visual information but also features our advanced Aural Notification of Unit Situation system (A.N.U.S.). Simple audio cues inform the operator what military units are up to both on and off screen. Aural cues such as "daboo", "zug-zug" and "work completed" will inform operators of the current status of infrastructure units, and codes such as "We're under attack!" will provide data pertaining to attack units.

    • The classic Berserker books by Fred Saberhagen come to mind. They repair, build, and attack; they even temporarily make alliances with 'goodlife' to advance their overall goal of destroying all life in the universe. Then there's the Doomsday Machine episode in ST:TOS, which was revisited in ST:Voyager with the non-replicating robots who destroyed their creators. There are a bunch more, but I can't remember the authors or short story titles.
  • Sure (Score:3, Funny)

    by scott1853 ( 194884 ) on Friday July 12, 2002 @03:37PM (#3873280)
    And they can use that wonderfully accurate facial recognition technology to differentiate between good and bad guys and kill the right one.
  • They just started playing ads in the movie theatres for T3: Rise of the Machines.

    It looks like the army is continuing its new public relations push of making the forces look cool.

    • > They just started playing ads in the movie theatres for T3: Rise of the Machines.
      >
      > It looks like the army is continuing its new public relations push of making the forces look cool.

      Look cool?

      Dude. This is Slashdot. Giant armies of killer robots don't look cool -- giant armies of killer robots are cool.

  • It's interesting... (Score:2, Interesting)

    by ArthurKing ( 577487 )
    ...to think about this. It seems that it could possibly become the exclusive means for fighting wars in the distant future, which more or less flies in the face of the concept of war. As I see it, in the past, the goal of a nation at war has been to inflict the most casualties on its enemy, thus preventing the other nation from defending itself against further attacks. With this method, however (bearing in mind that we're in the distant future), the robots could be turned out quickly and cheaply. There would be no concept of morale among machines, and no loss of manpower to a nation that suffers great mechanical casualties. Does this alter the idea of war, making it a longer, more drawn-out affair?

    Additionally, someone commented that the system would not be impervious to a hack attack launched against it (what system is?). Thus, the concept of wars being fought almost exclusively from a command prompt comes into play (I seem to remember this being a hot topic not too long ago... power grids taken down at key times, etc). I suspect that things such as these will have very interesting ramifications in the way that war is fought...
    • You make a very valid point that this could be used to draw out combat and wars into prolonged, non-human affairs. However, you miss one important fact: you are assuming that both sides have robots to do their bidding. Such a situation as the Cold War; superpower vs. superpower? Who is gonna develop the massive robot army to oppose us? Russia? Too broke. China? They can't even feed themselves. The entire point of this use of robotics is to allow the almighty US citizen soldier to be out of harm's way. Wouldn't you prefer a mechanized division of remotely controlled machines to attack an enemy (whoever that enemy may be)? This both gives us many advantages on the battlefield and protects our soldiers. I mean come on, cut the "AI is gonna take us over so we can all live in the Matrix" crap and realize that in the end this saves the lives of American soldiers.
    • If you're worried that an enemy would simply crank out more machines to slow you down, the obvious approach is to knock out the factories and the humans who support them. At the extreme, the range and response time of ICBMs means that you can't hide behind a robotic front, at least in all-out war. Cruise missiles launched from aircraft, surface vessels, and submarines provide similar capabilities, although with lesser range and more deployment time.
  • What's the progress? (Score:4, Informative)

    by Pulzar ( 81031 ) on Friday July 12, 2002 @03:41PM (#3873310)
    The official web site [ucla.edu]. The quality and the amount of information on this web site seem to indicate that this project is in a very early stage, i.e. they haven't really done much. The links on the side mostly go to other UCLA departments. Although, they do have some interesting-looking demo units [ucla.edu] available. They don't seem to pack much of a punch, though ;).

    Maybe somebody from the project is reading this, and can provide some real information?

  • finally (Score:2, Interesting)

    by tps12 ( 105590 )
    It is high time we put a stop to the needless waste of human lives. Our sons have fought victoriously in war after war, and we as a nation have paid our dues in full. It's time to let the robots step in and do our dirty work.

    Also, I see no reason to limit the applications of this technology to peacekeeping and stabilization of foreign lands. Once it's been tested for several years against hostile populations, we could bring a scaled-down version back home, for use in some of the high-crime areas of the US.

    People complain about how cops and soldiers are unfair; well, we can program fairness right into them. They can't be bribed, don't have prejudices, and they're bullet-proof.

    Also, we are starting to develop the technology to grow body parts and organs. Why not incorporate the two? Give a robot cop some real human hands, for superior weapon-handling skills! We could even breed entire brainless bodies, equip them with computer systems, and put them on the street. Economical and effective, and our children don't end up dying for some empty slogan.
  • We should be working on clones! Clones, man, not droids! Droids suck!
  • by swb ( 14022 )
    But won't they get pissed when they find out what the royalty payments are?
  • Morality of war... (Score:3, Insightful)

    by telbij ( 465356 ) on Friday July 12, 2002 @03:47PM (#3873370)
    Well, war ethics are going to have to be completely rewritten if this happens, because previously the idea was that to win a war you had to send some soldiers to their death. If we don't have to send in soldiers anymore then the American public will be easily distracted from our hideously hypocritical foreign policy decisions, since they don't actually have to worry about their sons and daughters.
    • If you haven't noticed, the majority of casualties from recent wars have been from *friendly* fire. Plus we stopped the draft, so every soldier out there ASKED to join the military.

      We of the American Public couldn't give one rat's ass about what the military does, in a capitalistic sense. We've got moral and fanboy caring, sure (I personally find a just war morally necessary sometimes, and the geek in me says "yeah!" whenever it hears about a new high-tech way we've waged a war), but not a capitalistic measure--war does not, in any way aside from slightly higher taxes, affect our everyday lives.

      Well, except for that NYC and DC thing 11 months back. If Pres. Bush had said "we need more soldiers, we're going to swarm the entire subcontinent and put an end to this", myself and most of the people I know would be in the military right now.
      • We stopped the draft? When? Admittedly, it's been a while since I turned 18, but I distinctly recall having to fill out a selective service card.

        We don't currently draft the military via selective service. That's not at all the same as stopping it entirely.

        Other than that, I can't disagree with pretty much anything you said. Frankly, I wouldn't be surprised if we lose more military personnel during peacetime training due to mistakes than during wartime. But I certainly don't have the numbers to back that up.
        We of the American Public couldn't give one rat's ass about what the military does, in a capitalistic sense.

        Your arguments are extremely short-sighted. The military is the backbone of the country, the government, and the capitalistic system. The two issues that people seem to forget are that (1) you need a military to have a society and (2) you need a military that listens to the society.

        In regards to the first point, the American government is meaningless without the ability to put its decisions into force. Trade with Taiwan? What if China says no and sinks all merchant vessels? Note that, in the US, law enforcement is rolled into this because the government must be able to enforce its decisions domestically as well as abroad. In other nations there is little to no distinction between the military and law enforcement.

        As to the second point, assuming the military has the strength to impose the nation's will, the military must also listen to the government (meaning that it must serve the citizens). This doesn't always occur. Countless governments have been overthrown by their armed forces. What if the US military personnel decided that they're sick of low pay and getting sent around the world to do shit work (like peacekeeping)? With a draft, the military is composed of "common citizens". Without it the military is, essentially, composed of mercenaries. There is no obligation from the general population. In many European nations there is a mandatory period of military service. This means that every citizen has a stake in how the military is used. Without that connection people begin to not care how the military is used.

      • War is useful for population control. If you have more people than can live the lifestyle they want on the land you have, then war is a good way to randomly get rid of a few.

        Note that the above needs to be vague. If everyone wants to live like I want to live (1000 acres of land all for me, with a private 300-acre lake, within 2 miles of a modern supermarket), that is very different from people living another life (e.g. a small apartment in a skyscraper near plenty of theater and nightlife). Resource limits are different for each style. There is a big difference between beef and rice as the main staple of the diet, though you can be healthy with either. When there isn't room for you to live your lifestyle, you have to get rid of some people, or change your lifestyle.

    • Well, war ethics are going to have to be completely rewritten if this happens, because previously the idea was that to win a war you had to send some soldiers to their death.

      I can see how military strategy would need to be rewritten but I don't understand why lack of American casualties is somehow going to change the ethics of war.

      If we don't have to send in soldiers anymore then the American public will be easily distracted from our hideously hypocritical foreign policy decisions, since they don't actually have to worry about their sons and daughters.

      I would argue that people are already distracted from our two-faced foreign policy. The American public is almost always in favor of war if the President tells them it's necessary.

      I don't quite understand everyone's moral qualms about mechanised warfare. I can see robot vs. robot being pointless, but that's not likely to happen for some time in the future. In fact I can see a potential benefit to heavily mechanised, disposable warfighters. Suppose some very powerful country blatantly invades a weaker neighbor. The international community recognizes that it's a terrible act, but no one is willing to go to war against the powerful aggressor because they are scared of casualties on their side. Robotic soldiers would allow us to "do the right thing" and not worry about the price we'll pay.

      Unfortunately, this idea only works if you trust your elected officials to only fight just wars. But that's another matter. There is nothing wrong with robotic warriors in theory. In practice, however, it may give the President carte blanche to wage any war he wants. However, I would argue we're not too far away from that right now.

      Just some thoughts...

      GMD

    • Let's clarify something. The most moral way to wage war is to get it over as quickly as possible, with the least amount of casualties on both sides. If you're going to fight a war, win the war, and win it fast.

      The question of whether killer robots are moral or amoral is in my view a complete waste of time. Once you've decided to wage war, you want to win it (note that I'm talking about *war* here, not peacemaking and peacekeeping operations, which are frequently confused with, but are completely different in character from actual war).

      The United States has become a leader in warfare technology precisely because the American public values the lives of its sons and daughters. If our opponents had access to this sort of technology (assuming it works reliably and effectively) they'd use it. Would the Chinese government have used human wave tactics during the Korean War if it could have pursued its military goals by less horrific means? Of course not.

      I'd make the suggestion that if the technology exists, and you don't use it, you're willingly killing more of your own and potentially of the enemy as well.

      Which is more moral?

  • On hacking. (Score:5, Interesting)

    by yasth ( 203461 ) on Friday July 12, 2002 @03:54PM (#3873412) Homepage Journal
    I am far less worried about hacking than some people seem to be. What I am worried about is that they will obey commands. I mean, what happens when, say, these are sent against Cuba, but the General/Admiral decides that he really wants all of south Florida to retire in, and captures it with his drone army? Normally it isn't possible because American troops are (supposed to be) loyal to their country first and not their officers, but now you are reducing the number of people needed to enable a coup or power grab. Fewer people is both easier and more likely to be successful.
    • the General/Admiral decides that he really wants all of south Florida to retire in, and captures it with his drone army.

      Not too likely, unless the general can use the 'bots to convince the soldiers that they are in Cuba. See, these robots don't shoot guns and fire missiles, so we can rule out the Terminator scenarios. They just provide information about the battlefield, and act as wireless network transceivers.

      When we eliminate the need for soldiers entirely, then we have something to be concerned about. Besides, who's gonna miss South Florida? Not like Florida ever made a difference.
  • This really reminds me of Philip K. Dick's short story "Second Variety", about a race of "claws" (both little choppy chainsaw robots and human-mimicking "bunker busters" who got you to invite them back home).

    I really wish we just decided we weren't going to be the monsters who open this box. It's worse than the A-bomb. At least an A-bomb had a relatively confined kill zone.

    I'm sure I'll be dead before things have a chance to get so bad, but why are we in such a hurry to do this?

    • YES! This is the story I was thinking about. The US and the USSR are duking it out on the Earth, and the US is losing, so we drop little self-replicating killer-robot/bomb factories from our base on the moon. The factories continue to improve, and we find out to our horror that not only are the little buggers winning the war, they're getting smarter as well...
  • Once again, the Onion - one-stop shopping to meet all of your satire needs.

    I Believe The Robots Are Our Future [theonion.com]

  • please, no. (Score:3, Interesting)

    by supernova87a ( 532540 ) <kepler1.hotmail@com> on Friday July 12, 2002 @03:59PM (#3873461)
    While this story isn't really new (we already have flying drones, cameras, etc.), I have to say that I am disturbed by it.

    If robots are put to use as our new soldiers, what restraint will there be on those people in the military who are already too eager to send our forces overseas to police/invade/kill others? No one will complain that their sons/daughters are paying with their lives, and it will only make it easier to engage in armed conflicts. This is the nightmare of the future, when everyone sends their robots to fight each other.

    There will be those who say, "but anything that saves our boys from dying is good." But this is not a sustainable policy -- it's not ethical for us to want to come up with a force that is only to our benefit, so that we can fight without the consequences of fighting. If everyone took that position, we'd be fighting all the time.

    The true sustainable solution would be to work on the real causes of conflict in the world, and spend our billions of dollars to try to educate and help peoples so that we're not the target of violence. I tell you, it's much more efficient than trying to put out the fire once it's started. Why can't people see that long term issue, and work on that, rather than just coming up with new/better ways to kill others in the short term?
    • > If everyone took that position, we'd be fighting
      > all the time.

      Fairly cynical view of humanity, eh?

      I think your fears are unfounded, or at the least, exaggerated. Yes it can enable unsavory individuals to launch their plans of world domination with fewer restraints, but in a world where a single man can encourage his henchmen to fly planes into skyscrapers, someone will find a way to do it regardless of what the options are.

      For every person that loves violence and would eagerly "fight all the time", I bet you there's two more people who want nothing more than a full belly, a warm bed, and some peace and quiet.

      And as long as those peaceful people cut out the cancer when it becomes a problem, even the seductive power of a fully automated army doesn't ensure we're doomed to a future of eternal warfare.
    • 1. In the United States, the military takes orders from civilians. If the president, draft-dodger or no, gets a declaration of war approved by the Congress, it goes. If there are no such orders, it doesn't. And, actually, the Pentagon is quite cautious these days -- it's the civilians who aren't.

      2. Some of us actually pay attention to things beyond our own lives, and consider factors beyond "gee, is a family member risking his life" such as the economic and diplomatic ramifications, as well as whether or not a military action seems feasible. The US does /not/ invade places on a whim.

      3. It is ethical to promote justice. This normally requires using force, because those who behave immorally (such as attempting the destruction of others merely for having different belief systems) tend not to cease doing so just when asked. The world will not become more just simply by wishing it; a large part is incapacitating those who persist in injustice.

      The true sustainable solution is to eliminate all people, as that is the only way to stop conflict. Education is not particularly feasible on people who do not want to be educated; in fact, many people will label "hate speech" just about any criticism of other cultures, let alone any (doomed to failure...) attempts at mass indoctrination that do not involve invasion and annihilation of existing power structures (as would be required for true indoctrination; one has to totally dominate the communications systems to control input...). In the meantime, until genocide of the species has been achieved, I would recommend that states not lower their guard. Intolerant doctrines such as Wahhabism (which does not tolerate anything but puritanical Islam) won't disappear anytime soon when institutions (such as the Saudi government) benefit so enormously from them.
      • 2. Some of us actually pay attention to things beyond our own lives, and consider factors beyond "gee, is a family member risking his life" such as the economic and diplomatic ramifications, as well as whether or not a military action seems feasible. The US does /not/ invade places on a whim.

        This is an important point. All this "robots will make war too clean" stuff is crazy. War is incredibly destructive. Not just in the number of people who die but in economic and political terms. There are some who believe that GWB is waiting for the American economy to bounce back before he fulfills his dream of knocking off Saddam. Right now, our economy probably couldn't take the strain of a difficult conflict (the Afghan conflict hasn't been too tough on us, you have to admit).

        GMD

    • It sure IS ethical to come up with robots to fight your wars for you. A soldier is a tool. So is a robot.

      War is not fair.

      The one thing it will mean is that those in command will be more directly responsible for the actions of the drones, rather than blaming it on soldiers misbehaving/chain of command breakdown/whatever.

  • by Scutter ( 18425 ) on Friday July 12, 2002 @04:01PM (#3873476) Journal
    Everyone knows all you have to do is fly your ship into the hangar of the mothership and destroy its reactor, and all the drones will cease working.
  • ...I think they should call them Terminators.
  • Why can't robot contests on TV use explosives and machine guns? That would be far more interesting than a big hatchet that never does any real damage.

    True, you couldn't have a live audience, but who needs them anyhow?
  • Cordwainer Smith [cordwainersmith.com] wrote about this, sort of.
  • Robotic Battlefield? (Score:3, Interesting)

    by sinister minister si ( 589328 ) <maniacsoup AT hotmail DOT com> on Friday July 12, 2002 @04:05PM (#3873519) Homepage
    Let me give some possible scenarios. After reading the scenarios, tell me if it sounds plausible for real-world use.

    Scenario One: The system has tracked enemy troop movement and friendly troop movement. Enemy troops and friendly troops clash in battle. At this point, on the grid, everyone looks like they are in the same place. There's no way to distinguish friendly from enemy. As the combatants regroup to different geographical points, an airstrike arrives. There has been no time for communications to propagate to the system about which group is the friend and which the enemy, and it is doubtful that the system has a database of the facial structure of every single friendly in our forces. What happens? Does the system pick one group randomly and tell the autopilot to bomb that group? Does it use probabilities? What is the acceptable margin of error, when that error is a 1000 lb bomb falling on you? Who in our government decides the number of our own soldiers that we can kill and still think it is OK?

    Scenario Two: The system is flying above a battlefield. A situation develops that the programmers of the software running these things never thought of. How does the system react? Please, and I speak mainly to any combat veterans at /., somebody tell me how many variables there are in a live battle. What happens when the system is exceeded? Suddenly, the new information needed for combat cannot be transmitted, because it does not exist.

    I ask you, would you trust an unmanned computer to shield you from a live machine gun pointed at you? I wouldn't. A manned computer, maybe, but not unmanned.
    • Maybe the friendly soldiers wear special "don't kill me" underwear that takes part in some sort of encrypted authentication with the killer robot dogs.

      Of course the round trip for something like that would take a while... and we can't be storing all our private keys in robot dogs that could potentially be captured, nor would we necessarily want enemy soldiers de-pantsing our POWs.
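      Joking aside, the idea being gestured at is a standard cryptographic challenge-response: the interrogator sends a random nonce, the friendly transponder answers with a keyed MAC of it, and the key itself never crosses the air. A minimal sketch assuming a pre-shared key (all names invented; real IFF systems are considerably more elaborate, and rotate keys precisely because hardware can be captured):

        # Toy challenge-response "friend or foe" check, as the joke suggests.
        # Everything here is invented illustration, not a real IFF protocol.
        import hashlib
        import hmac
        import os

        SHARED_KEY = os.urandom(32)    # provisioned to both sides beforehand

        def challenge():
            return os.urandom(16)      # fresh nonce: old answers can't be replayed

        def respond(key, nonce):
            return hmac.new(key, nonce, hashlib.sha256).digest()

        def is_friendly(nonce, answer):
            return hmac.compare_digest(respond(SHARED_KEY, nonce), answer)

        nonce = challenge()
        assert is_friendly(nonce, respond(SHARED_KEY, nonce))   # friend passes
        assert not is_friendly(nonce, os.urandom(32))           # foe/noise fails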
    • They would probably just bomb the shit out of everything just like they do now.
    • The NY Times author and the submitter got it wrong. This system, based on the UCLA Engineering website, is simply designed to be a mobile communication grid. This quote says it best:
      "Minuteman will enable the Navy to bring fully networked force to the battlefield," Gerla said. "This will be the 'glue' that holds together supporting technologies such as mission planning, path planning, reasoning, decision making and distributed real-time computing and control."
      These things are not designed to carry the bombs. That's for the X-45A to do. And that has a guy controlling it back in a bunker. He's the one using standard military protocol who makes the decision on friendlies or not-friendlies. And don't think a human in a cockpit on a bombing run has any better idea about what he's dropping his bombs on. Either he's guided to the enemy, or he commits fratricide. It's the men with the plans, and the boys in the AWACS, who are ultimately responsible that some munition isn't dropped on a friendly.

      There is a top-level project called "Intelligent Autonomous Agent Systems" of which this is a part. But there's nothing coming out of that which resembles T2-style aggressive AI-controlled vehicles. Most of what they mean by autonomous is the ability for the system to reconfigure itself if it loses an 'agent', i.e., an information node. Another UAV could move from Group-A to Group-B to cover a lost eye-in-the-sky.

      Although, I think there is room for truly autonomous aggressive UAVs. During Desert Storm, much of the day-to-day airborne offense took place in kill-boxes. They basically put a grid over the desert, and certain pilots or squadrons were told to destroy anything moving in grid X:Y. These boxes were very much outside the 'Fire Support Coordination Line', meaning these air missions didn't need to be coordinated by someone on the ground. They were truly deep in enemy territory. When you run missions near troops, the FSCL becomes the important factor. You can't target or shoot anything behind it (your computer won't let you either). Also, anything behind the FSCL requires an on-the-ground coordinator to give you the go-ahead. I think we could see in 10 years roving aggressive UAVs that patrol grids and kill anything they find in them. It's no different than what our pilots do now.

      In fact, our human pilots make mistakes more than machines do. There's a famous videotape of an Apache captain taking out a Bradley and an M-113 at night, all captured on his FLIR. He was providing FSCL support. His computer would not give him the green light to fire; he in fact had to override it in order to attack. His ground command did clear him for the shot verbally, telling him they had no vehicles in that area. There could be an argument that a mistake like that would not happen if it was a machine making the decision. I believe the real cause of that incident was the moving of the FSCL, and the airborne guys not getting the most recent FSCL coordinates (although his computer did have them).

      -malakai
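      The kill-box and FSCL rules described above reduce to a simple gate in front of any weapon release: inside an assigned box and beyond the FSCL, engagement is allowed; short of the FSCL, a ground coordinator has to clear the shot. A toy sketch of that gate (the grid cells, the FSCL position, and every name are invented for illustration):

        # Toy engagement gate modeled on the kill-box / FSCL rules above.
        ASSIGNED_KILL_BOXES = {("X", 3), ("X", 4), ("Y", 3)}  # cells this UAV patrols
        FSCL_NORTHING = 40.0    # friendlies operate south of this line (invented)

        def may_engage(cell, target_northing, ground_clearance=False):
            if cell not in ASSIGNED_KILL_BOXES:
                return False             # outside the assigned grid: never fire
            if target_northing >= FSCL_NORTHING:
                return True              # beyond the FSCL: free to engage in the box
            return ground_clearance      # short of the FSCL: a human must clear it

        print(may_engage(("X", 3), 55.0))                          # True: deep kill-box
        print(may_engage(("X", 3), 12.0))                          # False: behind the FSCL
        print(may_engage(("X", 3), 12.0, ground_clearance=True))   # True: cleared shot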

  • Interesting Idea (Score:3, Interesting)

    by sielwolf ( 246764 ) on Friday July 12, 2002 @04:10PM (#3873548) Homepage Journal
    I always thought that this was a good idea (a pipedream maybe, but a good idea nonetheless). And I think the key is not the fact that human lives would not be put in danger, or any inherent accuracy, or organization.

    I think it is that, through technology, a 24 hour military force is possible and may be the greatest military force ever created.

    Patton said that the most important factor in a soldier is not his skill but his willingness to fight. A soldier that can break his enemy's will through sheer determination is the pinnacle of design.

    To put it another way: the key of combat is not to win, but to assure you don't lose (I forget the source, but there is a quote where some North Vietnamese officer was talking to his American counterpart during the formation of the cease-fire. The American officer said "We never lost a single battle," and the NVA officer responded "But you still lost the war."). The will to fight is how a tenacious but weak force can beat a better-equipped but less determined foe (almost any sort of successful freedom-fighter action of the last 300 years).

    Now, a robotic force that could be fielded to outlast all human opponents (not necessarily overrunning them or zapping them with laser rays) would be exceptional in that it could wear down and break any force with constant pressure.

    An enemy that does not sleep, does not eat, does not take 10 minutes out of the day to wipe its ass, does not worry about anything other than the elimination of you, is a truly scary thing indeed.
  • by Darth_Burrito ( 227272 ) on Friday July 12, 2002 @04:16PM (#3873581)
    Heh, the next time a script kiddie says he owns you he'll mean it, and "you" will refer to the entire US via its military network. Seriously, large networked armies sound like a recipe for Code Red meets Red Dawn [imdb.com].
  • 12 years ago, a close relative of mine was working on the autonomous air vehicle and the autonomous land vehicle at Martin Marietta (before it was Lockheed Martin). The UAV was supposed to be able to recognize and avoid threats, while shooting smart bullets at targets it prioritized. I have a feeling this eventually formed the basics of the Predator drone.

    The ALV was basically an unmanned tank. It was a much bigger problem (visual recognition of terrain and route plotting). I do remember they had a couple of prototypes. The tech ended up being of more interest to smart car people.

  • by grip ( 60499 ) on Friday July 12, 2002 @04:31PM (#3873664)
    The wars of the future will not be fought on the battlefield or at sea.
    They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.
    -- Military school Commandant's graduation address, "The Secret War of Lisa Simpson" as found on the best Simpsons site http://www.snpp.com
  • Anyone remember the game ROBOTWAR by Silas Warner, the same author/company that published the first Castle Wolfenstein? You wrote small AI scripts for your robot, put them on a battlefield, and they duked it out. It all ran on Apple ][ machines.

    I pictured the government robots making the 'plink plink plink' sounds of a Mockingboard-C...
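    For those who never saw it: each robot was driven by a small player-written program that the game ran every turn. A rough Python-flavored impression of the idea (the actual game used its own BASIC-like language on the Apple ][; the two "scripts" below are invented flavor):

      # Two toy robot "scripts" duking it out, RobotWar-style.
      import random

      def rusher(me, enemy):
          # Close the distance, then shoot at point-blank range.
          if abs(me["x"] - enemy["x"]) > 1:
              me["x"] += 1 if enemy["x"] > me["x"] else -1
          else:
              enemy["hp"] -= random.randint(5, 15)

      def sniper(me, enemy):
          # Hold position and plink away from long range.
          enemy["hp"] -= random.randint(2, 8)

      bots = [{"x": 0, "hp": 100, "script": rusher, "name": "RUSHER"},
              {"x": 20, "hp": 100, "script": sniper, "name": "SNIPER"}]

      while all(b["hp"] > 0 for b in bots):
          for me in bots:
              enemy = bots[1] if me is bots[0] else bots[0]
              me["script"](me, enemy)

      print("winner:", max(bots, key=lambda b: b["hp"])["name"])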

    • 'Omega' by Origin was along the same lines... really cool game. Your little programmed tank units could even communicate and coordinate with each other. Ah... the golden years of the C-64 :)
  • What would you rather have in the line of fire in a war, humans with parents and spouses and children, or robots? Seriously...think about that a little. Human beings are going to fight wars whether we like it or not...why not minimize our human casualties? Certainly it would be neater (in both senses of the word) if both sides fought with entirely cybernetic armies, but better a robot lay its metal ass on the line rather than a human.

    Imagine if all conflicts were settled with Battlebots/Robot Wars-style bot fights! That would rule! Gives new meaning to the term "Rock 'em Sock 'em Robots!"
  • Why do so many allegedly smart people (nerds) cite fiction as the basis for an opinion? You know that's not real, right? Yeah, I know about half the posters are making a joke, but I really worry about the tenuous grasp on reality that the other half has.
  • ... is that the USN is the branch of the military researching autonomous aircraft, not the USAF. Sheesh, not only are they behind in fighters but they'll also be behind in UAVs!

    Of course, considering the USAF to be a branch of the military is really stretching it... :)
  • mesa hate osama. send in the clones.

  • Although this is a stretch, I'll get in on this one, because the humanistic issues are astounding. War involves death and destruction, but honestly, it also involves some morals, even to win.

    Honestly, the object of war is defeating one's enemies, not destroying them utterly. Creating a machine that might not have sympathy for non-combatants, personal property, innocent victims, or even animals scurrying away seems like a terrible idea. And an utter waste.

    Without the concept of losses on your side, you see total conquest as the only way. Total conquest can mean total death. Here is a short version of my argument:

    Mechanical weapons have pinned down a group, and that group decides to surrender. The person or entity on the other side of that machine feels no threat to his life, so, like an execution, they might just "pull the switch" on them. WHY? It is a colder decision... or that decision is automated for "no quarter" fighting.

    Either way, you are not going to feel the sympathy required to cut a surrendering group a break in battle if you are removed from it. You might let a group surrender if you are getting bullets over the top of your head too, but I find it less likely that you would let them surrender if you were making the decision in an air-conditioned military building in a suburb.

    However, if you made them impenetrable pacifying machines instead of a weapons platform, then that is an idea. Robots might be used for capture, but using them to kill sounds dreadful.
    • > Mechanical weapons have pinned down a group, and that group decides to surrender. The person or entity on the other side of that machine feels no threat to his life, so, like an execution, they might just "pull the switch" on them. WHY? It is a colder decision... or that decision is automated for "no quarter" fighting.

      The CO of the guy "pulling the switch" takes one look at the archived MPEG-4 stream and throws him in the stockade for the rest of his life for a war crime.

      The excuse "I was just following orders" or "they were comin' right for us!" doesn't fly when the video stream's there for all (all along the chain of command) to see.

      Perhaps the chain of command can be corrupted and will cover it up. But that's a far greater risk with manned warfare (which, by definition, features fewer witnesses) than with our hypothetical war-by-remote-control-robot.

  • Might I suggest a design like this [bbc.co.uk] (AV's on left and right, Doctor and companion in middle).
  • Yes, [Ender].... this game is going to start being really difficult.
