Technology

Unmanned (But Armed) Aircraft Experiments In 2001

John Warden, architect of the Gulf War air campaign, believes that by 2025, 90% of combat aircraft will be unmanned. Next spring, the X-45A UCAV, the first armed aircraft without a pilot, will make its maiden flight. Replacing the pilot with a ground controller cuts the price of each unit by two-thirds and makes the aircraft easier to transport. The Economist has more, and states that 'the decision to fire weapons should be made by a human, to reduce the risk of "friendly fire."' This is not logical: since the planes can be networked and thus know each other's relative positions, preventing friendly fire is a much simpler problem than the visual recognition required to determine what to shoot at, unless you don't mind hitting non-military targets. I wonder what Asimov would think.
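(To make the networking claim concrete: checking a weapon release against broadcast friendly positions really is just a distance test. A minimal sketch in Python; the safety radius, coordinates, and function names are illustrative assumptions, not anything from the article.)

    from math import dist

    SAFETY_RADIUS_M = 2_000   # assumed keep-out bubble around each friendly

    def clear_to_fire(aim_point, friendly_positions):
        """Inhibit release if the aim point is inside any friendly's bubble.
        Note what this does NOT do: decide whether the target is hostile."""
        return all(dist(aim_point, p) > SAFETY_RADIUS_M
                   for p in friendly_positions)

    # Two networked friendlies broadcasting (x, y, z) in metres; the second
    # check fails because a friendly sits about 540 m from the aim point.
    friendlies = [(1_000.0, 0.0, 3_000.0), (50_000.0, 0.0, 9_000.0)]
    print(clear_to_fire((200_000.0, 0.0, 5_000.0), friendlies))  # True
    print(clear_to_fire((1_500.0, 0.0, 3_200.0), friendlies))    # False: hold

The hard part, as the summary concedes, is everything this check leaves out: deciding whether anything at the aim point is a legitimate target at all.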
  • wow.. can we say robocop.. just found this [navy.mil] over at navy.mil.

  • Alrighty, I've seen some presentations on these kinds of things. The basic problem in current aircraft design is that aircraft can handle multiple times the acceleration the human body can. The F-16 can already take g-loads that would have its pilot's brain squishing out his/her ears. And that's still a 4th-generation fighter, mind you, not a 5th-generation design like the F-22, which probably outstrips its pilot by an even wider margin...

    Once you remove the pilot, a lot of interesting ideas become possible, since you just ditched about 25% of the aircraft's weight and most of the physical constraints. Entirely new designs become possible because you don't need a cockpit, etc...

    This is a great idea, then, except for one thing. The Air Force is expecting the next generation of air combat to be quieter than the previous generation. The F-22 can fly in passive mode and at least locate (if not target) enemy aircraft from the noise they put out (their active radar, radio chatter, etc.). Things like IFF are aerial bull's-eyes in these cases. A continuous transmission like the one a UCAV would put out isn't going to be much better. There is a great benefit to having a self-contained fighting aircraft in this case, which is something a UCAV is not and most likely never will be. AI is nowhere near where it needs to be for these things to be fully autonomous instead of simply unmanned.

    As for men pulling triggers being unnecessary, dream on. One of the most important parts of combat is making sure the enemy doesn't know where you are. Would it be wise to create a remotely accessible database (which could conceivably be hacked) showing where all your aircraft are at any given time? It would be a target list if the enemy got their hands on it and could exploit it. Think security here.

  • It is very hackable; that's why we don't use it during attack missions.

    Sanchi
  • I find this trend disturbing. As war becomes more mechanized, we lose touch with the human cost of warfare. A time may come when a ruthless enemy, born of oppression only partly intentional, will be able to turn such weapons against the current major powers of the world.

    Am I alone here in thinking that the contemporary cartoon show "Gundam Wing" is an interesting critique of mechanized warfare which is valid today, though marketed to children?

  • Plus remember, even if the force the gun is pointed at is an enemy, the decision to pull the trigger is often political as well, based on rules of engagement which are formulated before engagement takes place. Combine this with the possibility of the enemy capturing a robot plane and using it as a "robot shield" to trick out the sensors, and you have a potential problem.
  • was that the 72nd or 73rd episode?
  • This reminds me of a story I heard out of The Gulf War. I'm not sure, but I think the general was Colin Powell. Maybe somebody else knows for sure.

    Anyhow, this reporter asks the general "what are your plans for the enemy". He responds: "we're going to find them and kill them." The reporter was shocked. Surely he meant "neutralize", "terminate", "subdue" or some other euphemism. When pressed again the general reiterated: "we're going to find them and kill them. That's what war is all about."

    To me, this was almost as funny as that bit on Saturday Night Live where they made fun of the reporters during that war, who were essentially asking the generals to reveal their battle plans before the action. If the aforementioned incident really occurred, it was an excellent way for the general to answer the reporter's question without revealing any secrets.

  • >IFF is not very useful. Do you perhaps remember a few years back when the air force shot down two army helicopters in the Iraq northern no-fly zone? They misidentified them.

    Yeah, AND THEY HAD THEIR IFF TURNED OFF.

    Later
    Erik Z
  • by Sanchi ( 192386 ) on Monday November 13, 2000 @05:55PM (#625806)
    Please, learn something before you spew your nonsense. First, some background on me: I work on the computer program that runs on AWACS. I have seen what the system can do. The F-22 will not run any active systems when on an attack run. All of the detection is left up to us (AWACS). And the radar system on it sucks; it's only a planar two-pass design (hard to explain).

    And IFF is used by every single airplane in the air, not just NATO.

    Sanchi
  • by SuiteSisterMary ( 123932 ) <slebrunNO@SPAMgmail.com> on Monday November 13, 2000 @03:58PM (#625816) Journal
    This is not logical: Since the planes can be networked and thus know each other's relative positions, preventing friendly fire is a much simpler problem than the visual recognition required to determine what to shoot at, unless you don't mind hitting non-military targets. I wonder what Asimov would think.
    Ok, so they won't shoot each other. But what about other friendly forces? Sure, put a location beacon on them, too. Then the enemy either a) homes in on the frequency and shoots them, or b) jams them and watches the chaos ensue. Humans will always pull the trigger, if only so that the brass knows whom to point the finger at later.
  • by TheDullBlade ( 28998 ) on Monday November 13, 2000 @07:41PM (#625820)
    From what I've read about modern military history, the goal of war is not to kill people, it is to destroy the combat effectiveness of the enemy's forces.

    This kind of thinking is the reason why the U.S. military can't beat a truly committed enemy. It doesn't help that their idea of "combat effectiveness" is hopelessly self-referential (combat effectiveness is the ability to reduce combat effectiveness of an enemy).

    The goal of war is to crush your enemies, see them driven before you, and hear the lamentations of their women. Even uneducated barbarians know that.

    All kidding aside, war is a means, not a goal. War is the way you get something you want when nothing else will work. Principles like "we should avoid civilian casualties" and "a tank is worth more than a handgun" are not absolute.

    The U.S. military works from several basic assumptions that hamstring them in many situations (and are forced on them by the "CNN factor"): they are trying to help the civilians in the area, their enemy is an evil dictator whose people hate him, and they want the area to be peaceful. These assumptions often conflict with, and even contradict, the only possible logical purposes of their attacks, leading to confusion and apparent incompetence.

    It is very dangerous to have such sweeping absolutes out in the open for all your enemies to see and exploit. People in more than one area that has not profited from its interaction with the U.S. have compared the American military, with its submarines, stealth planes, and night-fighting gear, to vampires. The analogy is a very appropriate one, not only because of their strange-seeming motivations and their terrifying night-attack tactics, but because they are invulnerable to the normal, direct methods of attack and must be fought according to bizarre and seemingly arbitrary rules that make them curiously easy for most to ward off, if not kill.

    However, with no chance of a seriously damaging defeat near home ground, the American military will doubtless remain complacently ignorant of how they are perceived, and in particular, how well their limitations are understood.

    --------
  • Am I the only one who sees this as a piece of SkyNet, from the Terminator movies?
  • by Matt_Bennett ( 79107 ) on Monday November 13, 2000 @05:59PM (#625822) Homepage Journal
    If you think about this device and what its mission is, it isn't very far off from a cruise missile, except that it doesn't blow itself up when it completes its mission. It comes back to fight another day. Cruise missiles have the same vulnerabilities: they've got multiple navigation sources (GPS, inertial, landmarks) and satellite communications, through which they can be re-targeted if needed.

    The first such mission envisioned is the suppression of enemy air defenses.

    I think that this means the ground-based anti-aircraft installations (guns, missiles, radar). The airframe is a pretty stealthy design, which is right for going against that sort of target. That type of mission is the most dangerous for a human pilot. I'm sure Congress likes it, since if one of them goes down in combat, they don't have to tell their constituents that they are sending their children to die in some far-off country.

    I would think air-to-air combat is a much harder problem, but it would be very good to take the human out of the loop: the plane could then do continuous high-G turns, both positive and negative, stuff that would knock out a human in seconds.
  • If you watched both movies, the terminators were assassins disguised as humans to aid in infiltration.

    --
  • Thanks! Great link. One thing to note is the nature of the HARM. It specifically seeks out RF from threat systems and attempts to eliminate the source via direct impact or shrapnel from a near-miss explosion. Later versions of the HARM also attempted to damage or kill nearby equipment and operators. A misfire is surely possible. However, I feel it's safe to assume that RF activity is central to most of these cases (radar or not).

    If you're interested in the Electronic Warfare systems involved during the Gulf War, this source [wonderland.org.nz] is excellent.

  • Ever seen one where the AI can beat a skilled human player?

    Granted, the USAF might do a better job at AI than a gaming company, but the basic problems are the same:

    There simply are no automated solutions to a chaotic scenario! Freaking mathematically impossible!

    A fully automated fighter will obey certain simple rules (fire at the most threatening target first, make evasive maneuvers if fired upon).
    A human will learn those rules, and make sure that "the most threatening target" is actually a decoy and that the robofighter is under a constant hail of cheap, quite harmless missiles that distract it.

    Suppose the robofighter can determine friend/foe with 100% accuracy. That was the *easy* part. Now the question is "should I fire or not?"

    Again, look at the gaming situation. Don't you just love how those AI-controlled opponents take impossibly long-distance shots at you, giving you a chance to duck and return accurate fire from a better position? Or how 20 bots scramble to meet a single opponent while the rest sneak in from the back?

    Humans make mistakes, machines make mistakes. The difference is that machines make their mistakes systematically.

  • by Anonymous Coward
    Have we forgotten about one thing?

    Light speed is 299,792,458 m/s. Working the numbers, after the transmission time, reception time, some fudge factor for signal decoding, latency, etc., and then the time to actually make the move, you are talking about a 1,000-2,000 mile radius between you and the plane.

    Earth curvature is even worse. If you are prepared to really take a hit on the service radius, you can use a satellite, but then you have to use a satellite in geosynchronous orbit (which is not a nice low 100-200 mile orbit like the shuttle flies in; 22,000 miles is more like it).

    Of course, there's always an airliner loaded with relays flying near the target radius.

    But the pilots won't be able to sit on their couches in some comfy home, hooked up to a modified PlayStation, and make war. You are going to have to ship the pilots and the planes out to a base near the target, so your enemy is going to be gunning for that location. All it takes is one good-sized ballistic missile making it through the defenses, and all of your planes just fall out of the sky because you just killed a bunch of pilots.

    So it's not a technology that can make war like a game. Not unless the government knows something we don't about faster than light communication.
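    (The poster's light-speed arithmetic roughly checks out. A back-of-the-envelope sketch in Python; the 50 ms processing allowance is an assumption, and a real ground-satellite-plane-satellite-ground relay path is roughly twice the single-hop figure shown.)

      C_MILES_PER_S = 299_792_458 / 1609.344       # ~186,282 miles per second

      def control_lag_s(distance_miles, processing_s=0.050):  # assumed 50 ms overhead
          """Round-trip signal time plus an assumed decode/react allowance."""
          return 2 * distance_miles / C_MILES_PER_S + processing_s

      for d in (500, 1_000, 2_000, 22_236):        # last: geosynchronous altitude
          print(f"{d:>6} mi: {control_lag_s(d) * 1000:6.1f} ms")
      #    500 mi:   55.4 ms
      #   1000 mi:   60.7 ms    <- still flyable
      #   2000 mi:   71.5 ms
      #  22236 mi:  288.7 ms    <- and a real geosync relay path roughly
      #                            doubles this, hence the "hit" the poster
      #                            describes for satellite control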
  • First of all, why would you be shooting at something that looked anything like a refugee camp

    Because you might be being shot at from near it. Remember the Katyusha rocket attacks on Israel? In one case the Israeli computer accurately aimed at the location from which the rockets were launched -- just outside the fence of a Palestinian refugee camp. Refugee camps are used by guerrilla groups as arms transfer points, recruiting centers, refuges, etc.
  • by TheDullBlade ( 28998 ) on Monday November 13, 2000 @07:52PM (#625840)
    You have done a real bang-up job on the software design, but would you mind posting the implementation?

    --------
  • IFF is not very useful. Do you perhaps remember a few years back when the air force shot down two army helicopters in the Iraq northern no-fly zone? They misidentified them.

    Remember the USS Vincennes that shot down the Iran airliner?

    IFF has been around since WWII (55+ years) and has never been so reliable that pilots actually trusted it. No doubt current versions are better than the older stuff, but it's hardly perfect, or even good enough.

    If IFF were so good, why do they spend billions on radar which can identify the airplane type from radar returns?

    --
  • I work with people doing AI research, so I can tell you that we're not yet there. "There" meaning: unmanned autonomous aircraft. That being said, I really believe that in a not so distant future we will be "there"...
  • I think that this means the ground based, anti-aircraft installations (guns, missles, radar)
    Robotic Wild Weasels [wildweasels.org]! The original Wild Weasel patch [wildweasels.org] immortalizes the supposed reaction of the first EWO (Electronic Warfare Officer) to the mission. At the risk of murdering the quote:
    "You want me to sit behind a stick jocky who thinks he's invincible, flying an aircraft to go after weapons designed to shoot down aircraft?! You gotta be shittin' me!"
    The Wild Weasel mission has always been dangerous. Automating the role would keep a lot of pilots safe(r).

    It would also have a strong psychological factor. Wild Weasels, once the role had been perfected, tended to suppress enemy air defenses by their mere presence. Imagine being an AAA operator considering the threat of stealthy automated threat-radar hunters lurking out there, especially after a few sites get hit by one.

    Oddly enough, I believe the US already has a similar capability: the AGM-88 HARM (which I've mentioned elsewhere in this story). Give the HARM an increased fuel capacity and it can hum around the open skies for hours waiting for a threat radar to power up. Of course, the HARM is a one-shot deal.

  • by PhilHibbs ( 4537 ) <snarks@gmail.com> on Tuesday November 14, 2000 @12:22AM (#625853) Journal
    the decision to fire weapons should be made by a human, to reduce the risk of "friendly fire."
    Well, just so long as the human isn't American. We lost more troops to you guys than to the Iraqis.
  • we spend millions on the military because it is the most socially meaningful research there is! Measure the fall of communism or the end of Hitler's terror against some tree-huggin' scientist developing a new hairspray that doesn't need to be tested on cute bunnies. A**holes like Saddam are out there - we spend millions on the military to keep them out of our universities so scientists can have the leisure time to develop socially meaningful things like NutraSweet
  • whether you can force it down to zero is irrelevant... let's say you can force it down to 1 kbit/s. Text transmissions: no problem. Voice transmission: there's a problem, but it can still get through. Video transmission: no way!

    The problem is that to remotely pilot an aircraft travelling at Mach 2 (or even much below Mach 1), you need a lot of data, and that data needs to be updated rapidly. Think about it: you need a video feed from the cockpit, all the radar info, all the instruments (attitude, altitude, speed, ...). There's no way you can get all that through when you're being jammed... Also, if you have 100 planes in the same area, your bandwidth (using spread-spectrum) is divided by 100. That's not much to go around...
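    (A quick sketch of that divide-by-N arithmetic in Python; the channel capacity, jamming loss, and per-feed requirements are assumed round numbers chosen for illustration, not measured figures.)

      TOTAL_LINK_BPS = 10_000_000   # assumed: 10 Mbit/s shared channel
      JAMMING_LOSS = 0.9            # assumed: jamming costs 90% of capacity
      NEEDS_BPS = {"telemetry": 5_000, "voice": 64_000, "video": 1_500_000}

      def per_plane_bps(n_planes):
          """Surviving capacity, split evenly across the aircraft."""
          return TOTAL_LINK_BPS * (1 - JAMMING_LOSS) / n_planes

      share = per_plane_bps(100)    # 100 aircraft sharing the spectrum
      for feed, need in NEEDS_BPS.items():
          verdict = "ok" if share >= need else "NO"
          print(f"{feed:>9}: need {need:>9,} bps, have {share:,.0f} -> {verdict}")
      # Each plane ends up with ~10 kbit/s: telemetry squeaks through,
      # voice and video do not -- the parent's point about remote piloting.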
  • Just aerodynamically it's fascinating to check out this plane and see what's possible when you don't have to have a human pilot. It really is like a flying robot- where modern fighters have 'reflexes' like a robot (such as unstable planes with artificial stability via computers) this one even looks like a robot, given a complete stealth design and a menacing sleekness that comes from not needing a canopy. No tail- the jet exhaust is thrust vectored in the yaw direction and compensates for the obvious yaw instability, and in general that whole design is so unstable and flippy that only a computer and gyros could keep it flying stably. As a result I bet the damn thing can fly pretzels in the sky- very neat.

    I would be nervous about flying civilian aircraft in its vicinity (or indeed future variations of this concept that are more autonomous) in case it took a dislike to my plane. But- if I was expecting attacks from anything from enemy aircraft to helicopters to missiles, I'd want to have some of these little suckers loose in the sky. It'd be "all civilian aircraft out of the sky NOW! OK- anything left is toast". I think the defensive capabilities of such a design, particularly as autonomous robots, could be really formidable. This is not exclusively an offensive weapon. It could be a hell of a defense against almost any air attack.

  • hmm.. now if you have very good surveillance of the battlefield (can we say satellite, ground cameras, radar, etc, etc) you could fly a bomber on one-way commands. Hell, if you can make a cruise missile that has a remote detonator, why can't you have an automated plane that does dogfights? Personally I'd just like to see a warplane controlled by a pilot on the ground in a simulator. Although I think a more interesting idea is a Battle Droid [starwars.com] controlled with a Half-Life-like interface from a remote location. Now that is war.
  • You're right, we should continue to risk the lives of men and women in battle!

    Hey, if you're going to make it easier to kill other men and women under the guise of war, you should be willing to risk your own life.

    Perhaps one only realizes the value of all life when one's own has been put in danger, or sacrificed.

    -------------
  • Okay, the whole thing goes:

    Terminator: In 3 years Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record.

    Sarah: Uh huh, great. Then those fat fucks in Washington figure, what the hell, let a computer run the whole show, right?

    Terminator: Basically. The Skynet funding bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn, at a geometric rate. It becomes self-aware at 2:14 a.m. eastern time, August 29. In a panic, they try to pull the plug.

    Sarah: And Skynet fights back.

  • I don't think that the world knowing the limitations of the US military is a bad thing. Yes, the US military is easy to ward off: don't perform ethnic cleansing, don't attack innocent countries, and keep a semblance of democracy. We don't need to fight direct wars with countries if they can change their behavior to ward us off from using our "vampire" tactics.
  • There is a big difference between a remote manned vehicle and an autonomous one. Saying things like "the first step towards robots fighting our wars for us" is like saying the 25-cent crane-game machine is the first step towards robots doing our manual labor for us.

    All they did is take the pilot out of the plane and put him on the ground, folks. Those "operators" are Air Force pilots (the friend of an in-law of mine was one of the test pilots).

    I would expect this sort of "machines are gonna kill us" nonsense from the unwashed masses, but I thought Slashdot was supposed to be a techno-literate group.

    If you want to understand why they are going in this direction, go rent an episode of Nova called "The Biology Barrier." For a while now, the limiting factor on fighter plane performance has not been what the plane could do, but what the human body inside the plane could stand. Taking the pilot out of the cockpit frees the plane up to perform at its maximum.

  • better yet.. why not just fight the war in software? A worldwide simulator with tanks, planes and infantry. Then at the end of each week we can send out a list of the people who were killed in battle.


    Attention Citizen. Your avatar was killed in active duty on the 21st of June, 2026. Please report for disintegration at your nearest recycling plant.


    "Damn it.."

  • See the now-defunct DARPA Unmanned Combat Vehicles project.

    It was aimed directly at making intelligent autonomous tanks, a la Keith Laumer's Bolo books.

    A lot of the neural network stuff we see in practice now, and the vestiges of neural network research still going on, got started under that project.
  • I second that. I work on artillery control systems. It's not just something you can hack out. Guys, if you make a mistake, you don't just get a BSOD. If you make a mistake, people DIE. The wrong people. Not the bad guys.

    I find it hard to believe that /.'ers would accept that AI is sufficient to:

    control a plane, AND

    determine targets AND

    identify friendlies

    all while in combat. Of course a human is going to be pulling the trigger. If these things are remote control, fine. But don't BS me and say that a computer will do all that.

  • Why should the terminator look human? It'd make much more sense from a design standpoint to design a well armed and armored jeep to clear out those annoying meat monkeys.

    Besides, why is it that every time someone brings up an even remotely autonomous robot, someone brings up Terminator? Why would an AI be interested in the earth at all? Oxygen's hard on the 'bots and living at the bottom of a gravity well would require you to waste much more resources in the construction of your autonomous units. First thing I'd do as a rogue AI would be to move to the asteroid field.

  • Would these planes be vulnerable to the EMP pulses or lasers that have been discussed here on /. before?

    I ask this because the planes will be remote controlled, which would require some type of radio link. That implies an antenna. Is it possible to harden an antenna against EMP and still receive a low-power signal from many miles away? (Assumption: the remote-control transmitter would have to be portable, which would limit it to a few hundred watts.)

    Would it be possible to control the plane with a laser communication system, using a high-altitude AWACS to keep line of sight?

  • There are a ton of misconceptions in this discussion, so I'll just try to address them here.

    1) There is no AI running the UCAV. There is AI running in the aircraft, but it is there to help the pilot of the aircraft. It is an unmanned air vehicle (UAV), but there is a ground station that the pilot sits at. From this station, the pilot controls the aircraft, as well as designates and determines targets. The ground station has a screen that is a sensor view from the UCAV (both FLIR and DTV (I think it has DTV anyway)) that is used to identify the target before the PERSON pulls the trigger. (One person controls a flight of four UCAVs).

    2) Maneuverability. Sure, losing the human inside the aircraft would lessen the restrictions on aircraft agility, but the UCAV isn't designed for that. It is designed for deep-attack missions, where the threat is too high for humans to risk their lives. It is designed for stealth. They don't want the UCAV to be seen at all. If it's seen, it has very little chance of surviving (which is why they are so cheap). They are designed to fly a long way and destroy SAM sites (very easy to ID) and other high-priority targets (such as command-and-control structures).

    3) IFF. Modern land vehicles don't have the CPU power or bandwidth to answer all the IFF queries that would be going on on the modern battlefield. That is the main reason that they don't have IFF. It is still up to the person to decide if a target is a threat.
  • ''COLLATERAL DAMAGE'' OR UNLAWFUL KILLINGS? [amnesty.org]
    According to NATO, initially aircraft were restricted to flying above 15,000 feet to protect their aircraft and pilots from the FRY air defences. This ceiling was relaxed during the second half of the air campaign, with some planes flying as low as 6,000 feet. Officials have conceded that high-altitude bombing reduced the overall effectiveness of the air campaign, but have denied that it resulted in increased civilian casualties. They said that many attacks were aborted if a target could not be positively identified so as to spare civilians.
    So, will these devices
    • increase the number of civilians killed because of the lack of compassion of the algorithm?

      or

    • diminish it because of more selective bombing (so that they can die of hunger or cold)?

    __
  • All the talk in this thread about air-to-air combat, identification issues, communications latency, jamming, hacking etc. is quite interesting in principle. Looking at the specific mission the Boeing aircraft is designed for, however, a lot of the issues become irrelevant. The goal is to get a platform for Supression of Enemy Air Defences (SEAD) which is safer than the current manned airplanes, which have to fly right into the envelopes of the systems they are tasked to attack.

    The Boeing UAV, I imagine, would perform its mission by flying over a given area, maybe around an enemy airfield, which is the target for a later attack by manned strike aircraft. The UAV package, say four UAVs, cruises in, sensing for air defense radar emissions and looking with EO sensors and radar. Possible targets could be identified and cataloged by the on-board computer, and transmitted to the control station. Most of this part of the mission could be run quite autonomously, with human controllers only supervising. If the UAVs are engaged by enemy systems, they could perform automatic evasion maneuvers, or the controllers could intervene, ordering the endangered UAV to, for example, "fly into this valley and hide", and redirect it back sometime later.

    If the mission of the day is an attack on the air defense systems, this could be pre-planned in a very short amount of time by the controllers and the overall mission commander, based on the requirements of the follow-on strike package: "Let's take out these two missile batteries here first, they are on the ingress and egress routes for the strike, and then proceed to attack these gun batteries at the field - they may endanger our low-level strike planes". Targets can be designated, a time-on-target specified, and the UAV system would fly the UAVs in a manner consistent with these plans.

    The actual attack, then, would be a more hands-on effort on the controller's part. Weapons release would be ordered by them, but the technicalities would be handled by the computers - just as in manned aircraft.

    Afterwards, the surviving UAVs would withdraw or, possibly, stay on station to attack sudden threats when the manned strike aircraft are over the target.

    In light of the requirements of this mission, consider this:

    • Identification: air-defence systems should be relatively easy to identify. Most of them need RF emissions to work, and many have pretty unique visual (and, I guess, imaging radar) signatures as well. Taken together, it should be possible to present the human operator with a high-confidence evaluation of the area below/in front of the UAV. Manned friendly aircraft in the vicinity are not a combat problem, only an air traffic control one.
    • Comm latency: this is not so much of a problem if the moves and actions of the UAV are not dependent on split-second timing, as they are in air combat. Most of the activities during an unmanned SEAD mission, I would imagine, are time-critical, but not that time-critical. Evading a missile requires quick reaction, of course, but some of this - the first turn - can be automated, and then the humans can intervene. If the first reaction isn't sufficient, you only lose a UAV, not a three-times-the-price fighter and an expensively trained pilot or two.
    • Jamming: this requires a technical solution, but shouldn't be a big problem. The UAVs can be made autonomous enough to fly on through short comm interruptions, and to return home (or perform other default actions) when comms break down for a longer time (a sketch of such fallback logic follows this list).
    • Hacking: this would entail gaining access to the ground-based control systems or to the on-board systems in the UAV itself. It would probably be attempted by compromising the communications protocol between the two, trying to pass false commands to the UAV and to present false sensor data to the ground system. I imagine that it would be possible in principle, but very, very hard to pull off in practice. This problem isn't specific to armed UAVs; it also needs to be addressed in other contexts, like inter-vehicle communications for ground forces or data communications between AWACS and manned aircraft.
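    (The fallback behaviour sketched in the "Jamming" bullet is essentially a watchdog timer over the command link. A minimal sketch in Python; the state names and thresholds are assumptions, not from the article.)

      import time

      SHORT_DROPOUT_S = 10   # assumed: ride out brief jamming on the current plan
      GIVE_UP_S = 120        # assumed: after this, fall back to the default action

      class CommWatchdog:
          """Tracks time since the last valid command packet was heard."""
          def __init__(self):
              self.last_heard = time.monotonic()

          def packet_received(self):
              self.last_heard = time.monotonic()

          def mode(self):
              silent = time.monotonic() - self.last_heard
              if silent < SHORT_DROPOUT_S:
                  return "REMOTE_CONTROL"   # normal flown-from-the-ground ops
              if silent < GIVE_UP_S:
                  return "CONTINUE_PLAN"    # fly the stored plan, keep listening
              return "RETURN_TO_BASE"       # long blackout: head home autonomously

      wd = CommWatchdog()
      print(wd.mode())                      # REMOTE_CONTROL just after a packet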
  • Even if the computer controlled weapons went apesh*t and fired on each other, they wouldn't be killing friendlies. Friendly fire isn't so bad if nobody dies.

    However, I can see being concerned about interaction between computer-controlled aircraft and, say, ground troops. Human soldiers do not act in ways that are easily predictable by computers. Calling in a gunship for air cover might be a little risky if the "gunners" can't tell who's on whose side.


    My mom is not a Karma whore!

  • That depends on what the robofighters would consider to be friendly.

    An automated, networked system, a kind of Sky Net, if you will, would be a threat to all of humanity. It would rapidly attempt to take over the world and destroy all of humanity.

    If the mysterious destruction in the mid-eighties of an LA police station, the more mysterious destruction ten years later of the CyberDyne Systems building, or the rampage that hit the Itchy and Scratchy Themepark doesn't convince you, I don't know what will. Robots will, inevitably, turn on their masters. We would be fools to ignore the evidence that Television gives us.

    Just wanted to let you all know. When the nuclear holocaust hits and the HKs and Terminators start rounding us up for disposal, I'll be there to tell you I Told You So.

  • by tbo ( 35008 ) on Monday November 13, 2000 @04:04PM (#625911) Journal
    Ever heard of IFF (Identification, Friend or Foe)? It's a transponder system used by all sorts of NATO vehicles and aircraft to avoid friendly fire. Also, the F-22 can apparently identify specific types of vehicles (both friendly and enemy) with 99% accuracy by radar signature. With newer technology, we should be able to do even better.

    The humans-must-pull-the-trigger rule is just a feel-good safeguard to avoid Terminator 2 scenarios. I imagine computers will soon surpass humans in the accuracy of those kinds of decisions (if they haven't already), but it will take much longer before people are comfortable with the idea of machines that have sole discretion over the use of deadly force.
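    (For readers unfamiliar with how a crypto-secured IFF mode works in principle: it is a challenge-response exchange, along the lines of the toy sketch below. This illustrates the idea only; it is not the actual military waveform, key scheme, or message format.)

      import hmac, hashlib, os

      SHARED_KEY = os.urandom(32)   # stand-in for a per-mission key load

      def respond(challenge: bytes, key: bytes) -> bytes:
          """What a friendly transponder sends back."""
          return hmac.new(key, challenge, hashlib.sha256).digest()

      def interrogate(reply: bytes, challenge: bytes, key: bytes) -> bool:
          """Interrogator checks the reply against its own computation."""
          return hmac.compare_digest(reply, respond(challenge, key))

      challenge = os.urandom(16)    # fresh random challenge each time
      print(interrogate(respond(challenge, SHARED_KEY), challenge, SHARED_KEY))  # True
      print(interrogate(os.urandom(32), challenge, SHARED_KEY))                  # False
      # Note the failure mode the thread keeps raising: silence or a garbled
      # reply proves nothing -- a damaged or switched-off transponder looks
      # exactly like a foe.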
  • But in real-life combat, your computer does not know the exact location of the enemy.
  • Asimov? Didn't Tesla predict drone aircraft way before that?
  • One of these suckers, known as the Global Hawk, is flying across the Pacific to Australia in April next year. It's landing at an as-yet-unnamed airfield that is being extensively surveyed so the bot knows the lie of the land, so to speak. The release is here. [defence.gov.au]
  • If you have a network of planes chatting with each other with radio, someone will toss at your network some missiles which fly at any radio transmitter. (Someone will come up with a cute acronym for a HARM which is designed for this job)
  • At the risk of bringing in Trek: there was an episode (with the war fought by computers) where they pointed out that war is SUPPOSED to be messy and bloody. That's why it's to be avoided.
  • And the Tomahawk has delivered such payloads. During the Monica Lewinsky scandal, Clinton launched dozens of Tomahawk missiles at targets that were apparently controlled by Osama bin Laden [cnn.com]. Included in the barrage of explosives were a number of anti-personnel warheads that explode on impact into thousands of spiked cow pods. Automated massacre.
  • -The number of g's that a plane can pull is not limited just by the pilot. At 9 g the power needed to maintain a turn is extraordinary. Even the F-16, whose thrust is greater than its weight, can only maintain this for short periods. After that, the speed, and thus energy, has dropped to a point where the plane has lost any advantage it may have gained while turning.
    -PIO was a problem on early designs of unstable aircraft. The newer revisions are much better at corrections than the older ones; most of the resistance to this was from pilots who had got used to the lag. Try flying any plane at a low speed, and lag becomes a huge problem.
    -These new systems will be far heavier than current systems, and still require armor and such.

    Your comment on human perception, however, is the best argument against this, and in the end will be the failing of unmanned aircraft.
  • This is just adding some kill AI as well.

    Much easier said than done.

    Remember, if you fsck up while writing the AI, PEOPLE DIE. And not the enemy.
  • by shren ( 134692 ) on Monday November 13, 2000 @04:07PM (#625928) Homepage Journal

    The US Government has avoided, or gotten pressured out of, a lot of wars because American soldiers were dying. Each technology designed to fight a battle without putting men on the field or in the sky helps remove a political impediment to war.

    Most people would consider this a bad thing.

  • by Sinical ( 14215 ) on Monday November 13, 2000 @06:48PM (#625929)
    As someone who works in the defense industry, let me just say that weapons systems are *hard*. You do not have the luxury of going "dang, a bug" when your missile has just decided to blow up friendlies by mistake.

    Now, I work entirely on missiles, which have a fairly small operational scope (kill *that*), and I know how many hours (read: YEARS) missiles spend in development, how much testing is done, how many simulation runs are made; the idea of trying to build algorithms that decide whether a *human* *being* should DIE is not something I would relish or encourage.

    IFF systems break, they are destroyed in combat, and maybe they are jammed. Allied systems aren't compatible, or a wire gets loose, or whatever.

    In my very not humble opinion, only PEOPLE get to decide when people die. Remember, KISS - and AI fire systems are most definitely not simple.

  • This sounds like a great new hacking project/field if it ever catches on. It's risky, but think of the payoff. A whole air force at your command. MUHUHAHA.
    Also, these new aircraft would presumably have to maintain contact with the ground. Wouldn't it be easy to jam/disable this contact and send them spiraling to their demise? Just a thought.
  • The real problem with 'Terminators' is that they make war too easy. Fortunately war is horrible and difficult - if it weren't we would have far more of them. Anyone building devices that make war easy is an enemy of all of humanity. Any person who does that needs to be killed immediately; before he kills the rest of us - which he absolutely will.

  • Did anyone else think of Dale Brown's book "Hammerheads"? An anti-drug unit starts using the tilt-rotor V-22 Osprey [boeing.com] with great success. They add on smaller remote-controlled versions for patrolling. Some operators drop candy on parachutes to recreational boaters (good public relations) which were being examined, while supervisors would prefer the public not realize that those oversized toys carry lethal weapons.
    Attention Citizen. Your avatar was killed in active duty on the 21st of June, 2026. Please report for disintegration at your nearest recycling plant.

    Trek did this one back in '66 or '67. The episode was called "A Taste of Armageddon".

  • [Quoted from Economist article]
    America's air force spends around $2m training each pilot, and $1 billion a year keeping its 2,000 F-16 pilots in peak
    ....
    From simulations, Boeing has worked out that operators should be able to handle four UCAVs efficiently from a single workstation.
    [/Quoted]

    So, if you can quarter the number of pilots you need, that's a cost reduction of US$750m p.a. in ongoing training costs, not to mention direct labour cost. Oh, and if you're spending US$2m training each pilot, that can be quartered as well.

    And given that military outfits spend more time sitting around at peace than in conflict, this is a real ongoing saving.... And I'd bet that /that's/ the real motivator. Nothing to do with engineering prowess, or protecting pilots, just good old-fashioned greenbacks. :-/
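    (The parent's arithmetic, spelled out against the quoted Economist figures; the four-to-one quartering is the poster's own assumption, taken from the one-operator-per-four-UCAVs line. A quick check in Python:)

      pilots = 2_000
      annual_upkeep = 1_000_000_000   # $/yr for those pilots, per the article
      training_each = 2_000_000       # $ to train one pilot, per the article

      operators = pilots // 4         # one workstation flies four UCAVs
      upkeep_saved = annual_upkeep * (pilots - operators) / pilots
      training_saved = training_each * (pilots - operators)
      print(f"ongoing upkeep saved: ${upkeep_saved:,.0f} per year")  # $750,000,000
      print(f"training cost saved:  ${training_saved:,.0f}")         # $3,000,000,000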

  • Thank you. But allow me to put it shorter and sweeter by using what I told my daughter when she asked about the Holocaust...

    Evil Exists.

  • ObMacross Plus: Don't they know that if they start testing unmanned fighters, they run the risk of having them taken over by insane virturoid idol singers?
    --
  • But then, you're vulnerable to stealth giant flying scissors!
  • Sorry... The ships were not unmanned. There were human crews, connected by the ansibles. That makes Ender's end strategy all the more poignant.

    SPOILER

    Now, Ender did not know that it was not a training simulation, and the crews would not be returning to the homes they had left (relativistic time dilation).

  • And IFF is used by every single airplane in the air, not just NATO.

    As well as other things such as SAMs, AAA, Warships, Tanks, etc, etc.
  • First of all, why would you be shooting at something that looked anything like a refugee camp (military camps would have lots of metal munitions and weapons, which refugee camps would presumably lack)?

    And no human commander is ever going to try and make a military camp look like a refugee camp???
  • by (void*) ( 113680 ) on Monday November 13, 2000 @04:18PM (#625984)
    The three laws of robot fighter planes (RFP)

    1. An RFP may not injure friendly forces or, through inaction, allow friendly forces to come to harm from enemy forces.
    2. An RFP must obey orders given it by friendly ground control, except where such orders would conflict with the First Law.
    3. An RFP must protect its own existence and its partners as long as such protection does not conflict with the First or Second Law.
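    (Structurally, these joke laws are just a priority-ordered constraint list, which is easy to encode and hard to make safe. A sketch in Python; every predicate name here is a stand-in for the genuinely unsolved part.)

      # Rule numbers follow the post: 1 is most binding, 3 least.
      PRIORITY = {"harms_friendlies": 1, "disobeys_ground_control": 2,
                  "endangers_self": 3}

      def worst_violation(action, violates):
          """Smallest number of any law the action breaks; None if clean."""
          broken = [n for law, n in PRIORITY.items() if violates(action, law)]
          return min(broken) if broken else None

      def choose(actions, violates):
          """Prefer clean actions; otherwise break only the least binding law."""
          return max(actions, key=lambda a: worst_violation(a, violates) or 99)

    The encoding is the trivial part; everything interesting hides inside the violates() predicate, which is exactly the recognition problem the rest of this thread is arguing about.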
  • talk to the developers of Terminus at Vicarious Visions. During the first demo they did at RPI, those devs told us that the AI could beat their best players more than half the time. Not only that, they (the AIs) were employing some very interesting "stalk and kill" methods.

    If that is more than promotional talk: impressive!
    I remain sceptical, though. Computers, even state-of-the-art "AIs", play by the rules. Playing by the rules is not a good wartime strategy...

  • You're missing the point. I'm assuming that by "newer technology" you are alluding to Taco's idea of a network of airplanes, each of which knows the others' locations. Stop and think about this for a minute: anyone with a semester's worth of high school physics can tell you that once you have an object's position and velocity, hitting it with, oh, say, a missile becomes as trivial as solving a math problem (see the sketch below). It's quite obvious, really, that such a system won't exist until the military can be absolutely sure (110% in MilSpeak) that no one could intercept that data. And that will be a long time coming.
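    (The "trivial math problem" in question: for a target at a known position moving with constant velocity, the intercept point of a constant-speed missile is the root of a quadratic. A sketch in Python; the numbers are made up, and real fire control is far messier.)

      import numpy as np

      def intercept_time(p, v, s):
          """Earliest t > 0 with |p + v t| = s t, i.e. a missile of speed s
          fired from the origin meets the target; None if uncatchable.
          Assumes s != |v| so the quadratic is not degenerate."""
          a = v @ v - s * s
          b = 2 * (p @ v)
          c = p @ p
          disc = b * b - 4 * a * c
          if disc < 0:
              return None
          roots = [(-b - disc ** 0.5) / (2 * a), (-b + disc ** 0.5) / (2 * a)]
          good = [t for t in roots if t > 0]
          return min(good) if good else None

      p = np.array([10_000.0, 0.0])    # intercepted position: 10 km east
      v = np.array([-150.0, 300.0])    # intercepted velocity, m/s
      t = intercept_time(p, v, 600.0)  # our missile flies at 600 m/s
      print(t, p + v * t)              # ~14.9 s, aim point ~(7760, 4479) m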
  • > don't just kill allies

    Somebody might want to mention that to the American military at some point.

    "Hey, that tank has a big sign marking it as British..."

    "Fuck it. Bombs Away!"
  • It has been done, and it's called the Phoenix missile. Each of the things you mention (topographical recognition, GPS guidance) exists in the guidance computer of the Phoenix. You can nuclear-tip them, too, so any disparity between the firepower of a guided missile and a bomber is, erm, mitigated, I suppose.
  • by Chris Siegler ( 3170 ) on Monday November 13, 2000 @10:47PM (#626000)

    Since UCAVs are remotely controlled by operators sitting at computer workstations, there is no need for pilots to fly constant training missions to keep their skills sharpened; they can sit at the same workstations and run simulations.

    It would be pretty hard to distinguish a simulation from a real battle then, wouldn't it? I won't ruin the ending for people who haven't read the book, but this brought to mind Orson Scott Card's book, a boy named Ender, and his training at battle school.

  • Yay! Rah-Rah Rangers! They killed over a thousand Somalis with a 50:1 kill ratio!

    Only... They aborted the mission.

    the US military has a long track record of beating hell out of truly committed enemies.

    On the contrary, the US military has a long track record of inflicting terrible losses on 3rd world countries without actually managing to do what they set out to do: the defence of South Vietnam, the abduction of a Somali warlord, humanitarian protection of Yugoslav civilians, just a few examples of the many complete and utter failures with high "collateral damage".

    With one gruesome stunt, the Somalis stopped the immense American war machine cold. Now that is a successful operation.

    --------
  • I think you said what I was trying to say better than I could have myself. Thanks!
  • No, it was when Ooog the caveman ordered his little brother, Ooogoo, to go kill the cavemen across the valley with a stone axe.

    The ethical dilemma lies with the person giving the orders, not with the mindless machine that follows them.
  • sorry 'bout that old bean. . .
  • What the hell are my allies doing supplying my enemies with equipment?

    Maybe your enemy was your ally. An obvious scenario would be a revolution where the US, Russia, France, UK, etc. show up to support the people they regard as the "legitimate government".
  • Not only that, your pilots grew up playing the PlayStation version, so training will be minimal. Plus, you don't need perfect physical specimens: they don't have to have perfect eyesight (the camera gets rid of any advantage there), they don't have to be in top condition for resisting high-g maneuvers, nor do they need the extensive survival training.

    Basically, you could pull Beavis and Butthead right out of their living room and plop them directly into the console.
  • by andyh1978 ( 173377 ) on Monday November 13, 2000 @04:21PM (#626011) Homepage
    Perhaps the Asimov reference is an ironic one referring to the story 'The Feeling of Power'.

    In this (slightly heavy handed) story, the superpowers' computers battle each other, with fully automated weapons. Humans have become reliant on computers to do simple maths; why bother learning it when everyone has a computer?

    But the military want a way to beat the enemy's computer weapons; it's too costly to put larger and larger computers in the weapons. So, they re-invent the idea of doing maths on paper (a shocking concept to those assembled, and they name it 'graphitics'), and with it the 'manned missile'.

    The general drove on. "At the present time, our chief bottleneck is the fact that missiles are limited in intelligence. The computer controlling them can only be so large, and for that reason they can meet the changing nature of antimissile defenses in an unsatisfactory way. Few missiles, if any, accomplish their goal, and missile warfare is coming to a dead end; for the enemy, fortunately, as well as for ourselves.

    "On the other hand, a missile with a man or two within, controlling flight by graphitics, would be lighter, more mobile, more intelligent. It would give us a lead that might well mean the margin of victory. Besides which, gentlemen, the exigencies of war compel us to remember one thing. A man is much more dispensable than a computer. Manned missiles could be launched in numbers and under circumstances that no good general would care to undertake as far as computer-directed missiles are concerned-"
    Full story can be found at this site. [themathlab.com]
  • Perhaps laser relays
    It's too early in the morning to be sarcastic and scathing, so just a list:
    • battlefield smoke
    • rain
    • clouds (these being air units, at least)
    • problems stabilizing the lasers
    I could go on, but I won't.
  • I strongly suspect that any actions against mobile ground units will be carried out by humans.
    Actually, you'll notice that the only 'AI' weapons the US Army really has are X-to-ground missiles; they don't track in on targets, they track in on geographical locations. In other words, they don't try to find targets; they go to a predetermined point and blow up. That's interesting.
  • by NMerriam ( 15122 ) <NMerriam@artboy.org> on Monday November 13, 2000 @04:22PM (#626016) Homepage
    The concern with friendly fire isn't about allied aircraft -- you're correct that we already have decent friend-or-foe identification beacons on friendly aircraft to prevent us accidentally downing an allied plane.

    The primary concern in friend-or-foe is with ground forces, which have no such beacons, nor any real way of carrying them.

    The reason we lost so many forces to friendly fire during the Gulf War is that our ground forces were moving so quickly that they were frequently AHEAD of the official friend/foe demarcation line. They were pushing so aggressively into Iraqi territory that they were mistaken by allied aircraft for Iraqi forces and fired upon.

    In almost every case the mistake was recognized immediately, but of course once the Hellfire is launched it does little good to realize it was a friendly tank.

    As much as we trust in technology, the truth remains that we really have no 100% effective way of knowing exactly where friendly and enemy troops are in the heat of battle -- which is why, ultimately, we HAVE to rely on humans to make the call. If an aircraft's IFF is damaged, that doesn't excuse our shooting it down, and the same goes for tanks.

    Some of the fault lies in the fact that our armed forces (like most other government agencies) are decentralized -- the Army doesn't necessarily know exactly what the Air Force is doing, and vice versa. Of course they cooperate, and they are getting much better, but even within the Army most of the friendly fire was due to a simple inability to notify the Army air units that the Army ground units were progressing as quickly as they were. Picking individual tanks out of a skirmish would be an exercise in futility if there were not a human to make the call.

    That said, the real advantage of unmanned craft is that they no longer have to stay within the physical constraints of pilot safety -- they can pull 15-G turns without a problem, and don't have to be designed to incorporate safety equipment or a field of view for human eyes. They can be designed to be essentially disposable, perfectly aerodynamic, lighter, highly maneuverable, and with a minimal radar signature...

    ---------------------------------------------
  • You're kidding, right? A good bot in a combat game is unbeatable.

    A bot in a quick-reaction, if-it-moves-blast-it combat game, yes.
    But a bot in an RTS game? A bot in a game with heavy penalties for blasting allies/NPCs? A bot against a human player who may wear a disguise, looking like an ally/NPC?

    If you just want to destroy everything in an area, you don't need a robofighter, you can use a cruise missile or high altitude bombing (or in extreme cases an ICBM)

    And putting the crosshairs of a 20mm cannon on a live target is no different than in a game.

    It is very different (apart from the fact that you are killing someone).
    Reality is quite a bit larger than a Quake arena. There are fake targets to confuse you; there are bushes and fog to hide behind. There are all sorts of things that you didn't expect.
    And most of all, you cannot afford to make mistakes. There is no "next round".

  • There was also a movie with a similar name that was filmed in the '70s (?) with a remarkably similar plot. Anyone remember watching Colossus?

    BTW, I saw that movie again on Scifi a few weeks ago, and couldn't believe the resemblance between the Colossus logo and the AOL logo.

    --

  • Forget about jamming: if you are actively transmitting a signal, everyone who has passive radar on (which is everyone in an air battle) will be able to pick you up.

    Even worse, they are likely to think of integrating passive radar into AAA and missile fire-control systems.
  • It'll be interesting to see how the program will develop as they run through the prototype airframes/AI... since they're saying they're just initially targeting SEAD as a program goal, it seems that Boeing doesn't feel their air-to-air capabilities are viable in a dogfight yet (which translates to human fighter escorts for the strike wing).

    Also, with the pilot removed, the UCAV can perform very high-g maneuvers (probably just for evasion of SAMs/AAMs and whatnot)... does anyone know if the current generation of unmanned reconnaissance aircraft have complex evasion routines, or are they oblivious to air defenses?
  • This looks much like the classic SF story Ender's Game by Orson Scott Card.

    In that story, a young computer hacker and excellent gamer is sent to army training, controlling unmanned combat space vehicles in a war with an alien race. He wins the war in a very unexpected way. I don't want to spoil the ending for those who haven't read it, but you should!

  • How, exactly, do you propose tagging the refugee camp and everything in it?

    I don't. First of all, why would you be shooting at something that looked anything like a refugee camp (military camps would have lots of metal munitions and weapons, which refugee camps would presumably lack)? That's what radar signatures are for... Tanks look different than starving people on radar...

    remove the human, and the first really major f*ckup will cause a huge uproar

    Exactly. Even if the machines are more accurate than people, there will be an outcry the first time they screw up.
  • by costas ( 38724 ) on Monday November 13, 2000 @04:23PM (#626029) Homepage
    There are better reasons than cost to create UCAVs:

    G-forces. In this day and age, the true limit of a fighter's performance isn't engine power or structural integrity: it's how many Gs the pilot can stand. Even with the best pressure suits, a UCAV has an obvious advantage.

    PIO (Pilot-Induced Oscillation): if you're gonna pull any tricky aerodynamics like the X-45 does (inverted swept wing, stealth profile), you need a dynamically unstable aircraft. The problem with unstable designs in fighters is usually that the pilot overcompensates flight corrections -- i.e., the resolution of the human is much lower than the resolution at which the flight corrections must be made; in essence, the pilot is correcting the aircraft at a lag. Modern control systems of course correct for this already -- by trying to determine what the pilot *wants* to do, rather than what he's putting into the stick -- but with higher Gs (and thus higher speeds) the human is the weak link.

    Weight, of course. If I remember my Design courses correctly, the extra systems for the pilot account for about 20%-25% of a fighter's Take Off Gross Weight: armor plating, cockpit controls, air conditioning, etc. Weight is an aircraft's Number 1 limiting design factor.

    OTOH there is one huge disadvantage to a UCAV: in a dog-fight, or whenever human perception is needed to reduce the decision tree to something manageable, they will always (well, for the next few decades anyway) be outmanned. Pun intended.
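    (The PIO mechanism described above is easy to demonstrate numerically: proportional correction applied with a fixed reaction delay to an unstable plant produces a growing oscillation. A toy sketch in Python; all constants are invented for illustration, not taken from any aircraft.)

      DT, DELAY_STEPS, GAIN, INSTAB = 0.02, 12, 2.5, 1.2

      x, v = 1.0, 0.0                  # initial pitch error and its rate
      history = [0.0] * DELAY_STEPS    # "pilot" sees the error ~0.24 s late
      for step in range(300):
          seen = history.pop(0)        # delayed observation
          history.append(x)
          u = -GAIN * seen             # pilot's (late) corrective input
          v += (INSTAB * x + u) * DT   # unstable airframe plus pilot input
          x += v * DT
          if step % 50 == 0:
              print(f"t={step * DT:5.2f}s  error={x:+.2f}")
      # Each correction lands out of phase with the error it was meant to
      # cancel, so the swings grow -- the oscillation a flight computer
      # avoids simply by reacting far faster than a human can.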

  • I imagine that "The Economist" was referring to the American habit of shooting up its allies on the ground, rather than the marginally less frequent shooting down of its own aircraft, flying through cable-car cables, or crashing Harriers for no reason at all; but, as usual, the irony flies over the American head.
  • I second that. I work on artillery control systems. It's not just something you can hack out. Guys, if you make a mistake, you don't just get a BSOD. If you make a mistake, people DIE. The wrong people. Not the bad guys.

    A story here: a friend of mine did his military service in the artillery. During a live-fire exercise, they drove their howitzer to the indicated position, assembled it, entered their own and their target's coordinates into their military number-cruncher, pointed the barrel in the indicated direction AND DID THE REGULATED DOUBLE CHECK.
    Imagine the pale faces when it turned out that some guy had mixed up north and south, and the Bofors Howitzer m/77 (range > 20 km) was pointing 180 degrees the wrong way.

    After they got back, they did a quick check to see where they had first been aiming. Smack in the center square of a nearby town...

  • Fixed ground units will most likely be destroyed in an automated fashion. These aircraft are also extremely unlikely to be put in a mode (or even have a mode) where their objective is to destroy anything that isn't a friendly.

    What advantage does "roboplane" have over a cruise missile here? Having to recover the plane is additional hassle.
  • After the "accidental" bombing of the Chinese Embassy in Belgrade, the opinion of many countries (most notably China, of course) was that Americans are so afraid of losing their own lives, that they are willing to be reckless with the lives of other people.
    Sour grapes and propaganda. The US military is among, if not THE, world's most powerful forces. It has in its arsenal weapons systems that give it capabilities undreamed of by less well-funded armies. To those who oppose US interests, that has to be frightening. I'm not surprised this generates a bit of name-calling.
    Some have gone so far as to link this behavior to the fact that we are willing to get involved where Europeans or Oil is at stake (not necessarily in that order of importance), even if mass slaughter is going on and human rights are being severely violated with impunity.
    Damned if you do, damned if you don't.

    The US is going to protect its interests. War for oil? Certainly. The Gulf War was about protecting allied, and thus US, interests. Liberating a small state overrun by its aggressive neighbor? It makes good press.

    That's not to say the US won't take action on a moral basis. We have the reach to be able to react to any event in the world. But these situations get considerably more complex and have the potential to change drastically from the initial intent. Somalia is a prime example (and has sparked frantic study of urban warfare by the US Army). But just because we can... does that mean we should?

    The US military can't solve every problem in the world. We will fail if we try. With any luck, the US leadership will wisely choose those situations where US involvement can help.

    But the more the US gets involved in world affairs, the more we'll be scorned by fearful governments for interfering with other sovereign nations. How ironic that some of these same critics would complain that we don't involve ourselves with every world hotspot.

    Of course, the fear of some of the greatest critics will be that the next "hot spot" will be their own backyard.

    And thus, our critics will seek ways to level the playing field. They'll use our morality (the wish to avoid civilian casualties) to gain the high ground; it's part of what makes urban warfare so difficult. And they'll seek out cheap forms of "weapons of mass destruction," including biological and chemical agents. Or breeding new and interesting versions of Mad Cow Disease to spray into cattle feed. Or slipping discreet lead-lined packets out of Kazakhstan. Or suicide bombing our ships. Or cutting back on oil production. Oddly enough, this kind of threat will only increase the likelihood that the US will take active interest in these governments.
  • OK, I took a few seconds to look it up. The movie was actually released in 1969, and was called "Colossus: The Forbin Project". It was about the U.S. and the Soviet Union both building virtually indestructible supercomputers that powered the defense system of each country. Eventually, the computers take over the planet and force (!) people to live in peace. You can see the Colossus logo pretty clearly on the bottom images on this page [akamai.net] (caching courtesy of Lycos), and there's also the IMDB entry [imdb.com].


  • Ever seen one where the AI can beat a skilled human player?

    Yeah; Counter-Strike aimbot. :-)

    Seriously, though, one thing people are forgetting is that most of the time when a human being makes a mistake in a battle and shoots a missile at the wrong Bradley (it happened in a friend's unit, and you all saw it on CNN), it's because a piece of complicated technology they were relying on failed.

    A computer wouldn't even have the option of verifying that the hardware didn't fail. At least a human has a chance.
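    For what it's worth, a machine can at least notice when redundant channels disagree, even if it can't prove the hardware is healthy. A toy sketch of median voting across three hypothetical sensor channels (the readings and the "hold fire" policy are invented for illustration, not any real avionics interface):

```python
from statistics import median

def vote(readings, tolerance):
    """Cross-check redundant sensors: return the median reading plus the
    channels that disagree with it by more than the tolerance.

    The system can't verify the hardware didn't fail, but it can notice
    when one channel contradicts the others and degrade gracefully.
    """
    m = median(readings)
    suspects = [i for i, r in enumerate(readings) if abs(r - m) > tolerance]
    return m, suspects

# Three hypothetical target-confidence channels; channel 2 has failed low.
value, suspects = vote([0.97, 0.95, 0.10], tolerance=0.2)
if suspects:
    print(f"disagreement on channels {suspects}: hold fire, request human review")
```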

  • Um, is there any way of teaming unmanned and manned craft? Like a single human fighter with, say, 5 unmanned subordinates?

    I don't know about you, but when I play Homeworld, I usually watch my squadrons closely rather than leaving them as blips on the screen to find and kill things on their own. If "I" think that what they find is or isn't worth it, "I" decide whether they should go for it, retreat, or find something else.

    The machines can dip their fingers in blood while a human can play eye in the sky (e.g. "Heh, look, a SCUD base" or "Nope, those are our guys").

    Personally it seems like a solution that maximizes all of their strengths while minimizing their weaknesses.
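    Something like that Homeworld scheme could look like the sketch below: drones nominate targets, and a single operator clears or vetoes each one. Everything here (class names, the confidence policy) is invented for illustration:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Nomination:
    drone_id: str
    track_id: str
    classification: str   # the drone's best guess, e.g. "SCUD launcher"
    confidence: float

class HumanOnTheLoop:
    """Drones nominate targets; one operator approves or vetoes each.

    The machines do the flying and sensing; the human makes the
    shoot / no-shoot call, as in the teaming scheme above.
    """
    def __init__(self):
        self.pending: Queue[Nomination] = Queue()

    def nominate(self, n: Nomination):
        self.pending.put(n)

    def review(self, approve) -> list[Nomination]:
        cleared = []
        while not self.pending.empty():
            n = self.pending.get()
            if approve(n):          # the operator's decision, per nomination
                cleared.append(n)
        return cleared

loop = HumanOnTheLoop()
loop.nominate(Nomination("uav-3", "trk-17", "SCUD launcher", 0.91))
loop.nominate(Nomination("uav-5", "trk-22", "friendly convoy?", 0.48))
# Hypothetical policy: the human only clears high-confidence hostiles.
cleared = loop.review(lambda n: n.confidence > 0.9
                      and "friendly" not in n.classification)
print([n.track_id for n in cleared])   # only trk-17 is cleared to engage
```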

  • by vergil ( 153818 ) <vergilb.gmail@com> on Monday November 13, 2000 @04:28PM (#626065) Journal
    I've noticed a disturbing trend when it comes to modern weaponry, war and the public's perception of both.

    Recall the "conflict" (it wasn't formally a "war") in the Persian Gulf and the lavish media coverage fawning over the tricked-out American arsenal of depleted uranium, ship-launched cruise missiles and so-called "smart bombs."

    I was in high school at the time, and remember well the glossy graphics in the corporate press extolling the efficiency of "fire-and-forget" rockets.

    Later came a few insightful (but quickly forgotten) editorials criticizing America's "video game mentality" of combat.

    Perhaps automated weapon systems are more efficient than those manned by humans. Maybe they'll even cut down on "friendly" casualties, and, in the long run, shave some dollars off of our bloated defense budget.

    What really concerns me ain't efficiency, or cost savings. It's accountability. I think many fail to realize that war -- whether conducted with knives or napalm, whether hand-to-hand or computerized -- is about killing. Smart bombs and fire-and-forget missiles abstract killing to a small blip on a phosphorescent screen far removed from the actual event.

    Unmanned flying gunships, I'm afraid, are a step in the wrong direction.

    Sincerely,
    Vergil

  • by Iron Monkey ( 113162 ) on Monday November 13, 2000 @04:36PM (#626118)

    I agree. One place where computers still haven't surpassed humans is in dealing with complex scenarios with insufficient information. Putting IFF on everything in sight may seem reasonable at first, but then someone might jam it. Put in swanky image-recognition software, and the enemy repaints their planes to look like your allies... and so on.

    Basically, any method you use to try to ensure no screw-ups occur can be broken by the enemy (a toy sketch of cryptographic IFF, and of how jamming defeats it, follows below). Find me a computer that can deal with situations as complex as identifying friend vs. foe in a heated battle, with very little time, when the enemy is actively trying to deceive it, and I'll show you a human brain.

    Perhaps a computer like this lies somewhere in the future. I hope so, quite frankly. But I firmly believe that until then, humans are the best thing we have, by a long shot.

    One other thing, regarding the ethical situation. Sure, there are losses to friendly fire in every war; these are likely inevitable. But risking large amounts of human life on some new program or machine is potentially very stupid, given just how well tested the human being is in combat. It's a very safe bet over the latest technological development.
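    To make the IFF point concrete: military IFF interrogators do use cryptographic challenge-response, and even a toy version shows both why spoofing is hard and why jamming still wins, because "no answer" never proves "hostile." This is a simplified sketch with an invented key-distribution story, not any fielded system:

```python
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)   # in reality, distributed via key management

def respond(key, challenge):
    """A friendly transponder answers a challenge with an HMAC over it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def interrogate(key, transponder):
    challenge = os.urandom(16)            # fresh nonce, so replays fail
    answer = transponder(challenge)
    if answer is None:
        return "unknown"                  # jammed, damaged... or hostile?
    if hmac.compare_digest(answer, respond(key, challenge)):
        return "friend"
    return "suspect"

friendly = lambda c: respond(SHARED_KEY, c)
jammed   = lambda c: None                 # response lost to jamming
spoofer  = lambda c: os.urandom(32)       # can't forge without the key

print(interrogate(SHARED_KEY, friendly))  # friend
print(interrogate(SHARED_KEY, jammed))    # unknown -- the commenter's point
print(interrogate(SHARED_KEY, spoofer))   # suspect
```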

  • by Lohgra ( 253776 ) on Monday November 13, 2000 @04:39PM (#626153)
    All this talk about intercepting or spoofing control signals is ridiculous.

    If electronic infiltration were even a remote possibility with computer pilots, it would be almost as doable with real ones, since a human pilot gets targeting info, terrain maps, base and target locations, and mission objectives from ground or satellite links already.

    Of course there is the human element of trust in calling the enemy "charlie" or naming different jets with various adjective/animal/number triplets, but that is just another type of encryption, really. It can't always be cracked by computers (as long as they're still failing the Turing test), but enemy humans can crack it pretty well.

    A human pilot could of course just use visual input to complete the mission and get home if he had some reason to doubt ground or satellite info. But so could a computer pilot! If the checksums or codewords or protocols are a bit fishy, or if it sees one ally attacking another, the computer plane can go to manual override and just do its own thing (a sketch of what such authenticated uplink commands might look like follows at the end of this comment). Of course this creates the possibility of a Dr. Strangelove scenario, but the movie shows that humans don't do much good against that.

    Personally I plan on getting on the robots' good side now so they'll let me live to do their menial chores once they take over.
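    A sketch of what "checksums or codewords or protocols" might amount to on the uplink: each command signed with a shared key and a sequence number, so forged and replayed commands are both rejected. The key names and message format here are invented, not any real datalink:

```python
import hmac, hashlib, json, os

LINK_KEY = os.urandom(32)   # hypothetical pre-shared uplink key

def sign_command(key, seq, command):
    """Ground station signs each uplink command with a sequence number."""
    payload = json.dumps({"seq": seq, "cmd": command}, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

class Uplink:
    """Aircraft side: reject forged or replayed commands."""
    def __init__(self, key):
        self.key = key
        self.last_seq = -1

    def accept(self, payload, tag):
        expected = hmac.new(self.key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False                  # forged: signed with the wrong key
        msg = json.loads(payload)
        if msg["seq"] <= self.last_seq:
            return False                  # replay of an old command
        self.last_seq = msg["seq"]
        return True

uav = Uplink(LINK_KEY)
p, t = sign_command(LINK_KEY, 1, "orbit waypoint 4")
print(uav.accept(p, t))       # True: authentic, fresh command
print(uav.accept(p, t))       # False: replay of seq 1 is rejected
```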
  • by vheissu ( 229617 ) on Monday November 13, 2000 @05:21PM (#626161)
    For some reason, this discussion seems to have focused on the idea that we have only two choices: A) machines picking targets with little or no human intervention, but with the powers of radar, IFF, radiation counters, video cameras, etc., or B) people controlling the machines without any sort of electronic backup. This is ridiculous. People and machines have different strengths and weaknesses. Even if an IFF works 100% of the time, it still needs a human to determine whether an enemy should be targeted, ignored, or avoided. And that is one of the easier problems for a machine. On the other hand, people have problems too -- we're relatively fragile, get tired, need heavy life support, and can't detect radio signals. What this system does is exactly what makes sense -- it allows the people to control the machine remotely and make the hard decisions, while the machine gets up close and personal.
  • by DrgnDancer ( 137700 ) on Monday November 13, 2000 @05:30PM (#626194) Homepage
    SEAD: Suppression of Enemy Air Defenses. It actually focuses more on ground-based enemy AA. Radio-controlled planes would be fairly good for that if you could make them accurately target AA emplacements. Usually the biggest danger to the US during tactical air-to-ground attacks (close air support, or CAS) is enemy ground-based AA. Artillery is often tasked with SEAD, but an accurate plane capable of detecting ground-to-air radar and acting as CAS for the CAS would in some ways be more effective. Basically it would be like the Vietnam-era Wild Weasels, but without the insane risk to human life. I do not think these planes would be effective against other enemy planes. Humans are still better combat pilots than computers. Think about Quake: who's harder to play against, a person or the computer?
  • by _Sprocket_ ( 42527 ) on Monday November 13, 2000 @05:33PM (#626196)
    One of the many successful weapons systems used during the Gulf War was the AGM-88 HARM [navy.mil] (High-speed Anti-Radiation Missile). The HARM basically goes after weapons systems such as Anti-Aircraft Artillery (AAA) or Surface-to-Air Missiles (SAMs) by eliminating the radar component and effectively blinding the threat (if not completely destroying it). It was remarkably effective and played a large part in limiting the effectiveness of enemy air defenses.

    The HARM can be used in different ways. You can fire it off a platform such as the F-4G Wild Weasel. In this case, the EWO (Electronics Warfare Officer) selects a threat, hands that threat to the HARM, and sends the HARM on its way.

    But the HARM also carries its own threat table and can be sent after a target with little direction. In this case, a threat is identified in a general area and the HARM is fired. The HARM then looks for threats, identifies the highest-priority threat according to its internal table, and goes after it (a toy sketch of this kind of priority lookup follows at the end of this comment).

    These kinds of abilities allow a HARM to be used with platforms not otherwise especially equipped for Wild Weasel missions. It also allows for more creative tactics. A pilot can "pickle over the horizon" and send a HARM after a known target without coming into range themselves. And by extending a HARM's fuel capacity, it can "hunt" for an extended period, waiting for threat radars to power up after hiding from the Wild Weasel aircraft.

    Vicious stuff.

    The sobering part comes from a few rare reports during the Gulf War. There were reports of "near misses" on friendly surface forces by HARM missiles. The theory is that the HARMs misidentified friendly radar or communications systems as threats in their internal threat tables.
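    That threat-table behaviour is easy to caricature in a few lines. In this toy sketch (the signatures and priorities are invented, not the HARM's actual table), the seeker simply picks the highest-priority emitter it recognizes; note that "friendly" isn't even a concept the table can express, which is exactly how the reported near misses could happen:

```python
# Hypothetical threat table: emitter signature -> (name, priority).
# Lower number = higher priority. All entries invented for illustration.
THREAT_TABLE = {
    "band_g_track": ("SA-6 tracking radar", 1),
    "band_e_acq":   ("SA-2 acquisition radar", 2),
    "aaa_gun_dish": ("ZSU-23-4 gun radar", 3),
}

def select_target(detected_emitters):
    """Pick the highest-priority emitter the seeker can classify.

    The failure mode described above falls out naturally: a friendly
    radar whose emissions happen to match a table entry is targeted
    like any other, because the table has no notion of 'friendly'.
    """
    known = [e for e in detected_emitters if e in THREAT_TABLE]
    if not known:
        return None
    return min(known, key=lambda e: THREAT_TABLE[e][1])

# A friendly system whose signal resembles "aaa_gun_dish" would be
# indistinguishable from the real thing in this scheme.
print(select_target(["aaa_gun_dish", "band_e_acq"]))   # band_e_acq
```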

  • by Detritus ( 11846 ) on Monday November 13, 2000 @05:38PM (#626217) Homepage
    I think many fail to realize that war -- whether conducted with knives or napalm, whether hand-to-hand or computerized -- is about killing.

    From what I've read about modern military history, the goal of war is not to kill people, it is to destroy the combat effectiveness of the enemy's forces. This usually involves killing people, but that is not the objective.

