Unmanned (But Armed) Aircraft Experiments In 2001
John Warden, architect of the Gulf War air campaign, believes that by 2025, 90% of combat aircraft will be unmanned. Next spring, the first pilotless armed aircraft, the X-45A UCAV, will make its maiden flight. Replacing the pilot with a ground controller cuts the price of each unit by two-thirds and makes it easier to transport.
The Economist has more, and states 'the decision to fire weapons should be
made by a human, to reduce the risk of "friendly fire."' This is not logical:
Since the planes can be networked and thus know each other's relative positions,
preventing friendly fire is a much simpler problem than the visual recognition
required to determine what to shoot at, unless you don't mind hitting
non-military targets. I wonder what Asimov would think.
Law enforcement (Score:2)
Unmanned Air Combat Vehicles (Score:2)
Alrighty, I've seen some presentations on these kinds of things. The basic problem in current aircraft design is that aircraft can handle multiple times the acceleration the human body can. The F-16 can already take g-loads that would have its pilot's brain squishing out his/her ears. And that's still a 4th-generation fighter, mind you, not a 5th-generation one like the F-22, which can probably outperform its pilot by an even wider margin...
Once you remove the pilot a lot of interesting ideas become possible since you just ditched about 25% of the aircraft's weight and a majority of the physical requirements. Entirely new designs become possible because you don't need a cockpit etc...
This is a great idea, then, except for one thing. The Air Force is expecting the next generation of air combat to be quieter than the previous generation. The F-22 can fly in passive mode and at least locate (if not target) enemy aircraft from the noise they put out (their active radar, radio chatter, etc). Things like IFF are aerial bullseyes in these cases. A continuous transmission like the one a UCAV would put out isn't going to be much better. There is a great benefit to having a self-contained fighting aircraft in this case, which is something a UCAV is not and most likely never will be. AI is nowhere near where it needs to be for these things to be fully autonomous instead of simply unmanned.
As for men pulling triggers being unnecessary, dream on. One of the most important parts of combat is making sure the enemy doesn't know where you are. Would it be wise to create a remotely accessible database (which could conceivably be hacked) showing where all your aircraft are at any given time? It would be a target list if the enemy got hold of it and could exploit it. Think security here.
Re:Incorrect assumption (Score:2)
Sanchi
Shades of Gundam Wing (Score:2)
Am I alone here in thinking that the contemporary cartoon show "Gundam Wing" is an interesting critique of mechanized warfare which is valid today, though marketed to children?
Re:Incorrect assumption (Score:2)
Re:Does anyone remember that Star Trek episode... (Score:2)
Re:Disturbing Trend (Score:2)
This reminds me of a story I heard from the Gulf War. I'm not sure, but I think the general was Colin Powell. Maybe somebody else knows for sure.
Anyhow, this reporter asks the general "what are your plans for the enemy". He responds: "we're going to find them and kill them." The reporter was shocked. Surely he meant "neutralize", "terminate", "subdue" or some other euphemism. When pressed again the general reiterated: "we're going to find them and kill them. That's what war is all about."
To me, this was almost as funny as that bit on Saturday Night Live where they were making fun of the reporters during that war, who were essentially asking the generals to reveal their battle plans before action. If the aforementioned incident really occurred, it was an excellent way for the general to answer the reporter's question without revealing any secrets.
Re:Absolutely wrong (Score:2)
Yeah, AND THEY HAD THEIR IFF TURNED OFF.
Later
Erik Z
Re:Incorrect assumption (Score:4)
And IFF is used by every single airplane in the air, not just NATO.
Sanchi
Incorrect assumption (Score:5)
Re:Disturbing Trend (Score:3)
This kind of thinking is the reason why the U.S. military can't beat a truly committed enemy. It doesn't help that their idea of "combat effectiveness" is hopelessly self-referential (combat effectiveness is the ability to reduce combat effectiveness of an enemy).
The goal of war is to crush your enemies, see them driven before you, and hear the lamentations of their women. Even uneducated barbarians know that.
All kidding aside, war is a means, not a goal. War is the way you get something you want when nothing else will work. Principles like "we should avoid civilian casualties" and "a tank is worth more than a handgun" are not absolute.
The U.S. military works from several basic assumptions that hamstring them in many situations (and are forced on them by the "CNN factor"): they are trying to help the civilians in the area, their enemy is an evil dictator whose people hate him, and they want the area to be peaceful. These assumptions often conflict with, and even contradict, the only possible logical purposes of their attacks, leading to confusion and apparent incompetence.
It is very dangerous to have such sweeping absolutes out in the open for all your enemies to see and exploit. People in more than one area that has not profited from its interaction with the U.S. have compared the American military, with their submarines, stealth planes, and nightfighting gear, to vampires. The analogy is a very appropriate one, not only because of their strange-seeming motivations and their terrifying night-attack tactics, but because they are invulnerable to the normal, direct methods of attack and must be fought according to bizarre and seemingly arbitrary rules that make them curiously easy for most to ward off, if not kill.
However, with no chance of a seriously damaging defeat near home ground, the American military will doubtless remain complacently ignorant of how they are perceived, and in particular, how well their limitations are understood.
--------
SkyNet (Score:2)
Not much different from a cruise missile (Score:3)
The first such mission envisioned is the suppression of enemy air defenses.
I think this means the ground-based anti-aircraft installations (guns, missiles, radar). The airframe is a pretty stealthy design, which is right for going against that sort of target. That type of mission is the most dangerous for a human pilot. I'm sure Congress likes it, since if one of them goes down in combat, they don't have to tell their constituents that they are sending their children to death in some far-off country.
I would think air-air combat is a much harder problem, but it would be very good to take the human out of the loop- The plane could then do continuous hi-G turns, both positive and negative, stuff that would knock out a human in seconds.
Re:Yeah, but... (Score:2)
--
Re:In HARM's way: doing your googling for you (Score:2)
If you're interested in the Electronic Warfare systems involved during the Gulf War, this source [wonderland.org.nz] is excellent.
Ever played a multiplayer game? (Score:2)
Granted, the USAF might do a better job at AIs than a gaming company, but the basic problems are the same:
There simply are no automated solutions to a chaotic scenario! Freaking mathematically impossible!
A fully automated fighter will obey certain simple rules (fire at the most threatening target first, do evasive maneuvers if fired upon).
A human will learn those rules, and make sure that "the most threatening target" is actually a decoy, and that the robofighter is under a constant hail of cheap, quite harmless missiles that will distract it.
Suppose the robofighter can determine friend/foe with 100% accuracy. That was the *easy* part. Now the question is "should I fire or not?"
Again, look at the gaming situation. Don't you just love how those AI-controlled opponents take impossibly long-distance shots at you, giving you a chance to duck and return accurate fire from a better position? Or how 20 bots scramble to meet a single opponent, while the rest are sneaking in from the back?
Humans make mistakes, machines make mistakes. The difference is that machines make their mistakes systematically.
Have we forgotten about one thing? (Score:2)
Light speed is 299,792,458 m/s. Working the numbers: after the transmission time, reception time, some fudge factor for signal decoding, latency, etc., and then the time to actually make the move, you are talking about a 1,000-2,000 mile radius between you and the plane.
Earth curvature is even worse. If you are prepared to really take a hit on the service radius, you can use a satellite, but then you have to use a satellite in geosynchronous orbit (which is not a nice low 100-200 mile orbit like the shuttle flies in; 22,000 miles is more like it).
Of course, there's always an airliner loaded with relays flying near the target radius.
But the pilots won't be able to sit on their couches in some comfy home hooked up to a modified PlayStation and make war. You are going to have to ship the pilots and the planes out to a base near the target, so your enemy is going to be gunning for that location. All it takes is one good-sized ballistic missile to make it through the defenses, and all of your planes just fall out of the sky because you just killed a bunch of pilots.
So it's not a technology that can make war like a game. Not unless the government knows something we don't about faster than light communication.
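For what it's worth, the light-speed arithmetic above is easy to check. A rough sketch (the distances and the processing overhead are illustrative guesses, not measured figures):

```python
# Rough control-loop latency estimates for a remotely piloted aircraft.
# The 20 ms processing overhead and the leg distances are assumptions
# for illustration only.

C = 299_792_458          # speed of light in vacuum, m/s
MILE = 1609.344          # metres per statute mile

def round_trip_ms(one_way_miles, processing_ms=0.0):
    """Signal out plus response back, plus a fixed processing overhead."""
    distance_m = 2 * one_way_miles * MILE
    return distance_m / C * 1000 + processing_ms

# Direct line-of-sight link to a plane 1,000 miles away, with ~20 ms of
# codec/decoding fudge factor:
print(f"1,000 mi direct: {round_trip_ms(1_000, 20):.1f} ms")

# Via a geosynchronous relay: roughly four ~22,000-mile legs per round trip
# (ground -> satellite -> plane and back).
geo_ms = 4 * 22_000 * MILE / C * 1000
print(f"GEO relay:       {geo_ms:.0f} ms (propagation alone)")
```

Propagation alone is only about 10 ms at 1,000 miles; it's the geosynchronous relay that pushes a round trip toward half a second, which is what rules out the comfy-couch scenario.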
Re:Incorrect assumption (Score:2)
Because you might be being shot at from near it. Remember the Katyusha rocket attacks on Israel? In one case the Israeli computer accurately aimed at the location from which the rockets were launched -- just outside the fence of a Palestinian refugee camp. Refugee camps are used by guerrilla groups as arms transfer points, recruiting centers, refuges, etc.
I have a small favor to ask... (Score:3)
--------
Absolutely wrong (Score:2)
Remember the USS Vincennes that shot down the Iran airliner?
IFF has been around since WWII (55+ years) and has never been so reliable that pilots actually trusted it. No doubt current versions are better than older stuff, but it's hardly perfect, or even good enough.
If IFF were so good, why do they spend billions on radar which can identify the airplane type from radar returns?
--
Re:One word Artificial Intelligence! (Score:2)
Re:Not much different from a cruise missile (Score:2)
It would also have a strong psychological factor. Wild Weasels, once the role had been perfected, tended to suppress enemy air defenses by their mere presence. Imagine being an AAA operator considering the threat of stealthy automated threat-radar hunters lurking out there. Especially after a few sites get hit by one.
Oddly enough, I believe the US already has a similar capability: the AGM-88 HARM (which I've mentioned elsewhere in this story). Give the HARM an increased fuel capacity and it can hum around the open skies for hours waiting for a threat radar to power up. Of course, the HARM is a one-shot deal.
Friendly Fire is never friendly (Score:3)
Re:What about close air support? (Score:2)
Re:Jamming isn't a big problem (Score:2)
The problem is that to remotely pilot an aircraft travelling at Mach 2 (even much below Mach 1), you need a lot of data, and that data needs to be updated rapidly. Think about it. You need a video feed from the cockpit, all the radar info, all the instruments (attitude, altitude, speed,
Very slick design :) (Score:2)
I would be nervous about flying civilian aircraft in its vicinity (or indeed future variations of this concept that are more autonomous) in case it took a dislike to my plane. But- if I was expecting attacks from anything from enemy aircraft to helicopters to missiles, I'd want to have some of these little suckers loose in the sky. It'd be "all civilian aircraft out of the sky NOW! OK- anything left is toast". I think the defensive capabilities of such a design, particularly as autonomous robots, could be really formidable. This is not exclusively an offensive weapon. It could be a hell of a defense against almost any air attack.
Re:One word: Jamming! (Score:2)
Re:Disturbing Trend (Score:2)
Hey, if you're going to make it easier to kill other men and women under the guise of war, you should be willing to risk your own life.
Perhaps one only realizes the value of all life when one's own has been put in danger, or sacrificed.
-------------
Re:Reminds me of Terminator 2 (Score:2)
Okay, the whole thing goes:
Terminator: In 3 years Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record.
Sarah: Uh huh, great. Then those fat fucks in Washington figure, what the hell, let a computer run the whole show, right?
Terminator: Basically. The Skynet funding bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn, at a geometric rate. It becomes self-aware at 2:14 a.m. eastern time, August 29. In a panic, they try to pull the plug.
Sarah: And Skynet fights back.
Re:Disturbing Trend (Score:2)
These aren't autonomous vehicles (Score:2)
All they did is take the pilot out of the plane and put him on the ground, folks. Those "operators" are Air Force pilots (a friend of an in-law of mine was one of the test pilots).
I would expect this sort of "machines are gonna kill us" nonsense from the unwashed masses, but I thought Slashdot was supposed to be a techno-literate group.
If you want to understand why they are going this direction, go rent the episode of Nova called "The Biology Barrier." For a while now, the limiting factor on fighter plane performance has not been what the plane could do, but what the human body inside the plane could stand. Taking the pilot out of the cockpit frees the plane up to perform at maximum.
Re:This is a great idea. (Score:2)
Attention Citizen. Your avatar was killed in active duty on the 21st of June, 2026. Please report for disintegration at your nearest recycling plant.
"Damn it.."
For REAL autonomous combat machines... (Score:2)
It was aimed directly at making intelligent autonomous tanks, à la Keith Laumer's Bolo books.
A lot of the neural network stuff we see in practice now, and the vestiges of neural network research still going on, got started under that project.
Re:Incorrect assumption (Score:2)
I find it hard to believe that a computer can
control a plane, AND
determine targets AND
identify friendlies
all while in combat. Of course a human is going to be pulling the trigger. If these things are remote control, fine. But don't BS me and say that a computer will do all that.
Yeah, but... (Score:2)
Besides, why is it that every time someone brings up an even remotely autonomous robot, someone brings up Terminator? Why would an AI be interested in the earth at all? Oxygen's hard on the 'bots and living at the bottom of a gravity well would require you to waste much more resources in the construction of your autonomous units. First thing I'd do as a rogue AI would be to move to the asteroid field.
EMP pulses (Score:2)
I ask this because the planes will be remote-controlled, which would require some type of radio link. That implies an antenna. Is it possible to harden an antenna against EMP and still receive a low-power signal from many miles away? (Assumption: the remote-control transmitter would have to be portable, which would limit it to a few hundred watts.)
Would it be possible to control the plane with a laser communication system, using a high-altitude AWACS in order to keep line of sight?
Misconceptions (Score:2)
1) There is no AI running the UCAV. There is AI running in the aircraft, but it is there to help the pilot of the aircraft. It is an unmanned air vehicle (UAV), but there is a ground station that the pilot sits at. From this station, the pilot controls the aircraft, as well as designates and determines targets. The ground station has a screen that is a sensor view from the UCAV (both FLIR and DTV (I think it has DTV anyway)) that is used to identify the target before the PERSON pulls the trigger. (One person controls a flight of four UCAVs).
2) Maneuverability. Sure, losing the human inside the aircraft would lessen the restrictions on aircraft agility, but the UCAV isn't designed for that. It is designed for deep attack missions, where the threat is too high for humans to risk their lives. It is designed for stealth. They don't want the UCAV to be seen at all. If it's seen, it has very little chance of surviving (which is why they are so cheap). They are designed to fly a long way and destroy the SAM sites (very easy to ID) and other high-priority targets (such as command and control structures).
3) IFF. Modern land vehicles don't have the CPU power or bandwidth to answer all the IFF queries that would be going on on the modern battlefield. That is the main reason that they don't have IFF. It is still up to the person to decide if a target is a threat.
More or less war crimes? (Score:2)
or
__
Armed UAVs, cruise missiles and stuff (Score:2)
All the talk in this thread about air-to-air combat, identification issues, communications latency, jamming, hacking etc. is quite interesting in principle. Looking at the specific mission the Boeing aircraft is designed for, however, a lot of the issues become irrelevant. The goal is to get a platform for Suppression of Enemy Air Defences (SEAD) which is safer than the current manned airplanes, which have to fly right into the envelopes of the systems they are tasked to attack.
The Boeing UAV, I imagine, would perform its mission by flying over a given area, maybe around an enemy airfield, which is the target for a later attack by manned strike aircraft. The UAV package, say four UAVs, cruises in, sensing for air defense radar emissions and looking with EO sensors and radar. Possible targets could be identified and cataloged by the on-board computer, and transmitted to the control station. Most of this part of the mission could be run quite autonomously, with human controllers only supervising. If the UAVs are engaged by enemy systems, they could perform automatic evasion maneuvers, or the controllers could intervene, ordering the endangered UAV to, for example, "fly into this valley and hide", and redirect it back sometime later.
If the mission of the day is an attack on the air defense systems, this could be pre-planned in a very short amount of time by the controllers and the overall mission commander, based on the requirements of the follow-on strike package: "Let's take out these two missile batteries here first, they are on the ingress and egress routes for the strike, and then proceed to attack these gun batteries at the field - they may endanger our low-level strike planes". Targets can be designated, a time-on-target specified, and the UAV system would fly the UAVs in a manner consistent with these plans.
The actual attack, then, would be a more hands-on effort on the controller's part. Weapons release would be ordered by them, but the technicalities would be handled by the computers - just as in manned aircraft.
Afterwards, the surviving UAVs would withdraw or, possibly, stay on station to attack sudden threats when the manned strike aircraft are over the target.
In light of the requirements of this mission, consider this:
real friendly fire concern (Score:2)
However, I can see being concerned about interaction between computer-controlled aircraft and, say, ground troops. Human soldiers do not act in ways that are easily predictable by computers. Calling in a gunship for air cover might be a little risky if the "gunners" can't tell who's on whose side.
My mom is not a Karma whore!
Re:Incorrect assumption (Score:2)
That depends on what the robofighters would consider to be friendly.
An automated, networked system, a kind of Sky Net, if you will, would be a threat to all of humanity. It would rapidly attempt to take over the world and destroy all of humanity.
If the mysterious destruction in the mid-eighties of an LA police station, the more mysterious destruction ten years later of the CyberDyne Systems building, or the rampage that hit the Itchy and Scratchy Themepark doesn't convince you, I don't know what will. Robots will, inevitably, turn on their masters. We would be fools to ignore the evidence that Television gives us.
Just wanted to let you all know. When the nuclear holocaust hits and the HKs and Terminators start rounding us up for disposal, I'll be there to tell you I Told You So.
Re:Incorrect assumption (Score:3)
The humans-must-pull-trigger rule is just a feel-good safeguard to avoid Terminator 2 scenarios. I imagine computers will soon surpass humans in the accuracy of those kinds of decisions (if they haven't already), but it will take much longer before people are comfortable with the idea of machines that have sole discretion over the use of deadly force.
Re:Incorrect assumption (Score:2)
Tesla (Score:2)
Up and over down under (Score:2)
Re:Incorrect assumption (Score:2)
Re:Disturbing Trend (Score:2)
Re:New task/field for hackers (Score:2)
Re:PIO, G and other little problems... (Score:2)
-PIO was a problem on early designs of unstable aircraft. The newer revisions are much better at corrections than the older ones; most of the resistance to this was from pilots who got used to the lag. Try flying any plane at a low speed, and lag becomes a huge problem.
-These new systems will be far heavier than current systems, and still require armor and such.
Your comment on human perception, however, is the best argument against this, and in the end will be the failing of unmanned aircraft.
Re:New task/field for hackers (Score:2)
Much easier said than done.
Remember, if you fsck up while writing the AI, PEOPLE DIE. And not the enemy.
I am worried. (Score:5)
The US Government has avoided, or gotten pressured out of, a lot of wars because American soldiers were dying. Each technology designed to fight a battle without putting men on the field or in the sky will help remove a political impediment to war.
Most people would consider this a bad thing.
Re:Incorrect assumption (Score:5)
Let me just say that weapons systems are *hard*. You do not have the luxury of going "dang, a bug" when your missile just decided to blow up friendlies by mistake.
Now, I work entirely on missiles, which have a fairly small operational scope (kill *that*), and I know how many hours (read: YEARS) missiles spend in development, how much testing is done, how many simulation runs are made. The idea of trying to build algorithms that decide whether a *human* *being* should DIE is not something I would relish or encourage.
IFF systems break, they are destroyed in combat, and maybe they are jammed. Allied systems aren't compatible, or a wire gets loose, or whatever.
In my very not humble opinion, only PEOPLE get to decide when people die. Remember, KISS, and AI fire systems are most definitely not simple.
New task/field for hackers (Score:2)
Also, these new aircraft would presumably have to have contact with the ground. Wouldn't it be easy to jam/disable this contact and send them spiraling to their demise? Just a thought.
Re:Incorrect assumption (Score:2)
The real problem with 'Terminators' is that they make war too easy. Fortunately war is horrible and difficult - if it weren't we would have far more of them. Anyone building devices that make war easy is an enemy of all of humanity. Any person who does that needs to be killed immediately; before he kills the rest of us - which he absolutely will.
"Hammerheads" (Score:2)
Re:This is a great idea. (Score:2)
Trek did this one back in '66 or '67. The episode was called "A Taste of Armageddon".
The /real/ reason? (Score:2)
America's air force spends around $2m training each pilot, and $1 billion a year keeping its 2,000 F-16 pilots in peak
....
From simulations, Boeing has worked out that operators should be able to handle four UCAVs efficiently from a single workstation.
[/Quoted]
So, if you can quarter the number of pilots you need, that's a cost reduction of US$750m pa in ongoing training costs. Not to mention direct labour cost. Oh, and if you're spending US$2m on each pilot, that can be quartered as well.
And given that military outfits spend more time sitting around at peace than in conflict, this is a real ongoing saving.... And I'd bet that
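For the curious, the quoted figures do work out to the US$750m number. A sketch, assuming (as the Economist numbers imply) that one ground operator replaces four pilots and that ongoing training cost scales with headcount:

```python
# Back-of-the-envelope version of the figures quoted above:
# $1bn/yr to keep 2,000 F-16 pilots current, one operator per four UCAVs.
# The linear cost-per-head scaling is an assumption for illustration.
pilots = 2_000
annual_training = 1_000_000_000       # USD per year for the whole force
ucavs_per_operator = 4

operators_needed = pilots // ucavs_per_operator   # same number of airframes
saving = annual_training * (1 - operators_needed / pilots)
print(f"Operators: {operators_needed}, annual saving: ${saving / 1e6:.0f}m")
```

That is where the three-quarters reduction comes from: 500 operators instead of 2,000 pilots.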
Re:What about close air support? (Score:2)
Evil Exists.
Where's Isamu Dyson When We Need Him? (Score:2)
--
Re:One word: Jamming! (Score:2)
Re:Ender's game -- Possible spoilers (Score:2)
Sorry... The ships were not unmanned. There were human crews connected by the ansibles. It makes Ender's end strategy all the more poignant.
SPOILER
Now Ender did not know that it was not a training simulation, and the crews would not be returning to the homes they left (relativistic time dilation).
Re:Incorrect assumption (Score:2)
As well as other things such as SAMs, AAA, Warships, Tanks, etc, etc.
Re:Incorrect assumption (Score:2)
And no human commander is ever going to try and make a military camp look like a refugee camp???
What Asimov would think ... (Score:3)
Re:Ever played a multiplayer game? (Score:2)
If that is more than promotional talk: impressive!
I remain sceptical though. Computers, even state-of-the-art "AIs", play by the rules. Playing by the rules is not a good wartime strategy...
Re:Incorrect assumption (Score:2)
Re:Incorrect assumption (Score:2)
Somebody might want to mention that to the American military at some point.
"Hey, that tank has a big sign marking it as British..."
"Fuck it. Bombs Away!"
Re:Incorrect assumption (Score:2)
Ender's game (Score:3)
It would be pretty hard to distinguish a simulation from a real battle then, wouldn't it? I won't ruin the ending for people who haven't read the book, but this brought to memory Orson Scott Card's book, a boy named Ender, and his training at battle school.
Exactly! They did not accomplish their objective. (Score:2)
Only... They aborted the mission.
the US military has a long track record of beating hell out of truly committed enemies.
On the contrary, the US military has a long track record of inflicting terrible losses on 3rd world countries without actually managing to do what they set out to do: the defence of South Vietnam, the abduction of a Somali warlord, humanitarian protection of Yugoslav civilians, just a few examples of the many complete and utter failures with high "collateral damage".
With one gruesome stunt, the Somalis stopped the immense American war machine cold. Now that is a successful operation.
--------
Re:Gundam Wing (Score:2)
Re:What about the V-1? (Score:2)
The ethical dilemma lies with the person giving the orders, not with the mindless machine that follows them.
Re:Friendly Fire is never friendly (Score:2)
Re:Incorrect assumption (Score:2)
Maybe your enemy was your ally. An obvious scenario would be a revolution where the US, Russia, France, UK, etc. show up to support the people they regard as the "legitimate government".
Re:The /real/ reason? (Score:2)
Basically, you could pull Beavis and Butthead right out of their living room and plop them directly into the console.
Re:what asimov would think (Score:4)
In this (slightly heavy handed) story, the superpowers' computers battle each other, with fully automated weapons. Humans have become reliant on computers to do simple maths; why bother learning it when everyone has a computer?
But the military want a way to beat the enemy's computer weapons; it's too costly to put larger and larger computers in the weapons. So, they re-invent the idea of doing maths on paper (a shocking concept to those assembled, and they name it 'graphitics'), and with it the 'manned missile'.
Full story can be found at this site. [themathlab.com]
Re:Incorrect assumption (Score:2)
Re:Incorrect assumption (Score:2)
Friendly fire isn't about aircraft (Score:3)
The primary concern in friend-or-foe is with ground forces, which have no such beacons, nor do they really have any way of carrying them.
The reason we lost so many forces to friendly fire during the Gulf War is that our ground forces were moving so quickly that they were frequently AHEAD of the official friend/foe demarcation. They were pushing so aggressively into Iraqi territory that they were mistaken by allied aircraft for Iraqi forces and fired upon.
In almost every case the mistake was recognized immediately, but of course once the Hellfire is launched it does little good to realize it was a friendly tank.
As much as we trust in technology, the truth remains that we really have no 100% effective way of knowing exactly where friendly and enemy troops are in the heat of battle -- which is why, ultimately, we HAVE to rely on humans to make the call. If an aircraft's IFF is damaged, that doesn't excuse our shooting it down, and the same goes for tanks.
Some of the fault lies in the fact that our armed forces (like most other government agencies) are decentralized -- the Army doesn't necessarily know exactly what the Air Force is doing, and vice versa. Of course they cooperate, and they are getting much better, but even within the Army, most of the friendly fire was due to a simple inability to notify the Army air units that the Army ground units were progressing as quickly as they were. Picking individual tanks out of a skirmish would be an exercise in futility if there were not a human to make the call.
That said, the real advantage of unmanned craft is that they no longer have to keep within the physical constraints of safety for the pilot -- they can pull 15-G turns without a problem, and don't have to be designed to incorporate safety equipment or a field of view for human eyes. They can be designed to be essentially disposable, perfectly aerodynamic, lighter, highly maneuverable, and with a minimal radar signature...
---------------------------------------------
Re:Ever played a multiplayer game? (Score:2)
A bot in a quick reaction/if it moves, blast it - combat game, yes.
But a bot in an RTS game? A bot in a game with heavy penalties for blasting allies/NPCs? A bot against a human player who may wear a disguise, looking like an ally/NPC?
If you just want to destroy everything in an area, you don't need a robofighter, you can use a cruise missile or high altitude bombing (or in extreme cases an ICBM)
And putting the crosshairs of a 20mm cannon on a live target is no different than in a game.
It is very different (apart from the fact that you are killing someone)
Reality is quite a bit larger than a quake arena. There are fake targets to confuse you, there are bushes and fog to hide behind. There are all sorts of things that you didn't expect.
And most of all, You cannot afford to make mistakes. There is no "next round"
Re:what asimov would think (Score:2)
BTW, I saw that movie again on Scifi a few weeks ago, and couldn't believe the resemblance of the Colossus logo and the AOL logo.
--
Re:Incorrect assumption (Score:2)
Even worse, they are likely to think of integrating passive radar into AAA and missile fire control systems.
Roles and missions (Score:2)
Also, with the pilot removed, the UCAV can perform very high-G maneuvers (probably just for evasion of SAMs/AAMs and whatnot)... Does anyone know if the current generation of unmanned reconnaissance aircraft have complex evasion routines, or are they oblivious to air defenses?
Ender's game (Score:2)
In that story, a young computer hacker and excellent gamer is sent to army training, controlling unmanned combat space vehicles in a war with an alien race. He wins the war in a very unexpected way. I don't want to spoil the ending for those who haven't read it, but you should!
Re:Incorrect assumption (Score:2)
I don't. First of all, why would you be shooting at something that looked anything like a refugee camp (military camps would have lots of metal munitions and weapons, which refugee camps would presumably lack)? That's what radar signatures are for... Tanks look different than starving people on radar...
remove the human, and the first really major f*ckup will cause a huge uproar
Exactly. Even if the machines are more accurate than people, there will be an outcry the first time they screw up.
PIO, G and other little problems... (Score:4)
G-forces. In this day and age, the true limit of a fighter's performance isn't engine power or structural integrity: it's how many Gs the pilot can stand. Even with the best pressure suits, a UCAV has an obvious advantage.
PIO (Pilot-Induced Oscillation): if you're gonna pull any tricky aerodynamics like the X-45 does (inverted swept wing, stealth profile), you need a dynamically unstable aircraft. The problem with unstable designs in fighters is usually that the pilot overcompensates on flight corrections: the pilot's control resolution is much coarser than the resolution at which corrections must be made, so in essence the pilot is correcting the aircraft at a lag. Modern control systems of course already compensate for this, by trying to determine what the pilot *wants* to do rather than what he's putting into the stick, but at higher Gs (and thus higher speeds) the human is the weak link.
Weight, of course. If I remember my design courses correctly, the extra systems for the pilot account for about 20%-25% of a fighter's takeoff gross weight: armor plating, cockpit controls, air conditioning, etc. Weight is an aircraft's number-one limiting design factor.
OTOH there is one huge disadvantage to a UCAV: in a dog-fight, or whenever human perception is needed to reduce the decision tree to something manageable, they will always (well, for the next few decades anyway) be outmanned. Pun intended.
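The lag argument above can be illustrated with a toy simulation (a hedged sketch with invented constants, not a real flight model): an unstable pitch error that grows 5% per step, corrected by the same feedback gain applied either instantly (fly-by-wire) or a couple of steps late (a human in the loop).

```python
# Toy illustration of pilot-induced oscillation: correcting an unstable
# system with *delayed* feedback causes overshoot and ringing.
# All constants here are invented for illustration; this is not a flight model.

def simulate(delay, steps=40, a=1.05, gain=0.5):
    """Pitch error x grows by factor `a` each step; the controller applies
    u = -gain * x, but based on the error as it was `delay` steps ago."""
    xs = [1.0]  # initial pitch error
    for t in range(steps):
        u = -gain * xs[t - delay] if t >= delay else 0.0
        xs.append(a * xs[t] + u)
    return xs

instant = simulate(delay=0)  # fly-by-wire: reacts to the current error
laggy = simulate(delay=2)    # "pilot": reacts to the error two steps ago

def sign_changes(xs):
    """Count how often the error swings through zero (oscillation)."""
    return sum(1 for p, q in zip(xs, xs[1:]) if p * q < 0)

print("instant feedback sign changes:", sign_changes(instant))  # 0: smooth decay
print("delayed feedback sign changes:", sign_changes(laggy))    # several: ringing
```

With instant feedback the error simply decays; with even a two-step lag, the same gain overshoots and the error rings back and forth around zero, which is the essence of PIO.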
Friendly fire (Score:2)
Re:Incorrect assumption (Score:2)
A story here: a friend of mine did his military service in the artillery. During a live-fire exercise, they drove their howitzer into the indicated position, assembled it, entered their own and their target's coordinates into their military number cruncher, pointed the barrel in the indicated direction, AND DID THE REGULATED DOUBLE CHECK.
Imagine the pale faces when it turned out that some guy had mixed up north and south, and the Bofors Howitzer m/77 (range > 20 km) was pointing 180 degrees the wrong way.
After they got back, they did a quick check to see where they had first been aiming. Smack in the center square of a nearby town...
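That kind of 180-degree mix-up is exactly what an automated cross-check could catch: recompute the bearing from the gun's and target's grid coordinates independently, and compare it against the azimuth in the fire plan. A minimal sketch, with invented function names and tolerances, on a simple flat-earth grid:

```python
import math

def bearing_deg(gun_east, gun_north, tgt_east, tgt_north):
    """Grid bearing from gun to target, in degrees clockwise from north."""
    return math.degrees(math.atan2(tgt_east - gun_east,
                                   tgt_north - gun_north)) % 360.0

def lay_is_sane(planned_azimuth, gun, target, tolerance_deg=5.0):
    """Reject the gun lay if the independently recomputed bearing
    disagrees with the fire plan by more than the tolerance."""
    computed = bearing_deg(gun[0], gun[1], target[0], target[1])
    # smallest angular difference, handling the 0/360 wrap-around
    diff = abs((computed - planned_azimuth + 180.0) % 360.0 - 180.0)
    return diff <= tolerance_deg

gun = (0.0, 0.0)
plan_azimuth = 0.0                                       # fire plan: due north
print(lay_is_sane(plan_azimuth, gun, (0.0, 20000.0)))    # correct target: True
print(lay_is_sane(plan_azimuth, gun, (0.0, -20000.0)))   # north/south swapped: False
```

A north/south coordinate swap shifts the computed bearing by 180 degrees, so the check fails loudly instead of the barrel quietly pointing at a nearby town.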
Re:Incorrect assumption (Score:2)
What advantage does "roboplane" have over a cruise missile here? Having to recover the plane is additional hassle.
Sour Grapes (Score:2)
The US is going to protect its interests. War for oil? Certainly. The Gulf War was about protecting allied, and thus US, interests. Liberating a small state overrun by its aggressive neighbor? It makes good press.
That's not to say the US won't take action on a moral basis. We have the reach to be able to react to any event in the world. But these situations get considerably more complex and have the potential to change drastically from the initial intent. Somalia is a prime example (and has sparked frantic study in urban warfare by the US Army). But just because we can... does that mean we should?
The US military can't solve every problem in the world. We will fail if we try. With any luck, the US leadership will wisely choose those situations where US involvement can help.
But the more the US gets involved in world affairs, the more we'll be scorned by fearful governments for interfering with other sovereign nations. How ironic that some of these same critics would complain that we don't involve ourselves with every world hotspot.
Of course, the fear of some of the greatest critics will be that the next "hot spot" will be their own backyard.
And thus, our critics will seek ways to level the playing field. They'll use our morality (the wish to avoid civilian casualties) to gain the high ground - it's part of what makes urban warfare so difficult. And they'll seek out cheap forms of "weapons of mass destruction," including biological and chemical agents. Oddly enough, this kind of threat will only increase the likelihood that the US will take an active interest in these governments.
Re:what asimov would think (Score:2)
--
Re:Ever played a multiplayer game? (Score:2)
Yeah; Counter-Strike aimbot.
Seriously, though, one thing people are forgetting is that most of the time when a human being makes a mistake in a battle and shoots a missile at the wrong Bradley (it happened in a friend of mine's unit, and you all saw it on CNN), it's because a piece of complicated technology they were relying on failed.
A computer wouldn't even have the option of verifying that the hardware didn't fail. At least a human has a chance.
-
Idiotic Solutions to Real Problems (Score:2)
Um, is there any way of teaming unmanned and manned craft? Like a single human fighter with, say, five unmanned subordinates?
I don't know about you, but when I play Homeworld, I usually watch my squadrons closely rather than leaving them to find and kill some blip on the screen on their own. If "I" think that what they find is or isn't worth it, "I" decide whether they should go for it, retreat, or find something else.
The machines can dip their fingers in blood while a human can play eye in the sky (e.g. "Heh, look, a SCUD base" or "Nope, those are our guys").
Personally, it seems like a solution that maximizes the strengths of both while minimizing their weaknesses.
Disturbing Trend (Score:5)
Recall the "conflict" (it wasn't formally a "war") in the Persian Gulf and the lavish media coverage fawning over the tricked-out American arsenal of depleted uranium, ship-launched cruise missiles and so-called "smart bombs."
I was in high school at the time, and remember well the glossy graphics in the corporate press extolling the efficiency of "fire-and-forget" rockets.
Later came a few insightful (but quickly forgotten) editorials criticizing America's "video game mentality" of combat.
Perhaps automated weapon systems are more efficient than those manned by humans. Maybe they'll even cut down on "friendly" casualties, and, in the long run, shave some dollars off of our bloated defense budget.
What really concerns me ain't efficiency, or cost savings. It's accountability. I think many fail to realize that war -- whether conducted with knives or napalm, whether hand-to-hand or computerized -- is about killing. Smart bombs and fire-and-forget missiles abstract killing to a small blip on a phosphorescent screen far removed from the actual event.
Unmanned flying gunships, I'm afraid, are a step in the wrong direction.
Sincerely,
Vergil
Re:Incorrect assumption (Score:4)
I agree. One place that computers still haven't surpassed humans is dealing with complex scenarios with insufficient information. Putting IFF on everything in sight may seem reasonable at first, but then someone might jam them. Put in swanky image recognition software, and the enemy repaints their planes to look like your allies... and so on.
Basically, any method you use to try to ensure no screw-ups occur can be broken by the enemy. Find me a computer that can deal with situations as complex as identifying friend vs. foe in a heated battle - with very little time, and with an enemy actively trying to deceive it - and I'll show you a human brain.
Perhaps a computer like this lies somewhere in the future... I hope so, quite frankly. But I firmly believe that until then, humans are the best thing we have - by a long shot.
One other thing, regarding the ethical situation. Sure, there are losses to friendly fire in every war - these are likely inevitable. But risking large amounts of human life on some new program or machine is potentially very stupid, given just how well tested the human being is in combat... a very safe bet over the latest technological development.
hacking into jet fighters: not bloody likely (Score:3)
If electronic infiltration were even a remote possibility with computer pilots, it would be almost as doable with real ones, since a human pilot already gets targeting info, terrain maps, base and target locations, and mission objectives from ground or satellite sources.
Of course there is the human element of trust in calling the enemy "charlie" or naming different jets with various adjective/animal/number triplets, but that is just another type of encryption, really. It can't always be cracked by computers (as long as they're still failing the Turing test), but enemy humans can crack it pretty well.
A human pilot could of course just use visual input to complete the mission and get home if he had some reason to doubt ground or satellite info. But so could a computer pilot! If the checksums or codewords or protocols are a bit fishy, or if it sees one ally attacking another, the computer plane can go into manual override and just do its own thing. Of course this creates the possibility of a Dr. Strangelove scenario, but the movie shows that humans don't do much good against that either.
Personally I plan on getting on the robots' good side now so they'll let me live to do their menial chores once they take over.
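The "fishy checksums or codewords" check is basically message authentication. A minimal sketch of what verifying a ground-station order could look like, using Python's standard hmac module; the key, order format, and function names are all invented for illustration:

```python
import hashlib
import hmac

# Invented per-mission key, loaded into the aircraft before takeoff.
SHARED_KEY = b"per-mission secret loaded before takeoff"

def sign_order(order: bytes) -> bytes:
    """Ground station attaches an HMAC-SHA256 tag to each uplinked order."""
    return hmac.new(SHARED_KEY, order, hashlib.sha256).digest()

def order_is_authentic(order: bytes, tag: bytes) -> bool:
    """Aircraft recomputes the tag; constant-time compare resists forgery."""
    expected = hmac.new(SHARED_KEY, order, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

order = b"RETASK waypoint=37N044E weapons=hold"
tag = sign_order(order)
print(order_is_authentic(order, tag))                   # genuine order: True
print(order_is_authentic(b"RETASK weapons=free", tag))  # spoofed order: False
```

An order whose tag doesn't verify is exactly the "a bit fishy" case: the plane can ignore it and fall back to its own sensors, the same way a suspicious human pilot would.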
In HARM's way (Score:3)
The HARM can be used in different ways. You can fire it off a platform such as the F-4G Wild Weasel. In this case, the EWO (Electronics Warfare Officer) selects a threat, hands that threat to the HARM, and sends the HARM on its way.
But the HARM also carries its own threat table and can be sent after a target with little direction. In this case, a threat is identified in a general area and the HARM is fired. The HARM then looks for threats, identifies the highest priority threat according to its internal table, and then goes after that threat.
These kinds of abilities allow a HARM to be used with platforms not otherwise especially equipped for Wild Weasel missions. It also allows for more creative uses. A pilot can "pickle over the horizon" and send a HARM after a known target without coming into range himself. And by extending a HARM's fuel capacity, it can "hunt" for an extended period of time, waiting for threat radars to power up after hiding from the Wild Weasel aircraft.
Vicious stuff.
The sobering part comes from a few rare reports during the Gulf War. There were reports of "near misses" with HARM missiles by friendly surface forces. The theory is that the HARMs misidentified friendly radar or communications systems as threats on their internal threat tables.
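The "internal threat table" described above amounts to a priority lookup over detected emitters. A hedged sketch of the idea; the emitter names and priority values here are invented (the real table is classified, obviously):

```python
# Toy version of a HARM-style threat table: map detected emitter types
# to a priority and go after the highest-priority one in view.
# Emitter names and priority values are invented for illustration.

THREAT_TABLE = {
    "SA-6 Straight Flush": 90,  # SAM fire-control radar: top priority
    "SA-8 Land Roll": 80,
    "AAA Gun Dish": 60,
    "Early Warning": 30,        # search radar: lower priority
}

def select_target(detected_emitters):
    """Return the highest-priority known emitter, or None if nothing matches."""
    known = [e for e in detected_emitters if e in THREAT_TABLE]
    if not known:
        return None
    return max(known, key=lambda e: THREAT_TABLE[e])

contacts = ["Early Warning", "AAA Gun Dish", "SA-6 Straight Flush", "Unknown"]
print(select_target(contacts))      # picks the SAM fire-control radar
print(select_target(["Unknown"]))   # no known threat: None
```

Note the failure mode: the selection is purely signature-based, so a friendly emitter whose signal happens to match a table entry is indistinguishable from a threat, which is exactly the theory behind the "near miss" reports above.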
Re:Disturbing Trend (Score:3)
From what I've read about modern military history, the goal of war is not to kill people, it is to destroy the combat effectiveness of the enemy's forces. This usually involves killing people, but that is not the objective.