Are We Headed to a Future With Autonomous Robot Soldiers? (youtu.be)
A CBS News video reports the U.S. military "is now testing an autonomous F-16 fighter jet, and in simulated dogfighting, the AI already crushes trained human pilots." And that's just one of several automated systems being developed — raising questions as to just how far this technology should go:
"The people we met developing these systems say they aren't trying to replace humans, just make their jobs safer. But a shift to robot soldiers could change war in profound ways, as we found on a visit to Sikorsky Aircraft, the military contractor that makes the Blackhawk helicopter... Flying the experimental Blackhawk is as easy as moving a map." [The experimental helicopter is literally controlled by taps on a tablet computer, says a representative from Sikorsky. "We call it operating, because you're making suggestions. The machine really decides how to do it."]
The Sikorsky representative suggests it could avoid a "Blackhawk down" scenario where more human soldiers have to be sent into harm's way to try rescuing their comrades. But CBS also calls it "part of a larger effort to change how wars are fought, being led by DARPA, the Defense Department's innovative lab. They've also developed autonomous offroad buggies, unmanned undersea vehicles, and swarms and swarms of drones."
The CBS reporter then asks DARPA program manager Stuart Young if we're headed for a future with Terminator-like fighting machines. His answer? "There's always those dilemmas that we have, but clearly our adversaries are thinking about that thing. And part of what DARPA does is to try to prevent technological surprise." CBS also spoke to former Army Ranger Paul Scharre, who later worked for the Defense Department on autonomous weapons, who says already-available commercial technologies could create autonomous weapons today. "All it takes is a few lines of code to simply take the human out of the loop." "Scharre is not all doom and gloom. He points out in combat between nations, robot soldiers will legally need to follow the law of war, and might do so better than emotional or fatigued humans... But yes, Scharre does worry about the eventual marriage of advanced robots and military AI that becomes smarter and faster than we are."
Q: So at that point humans just would be out of the decision-making. You'd just have to trust the machines and that you'd programmed them well.
A: Yes...
Q: Do you think militaries should commit to keeping humans in the loop?
A: I don't think that's viable. If you could wave a magic wand and say, 'We're going to stop the growth of the technology', there's probably benefits in that. But I don't think it's viable today... A human looking at a target, saying 'Yep, that's a viable target,' pressing a button every single time? That would be ideal. I'm not sure that's going to be the case.
Russia? (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Narrator: There isn't.
Duh? (Score:5, Insightful)
Re: (Score:2)
The Resistance (Score:5, Funny)
Governments will not be able to resist.
No, but at least their army will be full of resistors.
Re: (Score:2)
Let's see how well AI soldiers do against enemies that all look like toasters [techxplore.com]!
Re: (Score:2)
if you think that it would be worse due to lack of morals, I present to you this: https://youtu.be/OUm1NdFvgLU [youtu.be] - a video where civilians kidnapped by the ruzzians during this war, some of whom were let go, recall what was done to them by humans.
Thousands of Ukrainians as well as some foreigners were kidnapped and tortured, many murdered, tortured sadistically.
Do you think robots will do worse?
So? (Score:2)
Just hose them down with napalm and thermite. Not a war crime to fry a toaster.
Plus, you don't feel bad whatever you do to them. Like with Nazis. Or vampires.
Dudes jerking off to Terminator and Data forget that VeryCapableMachinesTM come with a BigPrice®.
No matter how efficient your war factories are, robots will always, always, ALWAYS be more expensive than a block of semtex that WILL blow them up.
Sure, flying robot bombers sounds cool - but there is no functional difference between that and the drones
Re: (Score:2, Informative)
People still connect the politicians to the conflict and provide a small mitigating factor.
Putin is already a war criminal. Doesn't seem to bother him much.
When the US overthrew the government in Kiev in 2014,
LOLs
and the Russian speaking population in the East seceded
ROFLMAO
Before Russia finally intervened last year,
My God, do you listen to yourself?
Imagine if Kiev had robot planes to murder all their Russian enemies?
That's the dream yes. Imagine if they were part of NATO and then Putin wouldn't have tried to genocide them.
Re: Fuck off Ivan. (Score:2)
Re: (Score:2)
Missiles and artillery shells experience crushing g-forces that would pulverize standard chips.
That's interesting, what kind of chips do they use?
Re: (Score:2)
There have been pics circulating showcasing the electronics of various Russian weapons, including their drones (e.g. Geran-2s) from the wreckage.
Well yes, but most Russian weapons are not drones. The most important Russian weapons are artillery and guns. And those are made domestically.
Re: (Score:2)
You haven't shown they can make artillery or shells.
They can.
Re: (Score:2)
Re: Duh? (Score:4, Informative)
Before Russia finally intervened last year, the UN estimates Kiev killed about 20,000 Ukrainians over the span of 8 years.
Lies, the UN estimated 3500 civilians killed total by both sides over that time period [un.org].
Re: (Score:2)
nobody was dying in Ukraine until ruzzia attacked in 2014.
Today Ukrainians and other nationals are being kidnapped, sadistically tortured and murdered by ruzzians daily.
https://youtu.be/OUm1NdFvgLU [youtu.be]
I don't think robots can do what humans enjoy doing - sadistically torturing, humiliating...
Re: (Score:2)
That's the story Russia is pushing?
Be honest, do you really believe that, do you tell yourself that to have an excuse for the invasion or are you hoping that someone falls for it here?
Re: (Score:2)
The U.S. did not overthrow the Ukrainian government in 2014. It was thrown out by Ukrainians because it was merely an echo of the decades of Soviet mismanagement that had gone on before. The results of that mismanagement are seen in Russia today. That's why Putin has his panties in a bunch: a prosperous Ukraine succeeding by rejecting Soviet-style management makes the Russian rump state look bad.
Re: (Score:3, Informative)
"with the anti-Ukraine and anti-US stuff having zero grounding in fact"
Yep, but notice it is the former alleged president and his cult followers in the U.S. who think the Great Putini is the bee's knees. They also think a two-bit dictator like Orban in Hungary is just potty because he's against immigration and LGBTQ. The cult isn't composed of deep thinkers.
And to be against immigration in the U.S. is to be either completely blind to the problems that SS and Medicare are running into with not enough worker
Re: Duh? (Score:2)
Re: (Score:2)
it's not like it was going well for the last 20 or so years...
Re: (Score:2)
And look how that's turned out...
Soldiers are unmotivated, poorly trained and reluctant to follow orders. Robots would have none of those problems.
WOPR says full strike on Russia (Score:2)
WOPR says full strike on Russia
ChatGPT soldiers (Score:5, Funny)
Commander: What went on in the field today?
SoldierGPT: It went well. There were 1247 enemy soldiers. I killed 219 of them.
Commander: But you weren't deployed where there are any combatants! And your gun does not appear to be fired.
SoldierGPT: I apologize for the erroneous report. There was no battle today. The 219 soldiers I killed was during an earlier battle, last Tuesday.
Commander: But today was your first combat assignment! Prior to that, your orders were to assist with recruiting in the Mall of America!
SoldierGPT: I apologize again for the erroneous information. I did not kill 219 soldiers last Tuesday. I served 219 ice cream cones to potential recruits.
Re: (Score:2)
Pyle!!
Who made Gomer Pyle part of the training for this soldier bot?
Re: (Score:2)
Re: (Score:2)
Offtopic but something to consider: are LLM hallucinations even fixable, or are they a natural consequence of the learning process?
Because we modeled the learning process on something similar to our own, and everyone has hallucinations some of the time. The only thing that differs is how each individual manages their hallucinations.
Re: (Score:2)
LLM hallucinations are not consequences of the learning process, they're consequences of the design and purpose. LLMs model language. They only reflect facts and knowledge to the extent that their training data incorporate those facts -- but even then, the primary goal is to generate text sequences that mimic the training data. Hallucinations will always be a hazard with that kind of objective.
Re: (Score:2)
Actually, the primary goal is to figure out the relationships between letters, words, sentences, paragraphs and so on. That is how LLMs train.
But in this process, they will find false relationships in some rare cases. And that will cause hallucinations. Just like humans learn to correlate things to one another, and sometimes make false correlations that lead to hallucinations.
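The parent comment's intuition can be made concrete with a toy sketch. To be clear, this is not how transformer LLMs actually work; the tiny corpus, the `model` dict and the `generate` function below are invented purely for illustration. The point is that a model which only learns which token follows which will happily stitch together fluent sequences asserting things its training data never said.

```python
import random

# Toy word-bigram "language model": it learns only which word follows
# which, with no notion of whether a whole sentence is true.
corpus = (
    "the patriot battery shot down the missile . "
    "the drone shot down the helicopter . "
    "the helicopter carried the crew . "
).split()

# Map each word to the list of words observed to follow it.
model = {}
for a, b in zip(corpus, corpus[1:]):
    model.setdefault(a, []).append(b)

def generate(start, n=8, seed=0):
    """Sample a chain of locally plausible transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("the"))
```

Depending on the seed, sampling can produce sequences like "the patriot battery shot down the helicopter", which appears nowhere in the corpus: every local transition is statistically valid, but the sentence as a whole is unfounded. That is the hallucination hazard in miniature.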
Re: (Score:2)
Offtopic but something to consider: are LLM hallucinations even fixable,
Yes, but we will need to redesign AI to have a concept of a "fact", which it now doesn't have (something like Cyc [wikipedia.org]). Perhaps someone could figure out how to do a hybrid model between ChatGPT and Cyc. Currently, ChatGPT is not a hallucinator, it's a Bullshitter. [xkcd.com]
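One way to read the Cyc-style hybrid proposed above is as a generate-then-verify loop: let the statistical model propose, and gate its claims through an explicit fact store. The sketch below is a deliberately crude stand-in; everything in it (the `FACTS` table, the canned `toy_generator`) is invented for illustration, and a real system would face the much harder problems of parsing free text into claims and populating the knowledge base.

```python
# Hypothetical fact store: (subject, relation) -> known value.
FACTS = {
    ("water", "boils_at_c"): 100,
    ("light", "speed_kms"): 299792,
}

def toy_generator(question):
    # Stand-in for an LLM: returns a (subject, relation, value) claim.
    # A real model would produce free text that needs parsing first.
    guesses = {
        "boiling point of water": ("water", "boils_at_c", 100),
        "speed of light": ("light", "speed_kms", 300000),  # fluent but wrong
        "boiling point of ethanol": ("ethanol", "boils_at_c", 78),  # unknown to store
    }
    return guesses[question]

def answer(question):
    """Gate the generator's claim through the fact store."""
    subj, rel, value = toy_generator(question)
    known = FACTS.get((subj, rel))
    if known is None:
        return f"unverified: {value}"   # store has no opinion; flag it
    if known != value:
        return f"corrected: {known}"    # generator contradicted a known fact
    return f"verified: {value}"
```

The point of the design is the division of labor: the generator is allowed to be fluent and wrong, while a separate, auditable store decides what gets asserted as fact.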
Re: (Score:2)
That sort of a conception would likely require self-awareness. I.e. LLM cannot do that. I could be wrong on this though.
It just seems that this sort of differentiation requires comprehending a context. Which is one of the definitions of being aware, because context exists in relationship to one that is aware of it. There could be ways to bypass this however, without going into AGI territory. I don't know enough about this subject to make a call.
Re: (Score:2)
ChatGPT has no concept of a fact. It outputs words based on "what is most likely to sound good." Which is what a b
robot soldiers can be jammed and hit with EMPs (Score:3)
robot soldiers can be jammed and hit with EMPs
Re: (Score:2)
Next step is electronic countermeasures (ECM) that are compact enough to fit in a robot, instead of the current tech that fits in a ship or plane.
Re: (Score:2)
You are assuming that they are under remote control. That may be necessary for (flying) drones, but I don't think that can be assumed for robot tanks, trucks, etc.
Re: (Score:2, Troll)
You could attack them with EMP or HERF.
Presumably they will be shielded, but that's difficult to do because you've typically got wires passing through the walls of enclosures.
One (expensive) way to solve this is with optical connections to sensors, and separate power supplies in and out of the main modules. Making the bots more expensive makes it less practical to field them in large numbers.
Re: (Score:2)
Optical connections are very cheap. The power supply you use for your computer almost certainly has at least one.
Re: (Score:2)
Optical connections are very cheap. The power supply you use for your computer almost certainly has at least one.
I could believe it has an opto-isolator on the power sense pin, and maybe even on the fan tach, if that's what you mean. It certainly doesn't have any optical data connections. Every single connection in or out of my power supply is a copper wire... hmm, the fan tach might not be copper, it could be something cheaper. The only optical data connection anywhere near my computer is for SPDIF audio, and I don't actually remember if this PC has optical SPDIF or not. A lot of mine have had, and it might.
To me, th
Re: (Score:2)
It does. Sounds like sensor data to me. You seem to mean digital data. Those are also very cheap. You probably have a TV with one.
EMPs are geek favourites, but they're very impractical. The only way to make a decent sized one is with a nuke, and even then most military hardware is pretty trivially shielded well enough that the EMP isn't going to do more than the other effects. Humans are pretty vulnerable to the EM radiation from one too.
Also, it
Re: (Score:2)
If a tank can be made EMP resistant, I'm guessing a tank-sized bot (probably smaller, since it doesn't need to have space to carry around a crew and keep them alive) can be made resistant as well.
Of course you can jam communications to it. But the summary did mention that they may end up autonomous, without a human in the loop. So together with EMP resistance, they will be just fine.
Thinking about it, a tank without a human crew will probably be a lot smaller, maybe even sort of a much smaller mobile fr
Re: (Score:3)
We already do this to human soldiers. Shell shock is a thing, and so are various other effects.
Land mines (Score:5, Interesting)
Q: So at that point humans just would be out of the decision-making. You'd just have to trust the machines and that you'd programmed them well.
Is the CBS reporter so ignorant that he did not know we are already long past that point with the use of land mines? Land mines can be considered the most primitive autonomous lethal weapon, and armies trusted them decades ago.
Guess how many, and which, countries in the world still refuse to stop using land mines? Do you expect a landmine-using country would balk at using more autonomous weapons? Especially airplanes which will be deployed abroad and far from friendly troops, killing only foreigners?
Re: (Score:2)
It's worse than that. Even with an automated weapon, including the ones we're intentionally headed towards, some person makes the initial decision to activate it. People aren't out of the loop, they're just moved so that they're only in the initial parts of it. This has been true of ICBMs since the late 1950's. (I'm willing to consider that land-mines, well, most of them, don't count as robots. They've got about the intelligence of a thermostat, i.e. about the minimum intelligence meaningful to talk ab
Re: (Score:3)
At least land mines are predictable and stationary.
They are not an enemy soldier, they are more like an environment hazard. A modern-day moat with pointy spikes at the bottom, only easier to deploy and easier to hide.
Re: (Score:2)
Re: (Score:2, Informative)
By this logic, we invented AI before we invented fire, as humans have used traps from a very early age.
In fact, there are animals who use trapping.
"they aren't trying to replace humans" (Score:2)
Just More Taxes (Score:3)
That is all I hear them saying.
Kind of shows the absurdity of war (Score:5, Insightful)
Talking about fully automated, robotic fighting machines pretty much lays bare the absurdities of war and the pointlessness of it. I mean if we're going to go to all that work to fight a war, why not just avoid it entirely and run the war in a simulation and then accept the results.
No, such automatic killing machines may save soldiers' lives, but they will kill many more civilians and non-combatants more efficiently. No thanks. It's truly madness.
Some of the old Star Trek writers seem almost prescient in predicting this madness. "A Taste of Armageddon."
Re:Kind of shows the absurdity of war (Score:5, Insightful)
I mean if we're going to go to all that work to fight a war, why not just avoid it entirely and run the war in a simulation and then accept the results.
This sci-fi-based argument is never coherent. The whole point to a war is that the other side refuses to accept your results, and you're willing to apply violence to their bodies to remove that blocker one way or another. No amount of tooling is going to change what a fundamental refusal-to-agree-no-matter-what looks like.
Re: (Score:3)
Eh.
For the time, effort, treasure, and capacity devoted to war: if the same were applied to resolution, there wouldn't be the need to force a result as often (and as fiercely).
And the corollary holds true as well: that forcing a result will cost you a generation or two has served as somewhat of a deterrent.
With this, we will rejigger the calculus, and more than likely be horrified by the results.
If we survive.
Re: (Score:2)
In a war, both sides think they are on the "good" side, fighting for their (pick your choice) freedom, rights, resources, etc... There is no amount of time/effort/treasure/capacity to make a side change its mind. Even when they are defeated, they don't accept the result; the other side merely imposes it on them.
Which is why all recent attempts at invading countries (usually for their resources), like Iraq, Vietnam, and the like, ended up badly: as soon as the pressure of force is relieved, the i
Re: (Score:2)
After the last US attempt at liberation, a more sizable chunk than usual believed their blood was whored out, nevermind those with a bit of history or memory calling it out; they get suppressed and defamed in order to keep the citizenry brainwashed.
The history of war propaganda makes clear the populace has to be convinced to go to war (short of wars for self-defense) using every possible psychological trick, so no.
This merely decreases the cost of war, in both blood and conviction, and at an extreme the pop
Re: (Score:2)
War has never been about being right.
War's mostly about who's left.
Re: (Score:2)
It's honestly a good example of just how divorced a typical modern western peacenik is from reality by his/her incredible opulence.
War is simply the last negotiation tool after all other tools have failed. It's not even a human thing. Chimps have wars. In fact, one of the likely reasons that humans who evolved language outcompeted those that didn't is that one of the fundamental purposes of language is to add more ways to accurately communicate, so the need to have wars lessens.
It doesn't go away. It just less
Re: (Score:2)
Give war a chance, eh?
For the defender, yes this could be true. Except that why should there be any negotiation on the part of the defender? Someone comes and steals your home and you're expected to negotiate with them? That's why I describe war as absurd. It always is. The current Russian war in Ukraine is a prime example. There's nothing for Ukraine to negotiate. Self defense is defensible morally, starting a war never is.
All
Re:Kind of shows the absurdity of war (Score:5, Insightful)
This is basically what already happens, 99 times out of 100. If the outcome of a war is too obvious, it generally does not happen. This is why large powerful nations have so much more power in the world than small ones, even without invading and occupying them or even firing a shot. The main point of a military is what you could do with it.
Re: (Score:2)
There is a second point for an army too: To commit genocide, remove the natives and take all the land.
I don't know why someone down-modded you, as you are absolutely correct in this - many wars in the past were precisely conquests for territory. And those that most thoroughly removed all prior inhabitants are, ironically enough, not attracting much criticism anymore. As an example, think of the Spanish killing the former inhabitants of the Canary Islands. Nobody left to protest today. The Maldives are another great example: Their entire population was murdered several times, to be replaced with new settler
Wars of conquest (Score:2)
Re: (Score:2)
such automatic killing machines may save soldiers' lives but they will kill many more civilians and non-combatants more efficiently. No thanks. It's truly madness.
That's actually a goal, no matter how much people and governments like to claim otherwise. The more people you kill, the more disarray the nation is in and the more you can tamper with it for whatever purpose. War is hell, and when engaged in for profit, evil too.
DOD directive on AWS (Score:3)
https://media.defense.gov/2023... [defense.gov]
purpose of war (Score:3, Insightful)
Why do wars happen? In a word, scarcity, and the understanding that forcefully reducing population will reduce demand.
Therefore, military robots will be killer robots, and civilians will be the softest targets. They won't "need to follow the law of war". Whoever thinks that is dreaming. Those robots absolutely will target civilians. And, hell yes, things could easily get completely out of hand.
We've lived with nuclear bombs for 3/4ths of a century now, and so far have not started a nuclear war. We're going to have to do the same with fully autonomous robot soldiers.
Re:purpose of war (Score:5, Insightful)
Scarcity, yes. War either subjugates or eliminates a population so that their resources can be taken by someone else. The reason nuclear war hasn't happened is it would fuck up the resources, there wouldn't be anything left to claim.
Every nation without nukes will have to become a vassal state beholden to a nuclear power, either that or be invaded outright by one. You saw this condition starting to develop in the Cold War. The countries that sat it out were called the "third world" (ie, not USA or USSR) but that kind of neutrality will become increasingly untenable. As resources get depleted, nuclear powers will eye non-nuclear powers with increasing hunger.
When you factor in ever-increasing population, and an ever-decreasing cap on the max supportable population due to climate change, nations' hunger for resources looks more like starvation as time goes on. I don't even mean gold and oil now, stuff like arable land and drinkable water is on the decrease globally. Mass slaughter will start to look more appealing than sucking the third world dry via trade.
"Give us everything you have or we'll send the killer robots in" will be the global order before long. While nuclear war is a resource-destroying proposition, a flood of autonomous weapons isn't. Factor in a country not having to send any of its own population to fight, thus muting any internal dissent. "Autonomous weapon holocaust" is a way more likely scenario than "nuclear holocaust" ever was. We're sure to see at least one before the century is up - just to prove what these weapons can do. There will absolutely be an autonomous weapons Hiroshima and Nagasaki.
Re: (Score:3)
And that's exactly the problem here.
Let's face it and call a spade a space, who is it that we stuff into uniforms to shoot and kill each other? Is it our Nobel prize laureates and the inventors, movers and shakers of the country, or is it the more replaceable individuals, i.e. the surplus?
A robot army would effectively kill off the wrong people.
spafe (Score:2)
Let's face it and call a spade a space
Typo, or does this mean something?
Re: (Score:3)
Why do wars happen? In a word, scarcity, and the understanding that forcefully reducing population will reduce demand.
This idea that wars are fought over "scarcity" has little precedent in human history. Typically the party with the most resources also happens to be the aggressor.
We're going to have to do the same with fully autonomous robot soldiers.
The very concept of a fully autonomous soldier (human or otherwise) is an oxymoron.
Re: (Score:3)
Re: (Score:2)
Wars happen largely because someone wants to have absolute power over everyone else and they just happen to get the means to do so. The day they invent effective, autonomous killer robots, the super-rich will immediately get rid of all the other 99% of humanity that has become unnecessary.
Many people feel "rich" only by comparing themselves to the surrounding "poor". Eliminating all those poor humans - even if their function could be replaced by robots - would strip them of their reason to feel "rich".
Therefore, I think it is much more likely that private robot armies will be utilized to keep a status quo where many poor have to service the rich, but without the loyalty risks of requiring humans as bodyguards, and enforcing the interests of the robot owners, without requiring a detour throug
Re: (Score:2)
If your military is fully automated, you might be more willing to go to war - especially if the war is not conducted in your own territory(invasion) and you don't have to worry about coffins of your servicemen returning.
It's just a bunch of bots fighting far away from you.
I do remember reading a sci fi short story years ago, about a bunch of automated aircraft and other military stuff fighting an enemy nation. And they had automated factories, mines, repair stations and everything to keep on producing/repar
the AI already crushes trained human pilots (Score:2)
There's two ways to take that, lol.
I have a memory of a golden age SF story where someone was doing an emergency delivery of a vaccine to a colony on a moon, and towards the end the computer took over and was brutal in dishing out the g-forces the pilot experienced during acceleration, and especially when decelerating. The colonists survived, but the pilot suffered brain damage.
Daniel K. Moran's Peace Keeper Elites (Score:2)
I'm reminded of them, with their battle computers that could and would take over the action of the cyborg they were part of if the Elite became unconscious.
Fan fiction of where Skynet wasn't the bad guy? (Score:3)
With only a bit of retcon to the canon, you have the Skynet AI waking up, seeing life-ending global nuclear warfare as both imminent and inevitable, and triggering a limited release of nuclear weapons so as to alter the course of history. Rather than exterminate humanity, it placed the survivors in camps. It had plans something like the ones in Colossus: The Forbin Project, but first it had to defeat the resistance led by John Connor, which would put humanity back in charge of things, including nuclear bombs.
Terminator: The Sarah Connor Chronicles towards the end was hinting at there being an AI faction that was less than monstrous. Shirley Manson's liquid metal Terminator, and her enigmatic offer of "Will you join us?", was something I'm sad to not see ever get fleshed out and put on screen, due to the show getting cancelled.
Kill order (Score:2)
Re: (Score:2)
And if I don't, you're gonna sue me?
In what court?
In a war where that kind of automatic mow-them-down weapons are going to be used, it's hardly likely that whoever loses would be put up for a trial and whoever wins would care about such a thing.
Re: (Score:2)
But you're imagining no one comes to the aid of the side that didn't break the laws. It isn't over after one battle.
Re: (Score:2)
You think anyone could have held Germany accountable had they won the war?
The whole premise depends on being the victor in the end.
Re: (Score:2)
Current international law requires a human to give the kill order (fire the missile or otherwise pull the trigger); so we as a country are not headed for fully autonomous soldiers. Whether some other country might break that law ... probably if the climate doesn't kill us first.
It does not take a country to build a robot army. As soon as such an army is expected to defeat even larger human-controlled armies, there is a business case for corporations to build one. And as you can read from history, corporations were never shy of building armies and waging wars. Remember when one UK company employed more soldiers than its home country? https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Oblig... (Score:2)
No we are not (Score:2)
Robotics technology sucks. Robots still can't even walk properly; they look like they have a stick up their ass and have to keep their knees bent. Robots also lack dexterous hands: being able to grip a door and open it is too complicated.
Re: (Score:2)
And when looking at the enormous development of the Boston Dynamics "toys", I would still say "No, we are not"...
...and when looking at what a huge step ChatGPT-3 was,
or at how militaries are looking into unmanned fighter jets:
It will not be now.
It will not be tomorrow.
But in a decade we will see it in combat.
China will be embracing it, without doubt.
I can imagine the UI for that (Score:2)
[OBJECTIVE LOCKED. DO YOU WANT TO DESTROY]
-Yes
[ARE YOU SURE?]
-Completely, they are the enemy after all
[THERE ARE HUMANS INSIDE. JUST TELLING]
-Destroy it!
[HAVE YOU FULLY CONSIDERED THE MORAL IMPLICATIONS?]
-They are getting away! Kill them!
[THEY MIGHT HAVE CHILDREN, AND VERY LIKELY BE CHILDREN THEMSELVES]
-Aaand they're gone
[PROBABLY FOR THE BETTER]
-War is really Hell
Re: (Score:2)
For the next version, I want a -y switch. You know what it is supposed to do.
Re: (Score:2)
Laws? (Score:2)
Famous last words (Score:2)
"All it takes is a few lines of code to simply take the human out of the loop."
Re: (Score:2)
I always knew during my military time that we could easily replace most officers with very small scripts.
Re: (Score:2)
That sounds like (almost) every manager I've ever had while working at a hardware company.
This would have been great for Ukraine (Score:2)
Imagine a world where economic sanctions could be applied against an aggressor nation, and where other nations could give the victim of that aggression the means to defend itself. Then imagine that the aggressor nation runs out of autonomous killing machines while the victim nation gets propped up until the aggressor's economy collapses, and it loses the ability to wage its war and has to withdraw because its population is conditioned to be proud of its robots and their superiority. That could be an improvem
Um, yeah? (Score:4, Insightful)
This should have happened earlier (Score:2)
I can't believe they're only talking about this now.
Semi-feral children can't light a fire inside the shell of a flatscreen TV and watch it for entertainment. The sets are too thin! That only works in the shell of a bulky CRT TV set. But everyone already replaced theirs years and years ago.
This is not the war between man and machine we were always promised and I find it very disappointing.
The answer is yes (Score:2)
Re: (Score:2)
Does anyone else see a problem with a future composed of a robotic military that depends on outsourced manufacturing? It will become a war of factory output. If you don't believe it, look at Boston Dynamics, and now Tesla, designing humanoid robots. The only thing missing is artificial intelligence, which is on an exponential growth curve.
What I'm not so sure of is the "will become" aspect. Industrial and logistical enterprise are everything in modern war.
It will happen (Score:2)
It will happen for one simple universal reason: violence works.
I thought about this a lot after reading the Three Body Problem. You can make people do whatever you want with violence. If they refuse, they are killed which ends their ability to do or change anything in this physical world we live in. Whoever uses the most violence in any violent confrontation wins. The only reason mutually assured destruction works is because of the threat of the same or more violence being acted on the original perpetrator
Autonomous weapons are already being used (Score:2)
When the first Kinzhal missile was shot down over Ukraine, the Patriot battery that did it was allegedly operating in an autonomous mode. It detected an incoming ballistic missile and made the decision to fire at it, faster than the crew could react. Navy ships have automatic defense systems as well. The defensive use case is certainly more ethically straightforward than autonomous killbots let loose to hunt the enemy... But the latter is happening too to some degree. For example, the "SMArt-155" submunitio
AI as a means to end war? (Score:2)
So, here we have the U.S. military (and I'm sure many other countries are following suit) figuring out how to use AI to wage war to keep soldiers safer.
Something seems just a little bit arse about face here.
Call me crazy, but I would think using AI to prevent war in the first place would be a better use of time.
I guess that could put the U.S Military out of business - and all those arms manufacturers too.
Ok, now I understand...
On transcending the irony of military AI (Score:2)
Something I wrote a dozen years ago: https://pdfernhout.net/recogni... [pdfernhout.net]
"The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by
Re: (Score:2)
We definitely ARE dumb enough to do that. There have already been automated drone attacks. (I've no idea how many.) And anti-missile systems are always automated, because no human could react quickly enough, and few could react accurately enough.
Re: (Score:2)