AI Experts Boycott South Korean University Over 'Killer Robots' (bbc.com) 73
An anonymous reader shares a report: Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems. More than 50 AI researchers from 30 countries signed a letter expressing concern about its plans to develop artificial intelligence for weapons. In response, the university said it would not be developing "autonomous lethal weapons." The boycott comes ahead of a UN meeting to discuss killer robots. Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: "I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control. Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence."
They're breaking the First Law (Score:3)
Asimov rolls in his grave
Re: (Score:2)
Maybe it's not the first law they broke...
Re: (Score:2)
Asimov rolls in his grave
Reminds me of a 1st season ST:NG episode, the Arsenal of Freedom
Re: (Score:2)
Asimov rolls in his grave
Raises the question of what these guys think we should do when the killbots show up (which they will; can't stop every place on earth from developing them).
Perhaps we should try to discuss 1940s science fiction with the killbots? That should work.
Re: (Score:2)
Raises the question of what these guys think we should do when the killbots show up (which they will; can't stop every place on earth from developing them).
Make killbot killer bots?
Re: (Score:2)
Raises the question of what these guys think we should do when the killbots show up (which they will; can't stop every place on earth from developing them).
Well, since killbots have a preset kill limit, we can send wave after wave of men at them until they reach their limit and shut down... It worked for Captain Brannigan!
Re: (Score:2)
I think the underlying point of the stories was that there's no iron-clad set of rules that COULD govern the behavior of robots without unexpected consequences. Even such obviously benign and logical rules as those had serious limitations. Besides the impossibility of programming such rules... The 3 rules don't and CAN'T exist.
That said, yeah killer robots suck.
Re:Violence is the last refuge of the incompetent. (Score:1)
Re: Weasel words (Score:1)
Re: (Score:2)
"What is funny, is that America works it tail off to not hit civilians"
That went out the window in many of the drone-strike operations and has gotten worse under Trump.
https://www.independent.co.uk/... [independent.co.uk]
Re: (Score:3)
America makes lots of mistakes, as does any large nation.
However, I would argue that gitmo is not a mistake, but a crime.
We either treat these ppl like soldiers or they should be civilians. There is no real 3rd group for this.
Hopefully, some day, Rumsfeld will make the mistake of going to Europe and be grabbed and tried for war crimes.
It is one thing to go after terrorists in afghanistan and pakistan where the govs were/are hiding them, but to simply hold these ppl with no trials, as well as o
Time to design a powerful EMP (Score:1)
Re:Time to design a powerful EMP (Score:4, Insightful)
Design one that will penetrate even the most robust faraday cage.
So when AQ deploys a killbot on Wall Street, are we going to self-nuke NYC?
First law of war on the battlefield...
Modern wars are not fought on "battlefields".
blast an EMP over enemy lines. take out central command.
There are no "lines" and there is no "central command".
You play way too many video games.
Re: (Score:1)
Re: (Score:1)
Technology already exists, just needs integration (Score:3, Insightful)
There are excellent computer vision algorithms (e.g. in the automotive industry) that can detect and track humans (even partially obscured) with great accuracy.
There is excellent robot technology available (just look at Boston Dynamics, or your average drone manufacturer).
And there is no lack of "aim and fire" technology from, e.g., the North American continent.
It is not all that difficult to assemble these pieces into a nightmarish unit. You can already see (rather harmless airsoft-gun) prototypes of this on YouTube.
The university does not need to "manufacture autonomous lethal weapons"; they just need to do some generic AI work and leave the weaponization to others, who are probably more than willing to do it.
This sounds like an arms race to me; if one army obtains this technology, will the others sit around and accept it? Heck, even if all armies collectively refrained from it, what prevents your favorite terrorist organization from doing it? It's not THAT high tech anymore.
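To the parent's point, the "integration" step really is mundane glue code: once an off-the-shelf detector emits per-frame centroids, associating them into tracks takes a few dozen lines. Below is a toy centroid tracker in Python as a harmless illustration; the function name and the `max_dist` threshold are made up for this sketch, and real systems would use Kalman filters and proper assignment algorithms rather than greedy nearest-neighbor matching.

```python
import math

def track(frames, max_dist=50.0):
    """Toy centroid tracker: assign stable IDs to detections across frames.

    Each frame is a list of (x, y) detection centroids (e.g. from a
    generic person detector). A detection inherits the ID of the nearest
    tracked object from the previous frames if it is within max_dist;
    otherwise it gets a fresh ID.
    """
    tracks = {}        # id -> last known centroid
    next_id = 0
    history = []       # per-frame list of (id, centroid) pairs
    for detections in frames:
        assigned = []
        used = set()   # IDs already matched in this frame
        for det in detections:
            # Greedily match to the closest unmatched tracked object.
            best_id, best_d = None, max_dist
            for tid, pos in tracks.items():
                if tid in used:
                    continue
                d = math.dist(pos, det)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:       # nothing close enough: new track
                best_id = next_id
                next_id += 1
            used.add(best_id)
            tracks[best_id] = det     # update the track's position
            assigned.append((best_id, det))
        history.append(assigned)
    return history
```

For example, feeding it `[[(10, 10)], [(14, 12)], [(100, 100)]]` keeps ID 0 across the first two frames (the centroid moved only a few pixels) and opens a new track for the distant third detection.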
Like This (Score:2)
Replace the light with lead:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Also in the US, we're so used to DARPA having funded cool projects. It's the last thing US-based researchers would want to boycott.
For those of you outside the US, DARPA stands for the Defense Advanced Research Projects Agency, and it falls under the purview of the Department of Defense.
These aren't the droids you're looking for... (Score:2)
If these "experts" couldn't see this coming... (Score:2)
... then either they're incredibly naive or just stupid. Either way, I wonder if they're the sort of people who should be working on a paradigm-changing technology, since clearly their understanding of human nature and what it will do with powerful technology is severely wanting.
Re: (Score:1)
If Western, relatively free countries don't do this stuff to keep ahead, the less ethical non-free countries will beat everyone to it.
China is not going to stop because a bunch of eggheads whined about it.
This is just a ploy by globalists and leftists to keep people in free countries from getting ahead.
Re: (Score:2)
"If Western, relatively free countries don't do this stuff to keep ahead, the less ethical non-free countries will beat everyone to it. "
And fuckwit CEOs in various corps virtually gave China the technology to do it by outsourcing manufacturing and effectively paying the Chinese to improve their technological abilities. All to save a bit of cash and increase shareholder value.
wha? (Score:3)
So, people just won't develop this military tech, because peace? Got it.
No, we need to develop it, first, fastest, and best. And develop AI powered countermeasures too.
Re: (Score:2)
So, people just won't develop this military tech, because peace? Got it.
They say "To err is human; to really foul things up you need a computer." Can you imagine trying to debug that!??!?
No, we need to develop it, first, fastest, and best. And develop AI powered countermeasures too.
Off we go then, what could possibly go wrong?
Re: (Score:2)
Off we go then, what could possibly go wrong?
It is going to be developed; the only person you can stop from developing it is yourself. Let me know how that works out for you.
Re: (Score:2)
Off we go then, what could possibly go wrong?
It is going to be developed;
The fewer people involved the better, since that keeps it unreliable and prone to failure, so let's not be naive and think that nothing is going to go wrong with doing it. It's a bad idea, so people of good sense and good will have the right idea slowing the process down. Do we actually need AIs that can kill us? No. Do we need to think of the ramifications and prepare for them? I don't think that's unreasonable, so let's do that first. We could start here.
I'd start with weapons that destroy themselves if they k
Re: (Score:2)
We need to be ready for it, but that doesn't mean we need to develop it too. If a terrorist organization obtains killbots, having our own better killbots first won't help with that. We need anti-killbot technology. Maybe that's killbots too, but maybe it's not.
Wow 50 people signed something... (Score:2)
50 Years of AI (Score:1)
Since the dawn of AI, it was funded, and still is, by the military at all universities. The technology researched and further developed by larger corporations has been both lethal and non-lethal. Usually the university does the basic research and the company adds to it.
So now some new AI hotheads are protesting developing AI (i.e. technology) that will be used in military weapons?
Technology is neither good nor bad; it's how you apply it. Anyone can take the research done by a university and apply it to weapo
AI and killbots versus Internet and porn (Score:2, Insightful)
A Cruise Missile (Score:2)
IS a "Killer Robot."
It flies to its destination under its own guidance, then explodes.
FAR too late to ban.
Re: (Score:1)
So stupid (Score:2)
Thinking Machines (Score:2)
Re: (Score:2)
...after which mankind was controlled by two cabals of manipulative bossy bitches on permanent PMS. I'll take the killer AI instead, thanks.
While China and Russia and . . . . (Score:2)
Sure makes sense for them to do it, and I cannot understand the umbrage?!?!?
FOR SCIENCE! FEAR SCIENCE! (Score:1)
So, now we're going to start SJW bullshit over a scientific discipline that sci-fi writers have spun horror stories out of?
Oi vey...
Re: (Score:2)
Sorry if you didn't have the balls to post as yourself.
The problem is, "This COULD happen with a sufficiently advanced AI", so we shouldn't pursue it AT ALL?
So, because, some day, we MIGHT eventually turn out a sufficiently advanced AI that could be dangerous, we shouldn't pursue ANY form of AI, no matter how primitive?
Sorry, that's just FUD.
Just because you can doesn't mean you should. (Score:2)
That said, I have 0% faith in anyone telling me they absolutely will not weaponize AI. Anyone saying that must think we're all idiots. The first and best use for AI is replacing humans in dangerous situations with a machine. Yeah, you can do that for miners and truck drivers, but the real application will be replacing people as soldiers waging war. Period. Full stop. Okay, maybe I should include law enforcement too, but these days war and police work are starting to blur together at least in th