AI | China | The Military | Technology

Yoshua Bengio, a Grand Master of Modern AI, is Worried About Its Future (technologyreview.com) 126

Yoshua Bengio is a grand master of modern artificial intelligence. Alongside Geoff Hinton and Yann LeCun, Bengio is famous for championing a technique known as deep learning that in recent years has gone from an academic curiosity to one of the most powerful technologies on the planet. Here's an excerpt from an interview he gave to MIT Technology Review: MIT TR: What do you make of the idea that there's an AI race between different countries?
Bengio: I don't like it. I don't think it's the right way to do it. We could collectively participate in a race, but as a scientist and somebody who wants to think about the common good, I think we're better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible.

MIT TR: Are you worried about just a few AI companies, in the West and perhaps China, dominating the field of AI?
Bengio: Yes, it's another reason why we need to have more democracy in AI research. It's that AI research by itself will tend to lead to concentrations of power, money, and researchers. The best students want to go to the best companies. They have much more money, they have much more data. And this is not healthy. Even in a democracy, it's dangerous to have too much power concentrated in a few hands.

MIT TR: There has been a lot of controversy over military uses of AI. Where do you stand on that?
Bengio: I stand very firmly against.
MIT TR: Even non-lethal uses of AI?
Bengio: Well, I don't want to prevent that. I think we need to make it immoral to have killer robots. We need to change the culture, and that includes changing laws and treaties. That can go a long way. Of course, you'll never completely prevent it, and people say, "Some rogue country will develop these things." My answer is that one, we want to make them feel guilty for doing it, and two, there's nothing to stop us from building defensive technology. There's a big difference between defensive weapons that will kill off drones, and offensive weapons that are targeting humans. Both can use AI.
MIT TR: Shouldn't AI experts work with the military to ensure this happens?
Bengio: If they had the right moral values, fine. But I don't completely trust military organizations, because they tend to put duty before morality. I wish it was different.

This discussion has been archived. No new comments can be posted.

  • by zlives ( 2009072 ) on Monday November 19, 2018 @05:46PM (#57670698)

    I wish he had a library card and could check out a history book...

    "build smarter machines and make sure AI is used for the well-being of as many people as possible"
    "itself will tend to lead to concentrations of power, money, and researchers"
    "we need to make it immoral to have killer robots"

    sigh...
    victims

    • Re:what an idiot (Score:4, Interesting)

      by Luckyo ( 1726890 ) on Monday November 19, 2018 @05:48PM (#57670704)

      No, just people who specialize so deeply in their field of choice, they're utterly ignorant of everything else.

      You know, scientists.

      • by zlives ( 2009072 )

        I was gonna say, he could have gone out for a movie date and watched Terminator... but "You know, scientists."

        • What other speculative fiction do you think people should treat as real world fact? Star Wars? Star Trek? Lost in Space? Space Balls? Heavy Metal? Green Mars? (in order of increasing preposterousness.)

          SF authors have axes to grind. Movie makers have eye candy to frame with 'story'.

          • by zlives ( 2009072 )

            well we could start with AI

          • Re: (Score:2, Insightful)

            by Anonymous Coward

            No one is treating speculative fiction as fact. Speculative fiction is there to get people to think about the future and possible outcomes. Asimov wrote the three laws because he speculated that computers could turn against us at some point. This is something worth considering as one develops new technologies in computation, as it is one of the many possibilities. Some speculative fiction looks further out than others, but a lot of it becomes relevant as we grow as a species. 1984 is a good example of the specu

            • The GP's suggestion was not to think, but rather to go see a movie and accept the agenda of the movie maker as truth.

              He dismisses their not doing so with "you know, scientists".

              I just listed some speculative fiction that has wrapped in (insane/space opera) assumptions that no one in their right mind would think apply in the real world.

    • by Anonymous Coward

      Scientists are always so resistant to having their work used in the fabrication of weapons. They really need to get over it.

      Face the facts: there are dangerous people in the world and they want to kill you. The reason they can't is because you have a military force to protect you. And the reason that is good enough? Our military force is better than theirs.

      Killer robots won't go crazy and run through the cities killing people. That is fiction. Good for movies but not for actual contingency planning.

      • by jd ( 1658 )

        Killer robots just mean an arms race where someone will launch a preemptive strike.

        The best defence is to not get attacked, rather than to provoke an attack before you can defend.

        Guns don't shield you from bullets, killer robots don't shield you from killer robots.

      • Re:what an idiot (Score:5, Interesting)

        by ceoyoyo ( 59147 ) on Monday November 19, 2018 @08:21PM (#57671444)

        "And even if we DO need to take military action, we can send the robots in instead of your kids."

        You've put your finger on the likely problem. The rate of violence at all levels has been decreasing exponentially in the world for at least the last 500 years or so (actually exponentially, backed up by numbers and stats). Much of this reduction, at the state level, is associated with engagement and interdependence on other nations. You wage war for economic or political gain. If waging war is expensive because you lose all your trade benefits, you're less likely to do it.

        Killer robots remove one of the major political costs, particularly in a democracy. Wars are unpopular with the citizenry, especially when body bags start coming back.

        • Killer robots remove one of the major political costs, particularly in a democracy. Wars are unpopular with the citizenry, especially when body bags start coming back.

          Exactly and this means more war and killing (of the other guys) because there is no cost in lives on the home front. When was the last time you heard about a drone being shot down or crashing after bombing someone in Yemen? Never. If it was a manned aircraft going down, people would pay more attention to the mission and ask questions.

      • The point with robots is that if your code is compromised you have an army of sleeper agents in your country of choice. The code could be compromised years before it is activated and stay dormant until doomsday. The problem is mitigating such a devastating army.
    • by phantomfive ( 622387 ) on Monday November 19, 2018 @06:38PM (#57670972) Journal
      "Grand Master of AI" is a title I have no filed along side "Technology Futurist." An extremely useful title, it tells you they will just babble mindlessly.
    • I was bothered more by the lack of understanding that friendly competition where the results are shared is more effective than pooling the work.

      If software is open source, competition benefits everybody inherently.

      Maybe this is why the field is so slow to advance? They still haven't internalized the lessons their own field taught the world two generations ago.

    • Re:what an idiot (Score:5, Interesting)

      by ceoyoyo ( 59147 ) on Monday November 19, 2018 @08:17PM (#57671430)

      "I wish he had a library card and could check out a history book..."

      I wonder just how much history you've studied. One of my favourite courses in university was a double course entitled "The History of Human Conflict." The theme that emerged is that war at the state level is a surprisingly stylized, rigidly rule bound activity. The use of "dishonorable" weapons is highly suppressed. Which probably explains why any of us are still alive. It's also very highly conservative. Military officers study history extensively, and tradition is extremely important; "it takes the Navy three years to build a ship. It will take three hundred years to build a new tradition."

      Steven Pinker points out in "The Better Angels of Our Nature" that even guerrilla and terrorist organizations violate accepted norms at their peril. Such organizations require popular support, and when they commit atrocities they tend to lose that support. The Red Brigades and the IRA are prominent examples.

      By the way, I know Bengio. He's very much not an idiot.

      • by zlives ( 2009072 )

        The real issue is the development of said weapons, not so much their use. They do end up getting used once in a while (recently: mustard gas, nukes, and other chemicals), but again it is in human nature to develop the next killer, and for someone to do hand-wringing after the fact... is pointless.

  • Spare the handwringing. Please!

    If he deserved the MASTER moniker he'd have done something instead of this dour sour-grapes image manipulation.

    How about AI compilers that write efficient code, reuse best practices, and engineer fixes in the architectures humans code?

    • Re: (Score:2, Interesting)

      by Aighearach ( 97333 )

      Anybody who is a [Grand-]Master of something that isn't even a competition, the title is a red flag to tell you that they're an advanced amateur; for example Master Gardener, Master Recycler.

      Unless the implication is that he has a Master's degree, in which case it is merely poor style to use it as a title.

      A GrandMaster of chess, or Go, I know what it means; it means they defeated other people who already had the GrandMaster title, and earned it by demonstrating their skill.

      When the subject is software, it reads the

      • The author is a grandmaster fluffer.

        I don't think 'grandmaster' has meaning in Go... how many dan is that? What nation's 'dan'?

        • It has meaning because "grandmaster" is English, and "dan" is not English. After you apply translations, you'll end up with some of the ranks being called Grandmaster. Generally, whatever ranks have ELO ratings corresponding to a chess grandmaster.

        • Grandmaster would be Hanshi, which can be awarded around 8th DAN; Master would be Shihan, a title that can be awarded 5 to 7 years after gaining the rank of 6th DAN. At least that is roughly how it works in martial arts. Around 4th DAN people might honourably refer to you as Sensei ...

          And no, for other people reading this: in Go they don't wear black belts. And most martial arts have no belt colours anyway. And if you wonder: no, a 1st-degree black belt aka a Shodan aka a 1st DAN is not a Sensei o

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Monday November 19, 2018 @05:56PM (#57670756)
    Comment removed based on user account deletion
    • by zlives ( 2009072 )

      yeah just like porn... that's why no one watches porn online.

    • Very good point. What is true is that some percentage will see enemies with real or imagined weapons as threats to be neutralised, regardless of any actual threat.

      You cannot deter the insane or psychotic, don't bother, and as some fanatics want to bring about the end of the world, mutually assured destruction is more of a temptation than a threat.

      That's an equation scientists have to consider. Just because you can do something doesn't mean you should. You can't uninvent a weapon.

      I agree that guilting is usel

  • by dryriver ( 1010635 ) on Monday November 19, 2018 @05:58PM (#57670766)
    Analysis: China's Communist Party Members Have Terrible Haircuts. Action Taken By AI: Inspired by Wintermute in William Gibson's Neuromancer, AI composes a mighty dub called "Tiananmen Square Boogie", generates a photorealistic-looking 3D video for it and posts it on YouTube. Disastrous Real World Consequence: The video proves so viral on YouTube that billions watch it over and over again. K-Pop crashes in popularity, and male K-Pop singers can no longer afford quality makeup, hair gel and earrings, making them very, very sad. South Korea gets very pissed with China's AI, and unleashes its own AI on China. The Korean AI hacks into Apple's manufacturing plants in China, and causes them to manufacture iPads and iPhones with a yellow Banana logo. iTunes can only play Kung-Fu movies on these devices. Siri also sounds like a transvestite with a bad cold now. Apple's stock price crashes on the Nasdaq. America gets pissed, unleashes its own AI on China AND South Korea. This in turn causes North Korea to become titillated and take advantage of the situation by unleashing its Bang-Dong-Bong Viral Propaganda Generator AI on everybody else. Bang-Dong-Bong fills YouTube with super-viral communist anthems, causing half the world's youth to become Communism admiring messes. This in turn pisses off the Europeans, and they unleash their........
    • K-Pop crashes in popularity, and male K-Pop singers can no longer afford quality makeup, hair gel and earrings, making them very, very sad.

      No, AoA Hyejeong and Chanmi would just redux Chanmi's makeup show to teach them how to take care of their skin and look good without makeup. AoA goes on social media without makeup all the time, and the fans go nuts because they're so much more beautiful in their skin than in their costumes.

      China can't take down k-pop, not even with AI. Notice, the attempt is defeated without even calling in Seolhyun!

  • by Tablizer ( 95088 ) on Monday November 19, 2018 @06:20PM (#57670874) Journal

    There are a lot of other things to worry about besides rogue AI: social-media-induced mass riots, garage-built nukes, garage-built runaway killer germs, state-built runaway killer germs, mass computer virus outbreaks*, big solar flares knocking out most of our gizmos, global economic depression, and combos of these exacerbating each other.

    With all the things that can go wrong, I almost think we solved Fermi's Paradox.

    * Systemd may just be the end of humans ;-)

    • garage-built nukes,

      Is this a credible problem? Is there a way an average person could get sufficient plutonium now, or at any time in the next 50 years?

      • There's no shortage of fissile material as sludge in the Irish Sea. Getting it without being noticed would be difficult, but unlikely to be impossible.

        Extracting the uranium and plutonium from the americium and other elements would be the challenge, and probably beyond garage developers, but the quantities would be easy.

  • It will, over the next 50 years, radically transform society in ways that are difficult to fathom. Certainly robots will take over menial jobs, and there will be zero privacy and nowhere to hide.

    I'd like to think that democratic values outside China will prevail, but people are pretty stupid.

    And then, eventually, AI will be able to program itself without people. People currently have a symbiotic relationship to machines, but that will change to being parasitic. Why would the AIs want people around?

    Why wo

    • Two things.

      1. Democracy, like all systems, must evolve. And that means it will eventually evolve into something not democracy.

      2. AIs and humans each have strengths where the other is weak. Just as prokaryotes combined to form the nucleus and mitochondria of more powerful eukaryotes, and these combined with viruses and bacteria to eventually form human cells, AIs and humans can combine to form a composite organism more powerful than either alone. Systems naturally combine, rather than compete.

    • Why do nutters talk about "robots" when talking about "AI"? So strange.
      • Why do nutters talk about "robots" when talking about "AI"? So strange.

        Because an AI sitting in a box on a desk somewhere isn't particularly frightening. An AI rolling around in a robot shell with a credit card to pay for power is much more frightening.

        Elon Musk is sowing the seeds of humanity's destruction with the creation of the Supercharger network.

        (The prevalence of above-ground power lines pre-Tesla Motors shall be ignored for the purposes of hyperbole.)

  • Yet China is moving at a fast pace to make killer robots. Who wins?
    • Re: Who wins? (Score:4, Insightful)

      by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Monday November 19, 2018 @07:56PM (#57671320) Homepage Journal

      The country that sidesteps the problem wins.

      Weapons are expensive to produce. Wars are expensive to fight. As Sun Tzu notes, the best strategy is not to fight.

      If you don't produce killer AI but put your resources into out-evolving humans and AIs as individual constructs, you can't be beaten by either and can walk right over those degraded and exhausted by fighting.

      That's who wins.

  • An article about what Fei-Fei Li has been up to for a few years: Stanford, Google, etc. And the things she is now worried about. Here is the link: https://www.wired.com/story/fei-fei-li-artificial-intelligence-humanity/ [wired.com]

  • The country that's actually got experience with war and occupation. Sorry, when the next Hitler rolls around I damn well hope the military shoots back and kills as many of the bastards as possible. And if that means going on the offensive all the way to Berlin, so be it. WW2 ended because of D-Day and the nukes at Hiroshima and Nagasaki; you can't defend yourself to victory. And in a real war it's not about duty and honor, it's about freedom and survival as a people and a nation. Try making a dictator that'd

    • You do know America's protectionism allowed Hitler to rise to power, as did Europe's attitude after WW1.

      If people hadn't been so bloody minded and so bloody stupid, Hitler would never have risen to power.

      Lesson #1: If you're not so bloody stupid, then nobody else is likely to become moronic. If you don't create your enemies, you tend not to have any. And then you don't have to fight anyone.

      And what did you achieve in Iraq? You single-handedly created Al Qaeda in Iraq; it hadn't existed before the invasion.

      • Lesson #1: If you're not so bloody stupid, then nobody else is likely to become moronic.

        Really? What if someone else is stupid first? Isn't it kind of a high bar to expect no one in the world to be stupid?

    • by ceoyoyo ( 59147 )

      Since you mentioned Vietnam perhaps you're American?

      Canada had a higher percentage of its population die in World War II than did the United States.

      There are a lot of people, not just in Canada, who would prefer to see the warmongers fuck off so we can all avoid a repeat.

      • That's what happens when the Brits command 'colonials': they are cannon fodder. Old news.

        • by ceoyoyo ( 59147 )

          The supreme allied commander Europe was a guy named Eisenhower.

          • Montgomery made all tactical decisions for the British ground forces.

            Colonials as cannon fodder is a very common old British tactic. Surely you don't dispute it?

            • by ceoyoyo ( 59147 )

              Nope. Your initial reply was irrelevant to the thread, so we're currently exchanging historical trivia.

              British forces experienced per capita casualties that were greater than *either* the US or Canada in WWII.

  • The argument that we shouldn't use them is obviously negated by the fact that those who will, will thereby have a huge advantage over those who do not.

    His argument that we can use the technology only defensively, such as to counter those who use it offensively, was brilliant. Think of Wikipedia. There are those who abuse the system, but there are far more who correct it. In a world of seemingly little hope, this truly helps redeem humanity. If there is far more effort into the use of AI in defense, per

  • Duty is not antagonistic to morality - duty is a form of morality. Maybe this grandmaster should read up on moral foundations theory [wikipedia.org]. It's clear that Bengio is more focused on the care/harm pillar but it's not the only pillar.

    People rarely disagree on the pillars of morality (care is better than harm, fairness is better than cheating, loyalty is better than betrayal, respect is better than contempt, cleanliness is better than filth), they just value them differently or apply them to different categories (e.

  • Ya, right. Somebody is reading a little too much Manga.
