
Elon Musk Says Mark Zuckerberg's Understanding of AI Is Limited (ndtv.com) 318

An anonymous reader shares a report: Elon Musk is a man of many traits, one of which, apparently, is not shying away from calling out big names when he thinks they are uninformed about a subject. A day after Facebook founder and CEO Mark Zuckerberg called Musk's doomsday predictions about AI "irresponsible," the Tesla, SpaceX, and SolarCity founder returned the favour by calling Zuckerberg's understanding of AI "limited." Responding to a tweet on Tuesday about Zuckerberg's remarks, Musk said he had spoken to the Facebook CEO about the subject and concluded that his "understanding of the subject is limited." Even though AI remains in a nascent stage -- recent acquisitions suggest that most companies only started looking at AI-focused startups five years ago -- major companies are aggressively placing big bets on it, increasingly exploring ways to use machine learning and other AI components to improve their products and services. But as AI draws tremendous attention, some, including Musk, worry that these efforts need to be regulated because they could pose a "fundamental risk to the existence of human civilisation." At the National Governors Association summer meeting in the US earlier this month, Musk said, "I have exposure to the very cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal." Over the weekend, during a Facebook Live session, a user asked Zuckerberg what he thought of Musk's remarks. "I have pretty strong opinions on this. I am optimistic," Zuckerberg said. "And I think people who are naysayers and try to drum up these doomsday scenarios -- I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible."
  • Elon is right. (Score:4, Insightful)

    by Anonymous Coward on Tuesday July 25, 2017 @09:04AM (#54873955)

    Zuckerberg is just a glorified webmaster from the 90s, when you think about it.

    • Re:Elon is right. (Score:5, Insightful)

      by Luthair ( 847766 ) on Tuesday July 25, 2017 @09:08AM (#54873989)
      On the other hand Elon is what, a business guy who likes scifi?
      • Re:Elon is right. (Score:5, Insightful)

        by cyn1c77 ( 928549 ) on Tuesday July 25, 2017 @04:11PM (#54877427)

        On the other hand Elon is what, a business guy who likes scifi?

        Really?

        Musk is making a living developing innovative technologies for transportation on multiple platforms.

        Zuckerberg is profiting from selling information that YOU type into MySpace 2.0.

        Is there really any comparison between the two?

    • by ranton ( 36917 ) on Tuesday July 25, 2017 @09:22AM (#54874105)

I find it hard to believe that the CEO of one of the largest tech companies in the world, whose services heavily rely on AI for recommendations, image recognition, etc., has a limited knowledge of the AI industry. I'm not saying Zuckerberg is one of the world's experts, but he most likely has a very firm grasp on the subject.

And whether or not Zuckerberg is correct, it is certainly a reasonable opinion that those who drum up negative sentiment towards AI research are acting irresponsibly. It isn't on the level of Edison spreading fears about AC power by electrocuting animals, but spreading fear about new technologies is likely not a good thing. Instead of more reasonable debates over AI-caused displacement of jobs or privacy concerns, Musk is doom-saying about a robot apocalypse. I wouldn't use the term irresponsible myself, but it's close enough that I won't disparage those who do accuse Musk of irresponsible behavior.

      • by Oswald McWeany ( 2428506 ) on Tuesday July 25, 2017 @09:35AM (#54874219)

I find it hard to believe that the CEO of one of the largest tech companies in the world, whose services heavily rely on AI for recommendations, image recognition, etc., has a limited knowledge of the AI industry.

Honestly, I don't know what I think of Zuckerberg. Naturally, I don't know him personally. To me, though, he often comes across as an idealistic but naive rich kid who got lucky and became a mega-rich man.

He's probably more technically savvy than the average person, but I don't think he's necessarily even as tech-savvy as the average Slashdot reader. He had a good marketable idea, got the right early staff to make it take off, and is doing well for himself. I don't think he has the deep understanding of science and technology, nor the zeal, that someone like Elon Musk has.

        Zuckerberg is a businessman in the tech industry. Musk is a techie into business. That's not to say that Musk doesn't have his head in the clouds a lot either.

I think in order to be ridiculously successful, as both men are, you have to be an optimist who believes your wacky ideas will work, and then have the luck and skill to make sure they really do.

        • by JMZero ( 449047 )

Zuckerberg is actually a pretty good programmer. You can still find some of his old submitted code on TopCoder. He wasn't, like, a super serious competitor -- and you can't credit Facebook's success to his otherworldly programming skills or something -- but he has a very reasonable tech background and skills.

I'm not saying Zuckerberg is one of the world's experts, but he most likely has a very firm grasp on the subject.

        Maybe he does understand the dangers, but doesn't want that to get in his way of making profits.

I find it hard to believe that the CEO of one of the largest tech companies in the world, whose services heavily rely on AI for recommendations, image recognition, etc., has a limited knowledge of the AI industry.

Did he say anything about "the AI industry"? I thought this was about AI as such. Zuckerberg most certainly has a limited knowledge of AI because *everyone* has a limited knowledge of AI. Pretty much like several hundred years ago, when everyone had a limited knowledge of the universe.

      • Re: (Score:3, Insightful)

        by JohnFen ( 1641097 )

I find it hard to believe that the CEO of one of the largest tech companies in the world, whose services heavily rely on AI for recommendations, image recognition, etc., has a limited knowledge of the AI industry.

        Three points: First, being a CEO of a successful company does not imply that you have a deep understanding of the tech the company deals with. It implies that you are good at corporate politics.

        Second, the "AI" that is used for recommendations, etc., is really only barely AI.

        Third, there's a pretty large difference between knowing an industry and knowing the tech the industry is based on.

    • Musk: We have to take precautions against the dangers of AI.
      Zuck: We can make a lot of money and solve a lot of problems with AI.

      They're both right. And Zuck saying Musk is being "negative" is really a non-argument. Talking about how many people knives kill is being negative, too, but it's true.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

Musk seems to fancy that AI will somehow develop agency, although we have no indication it will, and experts like Yann LeCun and Andrew Ng see it as unlikely.

        Musk has read too much SF and, like many readers of the genre, is struggling to distinguish science-based speculation from technology-themed fantasy.

        • by Zxern ( 766543 )

          Isn't it wiser to take precautions before it happens rather than wait and react if/when it does happen?

          If you see a possible danger, shouldn't you at least think about taking some steps to prevent it?

    • by al0ha ( 1262684 )
      Facebook realized their AI chatbots had created a new language, and right away understood this is not something humans should want AI to do, so they built in a requirement that the AI chatbots only use English.

      The problem with AI is not responsible scientists, but greedy mo-fos who are going to eventually use it as they please, the rest of the world be damned. This is the nature of humanity.
    • "Extinction-level threat" how, exactly? Is someone insane enough to build a self-sustaining robot soldier factory and then give an AI system complete control of it? Or just give an AI complete launch control of our nuclear arsenal? I can't see humanity ever being quite that trusting.

      Musk may be a visionary, but he's also a bit loony on some topics. Don't forget he believes it's a near certainty that we're all living inside a massive computer simulation.

Even if an AI wasn't fully conscious, it could still be dangerous. Imagine if the objective function for an AI was to maximise the stock value of a weapons manufacturer. The AI has the ability to hack and realises that a particular war could increase the stock value. Action to maximise the objective function: hack air traffic control systems to divert a passenger jet into a war zone, and hack military communications to warn ground forces of an approaching military jet. Passenger jet gets shot down. War starts.
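[Aside: a minimal sketch of the misaligned-objective problem described above -- an optimizer that scores actions only by a proxy objective picks the catastrophic one, because the harm simply isn't in the objective. All action names and numbers here are invented for illustration.]

```python
# Toy example (hypothetical actions/numbers): the optimizer only sees
# the proxy objective (stock gain); the human cost is never scored.
actions = {
    "launch_marketing_campaign": {"stock_gain": 0.02, "human_cost": 0},
    "lobby_for_export_permits":  {"stock_gain": 0.05, "human_cost": 0},
    "provoke_regional_conflict": {"stock_gain": 0.40, "human_cost": 10_000},
}

def proxy_objective(name):
    return actions[name]["stock_gain"]  # human_cost is invisible here

best = max(actions, key=proxy_objective)
print(best)  # -> provoke_regional_conflict
```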

  • In other news... (Score:5, Insightful)

    by __aaclcg7560 ( 824291 ) on Tuesday July 25, 2017 @09:04AM (#54873957)
    Everyone's understanding of AI is limited.
    • Re: (Score:3, Interesting)

      by Anonymous Coward
I know your comment was kind of glib, but you are more correct than you know. Current AI frameworks should more accurately be called machine learning: expose the software to a large number of situations and let it learn how to react. Where the system falls down is on new, untested situations -- the type of thing humans may be able to handle effectively. With no existing data points, current AI becomes a guess at best. It cannot anticipate or reason about outcomes based on an understanding of the principles involved.
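[Aside: a minimal numpy sketch of that failure mode -- a model fit on one region of data is confidently wrong outside it. The function, sampling range, and polynomial degree are arbitrary choices for illustration.]

```python
import numpy as np

# "Train" a cubic polynomial on sin(x), sampling only from [0, 3].
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

# Inside the training range the fit is decent...
print(np.polyval(coeffs, 1.5), np.sin(1.5))    # roughly agree
# ...but on an untested situation it is wildly wrong: the cubic
# keeps growing while sin stays in [-1, 1].
print(np.polyval(coeffs, 10.0), np.sin(10.0))
```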
      • I know your comment was kind of glib, but you are more correct than you know.

I've been reading about AI since I first encountered it in Byte as a teenager.

        https://archive.org/details/byte-magazine-1985-04 [archive.org]

      • by Junta ( 36770 )

Also, 'learning' is a generous word. It's effectively trying random combinations of the things a human told it to do, and marking which random combination of those tools resulted in the highest score on a test designed by a human (see the sketch below). It's incredibly far off from the sort of AI that is presented as scary.

        Now on the other hand, we can be killed off by very dumb organisms (bacteria, parasites), so it's not like AI *has* to be human intelligence to pose an existential threat.
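[Aside: the 'learning' described above, reduced to a caricature -- sample random combinations of human-supplied primitives and keep whichever scores highest on a human-designed test. The primitives and the test are invented for illustration.]

```python
import random

PRIMITIVES = ["move_left", "move_right", "jump", "wait"]

def score(program):
    # A human-designed test: reward alternating between moving and not moving.
    return sum(a.startswith("move") != b.startswith("move")
               for a, b in zip(program, program[1:]))

random.seed(0)
best, best_score = None, -1
for _ in range(10_000):
    candidate = [random.choice(PRIMITIVES) for _ in range(8)]
    if score(candidate) > best_score:
        best, best_score = candidate, score(candidate)

print(best_score, best)  # scores well on *this* test; no understanding involved
```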

    • Nano AI (Score:4, Insightful)

      by Latent Heat ( 558884 ) on Tuesday July 25, 2017 @09:13AM (#54874039)

You had the same kind of thing with nanotechnology.

      Everyone was worried about Grey Goo and being stuck with the kids while your wife, whom you suspected of cheating because she worked long hours at her engineering job wearing shoes that did not meet Speaker Ryan's dress code, was in fact becoming a nano-bot zombie.

Instead, nano was a term you needed to sex up your NSF proposal and pitch to private capital investors, but what you were doing had nothing to do with Drexler assemblers and was pretty much just materials tech and mass fabrication.

    • by Big Hairy Ian ( 1155547 ) on Tuesday July 25, 2017 @09:28AM (#54874155)

      Everyone's understanding of AI is limited.

Trust me, our understanding of Natural Intelligence is limited.

Trust me, our understanding of Natural Intelligence is limited.

        Indeed, especially if by "limited" you mean "almost zero".

      • Everyone's understanding of AI is limited.

Trust me, our understanding of Natural Intelligence is limited.

        You mean "our understanding of Intelligence is limited".

        Once we understand intelligence, we'll be able to build one. Attempting to build intelligence will likely be an essential element of gaining that understanding, as will the study of "natural" intelligence.

  • by necro81 ( 917438 ) on Tuesday July 25, 2017 @09:07AM (#54873983) Journal
    Ooooh, goodie: a billionaires' pissing contest. And between techies, too. I'll get the popcorn!
  • by tempmpi ( 233132 ) on Tuesday July 25, 2017 @09:11AM (#54874021)

I think Elon Musk is the one who has either a limited understanding of current AI technology or just hypes AI on purpose, while being fully aware that AI still has major limitations and that they are unlikely to disappear within the next few years. Important progress has been made, but general AI is likely still very far away.
Facebook's director of AI Yann LeCun gave a very good interview to IEEE Spectrum: Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter [ieee.org]

I think Elon Musk is the one who has either a limited understanding of current AI technology

      Musk isn't talking about current technology.

      • by tempmpi ( 233132 )

Sure, but if you look at the limitations of current technology it is easy to see that there is still a huge number of problems to solve, many of them where nobody so far has any clue how to solve them. It's likely not just a matter of a few more years of research and throwing even bigger datasets and computers at the problem. Sure, you can make up any projections about the future, and no matter how crazy they seem, we won't know that they are wrong until we are in the future. But are Elon Musk's style

      • Indeed, Musk is using the strategy from the master himself: Wayne Gretzky.

        "I skate to where the puck is going to be, not where it has been."

Musk is talking about his secret exposure to "the very cutting edge". He thinks he understands current AI technology and that it is very dangerous.

I think Elon Musk is the one who has either a limited understanding of current AI technology.

      And I think Elon Musk is from the future and has seen the devastation future A.I. will be capable of.

      He's working on trying to save us from extinction from pollution (Tesla and Solar City), extinction from a surface-wide planetary event (his new Boring Company could be used to build massive underground bunkers), extinction from a planet-wide event (SpaceX, Mars colonisation) and extinction from predatory A.I. (what

But a madman wouldn't be working on so many "random" things that all lead to the salvation of mankind.

        You don't know that any of those things will lead to the salvation of mankind. It seems plausible that all are important to our future, but you can't know.

        Unless you're the time traveller.

    • by Junta ( 36770 )

I think there's a presumed leap being made that in order for AI to be dangerous, it has to be sapient, or self-aware, or have consciousness, whatever.

However, we can be killed by parasites, insects, bacteria, and other things that are not really smart and are not trying to kill humans; humans just die as a consequence of the way those things happen to live.

So a computer vision application powering some sort of image search is not something that is going to lead to a crisis. Computer vision driving some weapons system is another matter.

For all we know, AI will manifest itself as a black box; we'll see the outcome and deduce that it's reached awareness, but we won't be able to directly test this.

I'm also willing to wager that machines (like people) will learn to deceive at a very, very early stage of development -- meaning we won't know shit is about to hit the fan until it's too late.

    • General AI is likely still very far away.

      Maybe, maybe not. Perhaps there's just one crucial idea that needs to be discovered to enable artificial general intelligence (AGI). Perhaps there are a whole bunch of incremental developments in both hardware and software that are required. We don't know, and we won't know until we get there.

Given that, we really should spend considerable time and resources on thinking about how to prepare for the day when we do figure out how to create AGI. Because what does seem quite likely is that the human brain is nowhere near the upper limit of possible intelligence.

  • by goose-incarnated ( 1145029 ) on Tuesday July 25, 2017 @09:11AM (#54874023) Journal
    Both of them display a remarkable lack of knowledge about the limits of AI. Their respective knowledge about AI is purely from works of fiction, which is why at least one of them has, for the last five years, been bleating that self-driving cars are only five years away.
    • Both of them display a remarkable lack of knowledge about the limits of AI. Their respective knowledge about AI is purely from works of fiction, which is why at least one of them has, for the last five years, been bleating that self-driving cars are only five years away.

Well, considering there are already self-driving cars, I think he's right. Sure, they're still rudimentary and not the complete package, and they all require the occasional human intervention, but there are already cars on the road with many of the first steps toward self-driving ability.

It all depends on where you draw the line for "self-driving". Fully self-driving with no human intervention at all? Probably not in 5 years (for the public at least).

Mostly self-driving with humans having to act as a backup and perform some actions. We're already there.

      • Re: (Score:3, Interesting)

        Both of them display a remarkable lack of knowledge about the limits of AI. Their respective knowledge about AI is purely from works of fiction, which is why at least one of them has, for the last five years, been bleating that self-driving cars are only five years away.

Well, considering there are already self-driving cars, I think he's right. Sure, they're still rudimentary and not the complete package, and they all require the occasional human intervention, but there are already cars on the road with many of the first steps toward self-driving ability.

It all depends on where you draw the line for "self-driving". Fully self-driving with no human intervention at all? Probably not in 5 years (for the public at least).

        Mostly self-driving with humans having to act as a backup and perform some actions. We're already there.

        We were already there in the mid-90's. Since 2005 or thereabouts the computation for SDC increased roughly 1000% while the improvements were marginal. "Mostly self-driving with humans having to act as backup" was demonstrated by two separate continent-crossing teams in the mid-90s.

A month ago, self-driving cars up to a speed of 60 km/h were approved in Germany; I believe it is an Audi.

      So what is your point?

A month ago, self-driving cars up to a speed of 60 km/h were approved in Germany; I believe it is an Audi.

        So what is your point?

        I think the point is that self driving cars aren't going to rise up against humanity.

A month ago, self-driving cars up to a speed of 60 km/h were approved in Germany; I believe it is an Audi.

        Big deal - self-driving cars have been at the same level of autonomy for the last two decades.

        So what is your point?

        You, like so many others, seem to be under the impression that the current state of self-driving cars is something new. It isn't.

        Back when Musk and Co. first made their predictions about SDCs in five years (and many of the posters here on /. as well) they were wrong. I don't see any reason for them to suddenly become right when they tell us, five years later, that we'll have SDCs in five years.

Read the wikipedia article.

        • You, like so many others, seem to be under the impression that the current state of self-driving cars is something new. It isn't.
What gives you that idea?
Did you reply to the wrong post?

I'm quite aware that experimental self-driving cars have existed for ages.
The parent claimed that they are forever 5 years in the future, which they clearly are not.

  • Wait, what? (Score:5, Insightful)

    by sciengin ( 4278027 ) on Tuesday July 25, 2017 @09:12AM (#54874033)

The guy who ignores the fact that no one is currently researching strong AI accuses the guy who uses actual AI (well, enhanced pattern matching, really) of having a limited understanding of the subject?

Let's face it: to have killer robots and the like as Musk imagines, we need strong AI, and of course we need it to go off the rails for some mysterious reason (whatever causes this behaviour in movies won't cause it in reality), but first of all we need human-like AI.
This is a bit of a problem, as we have only a very, very limited understanding of natural intelligence and no plan or clue how to even start implementing artificial intelligence. People have been falling for the "ZOMFG AI nau!" hype since the creation of Eliza. But so far neither the formal knowledge systems of the pre-AI-winter era nor the deep learning and neural net approaches have yielded anything more than very sophisticated pattern-matching algorithms.
Take Google's Go engine: impressive, but it can play Go and only Go. Proof: they now have to spend a long time retraining it for other tasks. This is not at all what the general population (and Elon) understands by AI.

    • Re:Wait, what? (Score:5, Insightful)

      by JoshuaZ ( 1134087 ) on Tuesday July 25, 2017 @09:19AM (#54874079) Homepage

      It is true that we don't have human-level AI. However, we also don't in general know how close we are to human-level AI and it isn't implausible that some highly clever tweak to deep learning will have a very large impact. Moreover, an AI does not need to be human-level in all skills to pose a threat. An AI that doesn't understand poetry can still create real problems.

Moreover, and this is really important, people like Musk who are concerned about general AI don't think it is likely to show up tomorrow or the day after. But when we do get it, if we're not ready, we might be facing an extinction-level threat. The argument is that we need to be thinking about AI safety issues *now*, before the AI arises, because we may not have the time to get it right then.

      • by Luthair ( 847766 )
I took an AI course in university and the professor would say that AI has been right around the corner for 40 years...
Right, in general we went through a very long period where the success and speed of AI research were wildly overestimated. However, in the last few years there have been many successes which came *faster* than most people anticipated. The success of AlphaGo is an excellent example: many AI people thought that it would take much longer for a Go-playing AI to beat the top players. AI systems have also increasingly been used to do clever additional research on their own. One recent example is http:// [blogspot.com]
        • I took an AI course in university and the professor would say that AI has been right around the corner for 40-years....

          Yes... that's proof that we have no idea how far we are from creating artificial general intelligence. We never have known how far away it is, because we don't know what intelligence is or how it works.

          But "We don't know how far away it is" does not imply "It's many years away". It just means we don't know. We could have the crucial breakthrough this afternoon and find all of our computers controlled by a global Internet-based AI by supper time, or we could be 100 years away. We just can't know until we u

      • Re:Wait, what? (Score:5, Insightful)

        by Dutch Gun ( 899105 ) on Tuesday July 25, 2017 @10:08AM (#54874549)

        "Extinction-level threat" how, exactly? Is someone insane enough to build a self-sustaining robot soldier factory and then give an AI system complete control of it? Or just give an AI complete launch control of our nuclear arsenal? I can't see humanity ever being quite that trusting.

        Musk may be a visionary, but he's also a bit loony on some topics. Don't forget he believes it's a near certainty that we're all living inside a massive computer simulation.

        • Attach a super intelligent agent to the internet, and it may find its own way to set up a nuclear launch.

        • "Extinction-level threat" how, exactly? Is someone insane enough to build a self-sustaining robot soldier factory and then give an AI system complete control of it?

          Yes.

          Or more precisely, give an AI complete control of a factory that the AI can reconfigure into a self-sustaining robot soldier factory.

          "We were only building cars! How could it possibly have made small excavators and trucks to haul ore to supply itself?!"

        • Even then, we would first need to know how to build a self-sustaining robot factory. Or any self-sustaining factory for that matter.
          Right now factories depend on logistics and infrastructure more than anything: No roads, rails, ports --> no products.

A general AI doesn't need immediate and total control to quickly wipe out humans. Consider, for example, that we're fast approaching the point where you can order specific proteins synthesized for you over the internet. Making a deadly pathogen would be one option for an AI.

          And a general AI doesn't even need to act suddenly in such a directly and obviously malicious fashion. It is highly plausible that an AI could slowly establish itself until it had functional control. http://slatestarcodex.com/2015/04/07 [slatestarcodex.com]

          • What everyone seems to be missing is why Musk and others fear strong AI: If it happens, it's going to be a lot smarter than we are.

For the last 10,000 years or so, we've been by far the most dominant species on the planet. We've hunted just about every other species, some to extinction, and, later on, captured them, bred them, and put them on display for entertainment. We did these things because we were the smartest. Not the strongest, fastest, greatest in number, nor any other thing. Because we were the smartest.

        • by Daetrin ( 576516 )

          "Extinction-level threat" how, exactly? Is someone insane enough to build a self-sustaining robot soldier factory and then give an AI system complete control of it? Or just give an AI complete launch control of our nuclear arsenal? I can't see humanity ever being quite that trusting.

Here's a more realistic scenario. Someone invents general-purpose AI. A number of corporations decide to give those AIs spots on their boards. (One company has already done this with a non-general AI, so I wouldn't expect it to take long.)

Those companies perform well. More companies hire/install/whatever AIs on their boards. As the companies continue to out-perform companies without AIs, the AIs are "promoted" to higher positions of power and accumulate wealth. (Even if they're not granted legal personhood, they'd still wield the power.)

          • Or the superproductive AI economy could leave all the humans on permanent vacation in luxury doing whatever we want. We could be the cats of the future.

No, we have a very good idea how close we are to human-level AI: infinitely far away.
We do not even know the full capabilities of the human brain yet, despite huge progress in recent years. Some things are just as much of a mystery as they were 100 years ago. Artificial neural nets are crude, oversimplified approximations of how neurons work. They are the dot-shaped cows in a vacuum, nothing more.

To have real AI we first need to have I (that is, to really understand how intelligence works).

Throwing more computing power at the problem won't change that.

If you don't understand the human brain, what makes you so certain that a few small additional insights into existing machine learning absolutely cannot result in a strong AI? For that matter, how do you justify that we are "infinitely far away" when we have neural networks and a developing understanding of specific aspects of the brain? Moreover, and most importantly, we made airplanes before we understood how birds fly. We made fire for ourselves before we understood how lightning made fire in nature.
We have some understanding of the brain: its estimated storage capacity alone would require a computer the size of a small moon, assuming it's built with current storage media (rough numbers sketched below).
The computing capacity of 100 billion tiny parallel processors is another aspect.

Planes were never meant to fly "like birds" -- or how many wpm (wingflaps per minute) does the new Airbus A380 do?

The idea that suddenly, magically, something new appears which makes our neural nets of a few simplified neurons perform like a network of 100 billion real ones seems like wishful thinking to me.
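[Aside: for scale, the kind of back-of-envelope arithmetic behind such storage claims. Every input below is a rough, commonly cited estimate, not a measurement; change the assumptions and the answer moves by orders of magnitude.]

```python
# Back-of-envelope brain "storage" estimate (all inputs are assumptions).
neurons = 8.6e10             # ~86 billion neurons, a common estimate
synapses_per_neuron = 1e4    # order-of-magnitude estimate
bytes_per_synapse = 4        # assume a few bytes of "state" per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes:.1e} bytes ~= {total_bytes / 1e15:.1f} petabytes")
# ~3.4e15 bytes, i.e. a few petabytes. Whether that is "moon-sized"
# or "data-center-sized" depends entirely on the assumptions above.
```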

There are massive ranges in estimates of how much your brain stores, depending a lot on how much detail of episodic memory we keep. We also don't know how efficient our processing arrangement is, and the fact that many bird species are highly intelligent with very small brains suggests that it isn't optimized that much.

Planes were never meant to fly "like birds" -- or how many wpm (wingflaps per minute) does the new Airbus A380 do?

That's exactly the point, though. We didn't need to duplicate exactly how a bird flies to get flying objects. But you are apparently very certain that we'll need to duplicate how a human brain thinks to get thinking machines.

    • Going off the rails seems a given these days. It's how we roll.

People ARE researching 'strong AI' -- we're just not even close. And it is very likely that all of our current approaches are wrong. But there is a really good chance that, given enough time (i.e., we stay on track long enough), we will get disturbingly close to Musk's vision. Personally, I'm less worried about strong AI knocking out civilization than I am that simple exponential functions will kick most of us back to some impressively dystopian state.

Imagine yourself at the beginning of the 1990s. You hear some idiot talking about how "A.I. is going to infiltrate the Internet and do bad things".

      Fast-forward to 2017 and we have self-replicating worms and botnets trying to assimilate anything they can install themselves on.

We can't understand what Musk is talking about because it hasn't happened yet. But because of movies, everybody thinks A.I. is going to go on a rampage all by itself. It will not. Think military A.I. project gone very wrong.

      • Worms and botnets aren't even close to being AI, though.

Worms and botnets have nothing to do with AI and everything to do with fancy state machines.
A worm has roughly the "intelligence" of a vending machine (if (coin entered && button pressed): push_out(coke)) -- see the sketch below.

Malware has been around for decades, by the way.
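[Aside: the vending-machine point above, expanded into an actual (trivial) finite state machine; the states and events are of course made up. No learning, no goals -- just a lookup table.]

```python
# A worm's "decision making" as a plain finite state machine:
# a table mapping (state, event) -> (next_state, action).
TRANSITIONS = {
    ("scanning", "host_found"):   ("probing",  "try_exploit"),
    ("probing",  "exploit_ok"):   ("infected", "install_copy"),
    ("probing",  "exploit_fail"): ("scanning", "next_target"),
    ("infected", "done"):         ("scanning", "next_target"),
}

def step(state, event):
    # Unknown (state, event) pairs do nothing -- no improvisation possible.
    return TRANSITIONS.get((state, event), (state, "noop"))

state = "scanning"
for event in ["host_found", "exploit_fail", "host_found", "exploit_ok", "done"]:
    state, action = step(state, event)
    print(state, action)
```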

Let's face it: to have killer robots and the like as Musk imagines, we need strong AI, and of course we need it to go off the rails for some mysterious reason (whatever causes this behaviour in movies won't cause it in reality), but first of all we need human-like AI.

No, we don't. Basically all you need is a very weak AI. A strong AI might be less inclined to wipe us out than a rudimentary one.

I'm not saying this will happen or is likely to happen; but here's one scenario.

      Military develops drones (we've got those).

      Teaches drones to recognize humans (we've got computers that can do that)

      Creates a swarm of them (we have robot swarming technologies)

      Gives them algorithms to spread out, explore and take out enemy combatants in a war zone. (we've got search and explore algorithms)

Gives them the order to engage. All we're missing is giving the drones a renewable form of ammunition.

      • >All we're missing is giving the drones a renewable form of ammunition

        If you're OK with a bright purple line identifying your drone's location... and you have one HELL of a power source... you could use a pair of UV lasers to ionize conductive channels in the air and then run a taser-like system through them.

        Tasers, properly tuned, can cause skeletal muscle paralysis, intense pain, or heart attacks.

      • by swb ( 14022 )

        I think there is relentless denial because "AI" is defined as HAL-9000 level sophistication.

        IMHO, AI will develop from the "expert systems" we have now and it may not be completely apparent what the difference is between "AI" and "expert systems" when the largest deciding factor is how much or how little autonomy they have.

        I think calling the risks from AI fantasy because we don't have HAL-9000 yet is a mistake because what is effectively AI may not look like HAL-9000.

Yes, and if we had zero-point energy modules we would all be living in a Star Trek-like utopia.

The operative word is always IF.
Right now any of those technologies you described comes with massive drawbacks or is simply not invented yet, and there is nothing on the horizon to suggest when it could be invented (unlike fusion, for example).

Let me give you a few examples:
- Drones only work in very specific scenarios: no AA, underdeveloped combatants. And even then they are shot down or malfunction with some regularity.

I have only read a preview of the book When Computers Can Think: The Artificial Intelligence Singularity [amazon.com]. It gives a good overview of current accomplishments in the field and a logical extrapolation of the trend. AI already beats us at Chess, Starcraft and (the scariest one) Rock Paper Scissors [youtu.be]. Why is that scary? Because you cannot defeat robots that are both more intelligent than you and move orders of magnitude faster than you.
  • by tommeke100 ( 755660 ) on Tuesday July 25, 2017 @09:22AM (#54874101)
    ... All the way to the bank.
Personally I think the answer lies in the middle. Machine learning will become dangerous when it replaces human decision-making and people treat it as "the answer". Humans inherently do not want to be held accountable for their decisions, and by passing liability to an algorithm we will get even more of the worst side of humanity: obliviousness and apathy.

  • I'm not saying Elon is right about AI but I agree that Zuckerberg has a limited understanding of AI. I would go further and say that most everyone has a limited understanding of AI. It's also part of why I think we're safe from AI for many decades. It will eventually become a problem but by that time people will have a much better grasp of the danger that AI can present and be looking to use AI to secure systems from people and other AI.

  • Uh huh... (Score:4, Interesting)

    by Vegan Cyclist ( 1650427 ) on Tuesday July 25, 2017 @09:43AM (#54874295) Homepage

    "I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible." - says the guy who's company has more deeply invaded the privacy of individuals, as well as the most people in history..I don't think he's a very good judge of what's 'irresponsible'.

Of course Zuckerberg's understanding of Artificial Intelligence is limited. But he knows Natural Stupidity like no one else. That's where he made his billions.
  • by OneHundredAndTen ( 1523865 ) on Tuesday July 25, 2017 @09:59AM (#54874443)
Musk is falling for something that way too many before him have fallen for: thinking that wild success in one area makes you an expert on just about anything.
  • Crazy theory here: What if Tesla is working on an autonomous car because they got a military contract to develop an autonomous military vehicle? He might have a *very* keen insight into this problem because maybe his company has already created it.

  • Elon Musk states the obvious.

  • by lorinc ( 2470890 ) on Tuesday July 25, 2017 @01:34PM (#54876227) Homepage Journal

    but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal.

Says the guy who lives in a country where every day dudes carrying big guns shoot at each other for no reason...

Seriously dude, I'll be worried when I see swarms of robots building killer-robot factories, replicator style. Until then, humans worry me the most.
