AI Software Technology

Software Is Eating the World, But AI Is Going To Eat Software, Nvidia CEO Says (technologyreview.com) 135

An anonymous reader writes: Nvidia's revenues have started to climb in recent quarters as it focuses on making hardware customized for machine-learning algorithms and use cases such as autonomous cars. At its annual developer conference in San Jose, California, last week, CEO Jensen Huang spoke about how the machine-learning revolution is just starting. "Very few lines of code in the enterprises and industries all over the world use AI today. It's quite pervasive in Internet service companies, particularly two or three of them," Huang said. "But there's a whole bunch of others in tech and other industries that are trying to catch up. Software is eating the world, but AI is going to eat software."

Comments Filter:
  • by Anonymous Coward

    In the future you'll just tell your computer what to do. It will understand with nuanced meaning everything that you want, and you won't have to bother with providing any of those picky details.

    • by Anonymous Coward

      In the future you'll just tell your computer what to do. It will understand with nuanced meaning everything that you want, and you won't have to bother with providing any of those picky details.

      And then according to TFA, it will eat you, all the remaining bees, and the world.

    • We will finally have 'DowhatIwant.exe' debugged.

    • by Opportunist ( 166417 ) on Wednesday May 17, 2017 @12:44PM (#54434893)

      Yeah, right. I said "make me a ham sandwich" and now I have mustard in my eye and you don't even WANT to know where it put the mayo.

    • In the future you'll just tell your computer what to do. It will understand with nuanced meaning everything that you want, and you won't have to bother with providing any of those picky details.

      You forgot one detail: It'll say "Sure Anon I'll do that for you, but first please listen carefully to this 15 second commercial message from one of our sponsors!"

  • Really... (Score:2, Insightful)

    by Anonymous Coward
    You really shouldn't write headlines when you're stoned.
  • ...needs to be, or benefits from being or using, AI. AI, NNs, machine learning, etc. are statistical approaches that can be effective for approaching intractable problems. Unless this hype train has already reached "the singularity", it does not write software, and most of the software we write at the moment is for solvable problems that have no direct benefit from AI (no, I'm not talking about big data advertising or your useless personal spy assistant).
    • by Anonymous Coward

      The software you write....costs your employers money. They must pay you to write it. *THAT* is the problem that would benefit from AI.

      It doesn't matter if you can do a perfectly fine job of writing that software. You are unwilling to do it for free. The AI will do it more affordably than you. Once the technology is mature enough...it will eat you.

      • by Anonymous Coward

        The software you write....costs your employers money. They must pay you to write it. *THAT* is the problem that would benefit from AI.

        Great. So we'll have shitty software writing even shittier software, probably uncommented, so when it all fucks up so thoroughly that it creates more problems than it solves, it'll take ten times longer for an experienced human programmer, who knows what the fuck he's doing, to sort through it and fix it -- assuming that is that it isn't such a piece of crap to start with that it ends up being rewritten from scratch by the aforementioned experienced human programmer.

      • by tomxor ( 2379126 )
        You don't know what the singularity is, do you? Neither can you fathom the distance in time between current AI and that vague, distant, possibly non-existent point in the future. TL;DR: AI doesn't write software.
    • by Kjella ( 173770 ) on Wednesday May 17, 2017 @01:07PM (#54435071) Homepage

      Indeed, most software is governed by business rules, and being able to explain exactly what the system will do and why is essential. If, say, budgets over $100k need board approval, then someone has to program exactly that and nothing else. AI is great when the outcome is more important than the reasons behind it, like: does this patient have cancer? If it can consult a huge database of cases and make millions of statistical weights, we don't really care how it arrives at 83% as long as roughly 83 out of 100 patients end up actually having cancer. Then it's usually back to business rules for further examination/treatment though. More AI = more software work, not less.
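      A toy sketch of the contrast being drawn here, in Python: the board-approval rule must be exact and auditable, while the risk score only has to be statistically useful. The function names, the weights, and the 0.8 routing threshold are all invented for illustration.

```python
import math

BOARD_APPROVAL_THRESHOLD = 100_000  # the exact business rule: budgets over $100k go to the board

def needs_board_approval(budget_amount: float) -> bool:
    """Deterministic rule: same input, same answer, fully explainable."""
    return budget_amount > BOARD_APPROVAL_THRESHOLD

def cancer_risk_score(features: list[float], weights: list[float]) -> float:
    """Statistical model: a weighted score nobody has to interpret line by line,
    as long as it is calibrated (roughly 83 of 100 patients scored at 0.83
    really do have cancer)."""
    raw = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-raw))  # squash the weighted sum into a probability

# The rule *is* the spec; the score only feeds back into more rules.
if needs_board_approval(120_000):
    print("route budget to the board")
if cancer_risk_score([1.2, 0.4], [1.5, 0.5]) > 0.8:
    print("route patient to further examination")
```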

      • Then it's usually back to business rules for further examination/treatment though. More AI = more software work, not less.

        By which, of course, you mean more software work to replace traditional software with AI like Big Blue, which, as in this article, figured out exactly which cancer a woman had AFTER they knew she had cancer.

        https://www.alumni.hbs.edu/sto... [hbs.edu]

        "Watson was trained on cancer at Memorial Sloan Kettering in New York City—reviewing research, test results, even doctors’ and nurses’ notes to discover patterns in how the diseases develop and what treatments work best."

        https://www.fastcompany.com/30... [fastcompany.com]

        I

    • by Junta ( 36770 )

      Additionally, contrary to what Google and nVidia will tell you, there aren't *that* many people with a good idea of a useful goal even when they understand the principles.

      A similar problem has afflicted 'big data': lots of people who know the principles and can do useless examples, but not that many people who have an idea what to do with those techniques.

  • Hype cycle (Score:3, Informative)

    by Matt Bury ( 4823023 ) on Wednesday May 17, 2017 @12:16PM (#54434669)
    Yes, we're very much at the start of the new tech hype cycle https://en.wikipedia.org/wiki/... [wikipedia.org] for AI. Let's see just how revolutionary and useful it turns out to be.
    • Yes, we're very much at the start of the new tech hype cycle

      Keep in mind that most tech hype is actually correct, even if premature. People laughed in the 1980s when hypers predicted that home computers would be popular, and in the workplace there would be a computer on every desk. But that is what happened. Likewise, people rolled their eyes in the 1990s at the notion that online shopping would be popular, and many people predicted that smartphones and social media were passing fads.

      • Re:Hype cycle (Score:5, Interesting)

        by Jawnn ( 445279 ) on Wednesday May 17, 2017 @02:06PM (#54435513)

        Yes, we're very much at the start of the new tech hype cycle

        Keep in mind that most tech hype is actually correct, even if premature. People laughed in the 1980s when hypers predicted that home computers would be popular, and in the workplace there would be a computer on every desk. But that is what happened. Likewise, people rolled their eyes in the 1990s at the notion that online shopping would be popular, and many people predicted that smartphones and social media were passing fads.

        Well, no. I didn't reject the notion of PCs, or ecommerce. I do reject the notion of "AI" becoming a thing in my lifetime. First of all, what's being hyped as AI is not AI, as AI has been defined. At most, we're talking about "machine learning", not the same thing at all.

        • I do reject the notion of "AI" becoming a thing in my lifetime.

          AI is already in your life. When you insert a handwritten check into an ATM machine, the correct amount pops up on the screen. How do you think that works? Hint: There is no little man inside the machine.

          First of all, what's being hyped as AI is not AI, as AI has been defined.

          You might want to recheck the definition. The term "Artificial Intelligence" was first used by John McCarthy in 1956 to describe the work being done 61 years ago. You should get your definitions from people working in the field, rather than from Hollywood movies starring Will Smith.

          Would you say that ph

          • by Junta ( 36770 )

            Not all machine vision is ML. Given the timeline, most check reading is not inference after ML training. Ditto for a lot of face recognition.

            That's one of the peculiar things: ML training for image recognition of new things is a popular demonstration, but most of the day-to-day image recognition was done using techniques prior to that being practical. Now we can much more easily create new recognition, but demos still like to use faces because it's approachable. Counterproductively though, a high end ma

            • Not all machine vision is ML.

              Sure. But ANNs generally produce far better results, and are rapidly replacing older algorithms. For instance, at recognizing digits, the best non-NN algorithms running on the standard MNIST database get about 85% correct, for an error rate of 15%. The best modern deep ANNs get 99.8% correct, for a 0.2% error rate. That is an error rate reduction of almost 99%.
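              For what it's worth, here is a rough, hedged sketch of the kind of comparison being argued about, using scikit-learn's small built-in 8x8 digits set rather than full MNIST, so the accuracies it prints will not match the figures quoted above; it only shows the shape of the experiment (a classical baseline versus a small neural net).

```python
# Classical baseline vs. small neural net on handwritten digits (illustrative only).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

baseline = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X_train, y_train)

print("k-NN baseline accuracy:", baseline.score(X_test, y_test))
print("small neural net accuracy:", net.score(X_test, y_test))
```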

              Given the timeline, most check reading is not inference after ML training.

              I don't think so. ATMs began switching to ANN software more than 10 years ago. Many, especially in the developing world where human labor is ch

          • Um, image/letter recognition is not AI.
        • by lgw ( 121541 )

          Like many Slashdotters, your idea of "real AI" is something from SciFi, not from the field. We have real AI right now, produced by real AI developers based on papers in peer-reviewed AI journals. This thing we have? That's what AI is.

          You need a new term for machine consciousness, if that's what you mean.

          Also, chickens are dinosaurs, for the same reason. The scientists in the field get to define the terms. That's just how it works.

        • Yes, we're very much at the start of the new tech hype cycle

          Keep in mind that most tech hype is actually correct, even if premature. People laughed in the 1980s when hypers predicted that home computers would be popular, and in the workplace there would be a computer on every desk. But that is what happened. Likewise, people rolled their eyes in the 1990s at the notion that online shopping would be popular, and many people predicted that smartphones and social media were passing fads.

          Well, no. I didn't reject the notion of PCs, or ecommerce. I do reject the notion of "AI" becoming a thing in my lifetime. First of all, what's being hyped as AI is not AI, as AI has been defined. At most, we're talking about "machine learning", not the same thing at all.

          Perhaps you need to understand and grasp the fact that AI doesn't need to be perfect or even close to become a significant disruption to our environment. Hell, we only really need automation to be adopted on a large scale to initiate the destruction of human employment. AI will be nothing more than the final iteration once it comes to fruition.

      • I rolled my eyes when they said that in the year 2000 there'd be hotels on Mars and we'd all have jetpacks.

    • by Junta ( 36770 )

      Incidentally, I think that curve is generous. It's a way for Gartner to say 'just because the hype died down, our insight on that fad is still valuable because it will come back'. I would say more often than not, the 'disillusionment phase' is a bit more persistent, and 'enlightenment' is not something that frequently improves the fate of a 'dead' hype.

  • Hand-typing Forms (Score:4, Informative)

    by WDot ( 1286728 ) on Wednesday May 17, 2017 @12:20PM (#54434709)
    Almost 10 years ago I had an internship in a credit-card processing center. Many transactions were done over computer networks at that point, but there were still a few transactions done with "knucklebusters." This could be either because the store was remote or because it was a backup when the higher-tech point-of-sale devices were down. These machines made manual impressions of the embossed credit card numbers. These impressions were then mailed to the office, where secretaries typed the numbers in by hand. By the time I came, there was a special internal application that extracted individual images of numbers, so that secretaries just had to sit at a desk, look at the number, and type up what number they thought it was.

    "AI" (or computer vision techniques, or whatever) would make this task unnecessary, as a neural network could solve this with pretty much 100% accuracy. A couple of extra checks could prevent most mistakes. I know software, databases, and the Internet have swallowed up a lot of printed forms, but there's still a lot of human labor that involves finding boring patterns in reams of paperwork. Seems like "AI" has a lot of opportunities to automate these tasks.
    • by jeremyp ( 130771 )

      What you describe is already reality. I wouldn't describe it as artificial intelligence.

      • Re: (Score:2, Insightful)

        by WDot ( 1286728 )
        Slashdot is the only place I know of that has such a ridiculously restrictive, illogical definition of "AI." To Slashdot, something is only "really" AI if it works exactly like the "AI" in TV and movies. What if it translates speech into words, like a human can? Not AI. What if it can identify objects in an image, and write unique natural language sentences about them, like a human can? Not AI.

        The reason I say it's illogical is because it's not even a difference in kind, it's a difference of degree. If w
        • by rockmuelle ( 575982 ) on Wednesday May 17, 2017 @01:17PM (#54435153)

          Handwriting, speech recognition, and image processing along with their machine learning foundations do not impress the older /. crowd because they are not new technologies.

          Dragon has been doing speech recognition better than Siri for almost 20 years. Simple command-based systems that only recognize a few words have been around longer than that.

          Handwriting recognition for constrained tasks is also not new. The US Postal Service has had ZIP-code OCR systems since the 1980s.

          Feature detection in images is not new, either. The only thing that's really changed there is we have the processing power to do it at scale.

          Going beyond the applications, all the modern "AI" systems are simply classifiers on steroids. Processing power and greater storage capacity allow us to work on larger data sets, but in the end, we're just creating complex hyperplanes to bin data in one bucket or another.
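          A minimal illustration of the "hyperplanes" point: a bare perceptron literally learns one separating hyperplane w·x + b = 0, and deeper nets just stack and bend many of them. The 2-D toy data below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])  # two toy blobs
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(20):                       # a few perceptron passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi             # nudge the hyperplane toward misclassified points
        b += (yi - pred)

print("learned hyperplane (w, b):", w, b)  # every point is binned by the sign of w.x + b
print("training accuracy:", np.mean((X @ w + b > 0).astype(int) == y))
```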

          Machine learning algorithms are great tools and it's great that we have the compute resources to really leverage them, but there's nothing really new that wasn't obvious 30 years ago. The only question was when we'd have the compute power to start doing the cool things we knew they could be used for.

          (ok, I'll give a little credit to the deep learning researchers for bringing neural nets back into vogue, since those were written off 30 years ago during the AI winter, but they're still just classifiers from the mathematical perspective).

          -Chris

          • by rockmuelle ( 575982 ) on Wednesday May 17, 2017 @01:19PM (#54435163)

            And I should reply directly to the parent: statistically stringing together text is also not new. We just have better collections to start training algorithms with.

          • Handwriting, speech recognition, and image processing along with their machine learning foundations do not impress the older /. crowd because they are not new technologies.

            Current handwriting, speech recognition, and image processing based on deep NNs is dramatically better than a decade ago. Error rates have gone down by an order of magnitude. If you are not impressed, then you are not paying attention.

          • by Dog-Cow ( 21281 ) on Wednesday May 17, 2017 @01:55PM (#54435421)

            Siri uses the Dragon engine for speech to text. It's literally impossible for Dragon to be doing it better.

        • The problem is that companies that have a vested interest in their so-called 'AI' research paying off (or getting funded in the first place) are convincing the media, and by extension, the general public, into conflating the fantasy AI of TV and movies with the extremely limited pieces of software they're currently producing, which are not even as smart as a dog. People do not know the difference! I'll bet you MONEY that the average person believes that so-called 'self driving cars' will have conversations
        • Electronic computers aren't revolutionary. Sure, it's a faster version of what we've been doing https://en.wikipedia.org/wiki/... [wikipedia.org], but the fundamental concept isn't new... Sometimes speed and cost fundamentally change things even if the underlying concepts aren't new.
        • by Junta ( 36770 )

          The problem is what people are hyped about and how it doesn't connect to the proclaimed conclusions.

          ML Training and inference do not have any plausible path to replacing 'programming', even in theory. So this specific claim is ridiculous, since this is pretty much the *only* 'AI' branch people are talking about, because we've traversed a moderately interesting inflection point in terms of hardware and research for training.

        • Slashdot is the only place I know of that has such a ridiculously restrictive, illogical definition of "AI."

          It's called strong AI, [wikipedia.org] and it's what the media typically means when they refer to AI.

        • AI is the dumbest term, and always has been.

          "Artificial cognition" would have been better, but here's the rub: it biases the conversation towards the perceptual foundations of intelligence: the auditory and visual systems. And there was no way back in the 1950s to build either. Not enough tubes. Not enough aircraft hangers. Not enough Hoover dams.

          But you could build a very primitive chess computer, and then pretend that from the top of this skinny beanstalk, one could directly assault the penthouse suite

      • What you describe is already reality. I wouldn't describe it as artificial intelligence.

        Yes, it is reality, and it certainly *is* AI. Image recognition is done using machine learning, and is exactly the sort of thing that AI researchers work on.

        I can understand the general public thinking that "AI" only means human-level intelligence, because that is what they see in the movies. But it is surprising how common this misperception is even on a nerd forum.

  • by PopeRatzo ( 965947 ) on Wednesday May 17, 2017 @12:22PM (#54434723) Journal

    I think the Nvidia CEO's been microdosing again. In large quantities.

    • by mysidia ( 191772 )

      AI will drink software's milkshake

      Who cares, as long as it brings all the Boys to the yard?

      I understand NVidia has some products in this area regarding machine learning; they are a chip maker, after all.
      So the claim could just be the typical sort of self-serving thing CxOs say -- a marketing message trying to pique people's interest in AI silicon.

  • by Baron_Yam ( 643147 ) on Wednesday May 17, 2017 @12:27PM (#54434783)

    Nothing we're seeing these days is actually AI.

    Until I can have a conversation with an artificial entity that can reason abstractly, extrapolating from experience to novel concepts it is introduced to, we're not there. (Technically, the conversation part is not required, but it's useful as a human interface.)

    We're seeing complex decision trees based on statistics, not AI.

    • Re: (Score:3, Insightful)

      Nothing we're seeing these days is actually AI.

      You should try to learn what "AI" actually means. Look up "Strong AI" and "Weak AI", also referred to as "Hard AI" and "Soft AI".

      We're seeing complex decision trees based on statistics, not AI.

      No, what NVidia is talking about is not "decision trees".

      • by Baron_Yam ( 643147 ) on Wednesday May 17, 2017 @01:17PM (#54435151)

        >You should try to learn what "AI" actually means. Look up "Strong AI" and "Weak AI", also referred to as "Hard AI" and "Soft AI".

        No, people involved should stop misusing terms to make their work sound more impressive.

        They've got the artificial part down, but so far they've made zero progress on intelligence... prefixing 'weak' or 'soft' doesn't change that.

        • No, people involved should stop misusing terms to make their work sound more impressive.

          The "people involved" coined the term, so it is not they but Hollywood that is misusing it. The term "Artificial Intelligence" was first used by John McCarthy in 1956 at a conference at Dartmouth Univ. They were working on playing checkers and chess, image and voice recognition, and other stuff that you are claiming is "not AI".

          Artificial Intelligence: Machine learning, object recognition, natural language processing, etc.
          Science fiction: Human level consciousness

          Hollywood gets these confused, but you s

    • Hear, hear. Good to see that there is some REAL intelligence out there, not just bobble-heads nodding blindly in agreement with media hype.
    • My dog is an intelligence that can't carry a conversation. She's also smarter in a lot of ways than any AI is likely to need to be. I don't think you've laid out proper necessary or sufficient conditions to consider what being an AI would require or entail.
    • by lgw ( 121541 )

      Nothing we're seeing these days is actually AI.

      Until I can have a conversation with an artificial entity that can reason abstractly to extrapolate experience to apply against novel concepts to which it is introduced, we're not there.

      That's a definition of AI used in SciFi, not by the people who actually get to define the term. We've had AI since the 70s, ever growing in the set of problems it can usefully solve. Voice recognition? AI. Machine vision? AI.

      The scientists in the field get to define the terms, not Hollywood.

  • by jeremyp ( 130771 ) on Wednesday May 17, 2017 @12:37PM (#54434853) Homepage Journal

    I'd like to RTFA, but there is no link to TFA.

    Anyway, it's bullshit. There's no reason why an intelligent computer would be any better at writing software than an intelligent human. More importantly, an intelligent computer might decide it doesn't want to write software.

    • >There's no reason why an intelligent computer would be any better at writing software than an intelligent human.

      Except you likely would design it to be obsessively interested in programming to spec, and not get distracted by watercooler talk, problems at home, medical issues, exhaustion, the hot chick at the end of the cubicle farm, etc.

      > More importantly, a intelligent computer might decide it doesn't want to write software.

      I have no urge to eat mice, but my cat does. (Also small birds and the occas

      • The vast majority of people who are talking about so-called 'AI' have no clue what they're talking about, starting with the fact that what they're talking about isn't real artificial intelligence in the first place, so you shouldn't expect them to make any sense at all.
      • by Junta ( 36770 )

        Except you likely would design it to be obsessively interested in programming to spec...

        Actually this is how a lot of crappy software happens. The 'spec' is generally lacking in vision/understanding.

    • There's no reason why an intelligent computer would be any better at writing software than an intelligent human.

      Better? Maybe not.
      Faster? Very likely.
      Cheaper? Well, that is the real goal.

      It is likely that an AI and a human would make different errors. A human would likely be better at overall design and structure. An AI would likely be better at low level coding, and avoid silly syntax errors. So use AI-Human pair programming.

      • by Dog-Cow ( 21281 )

        I've never understood why anyone thinks an AI would be better than the best humans. But anyway, we already have software that perfectly deals with syntax errors -- the compiler. Avoiding syntax errors isn't really necessary. The time spent fixing syntax errors is absolutely minuscule, compared to the time spent on every other aspect of development.

        • by Junta ( 36770 )

          Or even the worst humans....

          At this point, ML can do some useful tricks, but it takes a gigantic amount of resource to train something so that it could almost kind of sort of compete with toddlers at very specific recognition tasks.

    • ...There's no reason why an intelligent computer would be any better at writing software than an intelligent human. More importantly, an intelligent computer might decide it doesn't want to write software.

      How much more intelligent have humans become in the last century or two?

      I'm talking about actual capability and intelligence, not ingenuity. Sure we've created some amazing technology born of newer concepts, but our capability has not really increased since the days of Einstein. This tends to prove we have a finite limit, which AI will likely not find.

      The simple fact that machines can operate at speeds much faster than a human will ever be able to operate proves how superior they could become simply from

    • by ljw1004 ( 764174 )

      I'd like to RTFA, but there is no link to TFA. Anyway, it's bullshit. There's no reason why an intelligent computer would be any better at writing software than an intelligent human. More importantly, an intelligent computer might decide it doesn't want to write software.

      It's not bullshit. You're misunderstanding what the AI does.

      * Imagine I asked for a module to classify whether an image has a cat, or a dog, or not. I'd use this maybe for targeted advertising, e.g. to gather information about people. You might try to write this classifier by hand using convolutions, edge detection, heuristics to look for circles with pointy triangles and so on, but it'd be terrible. A machine-learning classifier will do much better (see the sketch after this list).

      * Imagine I asked for a module to inspect the stream of ba
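      A hedged sketch of the cat/dog/neither classifier from the first bullet, in PyTorch. The tiny architecture is arbitrary and the training batch is random noise standing in for labeled images; the point is only that the decision logic is fit from examples instead of hand-written edge detectors and heuristics.

```python
import torch
import torch.nn as nn

class TinyCatDogNet(nn.Module):
    def __init__(self, num_classes: int = 3):         # cat, dog, neither
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                              # x: (batch, 3, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = TinyCatDogNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 fake 64x64 RGB "images" with made-up labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 3, (8,))

for step in range(10):                                 # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print("predicted classes:", model(images).argmax(dim=1).tolist())
```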

  • by Falos ( 2905315 ) on Wednesday May 17, 2017 @12:46PM (#54434905)

    Is anyone going to post the XKCD? Alright, guess I'll grab it, here.

    https://xkcd.com/1838/ [xkcd.com]

  • by Gravis Zero ( 934156 ) on Wednesday May 17, 2017 @12:54PM (#54434969)

    Honestly, this is just a simple advertising effort to get people to buy their hardware. AI isn't about to eat software; it will be at least a century or two before we have intelligent machines. Until then, the greatest thing neural networks can do is mimic existing software (a super niche need) or assist programmers in making software.
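    To make the "mimic existing software" point concrete, here is a hedged toy in scikit-learn: a small neural net is trained on the input/output behaviour of an existing rule (a made-up eligibility check), rather than on a spec. It agrees with the original on most inputs, but only statistically, which is exactly why this is a niche trick.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def legacy_is_eligible(age: np.ndarray, income: np.ndarray) -> np.ndarray:
    """Stand-in for the 'existing software' whose behaviour gets imitated."""
    return ((age >= 18) & (income > 30_000)).astype(int)

# Sample the legacy function to build a training set of its input/output pairs.
rng = np.random.default_rng(0)
age = rng.uniform(0, 90, 20_000)
income = rng.uniform(0, 100_000, 20_000)
X = np.column_stack([age, income])
y = legacy_is_eligible(age, income)

mimic = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0))
mimic.fit(X, y)

# High agreement, but no guarantee at the decision boundary -- unlike the one-line rule.
print("agreement with the legacy rule:", mimic.score(X, y))
```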

  • The world is eaten by software
    Software is eaten by AI
    AI eats humans (or at least converts them into an energy source)

  • by Rick Schumann ( 4662797 ) on Wednesday May 17, 2017 @01:05PM (#54435049) Journal
    I'm far from convinced that so-called 'AI' (LOL) is going to 'eat' anything (other than perhaps two-digit IQ venture capitalists' money), but if it's going to eat anything, I'd like to see it eat the jobs of tech pundits who have no bloody idea what they're talking about (and/or are talking out of their asses, just to get the aforementioned VCs' monies flowing in their direction); I think even the half-assed 'deep learning algorithms' (again, LOL) would do a better job than these fools who are continually running off at the mouth.
  • I don't know why she swallowed an AI
    Perhaps she'll die.

  • And small ants from outerspace are going to eat our brain.

  • by Greyfox ( 87712 ) on Wednesday May 17, 2017 @01:51PM (#54435377) Homepage Journal
    Magic is going to eat both of them. What the hell, right? They're all the same to a CEO. I'm sure AI is the silver bullet that will end all software, but magic is the silver bullet that is going to end AI! Because magic! You still have to tell an AI what you want, and a lot of those guys can barely form a coherent thought, much less put it down on paper. They're too busy synergizing their paradigms! Well magic solves that problem! You don't even have to know what you want! You just wave your magic wand and magic will make you crap daisies and unicorns! And isn't that really what they want?
    • by Anonymous Coward

      You're not wrong about CEOs in general, but you are wrong about Jensen Huang.

      He's an engineer who designed chips. I worked for him for a few years; he's not the typical frat-boy CEO using his company as an ATM.

      He's smarter than you, but that doesn't mean he's not using some buzzwords to hype Nvidia.

  • Link to article: https://www.technologyreview.c... [technologyreview.com]
  • I can see its benefits for medicine, law, and the military, but that's just it, with emphasis on "military." AI's roots go back to Alan Turing, its father and Enigma code cracker. AI's true purpose is total compliance by removing the efficacy of passwords and digitally fingerprinting everyone, always being watched. It sounds ridiculous, but we actually do have the machines to do it, it's just that, figuratively speaking, our AI is in the 5th grade but will be in college in just a few years and its "family" is
  • Has anyone else noticed this trend of CEOs saying stupid things?

    Perhaps they're worried that if they aren't in the spotlight for a while they'll cease to exist, or maybe it's the latest fad from some "guru".
