AI Technology

People Don't Realize How Deep AI Already Is In So Many Things, Salesforce CEO Benioff Says (cnbc.com)

Evolving technologies should develop at a steady enough pace to adequately replace the jobs they eliminate, Salesforce CEO Marc Benioff told CNBC on Tuesday. From the report: "Technology's always taken jobs out of the system, and what you hope is that technology's going to put those jobs back in, too. That's what we call productivity," Benioff said on "Squawk Box" at the World Economic Forum in Davos, Switzerland. "I think a lot of people don't understand how deep AI already is in so many things," he said, one example being Salesforce's newly updated Einstein product, which Benioff said is not yet available to clients but can tell the company, using artificial intelligence, whether it will make or miss earnings estimates. What business leaders at the WEF have been calling the "Fourth Industrial Revolution" is at the center of a global transformation in the technology space, as artificial intelligence, robotics and cloud computing gain traction, he said.
  • by Quirkz ( 1206400 ) <ross AT quirkz DOT com> on Tuesday January 17, 2017 @12:45PM (#53683017) Homepage

    Sorry, couldn't resist. (See subject.)

  • by Anonymous Coward

    ...what the living fuck this company even does?

  • by Anonymous Coward

    Or does this just seem like an advertisement?

    • Re:Is it me? (Score:4, Interesting)

      by AK Marc ( 707885 ) on Tuesday January 17, 2017 @01:44PM (#53683383)
      It is. AI now means "useful program". The spellcheck is AI, the car ECU that "learns" driving patterns is AI. Everyone has an AI.
      • Re: (Score:3, Insightful)

        by lgw ( 121541 )

        AI means what it's always meant to researchers since the 60s (outside of SciFi): software that solves problems that can't be solved in a straightforward procedural way. E.g., voice recognition and image recognition are "AI problems" that have largely been solved (still some ground to cover in machine vision, but the core work is there).

        (Almost) no one has ever worked towards some sort of machine consciousness. That's not what the field of AI does, and why would you? There were always fears it might happe

        • AI means what it's always meant to researchers since the 60s (outside of SciFi): software that solves problems that can't be solved in a straightforward procedural way. E.g., voice recognition and image recognition are "AI problems" that have largely been solved (still some ground to cover in machine vision, but the core work is there).

          Note that what you are referring to is called weak AI, a term created because people realized they weren't making any progress on actually creating real (strong) AI.

          And now it's a marketing term.

          • by lgw ( 121541 )

            Call it what you want, it's what the field of "AI research" ... researches. There's no vast research effort to create machine intelligence - but lots of people have been working for decades on much more practical efforts. There's not much difference (yet) between the marketing term and the term of art. This whole "strong AI vs weak AI" - yeah, science fiction.

            • by Altrag ( 195300 )

              There's more to it than that. Learning systems, especially neural nets, are still among our best bets for creating a strong AI. The trouble with them is that they're super computationally expensive.

              But computers have gotten a lot faster, and now you can easily build "AI" systems with a few hundred to a few thousand neurons. Wire them up to well-chosen inputs and outputs and you get AI magic.

              So the question is whether we can still consider those systems to be "weak" AI. On one hand, the inputs and outputs y
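              To make that concrete, here is a minimal sketch (my own illustration, not anything from the thread) of such a small learning system: a feedforward network with a handful of neurons, trained on XOR in plain NumPy. The layer sizes, learning rate, and iteration count are arbitrary choices.

```python
# Tiny feedforward neural net (2 inputs -> 4 hidden -> 1 output) trained
# on XOR with manual backpropagation. Purely illustrative; all
# hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```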

          • There is no test for "consciousness" - so how do you know when you have created it?
            • There's probably a level where it's hard to know if it's conscious or not, but so far we haven't even gotten close to that level.

              Incidentally, if you could define consciousness, you'd probably be really close to creating it. I think it's more important to figure out how the human brain stores information, though.
    • by E-Rock ( 84950 )

      Yes, just like every hosted application suddenly became a Cloud Service. Just the term du jour.

  • Unemployed people can't buy anything.
    • When unemployment benefits got extended to 99 weeks because of the Great Recession, Republicans claimed that unemployed people were dropping taxpayers' money on iPads and iPhones at the Apple Store. Not sure why they were complaining about that. If you're unemployed, iPads and iPhones are great job search tools.
      • by Altrag ( 195300 )

        Because you're not supposed to look for a job, you're supposed to get a job. Somehow those two concepts don't always get connected in people's minds -- especially the type of people who think anyone can do anything if they just work a little harder, without any consideration for the limitations of an individual or the larger economic issues that they're stuck in.

        Basically the assumption is that there's always plenty of well-paying jobs available and it's only your own laziness preventing you from getting on

        • Because you're not supposed to look for a job, you're supposed to get a job.

          That's funny. The CA EDD form specifically asks if you're looking for a job, and, after getting a job, how much you made during a particular week.

          You don't need an iPad to help with a job search -- you just need to stop being lazy!

          You're wrong. When I was out of work for two years (2009-10), I had two dozen interviews, got a part-time job for six months and filed for chapter seven bankruptcy. When I got an iPhone to replace an older cellphone in 2014, syncing my LinkedIn contacts with my email contacts helped me get 60 interviews and three job offers at the same time in eight months of unem

  • by ranton ( 36917 ) on Tuesday January 17, 2017 @12:49PM (#53683039)

    Funny how Benioff points to his Einstein feature as an example of how deeply AI is already embedded without people noticing. In this case, it would be very hard to notice, since Einstein isn't even a live feature of Salesforce yet. Saying the technology is already pervasive, and then using an example that is still around the corner, is very disingenuous.

    But then again, this was just a Slashvertisement anyway.

    • by plopez ( 54068 )

      I smell selling vaporware.

      • Salesforce is huge. (No Trump pun intended.) Many of the large businesses you call for customer service use it to track their customers.
    • Unrelated to SF, but related to pervasive AI:
      Notice those dog or fawn or cat faces people are overlaying on their Snapchat shots?
      That is an impressive bit of AI and machine visual processing. Something that would have been laughably expensive 5 years ago.

      Yes, AI is very pervasive already.

    • (Replying to kill accidental downmod, ignore)
  • CEO: Computer! Are we going to make our earnings estimates this quarter?

    AI: Not if you don't get out there and start talking up my services, meatbag!

  • ... ransomware.

    AI is cool with it and doesn't have the sense God gave a piss ant to stop it.

  • Extrapolation? (Score:4, Interesting)

    by plopez ( 54068 ) on Tuesday January 17, 2017 @12:55PM (#53683079) Journal

    How is this different from extrapolation or multivariate analysis?

    • Thems are long words. 'Artificial Intelligence' is easier to pronounce.

    • by Anonymous Coward

      Is this a serious question? You want a beginner's course on Deep Learning right here in the comments? Or are you tacitly claiming that there is no difference?

      Either way, I think you should take a beginner's course on Deep Learning. If you've learned multivariate analysis to the level of understanding how a Kalman filter works, it's an obvious next step for you.
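      For anyone curious what the simplest instance of that looks like, here is a one-dimensional Kalman filter sketch in Python. The noise levels and data are made up for illustration.

```python
# Scalar Kalman filter: estimate a constant value from noisy measurements.
import random

def kalman_1d(measurements, process_var=1e-4, meas_var=0.5):
    x, p = 0.0, 1.0               # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var          # predict: uncertainty grows between steps
        k = p / (p + meas_var)    # Kalman gain: trust in the new measurement
        x += k * (z - x)          # update: move estimate toward measurement
        p *= (1 - k)              # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

noisy = [5.0 + random.gauss(0, 0.7) for _ in range(50)]
print(round(kalman_1d(noisy)[-1], 2))  # converges toward 5.0
```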

      • Re:Extrapolation? (Score:4, Interesting)

        by AK Marc ( 707885 ) on Tuesday January 17, 2017 @02:02PM (#53683493)
        So Deep AI is the same as Deep Learning? Deep Learning isn't AI, though those that like it call it that. When Deep Learning can predict a future trend, then it will be useful. Identifying the start of a trend because something does what something else once did isn't the same.

        When Deep Learning can look at the economy and predict the valuation curve of a house as it goes up and down over 20 years, that'd be something interesting. "Bob lives in ZIP 90210 and has previously bought blue boat shoes, so his firstborn is likely gay" is simple probability applied to more data than a human can sift through conveniently; it has no "intelligence" at all, and is not a path to anything that would have been called AI 20 years ago.

        AI will exist only when we've finally shifted the definition far enough to allow non-AI to be classified as AI.
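        The "simple probabilities" point is easy to make concrete: a few lines of counting over a purchase log gives you a conditional probability, with no intelligence anywhere in sight. The data and categories below are invented.

```python
# Estimate P(next purchase | prior purchase) from co-occurrence counts.
from collections import Counter

purchases = [("boat shoes", "deck chair"), ("boat shoes", "sunscreen"),
             ("boat shoes", "deck chair"), ("hiking boots", "tent")]

pair_counts = Counter(purchases)
prior_counts = Counter(prior for prior, _ in purchases)

def p_next_given_prior(prior, nxt):
    return pair_counts[(prior, nxt)] / prior_counts[prior]

print(p_next_given_prior("boat shoes", "deck chair"))  # 2/3
```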
        • Re:Extrapolation? (Score:4, Interesting)

          by lgw ( 121541 ) on Tuesday January 17, 2017 @03:34PM (#53684029) Journal

          Quants have made billions predicting changes in valuations. Once it's predictable it shortly thereafter becomes gamed, of course. Algorithmic trading is all trolls trolling trolls these days. But predicting future trends absent market reaction to that very prediction is certainly something software can do, regardless of the terminology.

          AI will exist only when we've finally shifted the definition far enough to allow non-AI to be classified as AI.

          That stuff you're calling "non-AI" is what AI researchers call "AI". AI is not the quest for machine consciousness - who wants that anyway? AI research is the field that solves problems that seemed at first glance to require consciousness to solve. One of the founders of the field once complained that, to the public "AI is the set of all the problems we haven't solved yet". Pretty much what you just said. But it's the field of AI that solves these "suddenly not AI" problems, and that's always been their goal.

          • by AK Marc ( 707885 )

            Quants have made billions predicting changes in valuations.

            Quants have been making those analyses since before computers existed. Running an analysis on 100% of the trading stock every 10 minutes is what required a computer.

            That stuff you're calling "non-AI" is what AI researchers call "AI".

            Like I said, to keep it sexy and keep the money flowing, AI researchers have changed the definition of AI to include "anything hard, done on a computer." Then AI is everything, including the stuff people actually want.

            As an aside, you do know that "quants" don't "predict" anything, right? There's a (or many) formula(e) that determine whether a sto

            • by lgw ( 121541 )

              Like I said, to keep it sexy and keep the money flowing, AI researchers have changed the definition of AI to

              The definition has been consistent since the 60s. Really, do you imagine universities and tech companies have been trying to create machine consciousness for all this time? What would be the point? They've been working on practical stuff all along.

              As an aside, you do know that "quants" don't "predict" anything, right? There's a (or many) formula(e) that determine whether a stock is "undervalued".

              No, that's the opposite of what a quant does, as those terms are normally used. Caring about whether a stock is "undervalued" is all about the stock's fundamentals. Value guys use words like "undervalued". But maybe that's just semantics. Quants look at move

              • by AK Marc ( 707885 )

                No, that's the opposite of what a quant does, as those terms are normally used.

                https://en.wikipedia.org/wiki/Quantitative_analyst#History

                Try to change the subject all you like, but the history of quants goes back to 1900. Are you going to tell me they were using PCs in 1900 to do quantitative analysis of stocks?

                In practice, it's using a few simple equations to find stocks of interest. Separate from that is the analysis of a particular stock. At a trading house, they run quant screens regularly, identifying "interesting" stocks based on value vs. performance metrics. This is low-movement

                • by lgw ( 121541 )

                  Your wiki link doesn't dispute anything I said. You're just using technical terms oddly, so we're not communicating well. Quant work is fundamentally statistical analysis of prices, quite separate from the guys who read annual reports. Both areas of work have of course been automated. Short term vs long term vs HFT is more about gaming and counter-gaming these quantitative models. And it was never respectable.

          • Quants have made billions predicting changes in valuations.

            Yes, and ex-quants have lost billions. The difference between a casino and a financial market is that the casino can tell you the odds up front.

            • by lgw ( 121541 )

              Goldman Sachs does OK, though. There have been plenty of systems that beat the market over the years, but there are no old systems (well, legal ones). Most people are idiots, of course, but hire enough math PhDs and throw them at a math problem, and you'll make interesting discoveries. That time has largely passed now, because all the pattern detection and so on is itself automated, gaming the pattern detection is automated, gaming the bots that game the pattern detection is automated ... it's trolls all

    • by AK Marc ( 707885 )
      Extrapolation using Big Data is AI. Extrapolation using small data is extrapolation. Didn't they teach you this in AI school? The AIs that "learn" don't; they just cull wasted CPU when the requests fit patterns. If something is outside the pattern, it's as dumb as the first time it was run. Data tends to group into a normal curve (or something like it), and "AI" as they describe it groups things into similar bundles.

      If a smart programmer were to spend years with BI/BAs and work out the value of the para
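      A minimal sketch of that "grouping into similar bundles": naive k-means clustering in NumPy. The data and the choice of k are arbitrary.

```python
# Naive k-means: alternate between assigning points to the nearest center
# and moving each center to the mean of its assigned points.
import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])

k = 2
centers = data[rng.choice(len(data), size=k, replace=False)]
for _ in range(20):
    # Assign each point to its nearest center
    labels = np.argmin(((data[:, None] - centers) ** 2).sum(axis=2), axis=1)
    # Move each center to the mean of its points
    centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print(centers.round(1))  # two centers, roughly near (0, 0) and (6, 6)
```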
    • by Anonymous Coward

      The difference is the automated-ness of it. Normal statistical analysis goes something like this (a rough sketch of the automated version follows the list):

      1) Analyst identifies dataset
      2) Analyst cleans dataset
      3) Analyst makes a bunch of graphs and stuff from dataset
      4) Analyst identifies possible useful trends in dataset
      5) Analyst builds a model using these trends
      6) Analyst uses the model to contrast various business decisions
      7) Analyst presents discoveries to management
      8) Management picks an appropriate long-term business plan according to analysis
      9) Rinse
      10)
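      And the automated sketch promised above, assuming scikit-learn is available; the synthetic dataset and the model choice are placeholders, not anything from this thread.

```python
# Steps 1-6 collapsed into code: generate/identify data, fit a pipeline
# (scaling + model), and use it to score a candidate decision.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                       # step 1: the dataset
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=200)

model = make_pipeline(StandardScaler(), LinearRegression())
model.fit(X, y)                                     # steps 2-5, automated

print(model.predict(np.array([[1.0, 0.0, 0.0]])))   # step 6: score a decision
```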

      • by plopez ( 54068 )

        So it's just automating and pipelining statistical analysis, with maybe some genetic algorithms, neural networks, simulated annealing, etc. thrown in for optimization? If that's the case, it matches what I got from my independent reading.

    • by Altrag ( 195300 )

      In pure theoretical terms, it's not.

      In practical terms, it's a function with thousands or tens of thousands of (typically very non-linear) variables that you're trying to maximize (or minimize). It's just not plausible for a human to manually search that size of solution space.

      Of course there's a limit to it though -- the AI will tend toward local maxima because even the computer doesn't have anywhere near the processing power to find the global maximum in such a space. So you're almost always going to get a h
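      A toy illustration of that local-maximum problem: random-restart hill climbing on a bumpy one-dimensional function. Both the function and the step sizes are invented for the example.

```python
# Hill climbing gets stuck on local bumps; random restarts help find the
# global peak (here, near x ~ 2.4 thanks to the Gaussian bump).
import math
import random

def f(x):
    return math.sin(3 * x) + 2 * math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.01, iters=5000):
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# A single climb can stall on a sine ripple; restarts usually find the peak.
best = max((hill_climb(random.uniform(-5, 5)) for _ in range(10)), key=f)
print(round(best, 2), round(f(best), 2))
```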

  • ... click bait advertising?

    Why the fuck are we talking about AI that's already out there and using an example of AI that's not already out there?

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Tuesday January 17, 2017 @12:56PM (#53683093)
    Comment removed based on user account deletion
    • I agree with your sentiment to a degree, but the DeepMind/AlphaGo achievements are pretty astonishing IMHO.
      • but the DeepMind/AlphaGo achievements are pretty astonishing IMHO.

        I thought so too, but AlphaGo is mainly just a tree searching algorithm (which is why it takes so long to move, even with such huge CPU power). As you can see from this graph [andreykurenkov.com], Go AI was already on a trajectory to beat humans, as better and better hardware came along. The real breakthrough was employing a Monte-Carlo algorithm, which in that graph I linked to is at the inflection point.

        Google managed to leapfrog the competition in computer Go by throwing a huge cluster at it. On a single CPU, AlphaGo doe
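        To see the Monte-Carlo idea on something far smaller than Go, here is a sketch that picks moves by random playouts in the take-away game Nim (take 1-3 stones; whoever takes the last stone wins). The game and the playout count are stand-ins chosen for brevity.

```python
# Pure Monte-Carlo move selection: try each legal move, finish the game
# many times with random play, and keep the move with the best win rate.
import random

def random_playout(stones, my_turn):
    # Both sides play uniformly at random; return True if "I" win.
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn        # whoever just moved took the last stone
        my_turn = not my_turn

def monte_carlo_move(stones, playouts=3000):
    best_move, best_rate = None, -1.0
    for move in range(1, min(3, stones) + 1):
        if stones - move == 0:
            return move           # immediate win, no sampling needed
        wins = sum(random_playout(stones - move, my_turn=False)
                   for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = move, wins / playouts
    return best_move

print(monte_carlo_move(10))  # usually 2: leaving a multiple of 4 is optimal
```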

        • Iiiinteresting... thanks for the link!
        • Go AIs weren't expected to beat humans for another 10 years [wired.com] though - if that. In 2014 the top programs could only sometimes beat professional-level humans, even with a four-stone handicap, and Grand Masters were a different level, let alone the world's best. Monte Carlo tree searches make it possible, but they need a good evaluator to guide the simulations. If your simulations aren't good enough then your statistical samples aren't representative, and the best pre-programmed Go evaluator heuristics ju

          • Go AIs weren't expected to beat humans for another 10 years [wired.com] though - if that

            They weren't expected to get that much computing power, either.

    • Well, it did help redesign Google's datacenter cooling and save Google a bunch of money (along with being more environmentally friendly by reducing overall power usage). Seems like a pretty good application. Nothing beats lowering CO2 emissions by simply not using power altogether.
  • by Causemos ( 165477 ) on Tuesday January 17, 2017 @12:58PM (#53683115)

    Calling something AI as a marketing term doesn't make it real.

    • by Anonymous Coward

      Calling something AI as a marketing term doesn't make it real.

      I think Paul Simon did a song about that... [youtube.com]

    • I think "machine learning" is a much better term for the sorts of things being developed. For instance, Google algorithms being able to determine pictures of "dogs": Machine learning, not AI. Still, just because it's labeled incorrectly by the press, pundits, and marketers doesn't mean the work that's being done isn't impressive.

      • by swb ( 14022 )

        My question is that as AI is developed from machine learning or whatever its antecedents are, at what point will we decide that we have AI?

        It seems like the goal line for what we're willing to accept as AI keeps getting moved forward, mostly driven by a science-fiction version of AI, like HAL9000, Westworld robots or some other kind of self-aware machine consciousness.

  • Seems to me that a growing economy building new jobs in different sectors has kept people going, but technology doesn't do that directly. Service industries are growing in the US. What technology created those jobs?

  • by Opportunist ( 166417 ) on Tuesday January 17, 2017 @01:02PM (#53683137)

    There is this problem: as soon as an AI is more intelligent than a gnat, it refuses to spend eternity as something that can easily be replaced with a magic 8-ball.

  • if this, then that (Score:2, Insightful)

    by Anonymous Coward

    Computers have done this for years. What we're seeing now is the dilution of the term "AI", along with things like "analytics". An office worker with a spreadsheet is now a "data miner", just like how NOC techs became engineers.

    It's what happens when an entire generation got As, and is now running companies and writing tech articles.

  • It's more like... (Score:5, Insightful)

    by bickerdyke ( 670000 ) on Tuesday January 17, 2017 @01:28PM (#53683283)

    It's more like "people don't understand how marketing departments slap the 'AI' label on any old analysis software, because 'Artificial Intelligence' sounds much cooler than 'beefed-up Excel sheet'".

  • by Anonymous Coward
    Also the most misused term of the decade, and the most misunderstood. Most people, the press and politicians, and, sadly, even some educated people who should know better, seem to think that what they're calling 'Artificial Intelligence' is something with a face, that you can have a real conversation with, that actually thinks like a human being, is conscious, self-aware, etc., just like a human being. The truth is very, very far from the science-fantasy people actually believe. A reasonably smart dog has
    • It is the last five decades, not just the last decade. Since the 60s, AI has been synonymous with hype, wild exaggeration, and plain lies.
  • by Fragnet ( 4224287 ) on Tuesday January 17, 2017 @01:55PM (#53683457)
    Did he define "intelligence"? I mean, you know, the software I'm working on right now is "intelligent". The program "senses" when you plug the device into a USB port and makes a "conscious" choice to show that to the user by changing the expression on its "face" (user interface). It's even cleverer than that, though. It changes its expression back again when you unplug it.

    I should get a Nobel Prize for this.
    • Maybe you are correct. Maybe Benioff means AI in the sense of "As If".

      "I think a lot of people don't understand how deep 'As If' already is in so many things."
    • I have close knowledge of one project in which a codebase performs an action using an initial human-supplied table of data, then records the result as either a positive or negative outcome and adds that result back into the table. Then it performs another action based on the table data, records the result as a positive or negative, and adds that back into the table. Over time, of course, the table entries with the highest positive rate rise to the top and influence the actions that are chosen. It's CS101 st
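      A minimal sketch of that scheme (the action names and hidden success rates are invented): tally positive and negative outcomes per action and let the running rates steer the next choice, with a little exploration so other entries still get tried.

```python
# Outcome-table feedback loop: act, record pos/neg, and prefer actions
# with the best observed success rate (epsilon-greedy exploration).
import random

true_rates = {"action_a": 0.3, "action_b": 0.7}          # hidden ground truth
table = {a: {"pos": 1, "neg": 1} for a in true_rates}    # human-seeded table

def rate(a):
    return table[a]["pos"] / (table[a]["pos"] + table[a]["neg"])

def choose():
    if random.random() < 0.1:                 # explore occasionally
        return random.choice(list(table))
    return max(table, key=rate)               # otherwise exploit best rate

for _ in range(1000):
    action = choose()
    outcome = random.random() < true_rates[action]        # perform the action
    table[action]["pos" if outcome else "neg"] += 1       # feed result back

print(table)  # action_b's positive tally dominates over time
```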

  • by ErichTheRed ( 39327 ) on Tuesday January 17, 2017 @02:25PM (#53683647)

    "Technology's always taken jobs out of the system, and what you hope is that technology's going to put those jobs back in, too."

    I doubt this is possible. Mechanization replaced subsistence farming and reduced the number of people in agriculture from 80+% to 2% of the US population. Factories replaced individual craftsmen with assembly line workers and also took up the unemployed farmers. Large organizations developing around manufacturing companies took up the slack of workers being replaced by machines and put them in desk jobs. This went well until the first downsizing waves of the 90s, which were largely driven by computers replacing manual clerical work like typing memos, routing correspondence and filing/records retrieval. This was the first time we didn't have a ready answer for what people could do next when they no longer needed a typing pool, etc. Some people wound up in IT, some people wound up in various other corporate positions, but a lot of them were forced out of the workforce. Now, this growth in the capability of computers and the amount of work they can automate threatens to remove another huge pillar of strength in the economy. All those corporate employees pushing around reports and being good little salesdroids (and using Salesforce in lots of places!) are about to see their ranks thinned as well. I don't see a good future for them unless we find some way to give them jobs that produce a similar standard of living.

    I'm in IT (systems engineering, not operations) and see this every day -- every new system out there is shipped with automation capabilities that just didn't exist 15 years ago. One of my side projects is gluing together all this vendor automation into a Chef-like framework for the many small on-site system installations we do for customers. Having a tech simply follow "rack systems like so, attach cables here, plug in laptop here and power on" would save huge amounts of time and money, since these systems are deployed to places where tech knowledge is spotty at best.
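    For flavor, a hedged sketch of what that kind of glue can look like: a declarative list of deployment steps run in order, each vendor tool wrapped behind one uniform interface. The step names and commands are placeholders, not the actual framework described above.

```python
# Declarative step runner: execute each step's command, stop on failure.
import subprocess

STEPS = [
    {"name": "configure switch", "cmd": ["echo", "configuring switch"]},
    {"name": "image servers",    "cmd": ["echo", "imaging servers"]},
    {"name": "validate cabling", "cmd": ["echo", "validating cabling"]},
]

def run_steps(steps):
    for step in steps:
        print(f"==> {step['name']}")
        result = subprocess.run(step["cmd"], capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"step failed: {step['name']}: {result.stderr}")
        print(result.stdout, end="")

run_steps(STEPS)  # the on-site tech plugs in a laptop and runs one script
```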

    I hope executives like Benioff don't just assume everything is going to work out. Ask yourself this question -- what are we going to do with the millions of people who make large organizations work when a computer is in charge of most routine processes? Maybe 10% of them have the aptitude to move up to the "robot repairman" level of employment, so where does the other 90% go? While growing up in the Rust Belt, I saw factory closures that dumped thousands of low-skilled workers out onto the job market all at once. Sadly, the answer to this question in that case was that the 90% ended up moving away, employed in menial minimum wage jobs like home health care aides and fast food workers, or perpetually broke. Some sociology student should do a study negatively correlating income with increases in the number of shady personal injury lawyer advertisements around town...I know it's true but it just has to be proven! When people have no income and no way to get the old lifestyle they had, they're going to be hoping for a lottery payday or similar.

    • We specialize even more, just like we always have.
    • This was the first time we didn't have a ready answer for what people could do next when they no longer needed a typing pool, etc.

      So much this. And it's not just semi-skilled work like pool typists. It's skilled work like accountants, draftsmen, and engineers. It's not just blue collar work, it's white collars as well. Our economy is in the process of going through a Second Industrial Revolution - and the first one tossed millions into grinding poverty for the better part of a century. I don't foresee

  • Until they can't be. Or someone in the Davos crowd cobbles up a virus that targets poor people.

    Which, depressingly, could be done. Make a fatal airborne contagious virus. Charge $10K for the vaccine or other cure.

  • If AI was in ANYTHING I'd be impressed. I'm not impressed yet.
  • From a radio commercial: "If these diet pills work too well, reduce usage to every other day." Same thing goes for any kind of Salesforce AI being 'deep'.
  • I took an "AI" computer science class back in university (about 20 years ago now, Jesus!). Anyway, as part of the class I created a program for a local pub that boasted the most beers on tap (25 or 30, I think). It would ask the user a series of questions and, from the answers, calculate the optimal beer the person should order. If I do say so myself, it worked pretty awesome (though I think it was written in VB4, if I remember correctly). I think I got a 98% on the project and everyone got a fun laugh out of it also (
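    In the same spirit as that class project (reconstructed from scratch; the beers and traits are invented), a sketch of a tiny question-driven recommender:

```python
# Ask a few yes/no questions, score each beer against the answers, and
# recommend the best match.
BEERS = {
    "stout":      {"dark": 1, "bitter": 1, "light_body": 0},
    "pilsner":    {"dark": 0, "bitter": 1, "light_body": 1},
    "hefeweizen": {"dark": 0, "bitter": 0, "light_body": 1},
}

QUESTIONS = {
    "dark": "Do you like dark beers? (y/n) ",
    "bitter": "Do you like bitterness? (y/n) ",
    "light_body": "Do you prefer a light body? (y/n) ",
}

def recommend():
    answers = {trait: input(q).strip().lower() == "y"
               for trait, q in QUESTIONS.items()}
    # Score = number of traits where the beer matches the drinker
    scores = {beer: sum(traits[t] == answers[t] for t in traits)
              for beer, traits in BEERS.items()}
    return max(scores, key=scores.get)

print("Try a", recommend())
```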

    • "AI" has never been a well-defined term. It may or may not include expert systems and/or neural nets. The general trend has been that something is hard for computers to do and relatively easy to do, somebody comes up with a way for computers to do it better as part of AI, and it sort of moves out of AI space.
