
Google's James Manyika: 'The Productivity Gains From AI Are Not Guaranteed' (ft.com) 63

Google executive James Manyika has warned that AI's impact on productivity is not guaranteed [Editor's note: the link may be paywalled], despite predictions of trillion-dollar economic potential. From the report: "Right now, everyone from my old colleagues at McKinsey Global Institute to Goldman Sachs are putting out these extraordinary economic potential numbers -- in the trillions -- [but] it's going to take a whole bunch of actions, innovations, investments, even enabling policy ... The productivity gains are not guaranteed. They're going to take a lot of work." In 1987 economist Robert Solow remarked that the computer age was visible everywhere except in the productivity statistics. "We could have a version of that -- where we see this technology everywhere, on our phones, in all these chatbots, but it's done nothing to transform the economy in that real fundamental way."

The use of generative AI to draft software code is not enough. "In the US, the tech sector is about 4 per cent of the labour force. Even if the entire tech sector adopted it 100 per cent, it doesn't matter from a labour productivity standpoint." Instead the answer lies with "very large sectors" such as healthcare and retail. Former British prime minister Sir Tony Blair has said that people "will have an AI nurse, probably an AI doctor, just as you'll have an AI tutor." Manyika is less dramatic: "In most of those cases, those professions will be assisted by AI. I don't think any of those occupations are going to be replaced by AI, not in any conceivable future."


  • by byronivs ( 1626319 ) on Monday September 02, 2024 @12:17PM (#64756476) Journal
    Whaaaaaaaat?
    • If you were going for Funny, the joke didn't stand. Nor the vacuous Subject.

      However I do have a minor personal experience to share about ChatGPT losing its marbles. Project was file analysis using HTML with embedded JavaScript. First few sessions seemed quite productive, with lots of functionality, but then the so-called AI started cutting pieces away, seemingly at random. Maybe someone has a constructive suggestion?

      Asking for constructive suggestions on Slashdot? Now that's ROFLMAO.

      • Comment removed based on user account deletion
        • by Calydor ( 739835 )

          Huh. Ironically sounds a lot like the way an analog vs. digital TV signal works. Once the degradation hits a certain point the digital signal just collapses entirely, while the analog would keep working with ghosting and other artifacts in the resulting image and sound.

          • Did you just say that we are devolving? Was Devo right? "The name Devo comes from the concept of "de-evolution" and the band's related idea that instead of continuing to evolve, mankind had begun to regress, as evidenced by the dysfunction and herd mentality of American society."

            • As an aside, "de-evolution" does not make any sense. Evolution is not directed, individuals cannot be said to be more evolved or less evolved. There is only adaptation to the environment. As the environment changes, adaptation occurs, whether that means revisiting previous solutions or inventing new ones is random.

              Evolution as a directed process from simple to complex is an idea closely related to the Christian conception of Man as the pinnacle of natural selection, which itself was a grudging 20th centur

          • Comment removed based on user account deletion
        • by shanen ( 462549 )

          Thoughtful and interesting reply. From another source I'm thinking about taking a fresh swing at the same problem with Claude (from Anthropic). However the initial impression was that it's quite similar to ChatGPT.

      • by narcc ( 412956 )

        You're dramatically overestimating what is possible with an LLM. Try to remember that there is a disconnect between what you'd expect from the interface and what's really happening in the background. LLMs generate text on the basis of how tokens appeared in relation to one another in the training data. It's not operating on facts and concepts. It's not composing replies after careful analysis and reasoning. Those things are not possible for an LLM.

        The Three Mile Island disaster was caused by a similarly
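        [Ed. note: the token-statistics point above can be illustrated with a toy sketch. This is a hypothetical bigram model over a made-up corpus; real LLMs use neural networks over long contexts, but the principle of generating from co-occurrence statistics rather than from facts or reasoning is the same.]

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction: pick each next word based
# only on counts of what followed it in "training" text. No facts, no
# concepts -- just statistics over token adjacency.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # duplicates act as frequency weights

def generate(start, n=5, seed=0):
    """Emit up to n tokens, each sampled from what followed the last one."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:  # dead end: token never had a successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # fluent-looking, but purely statistical, output
```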

        • What you say of LLMs is true but they're the public poster-child to woo investors & the public in order to push up valuations & generate more investment. In that sense, LLMs are working incredibly well.

          Meanwhile, in the boring backrooms, they're more than likely selling models based on other data, non-linguistic data. It's just machine learning, glorified Bayesian inference that can work on any data that has consistencies/recurrent patterns in it, e.g. photos & music. Also, if you take out th
          • What's becoming more obvious as time goes on is that the current levels of hype & mountains of money being poured into AI isn't sustainable. Here comes Gartner's trough of disillusionment; what a roller-coaster ride!
        • by shanen ( 462549 )

          Hmm... Not sure I had any clear expectations of what it could do. I saw it more as an almost random experiment that produced some surprising results at first and then went quite sour...

          I've engaged ChatGPT in a number of dialogues. Some interesting, but I also suspect some of them may be harmful. Too easy for me to think like that? (Old joke: "Too much computer use is bad for mental hygiene.")

  • by Rosco P. Coltrane ( 209368 ) on Monday September 02, 2024 @12:25PM (#64756500)

    when even the biggest proponents of the scam are starting to gently prepare the believers that the promises might be emptier than anticipated.

    • Re: (Score:1, Flamebait)

      by rsilvergun ( 571051 )
      This is just protection in case it takes longer for the stuff to take off and the investments in AI don't immediately pay off. It's basically there to protect them from the SEC. Elon Musk can get away with blowing off the SEC because he has a kind of reality distortion field like Steve Jobs did, but for the most part, if a company overstates its capabilities you're going to have something like Theranos, where the people involved go to jail because they cost rich people real money
      • I think it's probably a wise thing to do, because the talking heads in the media have been absolutely going crazy with how transformative AI will be, and it's affecting stock prices (not just Google's).
    • It sounds more like continuing to string them along. He says the gains are possible, they'll just be delayed until after you invest more, and there are more legal protections for organizations deploying AI.

      It is very artfully-constructed corporate speak. There's a little bit of breaking the ice on the bad news, a little bit of pre-emptive blaming of the government, but still continuing to beg for money.

    • by gweihir ( 88907 )

      Yes. Looks like it. Of course, those trillions will still be spent.

    • Warning: Losses In Mirror Are Closer Than They Appear.
    • Re: (Score:3, Insightful)

      Everything new is old. This reminds me of the 4GL hype of the mid 90s - "nobody will ever have to code again, just drag and drop and you get an application!"

      AI generated code is essentially an invitation to retrain people into programming archaeologists, whose job is to go into code they didn't write, and figure out why it isn't working. As anyone who may have had one of those gigs could attest to, there are a lot of times when the effort to unfuck existing code is greater than what it would be to just wr

  • Illusory at best. People will have to spend time fact-checking the output from this brain-dead excuse for 'technology', and also correcting everything it fucks up.
    Of course the thing I worry about the most is how many people will have to die at the hands of so-called 'self-driving cars' before they ban those death-machines from public roads for good.
  • I don't see any sign that more of anything is actually going to get produced. That's what a productivity gain is. When a worker can produce more of something in the same hours. This increases supply which allows real wages to go up.

    What I do see is the exact same amount of supply but with fewer workers doing the work. So the demand for labor goes down and with it wages but prices at best stay the same or more likely start to climb again because of price gouging by monopolies

    We are entering into a fou
    • I don't see any sign that more of anything is actually going to get produced

      To a capitalist, productivity is how much is being produced per unit of labor.

      There may be no sign that more stuff is being produced, but there are plenty of signs that there's less and less labor due to AI.

    • "When a worker can produce more of something in the same hours."

      How do you actually measure that? If I invest $10 million in the S&P500 and make a few hundred grand a year, is that off-the-charts productivity? How do you measure the productivity of an Exchange-Traded Fund, and if you're ignoring all that income, what good is your productivity statistic anyway?

      • by BranMan ( 29917 )

        Best measure of productivity is output per worker. Take the total value of everything a company produces -- its gross earnings -- and divide that by the total number of people it took to produce it. If that number keeps going up year by year, you have higher productivity. Some companies have reported reaching $1M / employee, or more. That's the measure I'd use.
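        [Ed. note: the measure described above is simple arithmetic; a minimal sketch, with illustrative numbers rather than real company data.]

```python
def revenue_per_employee(gross_earnings, headcount):
    """Productivity proxy: total value produced divided by the
    number of people it took to produce it."""
    return gross_earnings / headcount

# Hypothetical firm: $250M gross earnings, 300 employees.
print(revenue_per_employee(250_000_000, 300))  # ~ $833,333 per employee
```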

    • >We have nukes now so we can't do that this time even if we wanted to.

      Oh you poor thing. Read a little history to disillusion yourself. We're no better than those thousands of years ago. Each genocide committed then can be committed again, and with little more excuse than a little fear here, a little lie there, and a politician who wants a little glory. You think the wealthy of Russia wouldn't unleash hell? Do you think an America shorn of her wealth and power couldn't lash out in desperate hope or ange
    • I've been gaining productivity in reduced time using search engines. For a complicated search, I head on over to Bing, use the free Copilot, and tell it to find what I want. It's not great, but it's often better than Google.com.
  • Nobody bothers to explain how an AI doctor is any better at improving patient outcomes than WebMD. A neural network analysis of specific test results is certainly not what is being sold to CXOs as AI.
  • Only ever implied things are guaranteed to maybe happen...
  • by Baron_Yam ( 643147 ) on Monday September 02, 2024 @01:30PM (#64756694)

    > the tech sector is about 4 per cent of the labour force. Even if the entire tech sector adopted it 100 per cent, it doesn't matter from a labour productivity standpoint

    AI isn't just for coders, it's a tool for everyone who has a computerized but still fairly manual, low-skill, and time-consuming task that's just a bit too complex for a simple script. You're not looking at 4 percent of the labour force, you're looking at all sorts of jobs. And even if AI actually does a worse job, it will do it so much faster and for so much less money it's going to get deployed regardless.

    It's unlikely to replace 100% of anyone's job. It'll be like replacing handwriting with a word processor - it'll make a few tasks much less time consuming and a company that used to have a dozen people doing something might only need half that (but "AI-augmented").

    The funny thing is that all that labour that's going to be made more efficient at a slight cost in quality is all stuff that's more or less unnecessary. Letters we spend half an hour writing so somebody might read the first sentence before calling you or filing it away, that kind of thing. Collecting data, filling out forms, AI agents to talk to customers, simple art generation in whatever media. Diagnostic systems everywhere will be upgraded from flow charts to AI - hopefully with a reasonability check on the outputs.

    AI will churn that crap out so fast we're going to have our own AI assistants to filter it for us and give us summaries of what it thinks we really need to know.

    • I'd like to see an AI that can replace a management team. I'll bet a good 60% of those jobs could be automated.
    • by RobinH ( 124750 )
      Except nobody doing those jobs was writing out letters from scratch every time. They have form letters or they pull up the last one they sent and do a search and replace. Even professionals like psychologists pull up their last assessment report and use search and replace to get 90% of it done, and then go and fill in the details for the specific individual they just tested. And they certainly wouldn't trust an AI to write it for them because they *want* it to mostly say the same thing as the last one.
  • An AI taking my job is stupid, what we need is robots. I mean, why aren't robots farming? Seems like an easy win for robots. After that they could work in Amazon factories, if Amazon could get its suppliers to provide them products in robot-sortable packaging. Then the next frontier for robots is package delivery and logistics.

  • by TechyImmigrant ( 175943 ) on Monday September 02, 2024 @02:05PM (#64756770) Homepage Journal

    Back in the day, there was a program for the Apple ][ called "The Last One". It claimed to be the last program that needed to be written, because it wrote programs and you just fiddled with its UI to tell it what program to write. These were always in the form of wrapping business logic around a primitive database. Of course it didn't catch on.

    Since then, every couple of years, something has been proposed as an automated way of writing programs, and it doesn't catch on because the shortcomings are obvious.
    The shortcomings of AI based code generators are obvious. They don't and can't understand the big picture of a program and so can't code well within that framework.

    In digital logic design, we used to use schematics. Then HDLs and synthesizers started to happen, and it was promised that the synthesizer would design the circuit for you: you just enter the HDL description. Of course, nowadays all RTL is designed in an HDL for a synthesizer, but the synthesizer is not designing, it's mapping between levels of design expression. AI does less than a logic synthesizer because it's not reliable.

    • The shortcomings of AI based code generators are obvious. They don't and can't understand the big picture of a program

      I found the current "AI" tools work rather well when you break the problem down into single steps and build your code up from those like Legos, with the big picture staying in your head... but this still requires you to know how to code... so the marketing didn't deliver, but you can still get use out of it.

  • ...be useful for many things and may even allow us to solve previously intractable problems, the nonsense coming from pundits and futurists is astounding. As the philosopher said, Predictions are hard, especially about the future. One of the definitions of "the singularity" is that it's the point at which the future becomes unpredictable. I claim that the future has always been unpredictable. I remember the early days of personal computers when pundits wrote that people could use computers to store recipes.

    • by gweihir ( 88907 )

      I predict a tsunami of half-baked, crappy AI will be forced on us and the most common tech support question will be "how can I turn this off?"

      Looking at what has been happening in the software space in the last few years, that is the most likely outcome.

  • Meaning the same problems for current software, in valuable and needed industries like medicine or law, still exist.

    Worries about licensing and liability likely prevent a lot of helpful tools from being offered. Add to that the need for expensive domain experts who you might be trying to put out of business, or removing certain tasks from... It's not a recipe for lots of innovation or competition unless you're careful, or angry enough.

    I know other doctors are the ones creating new doctors (so they have re

  • At first it's "just" going to replace humans. Once that's well on its way, _then_ productivity will increase. Like, epic style.

  • by quonset ( 4839537 ) on Monday September 02, 2024 @04:02PM (#64757108)

    "In the US, the tech sector is about 4 per cent of the labour force. Even if the entire tech sector adopted it 100 per cent, it doesn't matter from a labour productivity standpoint.

    If the entire tech sector went to AI, how could there not be a productivity gain? Code would be cranking out at a far faster rate than it is now, bugs would be found and fixed more quickly, usable software would be available far earlier, and more cheaply, for healthcare and retail. That in turn increases productivity across multiple sectors. And this is only code. Think of the equipment which could be produced: routers, switches, NAS, etc. Security should be improved as well since better methods could be developed. And let's not get into the extra productivity from tech support which could provide useful answers to questions.

    Labor force productivity might decline because you would need fewer people to do the same job, but that would be far overshadowed by the productivity gains mentioned above. If he can't see how such a small percentage of use could lead to a larger gain, he has no business being in the position he's in.

    • If the entire tech sector went to AI, how could there not be a productivity gain? Code would be cranking out at a far faster rate than it is now, bugs would be found and fixed more quickly, usable software would be available far earlier, and more cheaply, for healthcare and retail. That in turn increases productivity across multiple sectors. And this is only code. Think of the equipment which could be produced: routers, switches, NAS, etc. Security should be improved as well since better methods could be developed. And let's not get into the extra productivity from tech support which could provide useful answers to questions.

      You just described the ideal scenario of AI. That's also the "bubble."

      None of that is going to happen today, nor tomorrow, nor even in 10 years.

      What this guy is saying is: "Look, we inflated the AI hype to such a high degree that the expectations are going to fall hard and short against reality."

    • If the entire tech sector went to AI, how could there not be a productivity gain? Code would be cranking out at a far faster rate than it is now

      An AI can copy/paste faster than a person can; however, knowing which code to copy/paste is still a domain that a person is stronger in.

      bugs would be found

      ROFLMAO, how is an AI going to recognize what a bug is?

      and fixed more quickly

      Really? REALLY?

      usable software would be available far earlier

      You are clearly not referring to our current and near future AIs here.

      and more cheaply, for healthcare and retail.

      The price of the software has absolutely nothing to do with the price of labor. The price of the software is what the market can carry. Any money saved will go straight to an executive's pockets.

  • And, guess what, even with all that work invested, they are still not certain. In fact, I expect that for LLMs, most will never materialize.

  • the real gains will come not from chatbots and flashy gen AI - rather, at the deeper levels. Shaving a few instructions off of C++ math library functions that are called trillions of times per day is actually a big deal. Giving researchers a few dozen likely candidate compounds to test (out of a universe of millions or more) is actually a big deal (assuming they find a winner among the candidates).
    • the real gains will come not from chatbots and flashy gen AI - rather, at the deeper levels. Shaving a few instructions off of C++ math library functions that are called trillions of times per day is actually a big deal. Giving researchers a few dozen likely candidate compounds to test (out of a universe of millions or more) is actually a big deal (assuming they find a winner among the candidates).

      I agree, 100%... here's the thing. Google, MS, Amazon, and others have invested over a trillion dollars and thrown the best minds in the industry and infinite resources at this new generation of AI. Why are we hearing about Copilot and tools that may SOMEDAY improve life? It's a far easier problem to optimize existing libraries. Why aren't we seeing those announcements trickling in? Python, JavaScript, Java, C#, all have VMs with huge opportunities for efficiency improvements... especially Python. How

  • AI is a set of tools that can be used incorrectly, inefficiently, or for the wrong job. If used correctly, efficiently, and for the right job, it can be very helpful.

  • Somebody should start a regular stock that invests in puts and shorts in likely investor fads. That way I can make money off the bubble poppage without the complexity of puts and shorts. I could get wealthy while investors take it in the "shorts".
