AI Businesses Software Technology

Accenture Chief Says Most Companies Not Ready for AI Rollout (ft.com)

Most companies are not ready to deploy generative AI at scale because they lack strong data infrastructure or the controls needed to make sure the technology is used safely, according to the chief executive of the consultancy Accenture. From a report: The most hyped technology of 2023 is in an experimental phase at most companies and macroeconomic uncertainty is holding back IT spending generally, Julie Sweet told the Financial Times in an interview ahead of the company publishing quarterly results on Tuesday. Accenture reported another big jump in revenues from generative AI projects in the three months to November 30, with $450mn in bookings compared with $300mn over the previous six months. But they remain small relative to group sales of $64bn annually.

Corporate executives are keen to deploy the technology to understand data across their organisation better or to automate more customer service, Sweet said. "The thing that is going to hold it back, though, is ... most companies do not have mature data capabilities and if you can't use your data, you can't use AI. That said, in three to five years we expect this to be a big part of our business." Accenture and other consulting groups have boasted of multibillion-dollar investments in generative AI, including hiring and training staff, in the hope of a windfall from deploying the technology to clients across the world.

Sweet said executives were being "prudent" in rolling out the technology, amid concerns over how to protect proprietary information and customer data and questions about the accuracy of outputs from generative AI models. "We are still at the stage where most CEOs, asked if there is someone in their organisation who can tell them where AI is being used, what the risks are and how they're being mitigated, the answer is still 'no.'"

  • by aldousd666 ( 640240 ) on Wednesday December 20, 2023 @01:43PM (#64094005) Journal
    Safety in this context is about 'brand damage control', not 'personal safety of your users.' Most people cannot ensure that their LLMs only say things that they would not consider to be offensive, offtopic, incorrect, or otherwise damaging to the company putting out the feature. Not to the users.
    • Not every company is OpenAI, with a client-facing chatbot. Using something for the purpose of hiring and training staff, like it says in the summary, is unlikely to offend the company's customers.
      • You're right. But the other kinds of AI like boosted trees have been around for a good long while and didn't require the "Year of AI" to come to market. They've already been here and companies have already decided whether or not they'll deploy them based on better business reasons. Image generation carries the same caveat as a chatbot. And not even all uses of OpenAI are chatbots either.
    • by gweihir ( 88907 )

      That is one concern. The other is chat "AI" leaking data, giving bad advice and doing a lot of damage that way. Just think of the artificial moron inventing company policies and procedures when asked. None of which seems really avoidable at this time.

    • At least you can train an LLM. Try that with people! Oh wait. Ehhh never mind
  • by rsilvergun ( 571051 ) on Wednesday December 20, 2023 @01:47PM (#64094015)
    for IT consultancy services?

    I'm shocked. Shocked!

    Well, not that shocked.
    • by Provos ( 20410 ) on Wednesday December 20, 2023 @01:58PM (#64094079) Homepage

      The despair.com poster has it right:

      CONSULTING
      "If you're not a part of the solution, there's good money to be made in prolonging the problem."

    • I mean...

      At least they're not jumping on the speculative AGI as a business plan bandwagon?

    • by mjwx ( 966435 )

      for IT consultancy services?

      I'm shocked. Shocked!

      Well, not that shocked.

      Yes, this is my surprised face.

      However I suspect it's more "your business is not ready for AI, hire us to make your business ready or miss out on the bajillions promised by AI". Basically a near perpetual consulting gig as I doubt AI will ever live up to what they want it to.

  • As "AI" is fundamentally broken, nobody is ready for any kind of productive rollout.

    • Honest question: What do you believe comes after humans? Whether it be in 10, 1000, or a million years. What are the properties of those beings?

      • by gweihir ( 88907 )

        Why would I believe there is anything that comes after humans?

        • So 'nothing', then? Do you believe that humans are the epitome of what this universe will produce when it comes to intelligent life?

          That seems shortsighted and arrogant.

    • by dvice ( 6309704 )

      Current AIs are good enough for multiple tasks. For example for sorting cucumbers. So they are not fundamentally broken.

      Latest AI from Google can also be used to gather data from scientific papers, with actual working references. So even if they are broken, I wouldn't call it fundamentally broken. It just doesn't currently work so well that you can just ask it to write a scientific paper for you. In 2028 the AI will be pretty much perfect and you can just ask it to write the paper for you, but even before t

      • by gweihir ( 88907 )

        Obviously, I am talking about the current "hype AI", not AI in general. And yes, any tool that can hallucinate, create fake evidence and is _unable_ to flag those results as not accurate is broken. Since for chat AI / LLMs, this behavior cannot be fixed, they are fundamentally broken.

  • by XXongo ( 3986865 ) on Wednesday December 20, 2023 @02:02PM (#64094105) Homepage

    Most companies are not ready to deploy generative AI at scale because they lack strong data infrastructure or the controls needed to make sure the technology is used safely

    Where the "infrastructure" needed is to hire a shitload of human beings to check what the AI did and remove the parts that make no sense, and the parts where the AI simply made up facts that sounded good.

    • LLMs are a small part of generative AI, and generative AI is only part of current AI.

      Even LLMs are improving rapidly. Everyone's heard about the lawyer who got false citations from ChatGPT, but that was a year ago. There have been plenty of improvements since then.

      • by XXongo ( 3986865 )

        Even LLMs are improving rapidly. Everyone's heard about the lawyer who got false citations from ChatGPT, but that was a year ago. There have been plenty of improvements since then.

        Apparently not, since just last week [newsweek.com] the lawyer for Michael Cohen was caught citing fictional cases because he apparently used ChatGPT to file a brief.
        https://arstechnica.com/tech-p... [arstechnica.com]

      • LLMs are a small part of generative AI, and generative AI is only part of current AI.

        Even LLMs are improving rapidly. Everyone's heard about the lawyer who got false citations from ChatGPT, but that was a year ago. There have been plenty of improvements since then.

        It's more likely the reason we haven't heard any more lawyer-related LLM snafus is because of the glaring, omnipresent news coverage about it that followed - everyone else figured out it was a really dumb idea.

  • Curious as to how "most companies" are figured. "Most companies" have nothing to do with computers, other than using them to process billing and such, so it would be obvious that "most companies" do not know "AI".

    Does this person think that "most companies" should know something they don't spend any time using? Or does he just not know many companies, other than his selected few?

  • for new technology. Thanks for the wisdom Accenture Chief, but companies are never ready for the next technology. In the 1990s companies weren't ready for dates with 4 digit years. Companies will figure it out, with or without consultants.

  • by Tony Isaac ( 1301187 ) on Wednesday December 20, 2023 @03:17PM (#64094310) Homepage

    are never "ready" to raise children. The process of having to raise children prepares them. They make mistakes along the way and learn from them, if they are good parents.

    The process of incorporating AI will prepare businesses to use AI. They will make mistakes. Some will learn from the mistakes, some won't.

    • by mjwx ( 966435 )

      are never "ready" to raise children. The process of having to raise children prepares them. They make mistakes along the way and learn from them, if they are good parents.

      The process of incorporating AI will prepare businesses to use AI. They will make mistakes. Some will learn from the mistakes, some won't.

      Given how badly behaved most kids are, I'd say that method is also a failure.

      Same with AI: it'll be sold as a cure-all by high-priced consultants, specifically to "rightsize" the workforce. By the time it fails to live up to the most basic of its promises, the company will have already let much of its workforce go, so they're stuck spending more money trying to fix a turd, the same way a bad parent gives a hyperactive child more sweets in an attempt to quieten them down.

      • Given how badly behaved most kids are, I'd say that method is also a failure.

        You are assuming, apparently, that it is *possible* to have consistently well-behaved children. Most of us learn more from our mistakes than we do from our successes. I would suggest that, while we don't condone bad behavior in children, the mistakes they make, and the consequences of them, are a feature, not a bug.

        Same with AI. Lots of people will make a lot of dumb moves regarding the adoption of AI. And those dumb moves are what it takes to teach them what not to do.

  • Honestly, these big consulting companies throw a bunch of buzzwords at a current theme, and people take them seriously? Our workspace recently got reorganized based on such nonsense. Management thinks it looks sleek and modern. Those of us who work there? Well, there's a lot more WFH now, a choice we thankfully have.
    • Honestly, these big consulting companies throw a bunch of buzzwords at a current theme, and people take them seriously?

      Yeah, thanks but no thanks. We already have an uphill climb to convince the lawyers, compliance, and cyber to let us POC some things. Upper executives see this and we'll be sidelined at least a year.

  • He's correct (Score:4, Interesting)

    by gtall ( 79522 ) on Wednesday December 20, 2023 @03:55PM (#64094422)

    Take Accenture and other "consultancy" companies. It seems to me they could all be replaced with an AI bot that spews the latest whizzy du jour management advice. Hell, the right company could sell their "Bot in a Box: Run Your Own Consultancy for your Very Own Company". Don't go to some high priced consultancy management company for the latest in Buzz Word Bingo, use our Bot in a Box and be the envy of all your competitors.

  • ...isn't something the companies are ready to roll out? Big friggin' shock there.