

AI Use At Large Companies Is In Decline, Census Bureau Says (gizmodo.com)
An anonymous reader quotes a report from Gizmodo: [D]espite the AI industry's attempts to make itself seem omnipresent, a new report this week shows that adoption at large U.S. companies has declined. The report comes from the Census Bureau and shows that the rate of AI adoption by large companies -- that is, firms with over 250 employees -- has been declining slightly in recent weeks. The report is based on a biweekly survey, dubbed Business Trends and Outlook (or BTOS), of some 1.2 million U.S. firms. The survey, which asks businesses about their use of AI tools, such as machine learning and agents, found that -- between June and now -- the rate of adoption had declined from 14 to 12 percent. Futurism notes that this is the largest drop-off in the adoption rate since the survey first began in 2023, although the survey also showed a slight increase in AI use among smaller companies.
The moderate drop-off comes after the rate of adoption had climbed precipitously over the last few years. When the survey first began, in September 2023, the AI adoption rate hovered around 3.7 percent (PDF), while the adoption rate in December 2024 was around 5.7 percent. In the second quarter of this year, the rate also rose significantly, climbing from 7.4 to 9.2 percent. The new drop-off in reported usage comes not long after another study, this one published by MIT, found that a vast majority of corporate AI pilot programs had failed to produce any material benefit to the companies involved.
All I want to consider (Score:2)
Don't trust AI to always give you accurate results or even provide a correct sentence correction if it proposes to make your document more accessible.
Re:All I want to consider (Score:5, Interesting)
Summary: AI use is actually increasing.
Re: (Score:2)
ChatGPT: "The headline directly reflects the main finding of the summary...the headline captures the key point: AI adoption at big firms is falling, while the summary expands on the data, timeframe, and broader context. Would you like me to also point out why the headline doesn’t mention the increase among smaller firms?"
It didn't catch the difference between use and adoption.
Re: (Score:3)
It also didn't seem to understand the core premise of the article, which is one level of inference deeper: A prior trend was expected to continue, based on various projections and hype. The need to comment on this exists only because said expectation is not being met.
Re: (Score:2)
what is this, my primary school experience?
go touch grass honey
Re: (Score:3)
It's not clear to me whether the authors of the article caught the difference either, because all of their "adoption" figures are percentages, not percentages per stated unit of time.
catch the difference (Score:3)
I can only adopt something once; after that, I'm a user. If every single company adopted AI today, then adoption would be zero forever after.
Re: (Score:2)
We've seen a general decline as well at my large multinational home away from home, which unnecessarily causes me to walk for 15 minutes to get there after a 15-minute train ride.
Re: (Score:2)
The number of companies adopting AI for the first time is increasing, but the rate of increase is lower than previously.
It doesn't say anything about the number of companies that are stopping using AI, which is what we need to know in order to determine whether overall usage is up.
Re: (Score:2)
Re: (Score:2)
What I have seen is that, starting around Feb/March, there seemed to be a lot of cooling among large enterprises, and it seems to have accelerated in certain types of business lines.
Define "adoption" (Score:4, Interesting)
If you ask officials at those companies, they'll give you the official answer. If you ask everyday people at those companies, they might tell you that they are using it without the company's official blessing.
My company has tentatively started allowing use of Copilot, including GitHub Copilot. But they're not paying for it yet. Is that "adoption"? And just about everybody goes to ChatGPT for answers some of the time. Is that "adoption"?
The survey probably defines "adoption" as paying for licenses or partnering with AI vendors. If so, it's not surprising that those numbers might be going down, because there are a lot of AI vendors selling junk, and companies quickly figure out that it's junk.
Doesn't matter how...it declined (Score:5, Insightful)
If you ask officials at those companies, they'll give you the official answer. If you ask everyday people at those companies, they might tell you that they are using it without the company's official blessing.
My company has tentatively started allowing use of Copilot, including GitHub Copilot. But they're not paying for it yet. Is that "adoption"? And just about everybody goes to ChatGPT for answers some of the time. Is that "adoption"?
The survey probably defines "adoption" as paying for licenses or partnering with AI vendors. If so, it's not surprising that those numbers might be going down, because there are a lot of AI vendors selling junk, and companies quickly figure out that it's junk.
You're debating the number. The point is that however the number is calculated, it's declining. A decline is a decline, regardless of how the numbers are calculated. So they reported using it before and changed their minds... sorry, that doesn't look good for the AI advocates. IMHO, AI is a religion... a belief in a better future while disregarding the present. LLMs are just text prediction engines. They are far more useful than anyone guessed, but at the end of the day, they just guess the next word based on training sets. They don't think. IMHO, they have not demonstrated themselves to be reliable enough to actually replace a human operator on any task I know of. They "could" be amazing someday, and most advocates I know promise great things in the future while dismissing my concerns about how awful they perform today.
I use Claude 4.0 daily... officially blessed by my employer... to generate Java code. It literally can't match braces or put commas or semicolons in the correct place reliably. I was very excited to get these tools, and every day I use them I grow more uncertain. I feel bullish when it actually does something right that saves me time, but I have to throw out its answers the majority of the time and then feel bearish... and the cycle repeats often... optimism... then pessimism... and things haven't improved in the last year. I'd love for it to help me find bugs, or make sense of our giant codebase, or even do something simple, like write a useful unit test. However, it only writes code that compiles approx 50% of the time.
I think many of the customers were excited by its potential and then, after using it, realized it's not useless, but Marc Benioff and Mark Zuckerberg are lying to you when they say their AIs can replace a mid-level Facebook developer or most of Salesforce's staff. It's extremely limited in use and very unreliable... and expensive. Everyone wants it to give them all the superpowers Jensen Huang or Sam Altman promises it will... it's exciting and worth every penny if it works... then you try it out and realize it doesn't.
AI is a bubble. It's irrational exuberance and hope for the future. Eventually people will come to their senses and get a clear picture of what AI can do well and what it does poorly....and yeah, a lot are going to drop it. If it resembled working, adoption would rise, just like previous technological revolutions, like the internet, web 2.0, smart phones, big data, etc. AI looks a lot more like VR than the iPhone. Infinite theoretical potential, but limited real-world use and nearly all customers are enthusiasts. At best, it's a "someday" technology.
Re: (Score:3)
Feels kinda like Apple Newton to me. Maybe we get it right later, after a complete overhaul. Maybe the entire idea is scrapped and we get an iPhone kind of thing.
I'm laughing at LLMs and their champions as much as I laughed at Newton users. It's a toy. If you have disposable cash, go play!
Otherwise, let the experts do their work. There are lots of really good applications for iterative machine learning, such as detecting heart failure from x-rays, but LLM is just a novel use case. It will never do what has
Re: (Score:2)
Eat Up Martha
Re: (Score:2)
Apple Newton got everything right except processing power. Excellent software, fantastic handwriting recognition, a very good form factor for a time when most people who had to move between meetings were lugging a lot of analog note taking gear.
But the processing power was insufficient to even capture the pen movement when writing, which gave bad data to the handwriting recognition, and the interface was too sluggish for the use case.
And I'm noting this because there's a difference in kind here. The Newton
Re:Doesn't matter how...it declined (Score:4, Informative)
Re: (Score:2)
Recursive Ingestion Indigestion?
Re: (Score:2)
The attempts to build a huge castle of functionality on the clay foundation of LLMs will lead to these kinds of issues. Hallucination isn't so much a problem with LLMs as their base operating process, and that means that trying to build it away is going to cause the kind of issues you describe.
And the real issue with that is, even for simple things that used to be at least correct enough with a simpler LLM call, the layers of attempts to fix the more complex things will get in the way. And at the
Re: (Score:2)
A decline in the rate of increase of people adopting it for the first time. Not quite the same thing.
Re: (Score:2)
The point is however the number is calculated, it's declining
The devil is always in the details. The decline is meaningful only if the number as calculated is meaningful. So while your argument is true on its face, if the number excludes a significant portion of AI use (such as unofficial use), then the number excludes a meaningful portion of usage, and doesn't necessarily reflect a larger trend.
You're saying you weren't weighed naked (Score:2)
The point is however the number is calculated, it's declining
The devil is always in the details. The decline is meaningful only if the number as calculated is meaningful. So while your argument is true on its face, if the number excludes a significant portion of AI use (such as unofficial use), then the number excludes a meaningful portion of usage, and doesn't necessarily reflect a larger trend.
It's fair to assume it was measured the same way before and after. This is the ancient precision vs accuracy debate. For example, when you step on the scale...are you naked or clothed? Well if your weight went up 30lbs and you were clothed both times....you gained weight. Yeah, maybe the 2nd time you had a winter coat on, but it wasn't 30lbs worth. You're debating the accuracy. I am claiming accuracy doesn't matter, only precision. This study is only measuring relative use.
Also, addressing your p
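The clothed-scale argument can be sketched in a few lines: if the survey's definition undercounts use by a roughly constant amount, that bias cancels when you look at the change between two readings. The numbers below are illustrative, not taken from the BTOS data:

```python
# Illustrative sketch of the "weighed with clothes on" argument:
# a constant measurement bias drops out of the *difference* between
# two readings, so the direction of the trend survives.
# These numbers are made up, not taken from the BTOS survey.

true_before, true_after = 14.0, 12.0   # hypothetical "true" adoption, percent
bias = -3.0                            # e.g. unofficial use the survey misses

measured_before = true_before + bias   # 11.0
measured_after = true_after + bias     # 9.0

true_change = true_after - true_before              # -2.0
measured_change = measured_after - measured_before  # also -2.0

# The measured change equals the true change: the bias cancels.
print(measured_change)  # -2.0
```

The caveat, which matches the parent's objection, is that this only holds if the bias is roughly constant across both readings; if unofficial use grew or shrank between surveys, the measured decline would misstate the true one.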
Re: (Score:2)
I see what you are saying and I mostly agree with it.
I do think though as we look at technology that they tend to be overhyped when released, bust out, but still continually deliver unforeseen value in the long run.
And I think these LLM and whatever will follow will do that.
Re: (Score:2)
My brother
Your ability to use the tool is orthogonal to management's assumption that everyone can use the tool regardless of their role, or that it will be reliable for operational tasks with just mild supervision. You aren't responsible for their belief that it's a substitution for skill.
If use really is reducing then it's not because it's good or bad inherently (although it's pretty obviously not reliable enough for the roles it's being placed in in a lot of cases), it's actually because people tried it o
Re: (Score:2)
The problem with your argument is that not all AI is created equal. There is still a whole lot of junk out there.
For example, my company recently started using a product from GainSight called Staircase AI. It's supposed to be able to analyze customer emails and help triage them by flagging emails that are "extremely negative." In reality, its flagging has been very unreliable. It regularly tags ordinary support emails as "extremely negative" even when a reading by a human would not come to that conclusion.
Re: (Score:2)
why is that a problem with my argument?
my argument relies on the execs not having the skills to appraise that unevenness.
Re: (Score:2)
Sure, they might not be able to discern which AI is good and which isn't. But they certainly *can* figure out that when the AI software salesperson told them they could cut their staff by 50% if they just adopt their magical AI product, and they ended up having to hire more people instead...that kind of failure they can understand. A lot of the AI products are actually THAT bad. At the same time, some actually do help productivity. If there's one thing executives can do, it's counting beans.
Re: (Score:2)
This summary you just gave is a good one. On average if that's the experience they have, with nothing other than hype to go by... sooner or later they're going to have some kind of consensus about it being a scam, even if they can't typify exactly what kind of scam it is.
That won't be universal, but OpenAI et al. need to make profits at scale or they're going to go bye-bye sooner or later.
Re: (Score:2)
Instead of guessing how the survey defines things, you can look at the BTOS methodology at the US Census Bureau. Not only do they explain what they mean by adoption, but they provide the actual forms sent to businesses for their data gathering.
Your company would answer "yes, it's adoption", by the way. You're in the uptick. An individual using ChatGPT to generate a slop answer instead of finding the facts wouldn't count, as it's not done using any form of company resources other than those provided for g
Re: (Score:3)
Here are the actual survey questions:
Between MMM DD – MMM DD, did this business use Artificial Intelligence (AI) in producing goods or services? (Examples of AI: machine learning, natural language processing, virtual agents, voice recognition, etc.)
During the next six months, do you think this business will be using Artificial Intelligence (AI) in producing goods or providing services? (Examples of AI: machine learning, natural language processing, virtual agents, voice recognition, etc.)
The survey does not define "use" but leaves its meaning up to the survey responder.
Re: (Score:2)
See that wasn't so hard. Now you know that your company is in the "uses" category, while a company which has some users personally accessing some minor services and nothing more will be in the "not uses" category, as the respondent will not even know it's happening.
This will provide a useful aggregated approximation of use. And what's more interesting is that, as awareness within the company of employees using personal services increases, the "uses" category would be expected to rise. Yet it's going down.
Re: (Score:2)
I never had a question about my own company's status. But there are lots of other shades of gray that the generic survey question does not get at. It's still entirely possible that the overall trend of "using" AI is up, despite this statistic going down.
'Artbroken (Score:3)
Do you mean to tell me the much heralded AI revolution was hype? Heartbroken I am. Those poor billionaires and bagmen.
Re: (Score:2)
Captain Renault in the Casino
Re: (Score:2)
Altman is sad nau. Poor guy. He won't be richer than Elon if this continues.
Re: (Score:2)
Re: (Score:2)
In my book, neither of them being rich is particularly good.
Re: (Score:2)
I know! Throw 'em in the deep end (Score:2)
As far as I can tell, very few people are methodical at problem solving.
Remember the kids back in math class that only passed if the same questions as last year were on the test? Templates. Throw a new question at them and they doodled until the end of class. Reality isn't like that.
OpenAI is the new crypto - All hype no value (Score:5, Insightful)
Hahahahahahahahahahahahahahahahahahahahahahhahaahahahahahahahahahaha
Greedy investors got duped by the tech bros again.
The shame is that all the investment OpenAI hoovered up could have been invested in a sustainable economy and creating some jobs. Instead we have predictive text on steroids. We don't need that. We are in no danger of running out of bullshit content to drive advertising, thanks. That's not a problem we need to solve. Neither is surrendering your intellectual property.
Re: (Score:2)
Re: (Score:2)
Market cap is simply "stock price multiplied by the number of shares", so from that point of view, the "value" that you talk about means nothing, because it's based primarily on hype. Any company could pop up with "AI" as the focus, get an IPO, and get a $1 billion valuation, even while the company itself doesn't make any money and is actually losing money. That's what all of this hype is causing.
The true value is "value of company assets" from a more objective perspective. So sure, with the
Re: (Score:2)
It's valued that way due to a few factors. The main one is entities like Microsoft, SoftBank and others sitting on massive amounts of cash they want to invest in something that will give massive returns. With the home computer field being mature, gaming being oversaturated with enormously expensive AAA titles and heading for some kind of burnout, and mobile phones having nearly no growth space left, they've been looking for a new "next thing" for quite some time.
And when LLMs turned out to be promising
Re: (Score:2)
While OpenAI is reporting extraordinary revenue growth ($12–$13 billion projected for 2025), it is also burnin
Re: (Score:3)
I dunno about that, bro... I am paying for OpenAI and Claude subscriptions.
I've given them as much money in a year as I paid Google for a quarter century of use.
Having a general-purpose intellect on tap is very handy. I was in Korea recently, and it was handy to take pictures of product labels of groceries and ask AI if they contained the things I was allergic to.
Re: (Score:2)
Cotton with a bold strategy comment
Re: (Score:2)
Dude, with that low a UID, have you superseded mortal talk and moved on to communicating in haiku?
Re: (Score:2)
Language translation from images isn't new, especially not from printed Korean to English. Korean is one of the most gratifying languages to machine translate, for many reasons, and you could have had an app on a mobile phone ten years ago doing that. Maybe even twenty, though that'd be pushing it a bit more. That's not even remotely a good reason to pay Altman money.
Re: (Score:2)
I get your point. But my point is LLMs are truly general purpose. I don't need "an app for this" or "an app for that" or an app for something no one thought of yet. It is intellect on tap. If the question deals with general knowledge and logic, you can ask it and typically get a decent answer.
Standing in the Korean convenience store after my flight came in at night, searching for food after everything around had closed, I didn't need to research OCR and label translation apps for Korean products. And then
Re: (Score:2)
Except they aren't truly general purpose. A question that deals with general knowledge will be answered with an aggregate of what the training data contains on the topic, which will contain inaccuracies. Now, those may not matter to you, in which case I'm left wondering how you go through life. And if they have to do with logic, no logic will be part of the response. Again only an aggregate of the training data.
And you never needed to research any of that OCR, label translation or whatever. When you prepare
Re: (Score:2)
I was in Bali, Indonesia earlier on, and OpenAI translated to Balinese. Also, OpenAI claims it can translate Turkmenistani food labels. And Google indicates Turkmenistan seems to be catching up in food labelling standards. Entities in the world move ahead of the pigeonholes we slot them into :)
Of course AI makes mistakes. That's why you must exercise your judgement. Also, use multiple AIs to check each other's work where it's important.
Yes, 20 years. What can I say? I really hate VBA. They made me employee of the m
Re: (Score:2)
OpenAI makes a lot of claims it can't cash. And it's not a pigeon hole, it's experience.
So checking multiple slop hallucinations against each other will provide you with... what exactly? How does that save you time from actually learning? Especially since if it's important, you have to learn anyway to be able to ensure the slop you've generated isn't too badly off.
You're not using "AI" to translate between those things. You use different slop generators, trained in different way and with different algorithm
Re: (Score:2)
Like you, I have more interest in my own experience than in OpenAI's or other AI vendors' claims. And my experience has been to successfully use plain text to instruct AI LLMs to:
Re: (Score:2)
You've managed to identify a use case where AI offers a clear benefit: transcription/translation. I would consider that more "special purpose" than general purpose.
It also doesn't require any special app, at least not on my $40 Android phone. The built-in default camera app offers to translate when you point it at foreign text.
I'm at a loss why I would want to pay anyone anything to double up on this feature.
Re: (Score:2)
:-) Different strokes for different folks. My 3-year-old $200 Android phone does nothing of the sort you experience with the default camera app. Moreover, I don't have to interpret "denatured whey protein" as pertinent to my dairy allergy -- the AI picks it up and warns me.
My experience has been to successfully use plain text to instruct AI LLMs to:
generate poetry -- even blending unrelated genres that I am sure no one attempted before
summarize documents
research diseases, medication and side-effects
review
Re: (Score:2)
If you're the type to take medical advice from aggregated internet randos, well, there's no fixing stupid. But at least I understand stupid.
This is the part I don't understand: You have a dairy allergy and didn't know whey was a milk product? Do you remember it now, or are you scanning every box of food you eat with your phone, querying the AI about milk products? Do you have learning difficulties that preclude you reading the label yourself?
Re: (Score:2)
Looks like I have set off your LLM allergy. Or is it simply an Altman allergy? Well, there's no accounting for taste.
Yes, I have a learning difficulty, you insensitive oddity! I don't understand that things like whey and sodium caseinate derive from milk if the entire ingredient panel is in Korean. And have you ever seen the *amount* of ingredients in Korean processed food? Pages! Your typical 'Whole Foods' product it is not.
So ... I just tell the AI my allergy, take a picture and have it sort every
Re: (Score:2)
Let's assume your ingredient panel isn't in Korean. (If this is a regular occurrence for you, you should probably learn Korean.)
Are you still pointing your phone at English ingredient panels, and if so, why?
Re: (Score:2)
Your first assumption is correct, my usual ingredient panels are in English. Allergens are helpfully highlighted.
Yes, I occasionally point my phone at English ingredient panels - but only to blow up minute fonts, not interpret their information.
(No, I am not willing to learn Korean - I have an AI for that. Even if I did learn, a little knowledge can be dangerous. Say I recognized Korean representations for milk and cheese and whey, but then missed the significance of the Korean word for sodium caseinate [google.com])
Re: (Score:2)
And as far as checking CT images go, I do have a medical consultant. It is a question of curiosity, to see how capable LLMs are.
As far as researching Pubmed papers goes, I do what good PhD advisors do - use free labour if you have access to it.
Re: (Score:3)
Companies need investment to use AI (Score:2)
Fad (Score:3)
The fad is starting to plateau.
How ironic, and completely, utterly unpredictable...
There'll be a few token attempts to revive it / rename it / pretend we've invented whole-new-AI for a few years, then it'll die and people will mostly forget about it (especially when AI companies start trying to recoup their losses by charging at a rate that actually PROFITS them), normalise the useful parts according to that increased (i.e. real) cost, then we'll move on to some other fad for a decade or so.
Re: (Score:2)
The plad is starting to fateau.
AI.......overhyped (Score:3)
Re: (Score:2)
Look at the financials first (Score:3)
When interest rates are low, businesses can put more money into things that may take several years, because the cost of debt is low. With higher interest rates, many businesses are not going to put money out for AI only because of the hype. We see the BIG players like Microsoft, Oracle, as well as others like Samsung and Apple putting money into AI, because they can sell AI services or use it as a selling point for selling devices like phones/tablets. But for the majority of businesses, until there is a "buy an AI system to replace the first level of customer service and get it set up within three months", most won't be spending millions if not billions on AI without a very specific use for it.
Now, if interest rates were to drop back down to where they were even in November of last year, we might see more companies willing to put in the money for AI.
As expected (Score:2)
Real progress is being made, and I expect future AI will allow us to solve previously intractable problems
Unfortunately, when billions are at stake, the hype vastly exceeds the reality
Lying salesweasels sell immature tech to clueless executives with predictable results
I find today's AI to be useful in some cases and useless in others
I also find it amusing that some believe the outrageous hype and predictions
I'm mostly here for the Funny (Score:2)
And this juicy target didn't get any Funny? As of now that's out of 56 comments.
Not even some sort of obligatory joke about "That AI fad sure died fast. Let me ask my AI what went wrong?"
Do I need to clarify the "here" reference? As in Slashdot space, wherever that is? Virtual reference problem? Or more of a null pointer problem? However the real problem is with Slashdot time. Slashdot has become sort of like a continuous toilet that gets completely flushed each day? However I'm basically on a one visit per
AI use is not down, just the rate of infection. (Score:2)
Adoption must decline (Score:2)
Adoption is like a fire, it burns until it consumes all fuel. The rate of adoption is usually more like a sigmoid graph and it tapers off as adoption increases toward 100%.
Unlike a fire there is fuel that won't burn, people that won't adopt AI. Most of those that will adopt AI already have.
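The sigmoid picture above can be sketched with a logistic curve: cumulative adoption keeps rising toward its ceiling, while the number of *new* adopters per period peaks and then tapers off. This is a toy model with made-up parameters, not a fit to the survey data:

```python
# Toy logistic adoption curve: cumulative adoption rises toward its
# ceiling while the per-period *rate* of new adoption peaks and tapers.
# Parameters here are illustrative, not fitted to the BTOS numbers.
import math

def adoption(t, k=0.8, midpoint=5.0):
    """Cumulative share of firms that have adopted by period t (logistic)."""
    return 1.0 / (1.0 + math.exp(-k * (t - midpoint)))

for t in range(0, 11, 2):
    cumulative = adoption(t)
    new_adopters = adoption(t + 1) - adoption(t)  # adoption *rate* this period
    print(f"t={t:2d}  cumulative={cumulative:.2f}  new adopters={new_adopters:.3f}")
```

As the parent notes, some fuel never burns: in practice the curve saturates below 100%, which amounts to scaling the whole logistic by the maximum share of firms that will ever adopt; the shape, and the tapering rate, stay the same.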