AI's Productivity Boost? Just 16 Minutes Per Week, Claims Study (nerds.xyz)
"A new study suggests the productivity boost from AI may be far smaller than executives claim," writes Slashdot reader BrianFagioli:
According to research cited in Foxit's State of Document Intelligence report, while 89% of executives and 79% of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.
The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
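The net figures follow directly from the survey's own numbers; a minimal sketch in Python, with the figures from the summary hard-coded for illustration:

    # Net weekly gain = claimed time saved minus time spent verifying AI output.
    # Numbers are the survey figures quoted above, converted to minutes.
    def net_minutes(saved_hours, verify_minutes):
        return saved_hours * 60 - verify_minutes

    print(round(net_minutes(4.6, 4 * 60 + 20), 1))  # executives: 16.0 minutes/week
    print(round(net_minutes(3.6, 3 * 60 + 50), 1))  # end users: -14.0 minutes/week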
let them eat cake! (Score:3)
Now go convince the management.
They will sell out their best and brightest for short term gains, collect a golden parachute then do it again at the next company.
Results Likely Skew to Optimistic (Score:2)
Re: (Score:2)
I agree. But that only makes these numbers even more reliable as an upper bound on productivity increase. And when a generous upper bound is this abysmally bad ...
Re: Results Likely Skew to Optimistic (Score:2)
Based on the summary I feel the opposite.
If I were to estimate the amount of time AI saves me, I'm going to try to account for the time I spend reviewing it.
Then the researchers add that time again, double-counting the time spent reviewing AI output.
Re: (Score:3)
My personal anecdote is that I have been using Claude recently and it has enabled me to do things that I would not have been able to do. Some of the tasks are writing C++/Qt code and integrating it into the rest of our app's code base. I am not a C++ programmer.
Many people appear to be skeptical of AI's capabilities and I suspect that this limits the potential gains. I was very skeptical about what an AI tool could do for me, but I read an article that gave me the motivation to really try it. My experiment
I remember a time when... (Score:5, Interesting)
...the same questions were asked about CAD
Salespeople sold CAD to management as reducing the time it took to make drawings
Engineers saw it differently
Back when it was hard to make or change drawings with a pencil, we made fewer of them. The effort required to redraw a complex drawing was so great that only the most necessary changes were made
With CAD, we could easily make lots of drawings with many different versions of an idea, and then pick the best one
CAD didn't increase "productivity", it increased quality of designs
Re:I remember a time when... (Score:4, Interesting)
There are even some mathematical proofs for that by now. For example, "creativity" of gen-AI seems to be limited to below the level of a professional by the very approach used: https://www.psypost.org/a-math... [psypost.org]
This is a pretty devastating result, because in a larger context, it means that AI-slop is not a temporary problem, but the only thing the tech can really produce.
For code, we already have results that things get slower (https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding) and evidence is mounting that AI code is insecure, inefficient, hard to review, violates KISS, and is generally a really bad idea except, maybe, for use-once code.
For chat-bot use, we know there are serious psychological risks ("AI Psychosis") that may be more general than currently observed.
On top of that, add that LLMs are still not earning remotely as much money as they cost to train and operate, and it looks like the time of fast gains ("low hanging fruit") is over.
The whole thing is just an excessively bad idea from start to end. Some people saw that right from the start. I guess others need to get caught up in a few catastrophes. And still others will not be able to accept it even when it kicks them in the face. The really hilarious thing is that if LLMs had worked well, putting them in everywhere would still have been an excessively bad idea. A classical lose-lose situation made possible by a ton of wishful thinking and denial.
Re: (Score:2)
There are even some mathematical proofs for that by now. For example, "creativity" of gen-AI seems to be limited to below the level of a professional by the very approach used: https://www.psypost.org/a-math... [psypost.org] This is a pretty devastating result, because in a larger context, it means that AI-slop is not a temporary problem, but the only thing the tech can really produce.
Cropley modeled creativity as the product of effectiveness and novelty ...This finding indicates that large language models are structurally incapable of maximizing both variables simultaneously, preventing them from achieving the high scores possible for human creators who can combine extreme novelty with extreme effectiveness.
This proof seems to be limited to modeling the output of a single LLM. But there are plenty of examples of multiple AIs being optimized for different abilities and then used together as a team. For example, creating new mathematical proofs: https://www.theneuron.ai/expla... [theneuron.ai]
Re: (Score:2)
AFAIK, the computing power of multiple LLMs combined is not greater than that of a single one. So the proof would carry over.
Re: (Score:2)
"CAD didn't increase "productivity", it increased quality of designs"
Increasing the quality of work IS increasing productivity. You could have just as easily said that making "less of them" was a choice to reduce productivity demand and that CAD removed the need for that.
Re: (Score:2)
You even explain how CAD has massively increased productivity. It's very easy to take an existing solution and alter it to suit another purpose. This has led to entire business segments which were simply not possible without CAD.
And it was engineers who saw that possibility, and went with it. They're the ones who knew what kind of productivity improvement CAD provides. Sales people thought it meant trying many drawings, but engineers instead perfected basic designs and used them as the basis for alterations
Re: (Score:2)
Today, with LLM's, it's not at all like that. They're not useful for anything which maps onto our previous experience with using tools to enhance productivity.
Delegating work to an assistant, that's how it maps. It's not an experience many of us have had much of, but it is a skill. To not try to do everything yourself, and plan ahead so you can break your work up in chunks that can be tasked out. Like a project manager, but you know the work more intimately. The kind of skill it takes to get a new hire up to speed and productive quickly instead of leaving them at their desk with some busy project or to fend for themselves.
Re: (Score:2)
Delegating work to an assistant doesn't come with the assistant consistently providing incorrect information with authority, and being literally unable to learn from experience or being taught. So no, it doesn't map to that at all.
Re: (Score:1)
Recently my co-workers have been using them more...
Re: (Score:2)
You are confusing two issues here. Obviously, there is always some resistance to changes in tooling and procedures, because some people fear being left behind. This is not what this study is about at all.
Re: I remember a time when... (Score:2)
It creates a lot more busy work. I think I can draw parallels to my own industry; we see this in software development with AI tools. We are refactoring and rewriting stuff more now because the tool makes it easier, but it is still disruptive and often unnecessary, in that the work does not really translate into more revenue for the business.
Other things like using AI for planning and test coverage does improve our software long-term and gives us the boost we needed to set some projects into motion.
For mechan
Re: (Score:2)
But at the same time it also increases the proliferation of new designs, which costs a lot of efficiency in production.
Re: I remember a time when... (Score:2)
It's often not the best one, but the one with least opposition, i.e. the political compromise solution.
Re: (Score:2)
In other words, CAD was like word processing. In the typewriter era, if you made an error, you either broke out the white-out (if it existed yet; it had to be invented at some point), or you were forced to re-type the entire page.
Word processing meant you didn't have to do that, which is why it was one of the early killer apps (along with the spreadsheet) of computing - you could type up a page, and if you made a mistake, you could correct it, then commit the error-free page to paper. And changes could be made quickly.
Re: I remember a time when... (Score:2)
If the quality of the product goes up, presumably the value of the product goes up with it. So, the product is now contributing more to GDP. So, productivity has increased.
At least it's a positive number! (Score:2)
They hid the negative using apples and oranges! (Score:2)
estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing. Converted, that is 3.6 and ~3.83 hours, so regular people see a savings of about -0.23 hours. Yes, that is a minus sign.
Re: (Score:2)
Let's say we give them the 16-minute gain (other studies have found a loss[1]). What about the quality of the work produced?[2] What about the long-lasting negative impact on the people who use it?[3,4]
link 1 [arxiv.org]
link 2 [arxiv.org]
link 3 [arxiv.org]
link 4 [arxiv.org]
Re: (Score:3)
What about the cases where AI helps you solve something you could never solve without it? How do you quantify that?
https://www.bbc.com/news/scien... [bbc.com]
Re: (Score:2)
You need to be really incompetent for LLM-type AI to be able to do things you cannot do in your professional capacity. And how do you propose you review these results if you were not competent to obtain them? Review is harder (!) than doing.
Yes, I am familiar with the work you cite. It is not an LLM doing science. It is an LLM dealing with bad data management in a scenario where hallucinations are acceptable. These are rare and are caused by human incompetence.
Re: (Score:2)
AlphaFold is not an LLM.
Well Duh! (Score:2)
Doesn't mention how often they had to correct (Score:2)
If you're spending 4 hrs a week on verification but it's correct 90% of the time, maybe you're wasting your time.
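Whether that review time is "wasted" is really an expected-value question; a rough sketch (every number below is an illustrative assumption, not from the study):

    # Verification pays off when the expected cost of a shipped error
    # exceeds the cost of the review time itself.
    error_rate = 0.10         # AI wrong 10% of the time, per the comment above
    error_cost_hours = 50.0   # assumed cleanup cost if an error ships unreviewed
    verify_hours = 4.0        # weekly review time, per the comment above

    expected_loss = error_rate * error_cost_hours  # 5.0 hours in expectation
    print("review" if expected_loss > verify_hours else "skip review")
    # -> "review" here; drop error_cost_hours to 20 and the answer flips.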
Re: (Score:2)
That depends on what you're reviewing. Will this code change expose all of our customer data and get us sued and fined into bankruptcy? Dude, chill! The AI is like 90% sure it's OK!
Re: (Score:2)
Obviously it depends.
Re: (Score:2)
Even lower-quality ISP and cloud services generally need 3 nines to be commercially viable to use. AWS recently dropped below that due to use of AI slop code in production and may be in real trouble as a result. For higher-reliability services, 5 nines are the generally accepted minimum. Some things need to go higher.
Thinking that one 9 is enough is just utterly disconnected and clueless nonsense.
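For readers not used to "nines": each added nine shrinks the allowed error budget by a factor of ten. The framing above is the commenter's; the arithmetic itself is standard:

    # Downtime/error budget per year for N nines of reliability.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for nines in (1, 3, 5):
        availability = 1 - 10 ** -nines           # e.g. 3 nines = 99.9%
        budget = MINUTES_PER_YEAR * 10 ** -nines  # allowed failure minutes
        print(f"{nines} nines: {availability:.3%} up, {budget:,.1f} min/year budget")
    # 1 nine: ~52,560 min/year; 3 nines: ~525.6; 5 nines: ~5.3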
Re: (Score:2)
Of course, how stupid of me. The only problems anyone works on already operate in the same tolerance as the space shuttle. Some errors can be absorbed; others are catastrophic, treating them all the same is moronic.
Re: (Score:2)
You clearly are an idiot that thinks highly of himself. Good luck with that.
Re: (Score:2)
You're the one claiming the only meaningful decisions have to have nearly absolute precision. That's what is truly idiotic. What a weird world you must live in where you can only decide with nearly perfect information. What tripe.
Re: (Score:2)
Not in most engineering tasks. The further along a problem gets before it's found, the more 0's are added to the cost of fixing it. It's very cheap to make a fix in the concept stage, more costly in the design stage, starting to hurt in the implementation phase, and if it makes it past that, costs start being the kind that takes down small companies and gets managers of larger corporations fired.
Re: (Score:2)
You need to be in a _really_ low quality field if 10% error rates are acceptable. In most areas, error rates of experts making decisions need to be much, much lower to not completely kill profits.
Re: (Score:2)
There are plenty of situations in life where 10% error tolerance is more than acceptable.
Re: (Score:2)
In expert decision making? No, there are not.
Sure, in life you run into idiots like you that have much higher error rates than 10%, but I guess you are not getting to decide anything actually important.
Re: (Score:2)
Again, this is just nonsense, morally righteous nonsense. There are all kinds of decisions that happen every day for every person that do not require this level of accuracy. Your moral superiority means nothing.
Re: (Score:2)
"Moral superiority"? What nonsense. You are just trying to move the goalposts because you have nothing. What I am talking about is risk management, no morals involved.
Re: (Score:2)
I know right? I mean, of every 100 lines of code I make, there's maybe a mistake in 1 line. Why do I even bother testing my code with accuracy like that?
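That sarcasm has real math behind it: even a 1% per-line error rate compounds quickly. A toy sketch, assuming (crudely) that line errors are independent:

    # Chance a program is entirely error-free if each line is independently
    # correct with probability p. Independence is a rough assumption, but it
    # shows why "99% accurate" still demands testing.
    p = 0.99
    for n_lines in (10, 100, 1000):
        print(f"{n_lines:>4} lines: {p ** n_lines:6.1%} chance of zero errors")
    # 10 lines: ~90.4%; 100 lines: ~36.6%; 1000 lines: ~0.0%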
Re: Doesn't mention how often they had to correct (Score:2)
And the 10% are ending up on the list of known vulnerabilities.
The problem is that you don't know where the 10% are.
Re: (Score:2)
What matters is how/if that differs from the status quo. I know slashdot has a hard time reframing the question to anything but software engineering, but an error is not always a "vulnerability." Humans already make many errors and to not make any errors it takes many humans.
AI is a tool. As are many of its users (Score:2)
Meanwhile, AI probably saves me a day a week.
I use it a couple different ways, as a well-supervised junior dev who does the crap I don't like.
And then as a high-knowledge tech resource I use as a sounding board for technical architecture questions.
In neither case do I trust it completely, but by working with it similarly to how I work with some of my on-the-spectrum techs, I get good results.
Ok, but hear me out... (Score:4, Funny)
If we increase our investment just 10-fold, we could probably get that to 16 minutes per day. Imagine the possibilities!
Mental load reduction? (Score:2)
Re: (Score:3)
My experience is that it's usually the opposite. Verifying LLM output is soul numbingly boring compared to solving problems. It also dulls the intellect, as several studies have ascertained.
Re: (Score:2)
Yep. The evidence is mounting that reviewing LLM output is a hellish job and not good for you. The problem, that the business-grads in charge seem to completely have overlooked, is that they are messing with the work experience of senior (!) people and are making it much worse. Senior people have skills and can leave. Senior people will leave when mistreated. There is no known way to compensate for or recover from losing most or all of your senior people. They take a lot of institutional knowledge and speci
Re:Mental load reduction? (Score:4, Informative)
There are indicators that it is so bad that people are leaving well-paid jobs, see e.g. https://medium.com/@Reiki32/wh... [medium.com]
Somebody here called AI code "review resistant" and that seems to be exactly what it is. Bloated, inefficient, insecure, misleading comments, but all gussied up to the max to look like a rockstar (well, a wannabe rockstar) coded it.
Not even the worst of it. (Score:4, Insightful)
Going from 'thinking about things you know about' to 'keeping a close eye on an erratic intern who can bullshit really fast' is a fairly dramatic downgrade in terms of the quality and apparent futility of what you are doing. At least junior people sometimes improve thanks to mentoring, even if it's not something you do specifically to save time in the immediate term. A relentless torrent of glib, dense output, though, is hell compared to just doing it yourself; so the idea that you aren't even saving time by enduring it is pretty grim.
Re: (Score:3)
This is part of why studies are showing a dulling of intellectual capability from use of LLM's. Doing verification of LLM output instead of thinking for oneself, even if only done some of the time, is harmful. Given how confident LLM generated output reads, I'm not surprised by this at all.
Re: (Score:2)
Same. This may result in the loss of a major part of a whole generation of senior people. Which cannot be replaced easily and cannot be replaced at all if you lose too many.
Re: (Score:2)
There is, presumably, an amount of time savings where this could be justified(at least for things that you, ultimately, only do because they pay the bills; not ones of some intrinsic value); but it seems particularly grim to deal with the changed nature of the work for such paltry savings.
Indeed. And also remember that the LLM companies are all losing money like crazy at this time. There was enough time now to do the obvious optimizations to bring the cost down. It has not happened. The easy stuff does not do it. But the problem with the harder stuff is that you cannot force it and have no clue when it will happen. It may be 10 years, 100 years or even 1000 years before somebody figures out a way to make LLMs the needed 3 orders of magnitude cheaper to run and to train. Or it may happe
Sounds about right (Score:3)
I've used AI to generate Python code. It saves me a little time as it gives me a template of what I wanted. The code however is almost never correct and most often does not run correctly. I still need to study the code and fix it. It's always wrong on tiny details. Like I had one that is best described as a shopping cart template. I have a list of things. Compute a metric for each item. Also compute a metric for the total of all items.
Things AI code got wrong: Item = Array(i) is not the correct way to get an item from an array in Python. It is Item = Array[i]. The AI code also did not understand variable persistence across loops. Assigning Metric = [computation] only inside a loop means that print(Metric) after the loop has ended at best gets the last value of the variable, if it does not error out because the variable was never assigned.
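A minimal reconstruction of the two bugs described above (names like items and metric are illustrative, not from the original code):

    # Bug 1: items(i) calls the list like a function -> TypeError in Python;
    #        indexing must use square brackets: items[i].
    # Bug 2: computing the per-item metric inside the loop and printing it
    #        afterwards yields only the LAST item's value, not the total.
    items = [2.0, 3.5, 4.25]

    total = 0.0
    for i in range(len(items)):
        metric = items[i] * 1.1   # per-item metric (illustrative computation)
        print(f"item {i}: {metric:.2f}")
        total += metric           # accumulate explicitly for the total metric
    print(f"total: {total:.2f}")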
Re: Sounds about right (Score:1)
What is the downside of the incorrect syntax? No point worrying about it unless it's going to cause a problem, as long as it works. Generally () is just a direct replacement for [] anyway.
Re: (Score:3)
What is the downside of the incorrect syntax?
Besides it not running? I think that is a huge downside.
No point worrying about it unless it's going to cause a problem, as long as it works.
Again, it not working correctly because it has the wrong syntax is a problem. Or not working at all.
Generally () is just a direct replacement for [] anyway.
Not in the languages that I am aware of: Python, Java, C, Rust, Go, Swift. Just those "obscure" languages.
Re: (Score:2)
OK, well, I use AI extensively for Python and it always works for me with some corrections.
Re: (Score:2)
I didn't know in that case. All I was saying was that I have never personally seen AI make a syntax error like that. And I also said that I have never used Array in Python because there isn't much advantage to it.
Re: (Score:1)
I have always been a "it's good as long as it works" kind of guy. And I have been developing for over 40 years.
Why i'd never vibe-code: editing isn't any fun. (Score:4, Insightful)
That's what it comes down to. When you start vibe-coding, you're no longer really coding, and you're not even really creating anymore.
You're just editing. All you're doing is code reviews and quick bug fixes...and those tend to be my least favorite parts of my job.
At least code-reviewing a junior developer, you're teaching, mentoring, instilling some new disciplines or expanding their horizons.
There's no satisfaction in doing that to a bot. Especially because the next time it codes something for you, it is going to come up with something completely different as if the 'experience' you tried to give it doesn't matter anymore.
Yeah, maybe it gets the job done...but I'm not in this to 'get the job done'. If this is what the job was or is going to become, then I'll quit, do my own coding on the side for open-source or other projects, and just make money as a substitute teacher... ...that is, if I didn't have to pay for health insurance, but America sucks in that regard and always will.
Re: (Score:2)
I only ask AI to write me throwaway code which I wouldn't write without AI at all. With production code, I don't ask AI to write it at all; instead I just ask it questions like: "I upgraded this library from version A to B and now things broke down, what could be the reason?". Another use case is to just paste an error message from logs into AI and it can usually explain pretty well what is going on and how to fix it.
Re:Why i'd never vibe-code: editing isn't any fun. (Score:4, Insightful)
some have suggested that's just because it has more or less illegally webscraped the entirety of stackoverflow and reddit, so you're really just doing a resource-intensive google search to find the right stack overflow question/answer page, without either of those sites getting any credit for it.
Re: (Score:2)
I bet people said similar things when high-level languages were replacing assembly.
There is creativity -- you have to be creative in writing prompts.
I do see an issue: the prompts are insufficient to reproduce the final output, but the prompts are the input to the process. We don't use a compiler to create assembly code, then try to maintain the assembly code while discarding the high-level language code. But vibe coding does something similar: discards the most abstract input, while keeping the output of t
Most important employee at Renholm Industries (Score:2)
But I only work 16 hours a week... (Score:2)
Probably very polarized results.. (Score:3)
So as presented it *sounds* like the typical respondent thought they spent about as much time as it saved and the numbers were fairly big.
I'm skeptical that a survey of self-reported experience would manifest that way. I wager that some said 9 hours saved with barely anything needed and some said 9 hours of fixing the mistakes and no time saved, and that instead of '16 minutes per week', you just have slightly more people annoyed by it than enthusiastic about it.
My guess is that it wildly varies by the job and situation. People for whom it works badly are looking around and wondering what the hell is wrong with the people advocating it, and the people for whom it works are amazed at the time savings. Given the current hype, the former group is forced to use it anyway, even though for them it is terrible.
For example, a discussion arose where a developer mentioned they could make a quick tool to take care of something that was likely to be an issue for users, if folks wanted it. An executive declared "no need, I just had Claude make the tool for me in like 2 minutes, and I'll share it with everyone". That executive felt *incredibly* empowered; they didn't know how to code, and the result of the tool in the AI-generated sandbox looked right. Then folks tried the tool and it failed horribly, because it had no actual clue about the technology it was supposed to manage. It had managed only to make a tool, and a sandbox for it to demo in, that was consistent with the desired narrative, with no whiff that the results had no relationship to the documentation site included in the executive's prompt. Further, if it *had* worked, it would have broken a number of security mechanisms based on how it *tried* to do things. But the executive would never feel the consequences. They unleashed their 'awesome tool' and the grunts had to clean up the mess, and bringing that back up to executives to ask them to stop doing that stuff is a very risky move employment-wise. They'll not believe you anyway; it's just a conspiracy to keep our jobs, after all...
Even when it is being reasonably useful, it's annoyingly prone to mistakes, randomly screwing up even if it's being asked to do the same easy little thing it has done successfully the last 12 times in a row in other contexts. That leaves you feeling a bit gaslit as you see the internet gushing about how unbelievably, impossibly good Claude Opus 4.6 is, after you just spent a week using your work's budget on that premium model to see if at least *they* are on to something, and still found a pretty annoying experience while Anthropic is crowing that they have all but finished off the job of software developer.
YMMV, a lot! (Score:2, Interesting)
A friend of mine is doing a hardware startup with 2 people that he says would have taken at least 3 extra coders pre-AI. He and his partner are doing the valuable EE and mechanical parts of the business. A lot of the software they need is not what's differentiating their startup, and AI is just fine. For them, it sounds like AI is giving a 150% productivity gain, and without it their business idea would only be marginally viable.
Re: (Score:3)
A friend of mine is doing a hardware startup with 2 people that he says...
Q.E.D.
There you have it, folks. Incontrovertible evidence that AI is every bit as wonderful as all the boosters say it is. Bite on that, haters!
Plausible (Score:2)
Earlier studies found about the same. The only possible other gains (or losses) are via work quality and stress on the workers. Just for coding, this does not look good. Code becomes bloated, review resistant and probably unmaintainable pretty fast when coded by AI instead of humans. On the stress side, it does not look good either. Senior people are getting forced to review code that is hard and unpleasant to review. And, different to a junior coder, they cannot just explain how to write better code (and a
Eventually, (Score:2)
AI will become a net drag on overall productivity, driving it downward.
I say this because I suspect AI will become the next equivalent to cat videos, social media, and doom scrolling. People will make goofy and/or shocking and/or distasteful videos just for fun, will play around with AI-generated music, will troll and attack each other with AI content, and will use AI as a quiz-book to dig up weird or shocking facts.
The entertainment aspects of AI are being downplayed, but I think it will be the next crack-
Use dependent (Score:3)
My use case is reading and summarizing contracts and scientific articles. Now, while I'm spending time coming up with prompts and reading responses, I might as well read the stuff myself and become personally familiar with it. However, where AI helps me is conversing about it, like you would with a team. So if I'm between customers on the road, I can feed it into AI, ask questions in voice mode, and review the answers later. That's my gain, and arguably a lot more than 16 minutes.
What I can say from my experience (Score:2)
In my latest few experiences, for example, a task that would have taken probably 5 hours of coding and testing took about 2.5 hours with Claude AI. It could have been faster, but I wanted to do a thorough code review to make sure all looks good. And the best news was that I only used prompts. So all in all it was a win. Now, the code review activity proved to be a pleasant experience, since I discovered not only that it was perfect, it followed the same code style, variable naming, etc. In a way it was lik
Speaking personally, I think this is too narrow (Score:2)
My own experience has been that I can use AI to develop materially better outputs than I could otherwise make without spending a lot longer on each task. And sometimes just materially better full stop, by treating AI as a thought partner. It's not perfect, but it can be pretty damn good.
No funny here? (Score:2)
But I want to know how many games of solitaire/FreeCell/Backbone it can win per second, let alone per week.
Anyway, the story seems to be about a temporary problem. For now we still need humans checking the genAI work, but pretty soon that bottleneck will be eliminated. My own workaround is to check with a different genAI, but being careful to reword the question to bias it against the first solution. If it still comes up with the same answer, then I'm pretty (too?) likely to trust it.
But NO advertising infection has happened yet (Score:2)
Remember when Google actually worked well? Before they purposely degraded it so you increased your engagement with google?
I'd say the AI searches way better, and it also doesn't suck up time with tangents like search naturally does, with tons of bad or poor results that can be tempting. Sure, the AI saves many hours over normal work, because normal work is loaded with distractions and the AI is razor sharp in its focus, more than search ever was, even at its peak.
Give me an AI search that doesn
Re: (Score:2)
Sorry, but I think you're just imagining that they won't start blending the advertising into the AI results. To the contrary, I think the increasingly evil google will notice the market value of ads that aren't even visible as such and the AI search results will soon be much more subtly polluted than we mere humans can even understand.
But if they (the google folks or AIs) do find an alternative business model, it will probably be even more evil than that.
Gimme funny?
Average fallacy (Score:2, Funny)
Listen, I'm an OG here. I became rich enough to stop working if I wanted to, thanks to Slashdot's fact-based coverage of Bitcoin around 13 years ago. This community used to embrace innovation. Now we see these crappy stories from random sites trying to minimize the impact of AI in this world. Thanks to AI I can build and automate data security processes in a few days instead of months. If you are scared to lose your job, maybe you should stop working like a robot. And yes, I might lose mine, but I still th
We overestimate the productivity gains (Score:2)
We developers always provide estimates based on the amount of time we think it will take to write the actual code. We forget that writing code is, maybe, 25% of the time required for the full SDLC for that code. It's part of why developers suck at estimating time, and why we have story points.
AI speeds up that code-typing part. So to begin with, it's speeding up the part that's already the fastest. Worse, a lot of that typing often leads to weird bugs that a human wouldn't have caused, leading to more debugging.
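Amdahl's law puts a ceiling on that: if coding is only ~25% of the SDLC, even a 2x coding speedup barely moves the total. A back-of-the-envelope sketch (both numbers are illustrative assumptions, not measurements):

    # Amdahl's law: overall speedup when only a fraction of the work speeds up.
    coding_share = 0.25    # assumed share of SDLC time spent writing code
    coding_speedup = 2.0   # assumed AI speedup on that share alone

    overall = 1 / ((1 - coding_share) + coding_share / coding_speedup)
    print(f"overall speedup: {overall:.2f}x")  # ~1.14x, i.e. about 14% faster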
getting up to the level versus excelling (Score:2)
I think both can be true. LLMs are trained mostly on internet data, especially for technology. Which means they can mimic that level. They mimic some kind of average.
So if you're doing something that you don't know anything about, but the net has loads of examples for, your acceleration by using an LLM will be grand.
And if you're a specialist that spot