AI's Productivity Boost? Just 16 Minutes Per Week, Claims Study (nerds.xyz) 93

"A new study suggests the productivity boost from AI may be far smaller than executives claim," writes Slashdot reader BrianFagioli: According to research cited in Foxit's State of Document Intelligence report, while 89% of executives and 79% of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.

The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
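The survey's arithmetic checks out; a quick sketch in Python (figures taken directly from the summary above):

```python
# Net weekly AI time savings implied by the survey numbers quoted above.
# All figures are converted to minutes first.
def net_minutes(saved_min, review_min):
    """Gross time saved minus time spent verifying AI output."""
    return saved_min - review_min

# Executives: ~4.6 h (276 min) saved vs 4 h 20 min (260 min) verifying.
executives = net_minutes(round(4.6 * 60), 4 * 60 + 20)
# End users: ~3.6 h (216 min) saved vs 3 h 50 min (230 min) verifying.
end_users = net_minutes(round(3.6 * 60), 3 * 60 + 50)

print(executives)  # 16  -> the 16-minute weekly gain
print(end_users)   # -14 -> the 14-minute weekly loss
```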

  • by jriding ( 1076733 ) on Saturday March 14, 2026 @11:50AM (#66041178)

    Now go convince the management.
    They will sell out their best and brightest for short term gains, collect a golden parachute then do it again at the next company.

  • Given the size of the study, the difference of just a few minutes looks like noise. I'm thinking that because the sample chosen was composed entirely of people already using AI, that many of them are early adopters who would naturally take a more optimistic view of AI, skewing the results more toward "time saving" than is actually warranted.
    • by gweihir ( 88907 )

      I agree. But that only makes these numbers even more reliable as an upper bound on productivity increase. And when a generous upper bound is this abysmally bad ...

    • Based on the summary I feel the opposite.

      If I were to estimate the amount of time AI saves me I'm going to try and account for the time I spend reviewing it.

      Then the researchers add that time again making the time reviewing AI output double counted.

    • My personal anecdote is that I have been using Claude recently and it has enabled me to do things that I would not have been able to do. Some of the tasks are writing C++/Qt code and integrating it into the rest of our app's code base. I am not a C++ programmer.

      Many people appear to be skeptical of AI's capabilities and I suspect that this limits the potential gains. I was very skeptical about what an AI tool could do for me, but I read an article that gave me the motivation to really try it. My experiment

  • by MpVpRb ( 1423381 ) on Saturday March 14, 2026 @11:52AM (#66041188)

    ...the same questions were asked about CAD
    Salespeople sold CAD to management as reducing the time it took to make drawings
    Engineers saw it differently
    Back when it was hard to make or change drawings with a pencil, we made less of them. The effort required to redraw a complex drawing was so great that only the most necessary changes were made
    With CAD, we could easily make lots of drawings with many different versions of an idea, and then pick the best one
    CAD didn't increase "productivity", it increased quality of designs

    • by crunchy_one ( 1047426 ) on Saturday March 14, 2026 @12:01PM (#66041202)
      It's true that CAD improved the quality of designs. However, all indications so far point to AI degrading the quality of work performed with its assistance. AI is not saving time, but it is helping to produce worse outcomes.
      • And the worse your product, the more frequently your customers must replace it... buying your new *.ai-designed "upgrade"... we know where that goes...
      • by gweihir ( 88907 ) on Saturday March 14, 2026 @02:16PM (#66041422)

        There are even some mathematical proofs for that by now. For example, "creativity" of gen-AI seems to be limited to below the level of a professional by the very approach used: https://www.psypost.org/a-math... [psypost.org]
        This is a pretty devastating result, because in a larger context, it means that AI-slop is not a temporary problem, but the only thing the tech can really produce.

        For code, we already have results that things get slower (https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding) and evidence is mounting that AI code is insecure, inefficient, hard to review, violates KISS and generally a really bad idea except, maybe, for use-once code.

        For chat-bot use, we know there are serious psychological risks ("AI Psychosis") that may be more general than currently observed.

        On top of that, add that LLMs are still not earning remotely as much money as they cost to train and operate, and it looks like the time of fast gains ("low-hanging fruit") is over.

        The whole thing is just an excessively bad idea from start to end. Some people saw that right from the start. I guess others need to get caught up in a few catastrophes. And still others will not be able to accept it even when it kicks them in the face. The really hilarious thing is that if LLMs had worked well, putting them in everywhere would still have been an excessively bad idea. A classical lose-lose situation made possible by a ton of wishful thinking and denial.

        • There are even some mathematical proofs for that by now. For example, "creativity" of gen-AI seems to be limited to below the level of a professional by the very approach used: https://www.psypost.org/a-math... [psypost.org] This is a pretty devastating result, because in a larger context, it means that AI-slop is not a temporary problem, but the only thing the tech can really produce.

          Cropley modeled creativity as the product of effectiveness and novelty ...This finding indicates that large language models are structurally incapable of maximizing both variables simultaneously, preventing them from achieving the high scores possible for human creators who can combine extreme novelty with extreme effectiveness.

          This proof seems to be limited to modeling the output of a single LLM. But there are plenty of examples of multiple AIs being optimized for different abilities and then used together as a team. For example, creating new mathematical proofs: https://www.theneuron.ai/expla... [theneuron.ai]
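A toy reading of the quoted trade-off, purely as my own illustration and not the cited paper's actual model: if effectiveness and novelty are forced to trade off against each other, the product capping "creativity" stays well below what an agent free of that constraint can reach.

```python
# Toy illustration only (NOT the cited paper's model): score creativity as
# effectiveness * novelty. A generator forced to trade one against the other
# along a budget e + n = 1 tops out at 0.25, while an agent that can be
# highly effective AND highly novel at the same time approaches 1.0.
def creativity(effectiveness, novelty):
    return effectiveness * novelty

# Best achievable under the trade-off constraint e + n = 1:
constrained_best = max(creativity(e / 100, 1 - e / 100) for e in range(101))

# An unconstrained creator combining high effectiveness with high novelty:
unconstrained = creativity(0.95, 0.95)

print(constrained_best)          # 0.25
print(round(unconstrained, 4))   # 0.9025
```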

          • by gweihir ( 88907 )

            AFAIK, the computing power of multiple LLMs combined is not greater than that of a single one. So the proof would carry over.

    • by dfghjk ( 711126 )

      "CAD didn't increase "productivity", it increased quality of designs"

      Increasing the quality of work IS increasing productivity. You could have just as easily said that making "less of them" was a choice to reduce productivity demand and that CAD removed the need for that.

    • You even explain how CAD has massively increased productivity. It's very easy to take an existing solution and alter it to suit another purpose. This has led to entire business segments which were simply not possible without CAD.

      And it was engineers who saw that possibility, and went with it. They're the ones who knew what kind of productivity improvement CAD provides. Sales people thought it meant trying many drawings, but engineers instead perfected basic designs and used them as the basis for alterations

      • Today, with LLMs, it's not at all like that. They're not useful for anything which maps onto our previous experience with using tools to enhance productivity.

        Delegating work to an assistant, that's how it maps. It's not an experience many of us have had much of, but it is a skill. To not try to do everything yourself, and plan ahead so you can break your work up in chunks that can be tasked out. Like a project manager, but you know the work more intimately. The kind of skill it takes to get a new hire up to speed and productive quickly instead of leaving them at their desk with some busy project or to fend for themselves.

        • Delegating work to an assistant doesn't come with the assistant consistently providing incorrect information with authority, and being literally unable to learn from experience or being taught. So no, it doesn't map to that at all.

          • Occasionally it does but good luck putting a LLM on PIP and then showing them the door...
          • I won't argue but I'll share my experience. I was already better than most of my peers in my industry (software development), depressingly and frustratingly so most of the time. It sucks to feel obligated to carry team after team out of the piles of garbage that they wrote so that we can deliver quality products on time. Take my assertion for what you will. After switching to using LLMs for most of my work, the same gap remains - indeed it has widened.

            Recently my co workers have been using them more...
    • by gweihir ( 88907 )

      You are confusing two issues here. Obviously, there is always some resistance to changes in tooling and procedures, because some people fear being left behind. That is not what this study is about at all.

    • It creates a lot more busy work. I think I can draw parallels in my own industry, we see this in software development with AI tools. We are refactoring and rewriting stuff more now because the tool makes it easier, but it is still disruptive and often unnecessary in that the work does not really translate into more revenue for the business.

      Other things like using AI for planning and test coverage does improve our software long-term and gives us the boost we needed to set some projects into motion.

      For mechan

    • But at the same time it also increases the proliferation of new designs, which causes too much inefficiency in production.

    • It's often not the best one, but the one with least opposition, i.e. the political compromise solution.

    • by tlhIngan ( 30335 )

      In other words, CAD was like word processing. In the typewriter era, if you made an error, you either broke out the white-out (if it existed yet; it had to be invented for exactly this problem), or you were forced to re-type the entire page.

      Word processing meant you didn't have to do that, which is why it was one of the early killer apps (along with the spreadsheet) of computing - you could type up a page, and if you made a mistake, you could correct it, then commit the error-free page to paper. And changes could be made quickly.

    • If the quality of the product goes up, presumably the value of the product goes up with it. So, the product is now contributing more to GDP. So, productivity has increased.

  • No minus sign in front of the productivity change.
    • Sarcasm sure, but I wonder if they figured this out on their own or if they used AI and spent an hour figuring out how it hid the negative.

      estimating 3.6 hours saved but 3 hours and 50 minutes reviewing. Converted, that is 3.6 versus ~3.83 hours, so regular people see a savings of about -0.23 hours. Yes, that is a minus sign.
    • by narcc ( 412956 )

      Let's say we give them the 16-minute gain (other studies have found a loss[1]). What about the quality of the work produced?[2] What about the long-lasting negative impact on the people who use it?[3,4]

      link 1 [arxiv.org]
      link 2 [arxiv.org]
      link 3 [arxiv.org]
      link 4 [arxiv.org]

      • by dvice ( 6309704 )

        What about the cases where AI helps you solve something you could never solve without it? How do you quantify that?
        https://www.bbc.com/news/scien... [bbc.com]

        • by gweihir ( 88907 )

          You need to be really incompetent for LLM-type AI to be able to do things you cannot do in your professional capacity. And how do you propose to review these results if you were not competent to obtain them? Review is harder (!) than doing.

          Yes, I am familiar with the work you cite. It is not an LLM doing science. It is an LLM dealing with bad data management in a scenario where hallucinations are acceptable. These are rare and are caused by human incompetence.

        • by narcc ( 412956 )

          AlphaFold is not an LLM.

          • This! It's really fucking annoying: most people critical of "AI" are really critical of LLMs being jammed into places they don't belong. Someone inevitably comes back with an example of computer vision or machine learning being actually useful AI, as though it removes all the harm an LLM does. We need to start making the distinction between AI types.
  • "account for reviewing and validating AI-generated output" but then the fix is easy "don't review or verify anything!"
  • If you're spending 4 hrs a week on verification but it's correct 90% of the time, maybe you're wasting your time.

    • by sjames ( 1099 )

      That depends on what you're reviewing. "Will this code change expose all of our customer data and get us sued and fined into bankruptcy?" "Dude, chill! The AI is like 90% sure it's OK!"

    • "We've achieved one 9 of reliability; is it really worth spending more than 10% of our time trying to improve?"
      • by gweihir ( 88907 )

        Even lower-quality ISP and cloud services generally need 3 nines to be commercially viable to use. AWS recently dropped below that due to use of AI slop code in production and may be in real trouble as a result. For higher reliabilities, 5 nines are the generally accepted minimum. Some things need to go higher.

        Thinking that one 9 is enough is just utterly disconnected and clueless nonsense.
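For scale, the "nines" translate into allowed downtime per year like this (standard back-of-envelope availability arithmetic):

```python
# Allowed downtime per year for N "nines" of availability.
# N nines means availability 1 - 10**-N, so the downtime fraction is 10**-N.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in (1, 3, 5):
    downtime = MINUTES_PER_YEAR * 10 ** -nines
    print(f"{nines} nine(s): {downtime:,.1f} minutes of downtime/year")

# 1 nine  (90%)     -> 52,560.0 minutes/year, about 36.5 days
# 3 nines (99.9%)   -> 525.6 minutes/year, about 8.8 hours
# 5 nines (99.999%) -> 5.3 minutes/year
```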

        • Of course, how stupid of me. The only problems anyone works on already operate in the same tolerance as the space shuttle. Some errors can be absorbed; others are catastrophic, treating them all the same is moronic.

          • by gweihir ( 88907 )

            You clearly are an idiot that thinks highly of himself. Good luck with that.

            • You're the one claiming the only meaningful decisions have to have nearly absolute precision. That's what is truly idiotic. What a weird world you must live in where you can only decide with nearly perfect information. What tripe.

    • Not in most engineering tasks. The further along a problem gets before it's found, the more 0's are added to the cost of fixing it. It's very cheap to make a fix in the concept stage, more costly in the design stage, starting to hurt in the implementation phase, and if it makes it past that, the costs start being the kind that take down small companies and get managers of larger corporations fired.

    • by gweihir ( 88907 )

      You need to be in a _really_ low quality field if 10% error rates are acceptable. In most areas, error rates of experts making decisions need to be much, much lower to not completely kill profits.

      • There are plenty of situations in life where 10% error tolerance is more than acceptable.

        • by gweihir ( 88907 )

          In expert decision making? No, there are not.

          Sure, in life you run into idiots like you that have much higher error rates than 10%, but I guess you are not getting to decide anything actually important.

          • Again, this is just nonsense, morally righteous nonsense. There are all kinds of decisions that happen every day for every person that do not require this level of accuracy. Your moral superiority means nothing.

            • by gweihir ( 88907 )

              "Moral superiority"? What nonsense. You are just trying to move the goalposts because you have nothing. What I am talking about is risk management, no morals involved.

    • I know right? I mean, of every 100 lines of code I make, there's maybe a mistake in 1 line. Why do I even bother testing my code with accuracy like that?
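The sarcasm checks out numerically. Assuming (unrealistically, purely for illustration) independent errors per line, even a 1% per-line error rate makes a bug somewhere in the program near-certain:

```python
# Probability that a program contains at least one buggy line, assuming
# independent errors per line (a rough illustration, not a defect model).
def p_any_bug(per_line_error_rate, lines):
    return 1 - (1 - per_line_error_rate) ** lines

print(round(p_any_bug(0.01, 100), 3))   # 0.634 -> ~63% odds in a 100-line file
print(round(p_any_bug(0.01, 1000), 5))  # 0.99996 -> near-certain in 1,000 lines
```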

    • And the 10% are ending up on the list of known vulnerabilities.

      The problem is that you don't know where the 10% are.

      • What matters is how/if that differs from the status quo. I know slashdot has a hard time reframing the question to anything but software engineering, but an error is not always a "vulnerability." Humans already make many errors and to not make any errors it takes many humans.

  • Meanwhile, AI probably saves me a day a week.

    I use it a couple different ways, as a well-supervised junior dev who does the crap I don't like.
    And then as a high-knowledge tech resource that I use as a sounding board for technical architecture questions.

    In neither case do I trust it completely, but by working with it similarly to how I work with some of my on-the-spectrum techs, I get good results.

  • by eepok ( 545733 ) on Saturday March 14, 2026 @12:23PM (#66041234) Homepage

    If we increase our investment just 10-fold, we could probably get that to 16 minutes per day. Imagine the possibilities!

  • Did they expend less mental energy by the end of the day? Did the LLM make tasks less intimidating so they were more readily worked on? 16 minutes saved AND less metal energy expenditure would be significant. Would be an interesting follow-up study with this cohort.
    • Are savings in 'metal energy expenditures' measured in watts... or twinkies/break? Are you implying that use of *.ai leads to lower air-conditioning costs?
    • My experience is that it's usually the opposite. Verifying LLM output is soul numbingly boring compared to solving problems. It also dulls the intellect, as several studies have ascertained.

      • These are the mid and long term effects that employers don't know/care/want to talk about.
      • by gweihir ( 88907 )

        Yep. The evidence is mounting that reviewing LLM output is a hellish job and not good for you. The problem that the business grads in charge seem to have completely overlooked is that they are messing with the work experience of senior (!) people and making it much worse. Senior people have skills and can leave. Senior people will leave when mistreated. There is no known way to compensate for or recover from losing most or all of your senior people. They take a lot of institutional knowledge and speci

    • by gweihir ( 88907 ) on Saturday March 14, 2026 @02:43PM (#66041468)

      There are indicators that it is so bad that people are leaving well-paid jobs, see e.g. https://medium.com/@Reiki32/wh... [medium.com]

      Somebody here called AI code "review resistant" and that seems to be exactly what it is. Bloated, inefficient, insecure, misleading comments, but all gussied up to the max to look like a rockstar (well, a wannabe rockstar) coded it.

  • by fuzzyfuzzyfungus ( 1223518 ) on Saturday March 14, 2026 @12:30PM (#66041244) Journal
    There is, presumably, an amount of time savings where this could be justified (at least for things that you, ultimately, only do because they pay the bills; not ones of some intrinsic value); but it seems particularly grim to deal with the changed nature of the work for such paltry savings.

    Going from 'thinking about things you know about' to 'keeping a close eye on an erratic intern who can bullshit really fast' is a fairly dramatic downgrade in terms of the quality and apparent futility of what you are doing. At least junior people sometimes improve thanks to mentoring, even if it's not something you do specifically to save time in the immediate term. A relentless torrent of glib, dense bullshit, though, is hell compared to just doing it yourself; so the idea that you aren't even saving time by doing so is pretty grim.
    • This is part of why studies are showing a dulling of intellectual capability from use of LLMs. Doing verification of LLM output instead of thinking for oneself, even if only done some of the time, is harmful. Given how confident LLM-generated output reads, I'm not surprised by this at all.

      • by gweihir ( 88907 )

        Same. This may result in the loss of a major part of a whole generation of senior people. Which cannot be replaced easily and cannot be replaced at all if you lose too many.

    • by gweihir ( 88907 )

      There is, presumably, an amount of time savings where this could be justified (at least for things that you, ultimately, only do because they pay the bills; not ones of some intrinsic value); but it seems particularly grim to deal with the changed nature of the work for such paltry savings.

      Indeed. And also remember that the LLM companies are all losing money like crazy at this time. There was enough time now to do the obvious optimizations to bring the cost down. It has not happened. The easy stuff does not do it. But the problem with the harder stuff is that you cannot force it and have no clue when it will be happening. It may be 10 years, 100 years or even 1000 years before somebody figures out a way to make LLMs the needed 3 orders of magnitude cheaper to run and to train. It it may happe

  • by UnknowingFool ( 672806 ) on Saturday March 14, 2026 @12:34PM (#66041256)

    I've used AI to generate Python code. It saves me a little time as it gives me a template of what I wanted. The code, however, is almost never correct and most often does not run correctly. I still need to study the code and fix it. It's always wrong on tiny details. Like I had one best described as a shopping cart template: I have a list of things; compute a metric for each item; also compute a metric for the total of all items.

    Things AI code got wrong: Item = Array(i) is not the correct way to get an item in an array in Python. It is Item = Array[i]. The AI code did not understand variable persistence when it comes to loops. Defining Metric = [computation] only in a loop means that print(Metric) after the loop has ended at best gets the last value of the variable if it does not error out for being out of the loop.
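A minimal sketch of the two fixes described above (the cart prices and the metric are hypothetical stand-ins): index with items[i], not items(i), and accumulate the total inside the loop rather than relying on the loop-local value afterwards.

```python
# Sketch of the "shopping cart" pattern from the comment above, with both
# AI mistakes corrected. Prices and the metric are made-up placeholders.
items = [3.50, 12.00, 0.99]         # hypothetical cart prices

def metric(price):                  # hypothetical per-item metric
    return price * 1.1              # e.g. price with 10% tax

total = 0.0
for i in range(len(items)):
    item_metric = metric(items[i])  # brackets: items[i], NOT items(i)
    total += item_metric            # accumulate the total inside the loop
    print(f"item {i}: {item_metric:.2f}")

print(f"total: {total:.2f}")        # metric over all items, not just the last
```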

    • What is the downside of the incorrect syntax? No point worrying about it unless it's going to cause a problem, as long as it works. Generally () is just a direct replacement for [] anyway.

      • What is the downside of the incorrect syntax?

        Besides it not running? I think that is a huge downside.

        No point worrying about it unless it's going to cause a problem, as long as it works.

        Again, it not working correctly because it has the wrong syntax is a problem. Or not working at all.

        Generally () is just a direct replacement for [] anyway.

        Not in the languages that I am aware of: Python, Java, C, Rust, Go, Swift. Just those "obscure" languages.

        • Ok, well, I use AI extensively for Python and it always works for me, with some corrections.

          • You do whatever you want, but when you say that parentheses are the same as brackets in Python, I foresee many problems.
            • I didn't know in that case. All I was saying was that I have never personally seen AI make a syntax error like that. And I also said that I have never used Array in Python because there isn't much advantage to it.

              • To me it appears you don't know that much about programming, so maybe you would not know if AI made a syntax error. It was not that you didn't know specifically about Python syntax. It was your dismissal of the idea that syntax matters in programming. It is your declaration that parentheses are the same as brackets when they are not in most of the languages I know. Do what you want; just do not be surprised that AI gives you bad code.
                • I have always been a "it's good as long as it works" kind of guy. And I have been developing for over 40 years.

                  • Again, you don’t know it worked. You know you got a result. You could be getting the wrong results. Did you validate the code to check? No.
  • That's what it comes down to. When you start vibe-coding, you're no longer really coding, and you're not even really creating anymore.

    You're just editing. All you're doing is code reviews and quick bug fixes...and those tend to be my least favorite parts of my job.

    At least code-reviewing a junior developer, you're teaching, mentoring, instilling some new disciplines or expanding their horizons.

    There's no satisfaction in doing that to a bot. Especially because the next time it codes something for you, it is going to come up with something completely different as if the 'experience' you tried to give it doesn't matter anymore.

    Yeah, maybe it gets the job done...but I'm not in this to 'get the job done'. If this is what the job was or is going to become, then I'll quit, do my own coding on the side for open-source or other projects, and just make money as a substitute teacher... ...that is, if I didn't have to pay for health insurance, but America sucks in that regard and always will.

    • by dvice ( 6309704 )

      I only ask AI to write me throwaway code which I wouldn't write without AI at all. With production code, I don't ask AI to write it at all; instead I just ask it questions like: "I upgraded this library from version A to B and now things broke down, what could be the reason?". Another use case is to just paste an error message from logs into AI, and it can usually explain pretty well what is going on and how to fix it.

    • I bet people said similar things when high-level languages were replacing assembly.

      There is creativity -- you have to be creative in writing prompts.

      I do see an issue: the prompts are insufficient to reproduce the final output, but the prompts are the input to the process. We don't use a compiler to create assembly code, then try to maintain the assembly code while discarding the high-level language code. But vibe coding does something similar: discards the most abstract input, while keeping the output of t

  • The shit is really going to hit the fan when stockholders realize the C-suite can be replaced by Jen, Roy, Moss and Richmond. Stockholders should demand all management be replaced by AI. I think the most important employee at every company today is Richmond. Renholm Industries runs much more efficiently today because it is run by AI. Richmond monitors the mainframe where the AI lives. Richmond does not know what the lights mean, but he will notify someone if any lights stop working.
    • Haha! The grifting type is well known. Renholm Industries doesn't really produce anything. They are "investor herding"... a web-based slot machine. Of course a slot machine can be run by *.ai... in fact any good 19th-century Mechanical Turk can run a slot machine. Compare that valueless fintech crap with a cabinet maker's shop. Individually sourced raw wood : woodworking machines : German-made hand-tools : lathe to produce novel metal jigs as needed :
  • by Junta ( 36770 ) on Saturday March 14, 2026 @01:30PM (#66041340)

    So as presented it *sounds* like the typical respondent thought they spent about as much time as it saved and the numbers were fairly big.

    I'm skeptical that a survey of self-reported experience would manifest that way. I wager that some said 9 hours saved with barely anything needed and some said 9 hours of fixing the mistakes and no time saved, and that instead of '16 minutes per week', you just have slightly more people annoyed by it than enthusiastic about it.

    My guess is that it wildly varies by the job and situation. People for whom it works badly are looking around and wondering what the hell is wrong with the people advocating it, and the people for whom it works are amazed at the time savings; and given the current hype, the former people are forced to use it anyway, even as it is terrible.

    For example, a discussion arose where a developer mentioned they could make a quick tool to take care of something that was likely to be an issue for users, if folks wanted it. An executive declared "no need, I just had Claude make the tool for me in like 2 minutes, and I'll share it with everyone". That executive felt *incredibly* empowered: they didn't know how to code, and the tool and its results looked right in the AI-generated sandbox. Then folks tried the tool and it failed horribly, because it had no actual clue about the technology it was supposed to manage; it managed only to make a tool, and a sandbox for it to demo in, that was consistent with the desired narrative, with no whiff that the results had no relationship to the documentation site included in the executive's prompt. Further, if it *had* worked, it would have broken a number of security mechanisms based on how it *tried* to do things. However, the executive will never feel the consequences. They unleashed their 'awesome tool', and the grunts had to clean up the mess; and bringing that back up to executives to ask them to stop doing that stuff is a very risky move employment-wise, and they'll not believe you anyway. It's just a conspiracy to keep our jobs, after all...

    Even when it is being reasonably useful, it's annoyingly prone to mistakes, randomly screwing up even when it's being asked to do the same easy little thing it has done successfully the last 12 times in a row in other contexts. It leaves you feeling a bit gaslit as you see the internet gushing about how unbelievably, impossibly good Claude Opus 4.6 is, after you just spent a week of your work's budget on that premium model to see if at least *they* are on to something, and still found a pretty annoying experience while Anthropic is crowing that they have all but finished off the job of software developer.

  • YMMV, a lot!

    by LeDopore ( 898286 )

    A friend of mine is doing a hardware startup with 2 people that he says would have taken at least 3 extra coders pre-AI. He and his partner are doing the valuable EE and mechanical parts of the business. A lot of the software they need is not what's differentiating their startup, and AI is just fine. For them, it sounds like AI is giving a 150% productivity gain, and without it their business idea would only be marginally viable.

    • A friend of mine is doing a hardware startup with 2 people that he says...

      Q.E.D.

      There you have it, folks. Incontrovertible evidence that AI is every bit as wonderful as all the boosters say it is. Bite on that, haters!

  • Earlier studies found about the same. The only possible other gains (or losses) are via work quality and stress on the workers. Just for coding, this does not look good. Code becomes bloated, review resistant and probably unmaintainable pretty fast when coded by AI instead of humans. On the stress side, it does not look good either. Senior people are getting forced to review code that is hard and unpleasant to review. And, different to a junior coder, they cannot just explain how to write better code (and a

  • AI will become a net drag on overall productivity, driving it downward.

    I say this because I suspect AI will become the next equivalent to cat videos, social media, and doom scrolling. People will make goofy and/or shocking and/or distasteful videos just for fun, will play around with AI-generated music, will troll and attack each other with AI content, and will use AI as a quiz-book to dig up weird or shocking facts.

    The entertainment aspects of AI are being downplayed, but I think it will be the next crack-

  • by Chewbacon ( 797801 ) on Saturday March 14, 2026 @02:14PM (#66041416)

    My use case is reading and summarizing contracts and scientific articles. Now, while I'm spending time coming up with prompts and reading responses, I might as well read the stuff myself and become personally familiar with it. However, where AI helps me is conversing about it, like you would with a team. So if I'm between customers on the road, I can feed it into AI and ask questions in voice mode, then review the answers later. That's my gain, and arguably a lot more than 16 minutes.

  • In my latest few experiences, for example, a task that would have taken probably 5 hours of coding and testing took about 2.5 hours with Claude AI. It could have been faster, but I wanted to do a thorough code review to make sure all looks good. And the best news was that I only used prompts. So all in all it was a win. Now, the code review activity proved to be a pleasant experience, since I discovered not only that it was perfect, it followed the same code style, variable naming, etc. In a way it was lik

  • My own experience has been that I can use AI to develop materially better outputs than I could otherwise make without spending a lot longer on each task. And sometimes just materially better full stop, by treating AI as a thought partner. It's not perfect, but it can be pretty damn good.

  • But I want to know how many games of solitaire/FreeCell/Backbone it can win per second, let alone per week.

    Anyway, the story seems to be about a temporary problem. For now we still need humans checking the genAI work, but pretty soon that bottleneck will be eliminated. My own workaround is to check with a different genAI, but being careful to reword the question to bias it against the first solution. If it still comes up with the same answer, then I'm pretty (too?) likely to trust it.

    • Remember when Google actually worked well? Before they purposely degraded it so you increased your engagement with google?

      I'd say the AI searches way better, and it also doesn't suck up time with tangents like search naturally does, with tons of bad or poor results that can be tempting. Sure, the AI saves many hours over normal work, because normal work is loaded with distractions and the AI is razor sharp in its focus, more than search ever was, even at its peak.

      Give me an AI search that doesn

      • by shanen ( 462549 )

        Sorry, but I think you're just imagining that they won't start blending the advertising into the AI results. To the contrary, I think the increasingly evil google will notice the market value of ads that aren't even visible as such and the AI search results will soon be much more subtly polluted than we mere humans can even understand.

        But if they (the google folks or AIs) do find an alternative business model, it will probably be even more evil than that.

        Gimme funny?

  • by kTag ( 24819 )

    Listen, I'm an OG here. I became rich enough to stop working if I wanted to thanks to Slashdot's fact-based coverage of Bitcoin around 13 years ago. This community used to embrace innovation. Now we see these crappy stories from random sites trying to minimize the impact of AI in this world. Thanks to AI I can build and automate data security processes in a few days instead of months. If you are scared to lose your job, maybe you should stop working like a robot. And yes, I might lose mine, but I still th

  • We developers always provide estimates based on the amount of time we think it will take to write the actual code. We forget that writing code is maybe 25% of the time required for the full SDLC for that code. It's part of why developers suck at estimating time, and why we have story points.

    AI speeds up that code-typing part. So to begin with, it's speeding up the part that's already the fastest. Worse, a lot of that typing often leads to weird bugs that a human wouldn't have caused, leading to more debugging.

  • There are people here and everywhere who claim great benefits and those who claim LLMs absorb them into review mode. So major gains versus a wash.

    I think both can be true. LLMs are trained mostly on internet data, especially for technology. Which means they can mimic that level. They mimic some kind of average.

    So if you're doing something that you don't know anything about, but the net has loads of examples for, your acceleration by using an LLM will be grand.

    And if you're a specialist that spot
