AI Can Technically Perform 12% of US Labor Market's Wage Value, MIT Simulation Finds (cnbc.com) 70

Researchers at MIT and Oak Ridge National Laboratory have built a simulation that models all 151 million American workers and their skills, then maps those skills against the capabilities of over 13,000 AI tools currently in production to see where the two overlap. The answer, according to their analysis: 11.7% of the US labor market's total wage value, or about $1.2 trillion, sits in tasks that AI systems can technically perform [PDF].
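
In outline, the arithmetic behind that headline number is straightforward: estimate, for each occupation, what fraction of its tasks falls within current AI capabilities, weight that fraction by the occupation's wage bill, and divide by the economy-wide wage bill. Here is a minimal sketch of that calculation in Python (not the researchers' actual pipeline; the occupations, wage figures, and coverage fractions below are invented purely for illustration):

    # Hypothetical sketch of a wage-weighted exposure index.
    # Occupations, wage bills (in $B), and AI coverage fractions are invented.
    occupations = {
        # name: (total_wages_in_billions, fraction_of_tasks_ai_can_do)
        "software developers":     (430.0, 0.35),
        "insurance claims clerks": (18.0, 0.60),
        "registered nurses":       (390.0, 0.08),
        "truck drivers":           (150.0, 0.05),
    }

    exposed = sum(wages * frac for wages, frac in occupations.values())
    total = sum(wages for wages, _ in occupations.values())

    print(f"Exposed wage value: ${exposed:.0f}B of ${total:.0f}B "
          f"({100 * exposed / total:.1f}%)")

All of the difficulty hides in those coverage fractions, which is what the skills-matching simulation is actually for.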

The researchers call this the Iceberg Index, and the name is deliberate. The visible AI disruption happening in tech jobs right now accounts for only 2.2% of labor market wage value. The remaining exposure lurks in cognitive and administrative work across finance, healthcare administration, and professional services, and unlike tech-sector disruption, it's spread across all fifty states rather than concentrated on the coasts.

Delaware and South Dakota show higher Iceberg Index values than California because their economies lean heavily on administrative and financial work. Ohio and Tennessee register modest tech-sector exposure but substantial hidden risk in the white-collar functions that support their manufacturing bases.

To validate the framework, the researchers compared their predictions against Anthropic's Economic Index tracking real-world AI usage from millions of Claude users. The two measures agreed on state categorizations 69% of the time, with particularly strong alignment at the extremes.

The Iceberg Index doesn't predict job losses or adoption timelines. It measures technical capability, the overlap between what AI can do and what occupations require. Traditional economic indicators like GDP and unemployment explain less than five percent of the variation in this skill-based exposure, which is partly why the researchers argue workforce planners need new metrics.


Comments Filter:
  • The AI bubble (Score:1, Interesting)

    by Luckyo ( 1726890 )

    This is why everyone and their grandmother is all in on AI. Its adoption lags for the sole reason of "people haven't caught up with what it can do, and learned how to let it do it".

    Whatever investment is currently in AI is dwarfed by the value it can already technically deliver, as long as people actually integrate it. It's the "get people to integrate it" part that is the choke point.

    Meanwhile, rapidly advancing AI and robotics are constantly increasing the amount of work they can perform adequately (about as wel

    • Re: The AI bubble (Score:1, Informative)

      by ByTor-2112 ( 313205 )

      None of the "models" can reliably and repeatably do what they are claimed to do. Humans are still the only reliable choice, and there is absolutely no evidence that this will not be the case for the foreseeable future.

      • Re: The AI bubble (Score:5, Insightful)

        by TheGratefulNet ( 143330 ) on Thursday November 27, 2025 @09:34AM (#65821169)

        the hunger by the 1% to remove as much humanity from the workplace is sickening.

        they fully know they are destroying the middle and lower classes (even more than they already have).

        they, like the R party, just don't care. they think they are rich and insulated enough. they never cared what their own people need. the 'let them eat cake' time has come back again, but even worse.

        there will be no thought to social systems needed to support the unemployed (which will be many of us, given enough time).

        I'm glad I'm retiring soon. I would not want to compete in a job market that bosses think can be done by computer, alone.

        and I would not want to be the 'prompt meister' trying to coax answers that make sense out of the machines.

        some see a great future with AI. I see nothing but doom and gloom. the greed factor is strong in humans and the class disparity will cause rioting and civil wars.

        maybe not wars. the US has created a special police force that is above the law, so any uprisings will EASILY be dealt with. they thought about that. ICE is not just for foreigners. it's a general-purpose police force answerable only to one person.

        people, please show me I'm wrong. but all signs point to a very bad future for 95% of the 'thinks for a living' workforce.

        • the hunger by the 1% to remove as much humanity from the workplace is sickening.

          To be fair, this is nothing new. The entirety of the industrial revolution has been finding ways to use automation to decrease the amount of human labor used to make things (i.e., increase "productivity".)

          The problem is that we do not have an economic system in which a society works when there is no need for human labor, and a small but rich fraction of the population owns the machinery that produces everything.

          • by 0123456 ( 636235 )

            We know what will happen in a world where there is no need for human labour. The 'elite' will build Terminators to eliminate most of the humans.

            They are not going to pay for billions of people to sit around watching porn all day.

            • It's also possible that the global population will shrink in response to not needing as many offspring to do things like working the farm.
            • We know what will happen in a world where there is no need for human labour. The 'elite' will build Terminators to eliminate most of the humans.

              I doubt it. The elites have capital to invest in the means of production. Absent this they have no real power or purpose.

              When you no longer need to pay countless thousands of people to perform a task, this is a double-edged sword. It not only means you can make do with less, it means anyone else can step in and accomplish the same tasks without you.

          • We do. It's got to do with socialising the means of production, and the main reason it hasn't been done lately is that it would require some literal wading in the blood of the oppressors, as they'll be better protected than Louis XIV and his pals. As soon as the masses really have nothing left to lose, however, that won't help.

      • Reliable and repeatable isn't the goal. The goal is to be able to verify a solution faster than one could create it....almost a P vs NP thing.
      • by gweihir ( 88907 )

        Indeed. But the proponents of the hype are not rational.

    • To me, AI is like the discovery of the electric motor. It can affect everything across our entire life. However, the key is how it is used. When the electric motor was invented, people didn't tout that it would make people turning wrenches obsolete. It was made as a very useful tool for stuff at the time, and was a huge improvement in factories from the old "big steam motor in the basement, with tons of gears and belts...".

      We also have had AI for decades. Genetic algorithms, fuzzy logic, etc. It is on

      • by Luckyo ( 1726890 )

        >However, it isn't perfect.

        Perfect is the enemy of good.

        Notably, this is also the difference between engineers and scientists. The latter live in the world of perfect abstraction. The former live in the real world, where perfect abstraction is exceedingly counterproductive and good enough is the gold standard.

    • by Anonymous Coward
      The #1 wet dream of business is to eliminate all workers and replace them with automation/computers/A.I., but no one is bothering to consider the most important question:

      After you eliminate all the workers, and you have 95% unemployment, who will buy your products?

      Keep in mind, any new jobs created by A.I. will eventually become a target for elimination as part of the drive for more "efficiency". But the people pushing this don't care. -- "I got mine. Fuck the rest of y'all".
    • This is why everyone and their grandmother is all in on AI. Its adoption lags for the sole reason of "people haven't caught up with what it can do, and learned how to let it do it".

      I really want you to explain why you know better than the MIT researchers quoted in the summary who determined you are wrong. Most jobs can't be replaced by current LLMs; that's what they found. Why do you disagree?

  • 20% (Score:5, Informative)

    by groobly ( 6155920 ) on Thursday November 27, 2025 @09:26AM (#65821149)

    20%: The amount of their time actual people will have to spend to fix the mistakes made by AIs doing 12% of the work.

    • There's a few restaurants in town that have those little robot waiters. And they *all* have a human following them around.
      • they *all* have a human following them around

        So do dogs in the park, usually armed with a poopie bag. This is our future.

        • At least my dog has feelings; his affection is real. I doubt people will love robots the way they love pets. No one needs a sycophantic faker in their life.

  • by simlox ( 6576120 ) on Thursday November 27, 2025 @09:36AM (#65821175)
    But I still have to try the solutions out and get them into my code. All in all, I save time because I don't have to look up solutions around the web myself. But I always have to code it myself, except in one case where I got it to translate existing (obscure) code to Python - but not error free either, it turned out.
    • I've recently been using ChatGPT for assistance with TypeScript and CSS for a work SPA project. One time I asked it if a very small function - only 5 or 6 lines - was the most efficient, idiomatic way to do a thing. It gave me a response, then asked if I wanted an even "more efficient" way - which, duh, of course I do, that's what I asked. The "more efficient" way was a literal copy of my original function.

      I despise CSS so much that I'm sure I will continue to use LLMs to tell me why a layout isn't working,

    • I'm not a programmer, but anytime I program it's nice to look at examples, sometimes even copy sections, but it seems like the base and most of the program structure and idea has to come from a programmer's head or it's going to be not much better than nonsense strewn together. Vibe coding may have a place with people that were never going to be programmers to begin with, but why would a company or project want someone that just copies and pastes from some questionable computer suggestion machine? If anything LLMs
  • AI is only able to do 12% of digitized work. Recently I bought a tablet with a stylus. I'm doing a course so I made notes. There seems to be no AI that can pick out even one word from my notes. I think we are forgetting that humans occasionally need to interact with things that humans have not already encoded into 1s and 0s. Furthermore, AI is only good if it can work with digitized information.
  • by jlowery ( 47102 ) on Thursday November 27, 2025 @10:16AM (#65821237)

    I have some medium-sized open source projects that I write or contribute to on github.

    1) my daughter's form-based web project--loads of content, loads of pages with various inputs. She had only partial content, and ChatGPT took it upon itself to fill in the rest. The content was quite good. I never did get satisfactory layout, however.

    2) refactoring of a god-object from another repo. It took a while, but Claude got there much faster than I would have. Added additional functionality at my request. Now published and working. Generated documentation of classes and methods.

    3) conversion of a JSX website to TSX (JavaScript to TypeScript). I figured this would be a disaster, but nope. Claude did the conversion (about 50 components and additional methods) in a few days of working with it. Also, all documentation, including release notes. And tests.

    My experience:

    1) treat the agent like a talented junior developer who is very fast and quite thorough.
    2) it will get confused and forget things. An instruction.md file really helps to prevent regressions
    3) it will get stuck in loops and go down rabbit holes at times. Test and commit often so you can roll back breakage (a small sketch of that loop follows this list)
    4) proceed incrementally where possible. Small, discrete steps work best
    5) ask the agent to analyse/explain before doing
    6) don't be afraid to ask it for suggestions; they can be quite good (it did a nice job improving the layout and color scheme of my website, for instance).
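
    For point 3, this is roughly the kind of test-and-commit loop I mean (a hypothetical helper script, assuming pytest and git are on the PATH; the file name and messages are placeholders):

      # checkpoint.py - hypothetical helper: run the test suite and commit only when
      # it passes, so agent-introduced breakage never lands in history and is easy to
      # roll back.
      import subprocess
      import sys

      def checkpoint(message: str) -> None:
          # Run the project's tests; bail out without committing on any failure.
          if subprocess.run(["pytest", "-q"]).returncode != 0:
              sys.exit("Tests failed -- not committing. Fix it, or 'git checkout .' to roll back.")
          # Tests pass: snapshot the working tree.
          subprocess.run(["git", "add", "-A"], check=True)
          subprocess.run(["git", "commit", "-m", message], check=True)

      if __name__ == "__main__":
          checkpoint(sys.argv[1] if len(sys.argv) > 1 else "agent checkpoint")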

    • Great tips. After a couple months of pretending to be a software engineer, I've come to nearly the same conclusions, especially #1. The only thing AI is going to replace in the short term is junior devs (which is going to create a lack of experienced devs in the future). The tools cannot just magically create good software from nothing, but they are the best damn auto-complete you can imagine. This is already proving to be a game changer for experienced devs who know what they want and know what 'good' looks
    • That's quite a few caveats for something taking over the world.

      How good will it be at modifying it 6 months from now to add a feature or ferreting out a bug?
      My guess is it's even odds that some of its attempts will utterly destroy it.

      So yes, keep doing frequent commits.

    • by Alinabi ( 464689 )
      This sounds like significantly more effort than writing the code by hand.
    • 1) treat the agent like a talented junior developer who is very fast and quite thorough.

      Every time I see this advice, I can't help but think the person giving the advice hasn't worked with very many junior developers.

  • AI has some uses, but I feel it's still at the retarded child phase. Like any child, I wouldn't trust it to function on its own without supervision. Since it's also retarded, it actually requires extra supervision, since when AI freaks out it will really shit the bed and create a horrible mess that humans will need to spend triple the time (or more) to clean up....
  • If we'd just kill everybody, 100% of the labor market's value could be replaced by no people at all. Isn't that amazing? It is also, by all legitimate logic and rules of extrapolation and deduction, the ultimate goal of our economic and political system. I don't want to say I am a fucking prophet or something, but I think THE END IS NIGH!

  • by Z80a ( 971949 ) on Thursday November 27, 2025 @10:55AM (#65821279)

    As long as the AI thinks it is always right and never self-checks or self-corrects, it's not truly "ready" for anything.
    Especially when the failure mode is "as close as possible to the right answer," because it works by "the statistically most probable answer."
    It's like a digital Vegeta, but even worse.

  • by MpVpRb ( 1423381 ) on Thursday November 27, 2025 @10:57AM (#65821281)

    In practice, it's more likely that the best workers will use AI to become more productive, while carefully reviewing its work and correcting it if needed.
    Yes, this will reduce the need for some poorer-quality workers, but it's silly to believe that today's AI can operate unsupervised.

  • by whitroth ( 9367 ) <whitrothNO@SPAM5-cent.us> on Thursday November 27, 2025 @11:15AM (#65821303) Homepage

    ... of people now working and earning a living (and, to be real, it's the US) and having medical coverage can all be put out of work, evicted, to die under bridges, while the CEOs increase their ROI.

    What have I missed?

    • So 12% ... of people now working and earning a living (and, to be real, it's the US) and having medical coverage can all be put out of work, evicted, to die under bridges, while the CEOs increase their ROI.

      What have I missed?

      You missed how the 12% was applied. Let's try again:

      > 11.7% of the US labor market's total wage value, or about $1.2 trillion, sits in tasks that AI systems can technically perform.

      It's not 12% of people, it's 12% of work. So maybe 5% of your work but 19% of someone else's work ... which averages to 12% (or 11.7% according to their calculations). Now can a company shift things around so that whole people can be let go? Yes, and I'm sure some jerk companies will do that, but it's not an automatic thing. Ima

    • What have I missed?

      That it is illegal to die under a bridge. (Think about the entire situation, not merely the quick answer; I am saying far more than it appears I am, but it would take paragraphs to fully write.)

  • It's about to replace all porn actresses as well.

  • by Somervillain ( 4719341 ) on Thursday November 27, 2025 @11:58AM (#65821353)
    As a daily user of these LLM tools, I'm confident they greatly overestimated the capabilities of the tools, and there's no way this model is accurate or comprehensive enough to factor in the various complexities of all these jobs. If your value-add was low enough that ChatGPT or Claude can fully replace you, then your job was automated already, or at the very least shipped off overseas to the lowest bidder. It is fair to say that if this AI actually worked, businesses could put more duties on a single employee, so departments of 12 could be reduced to 8, for example. However, it's not eliminating the job. It could theoretically slow expansion of headcount as well.

    However, I am EXTREMELY skeptical of any business that says they replaced humans with AI. It's basically unheard of to quickly cut headcount based on productivity gains. Historically, when you introduce automation, it works in tandem with your humans to increase output. Marc Benioff is the poster boy of this fraud, as he's the first major CEO I know of to brag about laying off people because of his mystery AI he refuses to demo. Traditionally, someone in his shoes would keep the employees and either expand the scope of his offering or ship things faster. Almost all publicly traded businesses prefer growth over cost savings, and I am skeptical that salesforce.com has saturated every market it is in and has no more growth opportunity. He clearly overhired and is cutting costs, but outright lying to investors.

    Sci-fi AI? Yeah, Jarvis, the Matrix, HAL 9000, Mother (Alien), etc. could really take a chunk out of the labor market. ChatGPT? No fucking way. This study is just enabling this fraud.

    However, if these garbage LLM tools ACTUALLY worked, most employees wouldn't be replaced. They'd find themselves doing the same job, but with much, much greater scope... for example, front-end developers would be expected to be full-stack developers... but really, almost no software engineer is a pure coder. We're problem solvers. Writing Java? Really fucking easy. It's my full-time job... it takes about 10% of my time, at most. The other 90% is high-level design, figuring out requirements, and troubleshooting issues... things AI either absolutely cannot do or is piss-poor at... and given the stakes, no one is going to vibe code their way through a production issue when billions of dollars of revenue are on the line. Eventually? Yeah, they'll cut headcount to account for the productivity gains, but almost no business in the world prefers cost saving to growth. Most want to get as much growth as fast as they can and worry about efficiency later. Most would rather crush their rivals than do exactly what they did last year, only 11% cheaper.
    • If using AI means a department can be reduced from 12 to 8 while retaining the same efficiency, then 4 jobs were just lost. So yes, AI is costing jobs.

    • I love all the YouTube clickbait videos titled "these jobs won't exist in 24 months". Yeah, right. If an entire class of job were going to "not exist" in 24 months, it would be painfully obvious as businesses began testing the alternative in advance of eliminations. Which of course would not happen like flipping a switch; we would be seeing 10% or more losses now, with measurable increases every month.

      LLMs excel at generating large volumes of text that appears moderately plausible and coherent. This is litera

  • So they didn't have AI actually do any of those tasks; they said AI *could* do those tasks by looking at the task, looking at AI, and saying "Yup!".

    AI probably can't do those tasks, or people would be replaced already.

    This is a cheerleading study on AI meant to drive more investment, that's all.

    • Not necessarily. Ever go to an office with a lot of 50+ aged people who are using 30-year-old tech because it still works? Yeah, I've seen that enough times. The old people will dig in and fight change because that requires learning something new.

      Today's kids who will be in the workforce in 10-15 years will have been born with a cellphone in hand, and AI will be something they've used for their entire education. Think how much further AI will get in 10-15 years. Yeah, we're screwed if we d

      • And then when things break, "today's kids" will be stuck and not know what to do, because they won't understand the real process since "AI" handled it all. Honestly, the better place to land is somewhere between those two extremes.

  • Today is the worst AI will ever be. It keeps improving. What's more, the improvement jump of Gemini 3 shows the improvements are not plateauing. In fact, a few more jumps like that put AI in reach of the holy grail of AGI. Furthermore, the robots are just now coming online. Sure, they will suck at first, but they will improve fast and threaten physical jobs.

    It is good to be skeptical but stupid to ignore trends - and, most important, it prevents you from thinking of: what if they are right? If massive job loss doe
    • Care to share the composition of your current investment portfolio?

      • Ha ha, as if a Slashdot pump would actually move the stock market. But if we read your comment another way, it does make a good suggestion: perhaps start a sovereign wealth fund, as currently only 65 million US individuals have IRAs.
    • How do you measure these improvements in a quantifiable manner to gauge whether or not they are plateauing?

      Isn't it all pure speculation that improvements are exponential, just like it's pure speculation that "intelligence" is some kind of emergent property of an LLM?

  • by larryjoe ( 135075 ) on Thursday November 27, 2025 @12:46PM (#65821425)

    What does "technically perform" mean? Depending on that definition, the report could be significant or completely irrelevant. How well is a task done, what is the probability of failure, and what is the impact of the task (in terms of safety and significance)? For example, Tesla FSD could be labeled as technically performant. However, there are scenarios with a more than insignificant probability of failure. That's why FSD is legally Level 2, even though it is technically performant more than 99% of the time.

    The report doesn't even come close to providing insight into these pivotal questions. Furthermore, the O*NET database that forms the basis of the framework for skills evaluation contains short descriptions that are open to wide interpretation, which likely means that the conclusions of the report are heavily dependent on the evaluation abilities and biases of the researchers.
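
    To make the interpretation problem concrete, here is a toy sketch (entirely invented descriptions, and not the report's method) of how a naive word-overlap match between an O*NET-style task description and a tool's capability blurb swings on nothing more than a threshold choice:

      # Toy illustration, not the report's method: score how well a tool blurb "covers"
      # a short O*NET-style task description using word overlap (Jaccard similarity),
      # then watch the verdict flip with the cutoff. All text below is invented.
      def jaccard(a: str, b: str) -> float:
          wa, wb = set(a.lower().split()), set(b.lower().split())
          return len(wa & wb) / len(wa | wb)

      task = "prepare and review financial reports for accuracy and compliance"
      tool = "drafts financial reports and checks documents for compliance issues"

      score = jaccard(task, tool)  # about 0.42 for these two sentences
      for cutoff in (0.3, 0.45, 0.6):
          verdict = "AI can technically perform" if score >= cutoff else "not covered"
          print(f"overlap={score:.2f}, cutoff={cutoff}: {verdict}")

    Same two sentences, opposite conclusions, depending only on where the line is drawn. A fancier matching model hides that judgment call; it does not remove it.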

  • AI provides no value to humanity.
    • by sarren1901 ( 5415506 ) on Thursday November 27, 2025 @02:21PM (#65821613)

      That's a silly comment. If AI could in fact do everything, it would provide great utility for humanity. The problem is not our technological advancement; it's our social stagnation. If machines can liberate us all from having to go to work, we could literally sit back and work on passion projects, or do other stuff.

      The only reason AI is scary is most of us (in the USA) labor under the impression that you have to work to eat. Well, that's only true if you let the "owners" of the AI fuck society. Of course, it sure as heck looks like that's precisely what we are going to do, but it doesn't have to be that way.

      Other societies, such as the Chinese, may very well show us how a society can offer a good life to its people while letting machines do most of the work. Over here in the USA, we'll be killing each other while Musk, Bezos, Gates, etc., sit back in their fortified castles with AI-drone-controlled terminators protecting them.

      So to reiterate, the problem is not the technology. It's our social norms.

  • There are whole industries that were slaughtered by the personal computer revolution. I watched it happen from the beginning. There were a ton of naysayers back then, too, and they were all wrong. Then it happened again with the internet, yet many believe it can't happen again.

    The failure of imagination around here is stunning.

    Those of you who think the internet etc. created more and/or better jobs to make up the difference, you must explain why younger Millennials and all Zoomers are so fucked and ha
    • I don't think AI is causing too much of that yet. What we see now is more a result of the boomers borrowing all the subsequent generations into permanent debt slavery so they could give their money to billionaires.
    • Those of you who think the internet etc. created more and/or better jobs to make up the difference, you must explain why younger Millennials and all Zoomers are so fucked and have stagnated in the increasingly squeezed "gig economy".

      No, you need to explain how the internet or computers cause those problems. Do you really think the internet or computers are the cause of high housing prices or expensive health care or over-priced higher education or ... ?

      Let me suggest that the problem for a lot of young people is that they were repeatedly told a higher education is a guaranteed meal ticket. They borrowed a lot of money to get that higher education. Now, they have discovered their expensive education not only isn't a meal ticket but c

  • ... unfucking what the ai fucked up.
  • Did they actually assess what *my* skills and *your* skills are, or what skills we use in our jobs? No, of course not, they used representative templates. They generalized what they thought a fast food worker's skills were, or a teacher's skills, or an engineer's skills. And my guess is that they filled in a lot of details using...AI.

    I have serious questions about the quality of the input data they used for this analysis. I suspect they missed a few hundred million skills.

  • Of course AI can replace some of the work people currently do. Other jobs will be created to replace them. One of the jobs that AI can easily do is create a model that produces some interesting result and develop a plan for promoting it as a reliable and accurate representation of reality.

    The real question is how much value the work AI can do will have. It used to be that fast, accurate typing had considerable value as a job. It was replaced by computers. But producing a typewritten page no longer has much valu
