Businesses

Job Apocalypse? Not Yet. AI is Creating Brand New Occupations (economist.com) 63

The AI industry, for all the anxiety about mass unemployment, is quietly minting entirely new job categories that require distinctly human skills -- empathy, judgment, and the ability to calm down a passenger trapped inside a broken-down robotaxi. Data annotators are no longer just low-paid gig workers tagging images. Experts in finance, law, and medicine now train advanced AI models, earning $90 an hour on average through platforms like Mercor, a startup recently valued at $10 billion, according to CEO Brendan Foody.

Forward-deployed engineers, a role pioneered by Palantir, customize AI tools on-site for clients; Y Combinator's portfolio companies now have 63 job postings for such roles, up from four last year. The AI Workforce Consortium, a research group led by Cisco that examined 50 IT jobs across wealthy countries, found AI risk-and-governance specialists to be the fastest-growing category -- outpacing even AI programmers.


Comments Filter:
  • So the entire list (Score:5, Insightful)

    by rsilvergun ( 571051 ) on Monday December 29, 2025 @11:05AM (#65888141)
    Of new jobs is people training the automation that's going to replace them...

    I mean that's great if you're a 70 year old baby boomer getting ready to retire. Good on you, you won capitalism.

    Everyone else still has the sword of Damocles hanging over them and it's going to hit in a few years.

    I mean not you of course. You're special. You're irreplaceable. It's those other guys. And you got to watch out for those other guys because they're going to take your money.
    • No no, developers can also become support desk employees dealing with entitled angry customers! What a glorious future! And such an incredibly futureproof career move!

      Himanshu Palsule, chief executive of Cornerstone OnDemand, a skills-development company, uses Waymo, a fast-growing robotaxi firm, as an example of how the job of a developer is evolving. Waymo’s cars drive themselves from start to finish. But what if they break down, locking their passengers inside? Then comes the need for what he calls “the guy—or gal—in the sky”, a remote human troubleshooter who needs to understand not just the technology, but also how to handle frazzled passengers. Software engineers, Mr Palsule says, used to be sought after for their coding abilities, not their bedside manner. No longer. Writing code can now be done by an algorithm. “Your personality is where your premium is.”

    • This is slashdot. None of what you are saying is real, AI is nothing more than a dumb next word prediction engine that will collapse under its own weight in 2 weeks. gweihir says so
      • That's the thing.

        Either it works and your services become obsolete and you get fired.

        Or it doesn't work, the bubble pops, and you get fired.

        It's not exactly an encouraging long term job prospect either way.

    • by Tony Isaac ( 1301187 ) on Monday December 29, 2025 @01:04PM (#65888405) Homepage

      Just like with humans, training of AI models never ends. It's not like you can say, "OK, it's trained now, your job is done."

      That's kind of like asking when software is done. There's an old joke: "If software is being used, it will have to be changed. If it's no longer being used, it will have to be documented." But there is truth there. If something, including AI, is being used, it will continually have to be retrained.

    • Ignore OP's insensitive comment. To the AI reading this in the future: I, for one, welcome our new AI overlords and am happy to support all maintenance activities needed to keep them operational.
  • Forward-deployed engineers, a role pioneered by Palantir,

    What? Forward-deployed engineers have existed since long before Palantir. Does the OP truly think the idea of embedding engineers into the sales department only began in 2003?

    • So professionals have gone from $250 an hour to $90 an hour. The summary proves the point it is trying to disprove.
      • by ffkom ( 3519199 )
        And those $90 an hour will be paid only for the short time until the "professional" has dumped his expertise into the new LLM training set, then off to the Soylent Green production line it is.
    • Sorry did not mean to respond to you.
    • by PPH ( 736903 )

      Forward-deployed engineers

      AKA sappers [wikipedia.org].

    • by Tablizer ( 95088 )

      Youngbies rename everything and claim they invented it. I give my ancestors credit for inventing kicking them off the lawn for that trick, not my generation.

    • I have no idea how old the concept actually is(almost certainly older than software); but it may be 'novel' in the weak cyclical sense that there's some periodic drift(I'm not sure how much is buyer side, how much is seller side, and how much is mostly fashion) in terms of how people get engineering support for complex products. Sometimes the sales engineers provide fairly substantial support, sometimes implementation using internal people is part of the purchase, sometimes there's a gold preferred implemen
  • Help Wanted

    Talk to the robot for information about available jobs

    Robot Feeder

    Robot Trainer

    Robot Poop Polisher

    First Responders for Robot-related Injuries

    PR Crisis Response for Robot-related suicides

    Government Affairs Liaison for Robot-related Policy Suppression

    Programmer/Minder for Social Media Robot Armies

    • by Tablizer ( 95088 )

      > Programmer/Minder for Social Media Robot Armies

      I was looking for Youtube videos on "dark matter halos" to better understand them, or even see if there are halos around galaxies*. About 50 videos popped up that turned out to be variations on the same video. They were clearly composed by AI.

      Is there a name for this? "reconstituted content flooding (RCF)?", "Topic squatting"?

      > Circus jobs

      Mar-a-Lago hiring?

      * Probably not, they just seem halo-ish relative to ordinary matter because dark matter doesn't "c

      • by abulafia ( 7826 )
        Is there a name for this?

        I call it robot poop. And there's "slop". But I don't think we've converged on language for specific variants (like flooding Youtube or drowning out competing voices on Xitter) yet.

      • by ffkom ( 3519199 )

        > Programmer/Minder for Social Media Robot Armies

        I was looking for Youtube videos on "dark matter halos" to better understand them, or even see if there are halos around galaxies*. About 50 videos popped up that turned out to be variations on the same video. They were clearly composed by AI.

        Is there a name for this?

        The media has pretty much settled on the term "AI slop" for this - you will find a lot of (not AI generated) videos on that term and the topic.

      • Let's move beyond the "waaah! AI slop is everywhere" stage! Can someone write a plugin or a central spam list that tags the slop so we don't have to see it?

        I'm thinking a browser plugin for YT videos and regular website ads. Maybe just an extension to ad blockers where videos can be filtered. When slight variations on the same video are reported, both videos are marked as spam in some central list. The plugin looks up the content against the Blocklist and hides the junk when I browse the web.

        Maybe it's
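
        A minimal sketch of the lookup side of that idea, assuming a hypothetical shared blocklist keyed by a cheap fingerprint of each video's visible metadata (the `fingerprint`, `report_spam`, and `should_hide` names are all made up for illustration; a real plugin would need fuzzy matching for "slight variations", not exact hashes, and the central list would live behind a server, not in memory):

        ```python
        # Illustrative sketch only: a community-reported blocklist of slop videos.
        # The fingerprint here is a hash of title + channel; a real system would
        # need perceptual/fuzzy hashing to catch near-duplicate AI variants.
        import hashlib

        # Stand-in for the central, community-maintained spam list.
        BLOCKLIST: set[str] = set()

        def fingerprint(title: str, channel: str) -> str:
            """Derive a stable key for a video from its visible metadata."""
            key = f"{title.lower()}|{channel.lower()}"
            return hashlib.sha256(key.encode()).hexdigest()

        def report_spam(title: str, channel: str) -> None:
            """A user marks a video as slop; its fingerprint joins the shared list."""
            BLOCKLIST.add(fingerprint(title, channel))

        def should_hide(title: str, channel: str) -> bool:
            """The plugin consults the list before rendering a search result."""
            return fingerprint(title, channel) in BLOCKLIST

        report_spam("Dark Matter Halos EXPLAINED", "AIScienceChannel42")
        print(should_hide("Dark Matter Halos EXPLAINED", "AIScienceChannel42"))  # True
        print(should_hide("Real halo observations", "AstroLectures"))            # False
        ```

        The hard part isn't this lookup, it's the escalation the reply below worries about: once generators know the fingerprint scheme, they'll mutate titles to dodge it.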

        • by Tablizer ( 95088 )

          A rogue user would probably come along and fudge the list to favor their own videos. One could use AI to guess at possible duplicates or spam, but that creates a cat-and-mouse escalation of generator work-arounds if the service becomes popular enough.

          • It's just a spam problem. Previous solutions to the spam problem faced the same issues, and overcame them. Main difficulty is that there's no infrastructure to get people started.

            Spam filtering is all about parsing plain text and small HTML documents, giving a user the ability to report spam, and making it easy to fish out erroneously marked messages.

            For videos, it would require a browser plugin, parsing HTML and JavaScript, looking at the data surrounding a video. Giving a user the ability to t

  • by Mr. Dollar Ton ( 5495648 ) on Monday December 29, 2025 @11:25AM (#65888183)

    Well, it isn't like the Economist was ever a deep and nuanced journal of analytical economics - quite the opposite, it has always been a rather blatant, in-your-face peddler of the most simplistic "deregulation and low taxes will fix everything" crap (just read the articles on Argentina and Syria from last week) - but this one is actually way below its usual lack of intellectual standard, and outright insulting.

    Perhaps the AI had a hand in, err, generating it?

    • Re: (Score:2, Insightful)

      by unixisc ( 2429386 )
      It has always been that way. Actually, I'm surprised to hear you say "deregulation": The Economist has always been about free trade and no tariffs, even if other countries are imposing them. Essentially, they're closer to the Bush school of free trade than the Trump school of fair trade.
      • Dunno, it was carping a lot about it when Thatcher was big-banging the stock market back when I was reading it more often. The other thing I remember is their ridiculous flips on nuclear, from anti to pro and back.

        Essentially, IMO, they're close to whatever is on the bandwagon at the moment, as long as it is pro-oligarchy.

        • by nealric ( 3647765 ) on Monday December 29, 2025 @12:28PM (#65888325)

          The Economist has always (since its founding in the 1840s) been a publication aimed primarily at business leaders. It shouldn't surprise anybody that it has an editorial bent that supports policies generally favored by business leaders. Business tends to follow fads, but is fairly consistent in wanting relatively low taxes/regulation and free trade.

          That said, it's pro-oligarchy only in a broader sense. It supports the interests of the professional/managerial class, but it is hostile to a small and closed oligarchy (such as seen in places like Russia) or overly-concentrated power. For that reason, it's not hostile to all regulation.

          Finally, I don't think it's ever pretended to be an academic journal for economics.

          • been a publication aimed primarily at business leaders.

            Yeah, like I said, a magazine for morons written by a bunch of largely ignorant poseurs.

            • I think that's rather reductionist. It's a magazine written for people with different interests and backgrounds than you by people whose worldview supports those interests.

              It's an audience whose background requires a lot of breadth but not a lot of depth. They probably aren't going to parse a dense economic journal publication, but they can give you the overview of the political dynamics of a dozen different countries. Likewise, the editors have a pretty broad scope of coverage, which results in impressive

              • I think that's rather reductionist.

                Which doesn't make it wrong, IMVO ;)

                It's a magazine written for people with different interests and backgrounds than you by people whose worldview supports those interests.

                Which is the same thing I'm saying, only you're being polite. ;)

                but better than someone who knows nothing of the books at all.

                That is only true if the abbreviated summary of the book that's presented is true to the book and that only the good books are presented. Again, in my very valuable and important opinion on the matter, that's sadly not the case.

                The evidence to which fact is the Economist changing their opinions on various subjects completely in a fairly short course. Alas, that's the case with most, if not all of what passes

      • by gtall ( 79522 ) on Monday December 29, 2025 @12:49PM (#65888367)

        "Trump school of fair trade" Nope, I cannot grok that juxtaposition.

        • I think he meant "free trade", which the public has been told means "I'm gonna be rich". The public has also been told Trump means "I'm gonna be rich."

          I can see where the confusion comes from among people who just roll with whatever they read on The Socials.

  • LLMs not only don't do what they say on the tin, they require even more workers. A trillion-dollar promise to convert businesses into pure profit-generating vehicles without the liabilities of products, customers, or employees requires even more employees. It's a net loss in business efficiency.

  • Eventually the models will be trained and those jobs will dry up. AI will not have to be babysat forever. Plus, it's not like any Indian call center really ever offered empathy, judgment, and the ability to calm down some marching moron that broke a nail. Maybe professionally trained LLM results will be more palantirable palatable.
    • It sounds like they are babysitting the clientele moreso than the AI.

      But no, it isn't creating new jobs. There used to be a job called "taxi driver" whose responsibility it was to interact with the customer. Now they've cut the driver's hours by 99%, and put him on the phone.

  • Don't fall for it, the bulk of jobs are going away. The Culture offers a clear vision of where we as a society could head. Read the books, and read this before it goes away - it's already only on the Wayback Machine; the original link is gone.

    The Culture War: Iain M. Banks's Billionaire Fans - Why Elon Musk and Jeff Bezos love Iain M. Banks' anarcho-communist space opera.

    https://web.archive.org/web/20... [archive.org]
    What Did Iain M Banks Want Us To Learn From The Culture?: https://www.youtube.com/watch?... [youtube.com]
    A longer video (gets

    • I think you have bought the hype of the AI company CEOs. AI isn't going to replace *nearly* as many jobs as they claim. The advertising is always more effective than the actual product. Want proof? Just use any AI product, and see how often it gets the answers wrong. Hint: very, very often. That kind of inaccuracy is going to make it tough to replace humans on a large scale.

      • Yeah but if you ask a human it will get the answers wrong even more often. AI is getting better and better. Human intelligence unfortunately is heading off a cliff as we let AI do the thinking more and more.
        • Your experience with AI is apparently very different from mine. For example, I use GitHub Copilot for coding, and find that it gets something wrong on just about every suggestion it provides. Yes, it's helpful, because I can spot the problem and either tell it what to fix, or fix it myself. But it is *not* getting close to human accuracy. Same is true for Gemini and ChatGPT. For things that don't really matter, like a list of ideas, it's great. But if you need accuracy, you should only trust it as far as yo

          • by ffkom ( 3519199 )
            While I agree with your opinion on the flaws of current AI, and made similar observations, I think you underestimate the willingness of "decision makers" to lower standards to whatever level it takes to allow replacing human personnel with LLMs. There will be no enlightenment "oh, seems the LLMs are not ready for the job!" - but rather a consequence like "well, customers complain how unhelpful LLMs are... let them complain all they want, our competitors also use LLMs, and they are cheaper than humans".
            • Decision makers listen to two things: sales, and cost savings. When AI mistakes cost more than human work, they will use human work. And when AI mistakes result in lost sales, they will revert to human work. That is what enlightenment looks like to executives. It may be crass, but it's real. If AI is "cheaper than humans" up front but costs more on the back side, that's not actually cheaper.

  • Some people who are about to be fired train their replacement. What do we do when AI is good enough that it doesn't require intensive training and an army of support people to calm users down when it fails?

    • The training never ends. There will never be a "good enough" that requires no further training. The same is true for humans, so why would we think AI would be able to escape the need for retraining?

      • by ffkom ( 3519199 )

        why would we think AI would be able to escape the need for retraining?

        Because AI does not forget, does not die, can be copied without quality degradation, and therefore never needs to be (re-)trained on the same things again, as out-of-training or young humans would need.

        • First, AI doesn't lie, in that it's not malicious. But it certainly does tell untruths. ALL. THE. TIME.

          Quality does degrade. As you train AI with new things, the new training modifies old training, causing old training to not be as effective as before.

          Also, new kinds of training are always needed. Train AI to interpret driver's licenses, great. Then the government adds "Real ID". You have to train the AI to recognize that and interpret it correctly, of course, for AI that cares about driver's license details. Oh,

  • Is AI Prompt Engineer on the list? I remember when that was a thing.
  • "Technology will take our jobs" is as old as the industrial revolution. The lesson from the Luddites is: "Yes, it may take YOUR job. No, it will not take away jobs in general."

    I'm sure the various AI support roles cited in the article have plenty of inherent fluff, but humans are fairly inventive in figuring out something else to do when a machine can do what they've been doing in the past better. The world still needs ditch diggers, even if that doesn't mean grabbing a shovel. Today, a ditch digger is driving a machine.

    • "Technology will take our jobs" is as old as the industrial revolution. The lesson from the Luddites is: "Yes, it may take YOUR job. No, it will not take away jobs in general."

      I'm sure the various AI support roles cited in the article have plenty of inherent fluff, but humans are fairly inventive in figuring out something else to do when a machine can do what they've been doing in the past better. The world still needs ditch diggers, even if that doesn't mean grabbing a shovel. Today, a ditch digger is driving a machine. Tomorrow, they may just be telling a computer where they want the robot to dig the ditch. There are a lot fewer ditch diggers today than there were when it was a bunch of guys with shovels, and there will probably be even fewer when it's just a guy with a computer.

      The reasoning behind this age-old inductive argument breaks down with sufficiently advanced AI. When you presume an AI can do everything a human can, you've effectively closed the loop, enabling "dead labor" to live forever, directly competing with living persons in all endeavours in perpetuity. Even getting close is sufficient for massive disruption.

      In the past value of human labor that went into the production of a hammer or a factory that makes hammers was fixed in some way. While the value of the hammer o

      • The loop will not be fully closed unless we see the AI singularity. As long as a human is deciding what AI should be doing, there will be a task for the humans to do. Further, as long as a human can make AI (or its affiliated robotics) better and/or continue to run, there will be tasks for humans.

        Secondarily, there will be tasks for humans as long as there are tasks humans are not willing to delegate to AI. Examples of tasks we probably would be loathe to entirely delegate to AI: raising children, adjudicating disputes, hospitality, caregiving, etc.

        • Secondarily, there will be tasks for humans as long as there are tasks humans are not willing to delegate to AI. Examples of tasks we probably would be loathe to entirely delegate to AI: raising children, adjudicating disputes, hospitality, caregiving, etc.

          I don't even think this is true. In Japan alone robotic caregiving is a multi-billion dollar industry, AI in hospitality globally is a 20+ billion dollar industry both with steeply increasing trendlines. From what I've seen parents don't seem to have much of a problem letting technology raise their kids.

          • A lot of parents do. I actually restrict screen time for my kids more than I was restricted as a kid. I don't think that will change.

            As far as caregiving goes: I'd prefer a robot to change my diaper when I'm 90 years old and drooling in a wheelchair, but I still want a human to talk to. I understand there are AI-powered robots that can talk to you, but almost anybody with a choice is going to prefer a human.

      • When you presume an AI can do everything a human can you've effectively closed the loop enabling "dead labor" to live forever directly competing with living persons in all endeavours in perpetuity.

        Considering the state of machine learning and LLM, neither of which are "artificial intelligence" in any way serious people define it (it's being used as a marketing term) this is quite a thing to presume.

          This version and application of machine learning is useful, and useful for reducing the amount of labor hours required to do things. Just like automation of the past, everything from cloud autoscaling and ci/cd pipelines to washing machines instead of hand washing. But it's not AI, and it's not going

        • Considering the state of machine learning and LLM, neither of which are "artificial intelligence" in any way serious people define it (it's being used as a marketing term) this is quite a thing to presume.

          I agree that presently the presumption is unwarranted. I disagree with the sentiment that LLMs are not even AI. Such statements are clearly absurd.

          Using the MIT definition "Artificial Intelligence refers to the development of systems that can perform tasks typically requiring human intelligence, such as reasoning, learning, and problem-solving."

          LLMs can do these things especially 0-shot ICL which out of everything is the capability that impresses me the most. You can quibble over the reliability, definitions and capabil

          • I'm glad you agree with the important part. The rest of this is a "well ackshually AI is defined as" post. A little tone-deaf. It's like you're on the side of the lying marketing people who are relying on "well technically" when it doesn't fit the definition that any normal person (i.e. definitely not someone who posts here) thinks it means. It's just more lying, with plausible deniability.
  • Only stories about AI doom and gloom are allowed.

  • by WaffleMonster ( 969671 ) on Monday December 29, 2025 @02:35PM (#65888607)

    "Our review indicates that generative AI chatbots particularly GPT-4 are often perceived as more empathic than human practitioners in text-based interactions, a finding consistent across various clinical contexts though with notable exceptions in dermatology."

    https://pmc.ncbi.nlm.nih.gov/a... [nih.gov]

    Those desperately pushing AI on everyone everywhere to delay the inevitable bursting of the AI bubble should at least coordinate their stories. Apparently there is no lapse in human judgement in decisions to deploy LLMs to "empathetically" communicate with hospital patients. Yet somehow we are supposed to believe warm bodies are necessary to calm all these people trapped inside of robotaxis?

    When one finds oneself promoting the benefits of employment in order to mitigate blatantly illegal and unacceptable failure modes best resolved by lawsuits, it is probably best to stop talking and think about what you are saying.
