A New White-Collar Gig Economy: Training AI To Take Over (bloomberg.com) 34

AI labs are paying skilled professionals hundreds of dollars per hour to train their models in specialized fields. Companies like Mercor, Surge AI, Scale AI and Turing recruit bankers, lawyers, engineers and doctors to improve the accuracy of AI systems in professional settings. Mercor advertises roles for medical secretaries, movie directors and private detectives at rates ranging from $20 to $185 per hour for contract work and up to $200,000 for full-time positions. Surge AI offers as much as $1,000 per hour for expertise from startup CEOs and venture capital partners. Mercor pays out over $1.5 million daily to professionals it hires for clients including OpenAI and Anthropic.

Some contractors are former employees of Goldman Sachs and McKinsey. Others moonlight in this work while keeping their regular jobs. Brendan Foody, Mercor's 22-year-old CEO, acknowledged at a conference last week that trade secrets could potentially be compromised given the volume of work submitted. Uber CEO Dara Khosrowshahi said on this week's earnings call that some AI training gigs on its platform require PhDs.
This discussion has been archived. No new comments can be posted.

  • Can that AI pass the bar exam or the MCAT?

  • "Training their replacement" is hardly a new thing. The only real question is how long until people are getting laid off, but get 3 months extra health insurance only if they brain-dump-train the AI replacement on their way out.
  • Just spent an hour trying to get Gemini CLI to do something in PowerShell. Failed in the end.
    Would love to be able to get paid to do the same.

    • Across three days of posing the same question about paginating the results of a PowerShell tool that interacted with AD, Gemini hallucinated four different command-line arguments that didn't exist.

      This is exactly how you expect a statistical prediction model to work.

      This is exactly what you don't want in a coding assistant.

      • by kackle ( 910159 )
        Amateurs... I once spent two weeks of after-work time chasing chatbot farts. I have an old MacBook Pro with two graphics processors (GPUs) in it. When one GPU fails, apparently it can no longer boot to Windows. I asked ChatGPT if there was a workaround to use the other GPU, which works fine with the MacOS partition. The chatbot happily took me down a path of a million Unix commands/tries until it admitted that what we were trying to do was impossible. Grr...

        Half the time it's amazing. Half the ti
    • I spent two hours typing nothing but English to get a working web app out that grabbed multiple selectable data streams at settable regular time intervals and made adjustable and downloadable warning sounds depending on adjustable rates of change. Most of the time I spend waiting on the AI to think about it, and spent some time testing and reporting errors or better ways for it to work. It took about 1/2 hour total of my concentration, and cost me $10. At no point did I see any code (I could download it, th

      • by narcc ( 412956 )

        And now you have a hopelessly insecure and unmaintainable mess that very likely doesn't actually work correctly or consistently.

        Again, AI can't actually write code. All it does is produce text that looks like code. It can not reason. It can not consider alternatives. All it can do is generate text that is similar to other text where someone described their reasoning or considered alternatives. You're not "getting a peek into their inner workings", you're just generating text probabilistically. It produc

        • No, I have a working app that reliably does what I asked it to do. I didn't want to maintain it or update it, I just wanted a small app that just works. Stating completely false information about what I did while you have exactly zero knowledge of it is idiotic. It's been running flawlessly for weeks.

          AI wrote all the code. I did none of it. It works. Sorry you insist that it couldn't possibly, but you're kind of naive for believing that all the companies doing this have 100% failure rates and will continue

          • by narcc ( 412956 )

            You're somehow both illiterate and completely delusional... Impressive.

            Once again: LLMs do not operate on facts and concepts, but on statistical relationships between tokens. LLMs can not reason. LLMs can not analyze, consider, or deliberate. Their very structure precludes the possibility. You think they can do these things because the output gives that appearance, but nothing could be further from the truth. It's the same reason you think your AI girlfriend loves you.

            Can LLMs produce code? Sure. The

    • Interesting, did you try any others, such as ChatGPT?
  • AI still has a long way to go before it is a reliable source of solutions, especially in the business world. Just the other day one of our developers was having drive space issues on his Linux box. Our management has been pushing AI hard, so he consulted with Gemini on how he could free up space on his machine. The AI recommended he use:

    rm -rf /

    The AI was happy to recommend this as a valid solution to the disk space problems. Luckily this dev had a shred of common sense and decided not to go through with
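    For what it's worth, the boring non-AI answer to "how do I free up disk space" is a couple of coreutils one-liners. A minimal sketch, assuming GNU coreutils on Linux; the target path is illustrative, not from the story above:

```shell
# Sane disk-space triage -- the opposite of anything starting with `rm -rf /`.
# Assumes GNU coreutils (on macOS/BSD, `du -d 1` replaces `--max-depth=1`).
target=${1:-$HOME}   # illustrative: point at whatever filesystem is filling up

# How full is the filesystem holding $target?
df -h "$target"

# Largest immediate subdirectories, biggest first.
# -x stays on one filesystem; -k reports sizes in kilobytes for sorting.
du -xk --max-depth=1 "$target" 2>/dev/null | sort -rn | head -10
```

    Nothing here deletes anything; the point is to find the space hogs first and decide for yourself what goes.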

  • .. says right there, you'll be training AI.. paying $30/hour.

    Sad.
  • The AI companies will be gone by the end of the year.

  • Hasn't AI already scooped up the entire internet, every book ever written, read through the transcripts of billions of videos, and listened to billions of audio tracks? I'm sure Amazon's AI has been listening to people over Alexa constantly for years. How much more data does it need before it gets good?

    Well, for myself, these days when I post code on the internet, knowing the AI is probably going to read it, I intentionally write very bad code. Just for fun, really; I know I'm not going to change anything.

  • Great Boomer retirement part-time gig that uses your experience that no non-AI company values. Lots of free time, no job to lose, and extra cash to spend.
  • You'll pour all your knowledge and experience into the model. Then, they'll let you go. They've already extracted everything out of you. Why on earth would they keep paying you?

  • "Mercor's 22-year-old CEO" .. https://www.forbes.com/sites/r... [forbes.com]
    "Mercor is an American artificial intelligence hiring startup that provides experts to train AI models and chatbots."
    "Mercor pays out over $1.5 million daily to professionals it hires"
    That's a lot of money going out. How much is coming in?
    That is bonkers. This whole thing is bonkers.

  • A colleague built an expert system using a Prolog toolkit back in the early 1990s to be used by US Navy medical corpsmen (not doctors) stationed on submarines to assess whether patients with head injuries were gravely enough injured that they should be airlifted (via helicopter) to shore for help. He developed this system based on many hours of discussions with experts in head trauma (who were paid for their time) and the finished system was accepted by the Navy.

    Since it was my friend's IP, he decided t
  • by jenningsthecat ( 1525947 ) on Thursday November 06, 2025 @10:53PM (#65779188)

    ... rates ranging from $20 to $185 per hour for contract work ...

    How bad is the American economy when twenty bucks an hour is an attractive offer to train not only your replacement, but the replacement of everybody else in the country who holds the same job? Hell, people are selling themselves out - along with lots of others - for a rate that's only $3.50 an hour above California's minimum wage!

  • All these people can teach is direct knowledge. What the machine would need is insight. But LLMs cannot do insight.

    While in some fields this may even work for a limited time, longer-term this will fail except for very simple things.

"I think Michael is like litmus paper - he's always trying to learn." -- Elizabeth Taylor, absurd non-sequitir about Michael Jackson

Working...