Amazon Employees Are 'Tokenmaxxing' Due To Pressure To Use AI Tools (arstechnica.com) 50

An anonymous reader quotes a report from the Financial Times (via Ars Technica): Amazon employees are using an internal AI tool to automate non-essential tasks in a bid to show managers they are using the technology more frequently. The Seattle-based group has started to widely deploy its in-house "MeshClaw" product in recent weeks, allowing employees to create AI agents that can connect to workplace software and carry out tasks on a user's behalf, according to three people familiar with the matter. Some employees said colleagues were using the software to automate additional, unnecessary AI activity to increase their consumption of tokens -- units of data processed by models. They said the move reflected pressure to adopt the technology after Amazon introduced targets for more than 80 percent of developers to use AI each week, and earlier this year began tracking AI token consumption on internal leaderboards.

"There is just so much pressure to use these tools," one Amazon employee told the FT. "Some people are just using MeshClaw to maximize their token usage." Amazon has told employees that the AI token statistics would not be used in performance evaluations. But several staff members said they believed managers were monitoring the data. "Managers are looking at it," said another current employee. "When they track usage it creates perverse incentives and some people are very competitive about it."

Comments Filter:
  • by Anonymous Coward on Tuesday May 12, 2026 @05:05PM (#66140611)

    ...of those apps that make it look like you are moving your mouse a lot?

    • by SeaFox ( 739806 ) on Tuesday May 12, 2026 @05:30PM (#66140645)

      Nah, this is more like working at a company that measures your productivity in LoC typed. People making shit long-winded just to game the numbers.

      Can't wait to hear about some big incident because someone automated something unnecessary to increase their token usage. It will literally be the fault of these policies around AI adoption.

      • We've been having slop-related incidents practically every week in major codebases for nearly a year now...
      • by DarkOx ( 621550 )

        One of the first managers I ever had told me something I have remembered for my entire career. "Be very careful about what you decide to measure, you are certain to get more of it."

        Amazon has chosen to measure AI use. So the natural reaction of employees is going to be to put effort into finding ways to use whatever AI tools they have been given. In the best case, sure, they will use them to increase productivity; however, after they exhaust the obvious use cases where there are real gains they are going to star

    • More like the equivalent of ballmaxxing [forbes.com], your balls look yuge, but they perform worse, it makes you sick, and your dick looks even smaller.

      You know, a useful metaphor for many things these days.

    • Only in this case, managers at Amazon are congratulating and rewarding people who move their mouse a lot, and begging them to do it more!

  • by dgatwood ( 11270 ) on Tuesday May 12, 2026 @05:12PM (#66140619) Homepage Journal

    Given that big companies have already made it clear that they think AI will let them do the same work with fewer people, and given that using AI costs the company a lot in terms of compute resources, it seems intuitively obvious that the only reason execs would want to encourage more AI use is to find out what jobs can easily have their headcount reduced by more use of AI.

    The people using the most tokens are the ones for whom more of their jobs can be most easily automated. This is not, IMO, a positive sign for the long-term survival of that particular job role. The only rational response is to use AI just enough to show a speed-up, assuming the speed-up actually happens at all, but not enough to be high up on the chart of AI users. Using it way more than that seems self-defeating.

    • "It seemed like such a _good idea_ at the time!"

      And... thus... the Great Depression V.2 begins... companies still sell products, companies still make products, nobody can afford to buy the products.

      • Not from AI. Governments create great depressions when they try to "fix" things. Namely Smoot-Hawley. If we get another depression, guarantee you'll be able to trace its origins to none other than Donald Trump. But it's unlikely due to the reversals outside of his control that have already happened. Or perhaps the Strait of Hormuz will force a global one anyway, which is also a "fix" that is now out of his control.

        • by jsonn ( 792303 )
          Do you have any actual proof or is this typical American Libertarian "Gov bad" think?
  • This costs money (Score:3, Interesting)

    by sinij ( 911942 ) on Tuesday May 12, 2026 @05:14PM (#66140623)
    I moderately to heavily use AI for my work because it is capable of speeding up routine, time-consuming tasks. I do that so I can use my time more effectively on other productive tasks. However, I did a rough calculation and it costs about $4/hour in tokens. That is a subsidized cost, with LLMs offered at a loss to capture market share. True costs are easily double that. This is not a trivial cost if everyone in a company starts doing that.
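    That back-of-envelope figure is easy to sanity-check. A minimal sketch, with the burn rate and the per-million-token price both assumed for illustration (real pricing varies by model and contract):

```python
# Rough token cost estimate. All numbers here are assumptions for
# illustration, not real Anthropic/Amazon pricing.
def hourly_cost(tokens_per_hour: int, price_per_mtok: float) -> float:
    """Dollars per hour for a given token burn rate.

    price_per_mtok: assumed blended input/output price in dollars
    per million tokens (hypothetical).
    """
    return tokens_per_hour / 1_000_000 * price_per_mtok

# e.g. an assumed ~400k tokens/hour at an assumed $10 per million:
subsidized = hourly_cost(400_000, 10)  # 4.0 dollars/hour
true_cost = subsidized * 2             # the poster's "easily double" guess
```

    At those assumed numbers the subsidized rate lands on the poster's $4/hour, and doubling the price models the "true cost" guess.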
    • True costs are easily double that.

      Yeah, I'm gonna guess it's a lot more than double. My company's enterprise agreement with MS has Claude Opus 4.7 now at 15x token consumption. I bet that's approaching what it actually costs but still not quite it.

  • by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Tuesday May 12, 2026 @05:16PM (#66140627) Homepage

    That which is measured is improved.

    You want more token usage? You got it!

    • by dogugotw ( 635657 ) on Tuesday May 12, 2026 @05:59PM (#66140669)

      A million years ago when I got my first management job I had to attend a training session on 'Goals and Objectives', the current in vogue management tool. The instructor impressed on us that 'you get what you measure'. He used an example of police wanting to improve road safety by measuring the number of moving violation tickets given out. Ticket quantities went through the roof but there was no improvement in accident rates; go figure. What was true in the 70s is still true today.

      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday May 12, 2026 @07:07PM (#66140759) Homepage Journal

        Indeed. Here in California we are the poster children for this, because we have a 55 mph speed limit while towing which is NEVER enforced. We have a requirement that headlights be aimed correctly, same. We have a law saying that if there are five or more people behind you, you must pull over at the first safe opportunity to let them pass, same. Fender flares must project as far as tires, same. (Anyone who's ever had a rock break their windshield understands.)

        CHP cares specifically and only about revenue generation, so they do nothing to improve safety except go after speeders. That's not nothing, but it's not enough.

        • In many areas of California, pulling someone over actually increases the danger for everyone due to the heavy traffic and the dangerous impact of unexpected slowdowns on the expressway. They avoid enforcement in many cases because it's the correct public service behavior.

          • In many areas of California, pulling someone over actually increases the danger for everyone due to the heavy traffic and the dangerous impact of unexpected slowdowns on the expressway.

            When you're driving, you should expect slowdowns. If you're not sufficiently aware to handle them, you're not sufficiently aware to drive. Also, if you don't know you can proceed to an exit or another safe place when being pulled over, same. This is what happens when we don't expect basic competence from drivers. Which, again, is because we want to maximize profit. In this case it's related to the war on public transportation still being waged by Big Oil, Big Auto, and Big Rubber. None of this has anything

      • A million years ago when I got my first management job I had to attend a training session on 'Goals and Objectives', the current in vogue management tool. The instructor impressed on us that 'you get what you measure'. He used an example of police wanting to improve road safety by measuring the number of moving violation tickets given out. Ticket quantities went through the roof but there was no improvement in accident rates; go figure. What was true in the 70s is still true today.

        Which is why you have to ensure that what you measure actually produces desired outcomes and doesn't create perverse incentives. Reminds me of the Dilbert cartoon where the PHB announces a bug bounty and Wally says “I’m writing myself a new minivan.”

    • Goodhart's law in action.

      You'd figure they'd hire managers who are good enough to understand this.

    • That which is measured is improved. You want more token usage? You got it!

      I like a different spin on that. "You get what you reward, not what is best."

      That was the phrase they used when they were training us that there is no magic bullet to measure performance. That judging performance is a hard job, stop reaching for this year's new snake oil. Do your job, know what's going on, who needs help with something. NEVER punish someone for admitting they need help with something, just do all you can for them.

      • First week: You all are using the brake pads up! What is wrong with you! The planes have reverse, use it! Second week: Why are we filing out dings in the props every day? Are you idiots using reverse and blowing up crap in front of the plane? Third week: go to 1st week
    • We've been doing technology for a long time, and even technically competent managers seem to still do dumb stuff like, "We want you to consume at least 2M tokens a month"

      It's not about the quality of the output, but someone, somewhere needs to see a nice chart showing an upward trend in token usage because in their mind, that somehow translates into "more productivity"

      We have literally been told that AI usage is mandatory and that if we're not using AI for everything then we are dinosaurs. Don't get m
  • by PhantomHarlock ( 189617 ) on Tuesday May 12, 2026 @05:31PM (#66140647)

    Those kinds of shenanigans are a good example of why I remain self employed to this day. You can keep your Office Space style bullshit. Modern corporations, especially aggressive companies like Amazon, have gamified the workplace into just sucking every last ounce of energy out of their 'human resources'. You're more a slave and less an employee every year.

  • by Anonymous Coward on Tuesday May 12, 2026 @05:33PM (#66140649)
    Joel on Software wrote on "be careful what you measure" [joelonsoftware.com] twenty years ago and, coincidentally, opens with an Amazon example...

    "Thank you for calling Amazon.com, may I help you?" Then - Click! You're cut off. That's annoying. You just waited 10 minutes to get through to a human and you mysteriously got disconnected right away.
    Or is it mysterious? According to Mike Daisey, Amazon rated their customer service representatives based on the number of calls taken per hour. The best way to get your performance rating up was to hang up on customers, thus increasing the number of calls you can take every hour.

    • Amazon rated their customer service representatives based on the number of calls taken per hour. The best way to get your performance rating up was to hang up on customers, thus increasing the number of calls you can take every hour.

      I wasn't at Amazon, but I can confirm. 20 years ago I worked in a call centre and they used a similar metric, which was basically "calls per hour." There was another perverse incentive too - if you didn't actually solve a customer's problem, they'd probably need to call back again soon, further inflating your stats. I remember listening to the guy who had the best stats, all he ever said was "fix your firewall."

      As other posters have stated, the way to fix this has two parts. First, better metrics. Second, b

  • by nightflameauto ( 6607976 ) on Tuesday May 12, 2026 @05:47PM (#66140653)

    So, they're using an AI tool to automate "looking like" they're using AI? It's AI all the way down? AI driving AI? Maybe Amazon can luck out and just have a whole bunch of AI systems driving other AI systems without needing any employees? AND THE FANTASY CAN BECOME REAL!

    • That's the reason AI is doomed: the old "Garbage In, Garbage Out" principle. AI will increasingly be trained on AI slop, resulting in an inevitable spiral into mediocrity.
      • It is called Model Collapse [wikipedia.org], and avoiding it is a hot research topic.

        • It is called Model Collapse [wikipedia.org], and avoiding it is a hot research topic.

          What impresses me the most is that anyone would not understand what happens when AI starts referencing itself, because of itself and other AI swamping everything else out. At that point, truth becomes irrelevant and it becomes either worthless or a sort of religion.

          • Fortunately, Trump has already trained his followers to accept that truth is irrelevant, and that they should worship everything he says as a form of religion, so they're already primed for Trump's tech bro billionaire contributors to take over their lives! Thanks, Peter Thiel, great puppet show! Jim Henson would be jealous...
  • That's not very hard (Score:4, Interesting)

    by Casandro ( 751346 ) on Tuesday May 12, 2026 @05:53PM (#66140659)

    Particularly if you have a system that supports multi-agent teams, you can just spin up a bunch of them, create a harness that makes them talk to each other in real-time and tell them what you want to do. It's not hard to do that.

    For companies like Amazon that probably makes sense. They want to prolong the bubble of "Frontier model" companies. For the rest of the world, price hikes will eventually make them creative. My prediction is that we might see something a bit more clever than current coding agents that can deliver good, if not great, results with comparatively tiny models. The smaller the model, the easier it is to train, so maybe... future models will be so small you can train them for your project, on your own computers, for a fraction of the price of current solutions.
    Nobody knows, but the next Anthropic price hike will mean very substantial costs for the company I'm working at. The kind of cost that gets you a substantial number of programmers.
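    The harness the parent describes really is trivial to build. A toy sketch below, with stub functions standing in for real LLM calls (the agent names and the word-count "token" accounting are made up for illustration; a real setup would hit an API and burn real money):

```python
# Two stub "agents" that bounce a message back and forth. In a real
# harness each call would go to an LLM API; here they are pure
# functions so the token-burning loop can be shown self-contained.
def agent_a(message: str) -> str:
    return f"Interesting point about {message!r}. Please elaborate."

def agent_b(message: str) -> str:
    return f"Elaborating on {message!r}: consider the implications."

def burn_tokens(rounds: int) -> int:
    """Run the agents against each other, counting rough 'tokens'
    (here: whitespace-split words) consumed along the way."""
    tokens = 0
    message = "quarterly token targets"
    for _ in range(rounds):
        message = agent_a(message)
        tokens += len(message.split())
        message = agent_b(message)
        tokens += len(message.split())
    return tokens
```

    Every extra round only inflates the counter, which is exactly why usage leaderboards reward this loop and nothing else.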

    • You don't need to train a small model, you need to give them access to a vector database with a query tool and they do fine looking stuff up in pieces and using it.
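      The vector-lookup idea can be sketched without any particular product: embed your snippets, then return the nearest one by cosine similarity. The store and its vectors below are made-up toy data; a real setup would generate embeddings with a model rather than by hand:

```python
# Toy sketch of a "vector database with a query tool": stored
# (snippet, embedding) pairs, queried by cosine similarity.
# Vectors are invented for illustration, not real embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def query(store, vec):
    """Return the stored snippet whose embedding is closest to vec."""
    return max(store, key=lambda item: cosine(item[1], vec))[0]

store = [
    ("parse_config() docs", [0.9, 0.1, 0.0]),
    ("retry/backoff helper", [0.1, 0.8, 0.3]),
]
```

      A small model with a query tool like this can look things up in pieces instead of memorizing them, which is the point the parent is making.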

    • Not sure if you have explored it very much, but LM Studio and ComfyUI are both ways you can run your own models. ComfyUI is built out a bit more for training and, while it has more of a focus on text-to-image workflows, you can configure it pretty well to work with LLMs.

      Many of the models are quite light-weight and a decent gaming system can keep them in memory, though some quantization is generally needed to help. Quantization seems more necessary with image models, but in the case of code, you likely cou

  • Any behavior you measure and reward people for increases. That seems like a no-brainer. Maybe they should consider rewarding people for actual accomplishments, and not for wasting the most electricity?
  • at its finest. When will people learn

  • The trouble with Transhumanists is that they all imagine there will eventually be only one human per company with an army of AI's to do the work.

    But they all think they'll be that one person.

    Also, they're insane.

    • Future trillionaires and aspirational future not-trillionaires are the enemies of all organized multicellular life. They're the morbidly rich economic royalists perpetually out for themselves at the expense of everyone else. Maybe we ought to vigorously and vigilantly ensure they're not in charge anywhere and probably don't become or remain absurdly rich compared to average people either?
  • "Some employees said colleagues were using the software to automate additional, unnecessary AI activity to increase their consumption of tokens -- units of data processed by models. "

    The AI blurb (oh the irony)

    "Midas World is a 1983 science fiction novel by Frederik Pohl, a "fix-up" novel that expands on his classic short story, "The Midas Plague". It explores a future society with extreme abundance due to automation, where the poor are forced to consume a quota of goods to keep the economy running, while t

    • So like Brazil, Idiocracy, Elysium, Brave New World, and the 80's Soviet Union in a blender until it resembles a nutritional paste.
    • I remember that! It was something like if you were poor, you got a big bag of free money every month and were in big trouble if you did not spend it before next month.
  • Peeing in Coke bottles or hitting themselves in the faces with hammers... that we know of.
  • Amazon sounds like a truly horrible place to work. Meta too.
