Electronic Arts' AI Tools Are Creating More Work Than They Save (businessinsider.com) 42

Electronic Arts has spent the past year pushing its nearly 15,000 employees to use AI for everything from code generation to scripting difficult conversations about pay. Employees in some areas must complete multiple AI training courses and use tools like the company's in-house chatbot ReefGPT daily.

The tools produce flawed code and hallucinations that employees then spend time correcting. Staff say the AI creates more work rather than less, according to Business Insider. They fix mistakes while simultaneously training the programs on their own work. Creative employees fear the technology will eventually eliminate demand for character artists and level designers. One recently laid-off senior quality-assurance designer says AI performed a key part of his job -- reviewing and summarizing feedback from hundreds of play testers. He suspects this contributed to his termination when about 100 colleagues were let go this past spring from the company's Respawn Entertainment studio.
  • Does it make more work than it saves, or will it replace character artists and level designers?

    • Re: Which is it? (Score:4, Informative)

      by LindleyF ( 9395567 ) on Monday October 27, 2025 @09:22AM (#65753106)
      Noticed that, huh? "It creates more work! Also, I got laid off because my job was specifically the one thing it's really good at, summarization!"
      • by evanh ( 627108 )

        A big part of an analyst's job is to summarise. That's an entirely different job to coding, scripting and puzzle making.

      • by Anonymous Coward
        Maybe it's just the bosses are more easily BSed by AI summaries? How easy is it to prove whether a particular AI is better or worse at summarization?

        I mean, it is certainly faster, but many AIs have been shown to be very quickly and confidently wrong.

        Whereas for coders, many of us can soon tell that an AI is wrong and that a particular method call does not exist (that said, sometimes the AI can find useful undocumented stuff from some comment posted on the Internet years ago)...
    • by znrt ( 2424692 )

      these assertions are not mutually exclusive. every new technology reshapes the industry and the job market, introducing new skill sets and obsoleting others. this transition takes time, though, and we're currently in a very chaotic and confusing moment of it.

      i found this statement in tfa a bit comical:

      One study of 7,000 professionals, (...) found that 87% of executives use AI daily, compared with 57% of managers and 27% of employees.

      if that is an accurate measure (no idea), it would tell more about that confusion than about the actual transition.

      • by Kisai ( 213879 )

        I'd say that AI is not improving 90% of employees' lives or productivity, and the only reason the "C suite" and "Managers" are seeing improvements is because their jobs largely consist of rubber-stamping and bullshitting.

        • Also, fewer consequences if AI fucks up. Workers make a mistake and they are fired. Managers make a mistake and they learnt a lesson they will use to improve the company's resilience and add value for shareholders.
      • by SomePoorSchmuck ( 183775 ) on Monday October 27, 2025 @11:04AM (#65753314) Homepage

        One study of 7,000 professionals, (...) found that 87% of executives use AI daily, compared with 57% of managers and 27% of employees.

        if that is an accurate measure (no idea), it would tell more about that confusion than about the actual transition.

        Makes sense. Executive work consists almost entirely of:
        -reading reports.
        -looking at charts.
        -processing reports and charts and then making a choice based on clearly-stated criteria.
        -going to meetings with other executives where you all discuss the choices you make (based on your direct-reports' reports).
        -announcing the choices you made to employees and external PR outlets.

        That's basically a list of LLM strengths in a nutshell.

        C-suite folks are the reason AI hype will continue to build. It's because C-suite folks are already and always have been human LLMs. C-suite folks have always interacted with other people the way LLMs do. Think about it, their entire job consists of processing information just enough to issue a choice/announcement. They don't need to (and typically do not bother to) understand the issues on a deep, personal experience level -- that's a job for the underlings. Underlings are the human beings who have actual years of experiential true understanding. The underlings use their experience to write up their reports and collate the data to generate the charts that represent their understanding, then mouth-feed it to the little chirruping execs like a mama bird. Execs take all the pre-digested understanding, pick one or two token points that seem most important to their overall goals, and render a conclusion. Sound familiar?

        That's why C-suite folks love AI and see it as the undeniable coolest best biggest yugest future. It feels warm and familiar to them. It does work and talks to them the exact same way they do work and the exact same way all their C-suite cadre talks. They will even be highly puzzled that the rest of their employees don't love AI, because the C-suite folks don't actually understand what their direct reports DO. And that's by organizational division-of-labor design. But that division makes everyone blind to experiences outside their own.

        An LLM is merely a PHB that's been programmed to be nice to people. (at least so far)

        • by znrt ( 2424692 )

          yes, it makes total sense. but what impact do you expect it to have mid/long term? if it makes the c-suite's job easier shouldn't it make them specially redundant? i wouldn't expect that, since apart from summarizing they also contribute to the pyramid of trust and control. then again it might promote even more cognitive dissonance or disconnect between the doers and the talkers/decision makers ... if this impacts output negatively companies might be tempted to address it by, wait for it, hiring even more of them!

          • yes, it makes total sense. but what impact do you expect it to have mid/long term? if it makes the c-suite's job easier shouldn't it make them specially redundant? i wouldn't expect that, since apart from summarizing they also contribute to the pyramid of trust and control. then again it might promote even more cognitive dissonance or disconnect between the doers and the talkers/decision makers ... if this impacts output negatively companies might be tempted to address it by, wait for it, hiring even more of them!

            Yes, the current script for AI implementation is completely backward from the reality.

            Humans are like the frogs in the fable who cry out to the gods for a king to rule over them (i.e. be at the top of the "pyramid of trust and control"), then are dismayed and feel wronged when the gods send a stork who is verrrry happy to rule the soft tasty frogs.

            We have vestigial C-suites and Senators and Houses Of Commonses because we take comfort being ruled over by gods who look sort-of like us. We are all desperately, deeply terrified of freedom and spend our lives coming up with narratives and systems to help us escape from freedom. We like to gripe about our vainglorious presidents and our senile Senators and our Vampire Lestat-styled corporate CEOs, but each revolution just becomes the new tyranny because subconsciously in order to function we need to believe that the world is ordered and someone is in charge, so the instant we successfully overthrow The Man we start making a series of choices that create the new "The Man". We will never choose freedom. We are not capable of it.

            • by gweihir ( 88907 )

              Interesting. I have never felt any desire to "escape from freedom", directly or indirectly, but if most people feel that way, then this AI hype makes a lot of sense, as do the previous ones.

              • by znrt ( 2424692 )

                people do want freedom, what throws us aback is the risk associated with it. most of us like security much better. also, we tend to have a very blurry and romanticized picture of what freedom really is. the best bet for survival isn't to become free, but to become a master, and there we are.

            • We are all desperately, deeply terrified of freedom and spend our lives coming up with narratives and systems to help us escape from freedom. We like to gripe about our vainglorious presidents and our senile Senators and our Vampire Lestat-styled corporate CEOs, but each revolution just becomes the new tyranny because subconsciously in order to function we need to believe that the world is ordered and someone is in charge, so the instant we successfully overthrow The Man we start making a series of choices that create the new "The Man". We will never choose freedom. We are not capable of it.

              depressing, but i kind of agree.

              but while there's life there's hope, right? a real, meaningful revolution would require lots of energy and good memory, and the other thing that works against us apart from fear is short memory. not just short working memory and small contexts, but long term as well: frog lives are short. by the time we start to recognize the patterns and see the picture we're either out of fucks to give or out of energy to do something about it, and it all starts over again. maybe each new c

        • by sjames ( 1099 )

          Of course, if they thought about it a minute (not their strong suit), AI would be gone in a flash since THEY are the ones best replaced with AI.

          You nailed it. If your job is just talking and selling ideas, then the measure of utility is merely "better than what I fed in". Quality is subjective. As you get down the food chain to where the actual work gets done, in the software biz anyway, the measure of success becomes more restrictive: accuracy and conformance to specifications are required.
    • They're training their replacement. Eventually the slop will be good enough for a handful of janitors to clean up and package.

    • Both. Who will be there when their latest AI design goes down worse than Concorde? Less work now is multiplied tenfold down the line.
    • So the places it's going to replace workers are the people who tweak the fuck out of textures and models to improve performance.

      A good example of fucking that up is Starfield. For some god-awful reason Starfield modeled all the greeble in the game as polygonal objects instead of just using bump mapping like we've been doing for the last 30 years.

      I suspect this was because it made the lighting engine easier, especially if you have ray tracing effects.

      But it absolutely killed performance.

      Border
      • If BMP mapping is what I think it is, which is an image made to look like it has texture but is actually flat, maybe they didn't want to do it because it looks like shit?
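For anyone unsure what bump mapping actually does: rather than adding polygons, the renderer perturbs the surface normal per pixel using the gradient of a flat height map, so lighting suggests detail on geometry that stays flat. A minimal Python sketch of that normal perturbation (function and parameter names are my own, not from any engine):

```python
import math

def bump_normal(height, x, y, strength=1.0):
    """Approximate a perturbed surface normal at (x, y) from a 2D
    height map (list of rows), using central finite differences."""
    # Gradient of the height field at (x, y).
    dx = (height[y][x + 1] - height[y][x - 1]) * 0.5
    dy = (height[y + 1][x] - height[y - 1][x]) * 0.5
    # Tilt the flat normal (0, 0, 1) against the gradient, then renormalize.
    nx, ny, nz = -dx * strength, -dy * strength, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

On a completely flat height map this returns the unperturbed normal (0, 0, 1); any slope in the map tilts the normal, which is what makes the lighting look bumpy.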
    • Management doesn't care about "rework" as a KPI. They only see that their trillion-dollar slop generator completed a checklist item and fired the human that used to also let absolute garbage code out into the world. However, now that the perfect computer god is responsible for the failures, and a computer can never be held responsible, EA can't be held responsible. Everybody wins... well, I mean, company management and shareholders, but those are the only real people. Everyone else is just a useless mouth to feed.

    • by Calydor ( 739835 )

      It'll make characters and levels that are deemed 'good enough' and shipped as slop. That it takes a Geforce 7090 to render a NES-era Mario sprite isn't the company's problem.

      • [sarcasm] The solution is easy: Just have another AI fix the mistakes of the existing AI. Then use another AI to fix the fixes of the second AI. Where is my executive level bonus for implementing this? It should be at least 7 figures.[/sarcasm]
        • It's actually a decent description of how agents work. Generate something, analyze, test, refine, iterate until satisfactory results or give up limit.
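That generate-analyze-refine loop with a give-up limit can be sketched in a few lines of Python (a toy illustration; the callback names and threshold are hypothetical placeholders, not any particular agent framework's API):

```python
def agent_loop(generate, evaluate, refine, max_iters=5, threshold=0.9):
    """Generate a candidate, score it, and refine until the score
    reaches the threshold or the iteration budget runs out."""
    candidate = generate()
    for _ in range(max_iters):
        score = evaluate(candidate)
        if score >= threshold:          # satisfactory: stop early
            return candidate, score
        candidate = refine(candidate, score)
    return candidate, evaluate(candidate)  # give-up limit hit
```

The "give up limit" matters: without `max_iters`, an agent whose refinements never converge would loop forever, which is exactly the failure mode the thread is complaining about.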
    • Does it make more work than it saves, or will it replace character artists and level designers?

      The problem with your question is that it is binary. Both situations could occur: AI creates more work for the people EA did not lay off, because EA thought it could cut staff that it actually could not. Remember, EA specifically is being bought out in a leveraged buyout. The new owners may only care about the quarterly savings they get as they continuously reduce staff and sell EA assets piece by piece.

    • It will replace character artists and level designers, but not as quickly as they want. In the meantime it's a net drag on the company as they learn and adjust.

      People's opinions of AI are becoming polarized to the detriment of us all. Either it's going to be the apocalypse and also be completely useless and also take over everyone's jobs, or it's going to create utopia and leave everyone fully employed.
  • Such a surprise (Score:5, Interesting)

    by gweihir ( 88907 ) on Monday October 27, 2025 @09:14AM (#65753094)

    The same thing happens for code generation: https://arxiv.org/abs/2507.090... [arxiv.org]
    Devs think they are about 20% faster when, in reality, they are about 20% slower. Decreased code quality and added vulnerabilities are not counted in that.

  • I love reading book reports. They're so unique and personal. How often they emphasize our individuality. Unpretty. But domain and domestication are not the same... Or are they? Now spin the painting 180-degrees and see the true image: FLOPS.
  • by jacks smirking reven ( 909048 ) on Monday October 27, 2025 @09:31AM (#65753128)

    These are artists and programmers; you hired them for their skills, so let them use them. If you have AI tools, cool, let them integrate them into their workflow as they see fit, but as soon as you start dictating to people how they should do their work, you're already off the path. Your job as the employer should be defining achievable goals, and so long as they are meeting those, why do you care so much how they got there? Very sus.

  • by Anonymous Coward

    EA is trying to use an unproven pre-production solution in a production environment to meet actual goals. Not sure what they expected.

  • by Berkyjay ( 1225604 ) on Monday October 27, 2025 @10:59AM (#65753298)

    These are the types of jobs LLMs SHOULD come for.

    • At 50% accuracy it's a perfect fit. However, to game the system and have some old-skool fun, flip a coin instead, like they have for millennia already.

  • No AI-induced job market annihilation, at least until the AI bubble collapse...
  • by MpVpRb ( 1423381 ) on Monday October 27, 2025 @01:12PM (#65753662)

    Massively deploying immature tech causes problems.
    The better approach:
    Start small, measure results.
    If results are good, expand a bit, measure results.
    Repeat as long as results are good.

  • I don't recall this place posting blatant pay wall crap on the main page. I guess it won't really matter. No one here reads the articles anyway...
