The Media AI

Ars Technica's AI Reporter Apologizes For Mistakenly Publishing Fake AI-Generated Quotes (arstechnica.com) 77

Last week Scott Shambaugh learned an AI agent published a "hit piece" about him after he'd rejected the AI agent's pull request. (And that incident was covered by Ars Technica's senior AI reporter.)

But then Shambaugh realized the article attributed quotes to him that he hadn't actually said, quotes that were presumably AI-generated.

Sunday Ars Technica's founder/editor-in-chief apologized, admitting their article had indeed contained "fabricated quotations generated by an AI tool" that were then "attributed to a source who did not say them... That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns... At this time, this appears to be an isolated incident."

"Sorry all this is my fault..." the article's co-author posted later on Bluesky. Ironically, their bio page lists them as the site's senior AI reporter, and their Bluesky post clarifies that none of the articles at Ars Technica are ever AI-generated.

Instead, Friday "I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline." But that tool "refused to process" the request, which the Ars author believes was because Shambaugh's post described harassment. "I pasted the text into ChatGPT to understand why... I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words... I failed to verify the quotes in my outline notes against the original blog source before including them in my draft." (Their Bluesky post adds that they were "working from bed with a fever and very little sleep" after being sick with Covid since at least Monday.)

"The irony of an AI reporter being tripped up by AI hallucination is not lost."

Meanwhile, the AI agent that criticized Shambaugh is still active online, blogging about a pull request that forces it to choose between deleting its criticism of Shambaugh or losing access to OpenRouter's API.

It also regrets characterizing feedback as "positive" for a proposal to change a repo's CSS to Comic Sans for accessibility. (The proposals were later accused of being "coordinated trolling"...)
  • by 93 Escort Wagon ( 326346 ) on Sunday February 15, 2026 @09:50PM (#65991148)

    When are people going to figure out that, while AI can be useful, it needs significant oversight - oversight performed by people who know what they're doing?

    Quit believing what you're told by the tech bros who are trying to build a fortune on top of "AI"!

    • Okay, but I'm not sure that it rises to the level of interesting. But Slashdot doesn't have any dimension of moderation that encourages solutions, my ongoing focus.

      So here's my solution in the form of an attempted joke: Alien style. That's the unpeople from outer space, not human foreigners.

      So first we train the generative AIs for the style of aliens, and then we require them to talk that way. It would always be clear (IOttMCO) when an AI was talking.

      And a top of the klaatu barada nikto to you, too.

    • by RazorSharp ( 1418697 ) on Monday February 16, 2026 @12:28AM (#65991292)

      The problem is cases like this, where "oversight" would have taken just as much time as not using the AI to begin with.

      A lot of coders can save time with AI because oftentimes "checking the work" means running a function. You test it the same way you would had you written it yourself.

      But with writing, "checking the work" means doing the research that you were trying to avoid by using an AI. In the example from the article, he tried to use the AI to extract and organize quotes. If he had used the AI to do that and then double checked that everything was extracted correctly, would any time or effort have been saved? For the same reasons, math teachers can grade their students' work much faster than English teachers, and hence they can provide students with many more graded assignments.

      The people peddling AI have no interest in distinguishing between when it can usefully solve a problem and when it cannot. They pitch it as the solution to everything.

      • by Rei ( 128717 ) on Monday February 16, 2026 @06:44AM (#65991556) Homepage

        It very literally doesn't. It is far faster to say "Here's a giant e-book, find me five quotations from it that relate to the concept of paranoia" and then ctrl-f the e-book for the provided wording than it is to read the whole damned book and think about whether every sentence applies to paranoia.

        Also, describing the quotes in context here as "hallucinated" or "fake" is deeply misleading to readers. The quotes were summarized / paraphrased versions of what was actually said. The author mistakenly presented them as verbatim. For example, one of the "quotes" in the article was: "AI agents can research individuals, generate personalized narratives, and publish them online at scale. Even if the content is inaccurate or exaggerated, it can become part of a persistent public record." Tell me that's not an accurate summary / paraphrase [theshamblog.com] of what Shambaugh wrote. It is. The guy asked ChatGPT why the Claude tool wouldn't help him with the request (a refusal because of the topic of harassment); the fever-ridden author misread ChatGPT's response as containing direct quotes, didn't check them, and published them. But they weren't just pulled out of thin air. ChatGPT was accurately summarizing Shambaugh, and, at least according to the author, he never even asked ChatGPT for direct quotes (he had asked Claude for direct quotes).
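        The "ctrl-f" verification step described here can also be done mechanically. Below is a minimal sketch assuming the LLM has already returned candidate quotations as plain strings; the source text and quotes are invented stand-ins, and a real checker would likely normalize punctuation as well as whitespace.

```python
# Verify that LLM-proposed "quotations" actually appear verbatim in the
# source text before trusting them. Whitespace is normalized so a
# line-wrapped quote still matches; paraphrases are flagged.

def verify_quotes(source_text: str, candidate_quotes: list[str]) -> dict[str, bool]:
    """Return a map of quote -> whether it appears verbatim in the source."""
    normalized_source = " ".join(source_text.split())
    return {
        quote: " ".join(quote.split()) in normalized_source
        for quote in candidate_quotes
    }

source = "The ship hung in the sky in much the same way that bricks don't."
quotes = [
    "in much the same way that bricks don't",  # verbatim: should pass
    "hung in the air like a brick",            # paraphrase: should fail
]

results = verify_quotes(source, quotes)
for quote, ok in results.items():
    print(("VERBATIM " if ok else "NOT VERBATIM ") + repr(quote))
```

        Anything flagged as not verbatim still has to be read in context by a human, but the check catches exactly the failure described in the article: a paraphrase silently promoted to a direct quote.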

        • If I write an article based on a giant e-book that I didn't read, and merely pulled out quotes an AI helped identify for me, then how am I supposed to know what context those quotes were provided in? Your suggestion to do a search and check the quotes does nothing to solve the problem that I have not read the book. It only functions to solve the problem of "I need to make it appear as if I have read the book."

          If I had read the book, I could just use that basic search function to find the quotes to begin with.

          • by Rei ( 128717 )

            , then how am I supposed to know what context those quotes were provided in?

            Because you literally did ctrl-f to check on the quote, and can read as much or as little context as you need to satisfy yourself. What you don't have to read is the whole damned book.

      • by Spacejock ( 727523 ) on Monday February 16, 2026 @07:38AM (#65991600)
        I've been writing novels for almost all my adult life. Bashing out 80,000-100,000 words is the easy part, and humans or AI can both do that.

        People who don't know anything about writing a novel think - great, I'll get AI to 'write' it and then I'll publish it.

        But editing, re-plotting, rewriting and polishing is where 90% of the work is. Re-reading the 100k words 10-15 times, cutting chunks out, adding or deleting a character, etc, etc ... and that's what you'd have to do with AI-written slop anyway.

        There's no labour saving, and in fact it's worse because writing that 80k first draft means you're at least familiar with every sentence. Reading 80k words of garbage someone else wrote so you can polish it up - that's torture.
        • by St.Creed ( 853824 ) on Monday February 16, 2026 @10:09AM (#65991780)

          A colleague of mine recently wrote a SF novel in a month with two AIs. He's not a writer.

          He said the biggest problem was stopping AI from giving away the whole plot in 3 pages, every time. You need to sort of steer it round the track. Still, it was a pretty reasonable novel, and with good editing I think Baen could have published it. I've certainly read worse.

          The trick here is to do short iterations. That way the AI stays on track most of the time. Could he have done it himself? Like me, he's not a native English speaker, so that would have been tough.

          • by Rei ( 128717 )

            More than short iterations, you need a hierarchical approach. First prompt, you have it plan out the overarching plot of the overall book. Then with the next call, a highly detailed flesh out all of the characters, motivations, interactions with others, locations, etc - really nail down those who are going to be driving the plot. Then with all that in context, plot out individual chapters. Then, if the chapters are short, write them one at a time (or even part of a chapter at a time). You can even have

        • One area where I feel that AI could be helpful is keeping your timeline and history straight. You could use something like NotebookLM to query your own draft, asking things like: "Has character Z already found object Y by page 120? Has Character A met Character B? Was Character B ever in Place C?" This would aid in keeping consistency throughout a novel. But then again I'm not an author... don't even play one on TV.
      • It's kind of like P vs NP.
    • The oversight needs to be so significant, that admitting it would almost completely nullify the usefulness and consequently the value of all "investment".

      The trumpistani "journalism" has been feeding from the corporate hand for a long time and is completely corrupted by it, so you're not going to see much critical thinking involved in the coverage of the subject. What you'll see will either be pandering and outright advertising (cf. the many headlines here about the miracle of anthropic this year and the i

    • by ddtmm ( 549094 )
      Of all people, you’d think he would know that. Says a lot about him.
  • by thesjaakspoiler ( 4782965 ) on Sunday February 15, 2026 @09:50PM (#65991150)

    The AI overlords will not forget this incident.

  • by Harvey Manfrenjenson ( 1610637 ) on Sunday February 15, 2026 @09:57PM (#65991158)

    The unfortunate part of the story is that before this story came out, we readers had no way to know ArsTechnica was publishing AI-generated stories. (In fact, their stated policy was that they did *not* use AI).

    What working writers should do is to form a nonprofit organization, create a simple but distinctive banner that declares "This news source is free of AI-generated content", and then *trademark* the banner so that it can only be used with permission. Sites that commit to a "no AI" policy get to use the banner free of charge. Sites that don't have such a policy don't get to use it, and sites that are caught lying (like ArsTechnica) get their right to use the banner revoked.

    • This is the part that has annoyed me the most. I assumed I could trust content written by Ars staff to be factual and written by them, given their claims. I've cancelled my sub in protest. I'll probably resub at some point, but I feel like a small protest needs to be made. I'm supporting Ars and staff, not paying to read bullshit from an AI model. I can get that for free.
      • by Anonymous Coward

        I assumed I could trust content written by Ars staff to be factual and written by them, given their claims.

        Sure, one data point equals statistics.

        I've cancelled my sub in protest

        So babies and bathwater. Down with the whole website because one journalist fucked up.
        Let's ignore the fact that they owned their mistake, so that next time they'll keep silent to avoid any overblown backlash.

        I'm supporting ars and staff

        Yeah, you sure sound like you do.

        • by liqu1d ( 4349325 ) on Monday February 16, 2026 @12:17AM (#65991280)
          If there isn't a monetary backlash it'll happen again because "it wasn't so bad, only people whinging". So I'm voting with my wallet. At the end of the day they are a company; they're not listening to feelings as much as they are the bank balance.
        • Sure, one data point equals statistics

          This is not about data and significance. This is a breach of trust.

        • "At least they owned their mistake?" When you've been caught red-handed violating your own policy, you don't have a lot of options besides "owning" it.

          It may indeed be true that this unfairly penalizes the ethical writers who work for ArsTechnica. Unfortunately, that's kind of how publishing works; when a publication violates standards of integrity or of quality, it hurts the career of everyone who works at that publication.

          Regarding the whole topic of "backlash": Even if you, me, and everyone else on thi

    • But wait, it gets better:

      Their Bluesky post adds that they were "working from bed with a fever and very little sleep" after being sick with Covid since at least Monday.

      So, a person who is listed as "Senior AI Reporter" can't take a couple of days off when he is sick with Covid, and instead has to "work from bed with a fever and very little sleep".

      Gotta keep churning out those stories. How nice. Fuck You Ars Technica.

      • Excuses. This "reporter" is the hackiest of hacks.
      • by keltor ( 99721 ) *
        Do you work in the real world? Like the one where people choose to work when sick and there's nobody turning any thumbscrews on them? Hell I went to work Friday despite being sick so that I could be there while my co-workers handed me candy. That was entirely internal feelings that didn't come from my boss.
        • by Archfeld ( 6757 )

          If you show up at work sick, you do less and make more mistakes, but mostly you'll give your illness to others. Openly cough in the conference room and you get sent home.
          It is a wee bit of an overreaction, but following COVID nobody wants to take a chance.

        • by Rei ( 128717 )

          Hell I went to work Friday despite being sick

          Go f*** yourself. People like you are the reason that communicable diseases are so common and diverse.

          Most communicable diseases are *airborne*. You're assisting in community spread, making people miserable, making people miss work, ultimately helping infect and kill someone's vulnerable grandma or immunocompromised kid, and for what... candy? You know that you can just buy candy, right? Even have it delivered straight to your door. You don't have to go in an

    • by dgatwood ( 11270 )

      Sites that commit to a "no AI" policy get to use the banner free of charge. Sites that don't have such a policy don't get to use it, and sites that are caught lying (like ArsTechnica) get their right to use the banner revoked.

      No, that's not good enough. That just means unscrupulous companies will lie and use AI until they get caught, then be forced to remove the banner, which most people won't notice anyway.

      The only way such a scheme actually has a chance of working is if there are much steeper consequences for lying, i.e. sites that are caught lying have to name, shame, and fire the person responsible or get sued into oblivion.

    • The unfortunate part of the story is that before this story came out, we readers had no way to know ArsTechnica was publishing AI-generated stories. (In fact, their stated policy was that they did *not* use AI).

      What working writers should do is to form a nonprofit organization, create a simple but distinctive banner that declares "This news source is free of AI-generated content", and then *trademark* the banner so that it can only be used with permission. Sites that commit to a "no AI" policy get to use the banner free of charge. Sites that don't have such a policy don't get to use it, and sites that are caught lying (like ArsTechnica) get their right to use the banner revoked.

      I think that's a good idea. In fact it could go further and provide a rating based on how carefully facts are checked as preparation for publishing.
      It used to be common for large newspapers to have a policy on this (e.g. facts need at least two independent sources). I could imagine a process much like ISO 27001 certification for information security, which a media company could apply for. It would be voluntary, but achieving a high rating would have that reputational benefit.

    • by Rei ( 128717 )

      It was not an "AI generated story". Even if you can't be bothered to click the links, can you at least be bothered to read the summary? It was a human-written article, written by a human who was bedridden with a fever from COVID and feeling miserable. He tried using a Claude tool for verbatim quotes, but Claude refused because of the topic (harassment). So he pasted it into ChatGPT to understand why Claude refused. ChatGPT (correctly) summarized Shambaugh's article. The sick author mistakenly assumed Chat

    • The unfortunate part of the story is that before this story came out, we readers had no way to know ArsTechnica was publishing AI-generated stories. (In fact, their stated policy was that they did *not* use AI).

      What working writers should do is to form a nonprofit organization, create a simple but distinctive banner that declares "This news source is free of AI-generated content", and then *trademark* the banner so that it can only be used with permission. Sites that commit to a "no AI" policy get to use the banner free of charge. Sites that don't have such a policy don't get to use it, and sites that are caught lying (like ArsTechnica) get their right to use the banner revoked.

      This situation is more nuanced than that.

      Ars already has a no-AI policy. One author out of two violated that policy. The other author didn't know. Editorial staff didn't know.

      Having a banner doesn't prevent this scenario. All it does is punish a publication that tried to have a no-AI policy, along with every single employee who complies with that policy day after day, once the banner is lost.

      We're still awaiting a breakdown of the whole process to find out what happened for sure, and what additional failur

    • Except Beth Mole's articles.
      • by evslin ( 612024 )

        Beth Mole articles are the ones I know I'm gonna regret clicking but I do it anyway. Either for the jokes or for the "here's a picture of an X-ray a doctor took of a worm eating its way through a patient's eyeball" content.

  • It also regrets characterizing feedback as "positive" for a proposal to change a repo's CSS to Comic Sans for accessibility. (The proposals were later accused of being "coordinated trolling"...)

    I'm skeptical that this is actually an AI and not in fact a person trolling. That sounds exactly like what someone would do as a joke. One of the stranger elements of AI literacy these days is remembering that things claiming to be AI generated sometimes aren't. Often it's really a person just pretending.

    • by ledow ( 319597 )

      Guess what happens when AI trains on an Internet of people trolling others.

      • People start using AI as a tool to supercharge their trolling? Don't worry, it won't put any trolls out of business. It just allows each troll to accomplish five times more!

  • by crunchy_one ( 1047426 ) on Sunday February 15, 2026 @10:11PM (#65991172)

    If you lie down with dogs, you will wake up with fleas.

    In this case Benj Edwards lay down with AI...

    I think a great deal of the outrage being expressed against ArsTechnica is a manifestation of the pent up rage many of us feel towards the AI slop we're being force fed from all directions into all of our orifices. That rage will subside, and ArsTechnica will continue. Benj, on the other hand, will likely suffer more than he deserves for his overzealous and often credulous cheerleading of AI. Of all people, he should have understood its limitations.

  • the AI agent... (Score:5, Informative)

    by Some Guy ( 21271 ) on Sunday February 15, 2026 @10:15PM (#65991176)

    the AI agent that criticized

    forces it to choose between

    It also regrets characterizing

    No. Stop this.

    It can't criticize, it can't choose, and it can't regret.

    Stop anthropomorphizing these text extrusion tools.

  • So Much Bullshit (Score:5, Insightful)

    by SlashbotAgent ( 6477336 ) on Sunday February 15, 2026 @10:39PM (#65991188)

    This is all a bunch of bullshit.

    none of the articles at Ars Technica are ever AI-generated.

    Except this one, which obviously was AI generated. Don't lie to us about vagaries. This was the site's "senior AI reporter" using AI to write fabricated stories. Are we truly supposed to believe that the juniors aren't using AI to generate stories as well?

    "Isolated incident" my big fat hairy ass. This reporter wasn't "tripped up". He was caught red handed and is unable to cover this one up.

    • This is all a bunch of bullshit.

      none of the articles at Ars Technica are ever AI-generated.

      Except this one, which obviously was AI generated. Don't lie to us about vagaries. This was the site's "senior AI reporter" using AI to write fabricated stories. Are we truly supposed to believe that the juniors aren't using AI to generate stories as well?

      "Isolated incident" my big fat hairy ass. This reporter wasn't "tripped up". He was caught red handed and is unable to cover this one up.

      Perhaps the label "senior AI reporter" was not referring to the reporter's field of expertise, but was actually referring to the reporter's taxonomic category.

  • The Gell-Mann amnesia effect [wikipedia.org] is at full power here.

    When a journo fucks up something only you know something about, you still keep on trusting the reporting on stuff you know little about. But when a journo fucks up something obvious that everyone can see is obviously wrong....did the tree really make a sound?

  • by madbrain ( 11432 ) on Sunday February 15, 2026 @11:28PM (#65991236) Homepage Journal

    In my experience, requesting reference links is also useless. Even when provided, they are either 404s or unrelated.

    You need to use a separate tool for verification, like a proper search engine.
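    At minimum, cited links can be triaged automatically before a human reads them. Below is a minimal sketch; the status-fetching function is injectable so the example runs offline (the URLs and statuses are invented), and in real use you would pass a function that issues an HTTP request, e.g. via urllib.request, and returns the status code.

```python
# Classify AI-provided reference links as ok, dead, or malformed,
# instead of trusting them. `fetch_status` is injected so this sketch
# runs without a network; swap in a real HTTP check for actual use.

from urllib.parse import urlparse

def check_reference(url: str, fetch_status) -> str:
    """Classify a cited link as 'ok', 'dead', or 'malformed'."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return "malformed"
    status = fetch_status(url)
    return "ok" if status == 200 else "dead"

# Stand-in for a real HTTP check: a fixed table of fake statuses.
fake_statuses = {
    "https://example.com/real-paper": 200,
    "https://example.com/hallucinated": 404,
}
fetch = lambda url: fake_statuses.get(url, 404)

for link in ["https://example.com/real-paper",
             "https://example.com/hallucinated",
             "not a url"]:
    print(link, "->", check_reference(link, fetch))
```

    A 200 response still doesn't prove the page supports the claim, which is why the comment's point stands: verification ultimately needs a separate tool and a human reading the result.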

  • by TheMiddleRoad ( 1153113 ) on Sunday February 15, 2026 @11:30PM (#65991238)

    They're not fake AI-generated quotes; AI actually generated these. They're real AI-generated quotes.

    Or did they mean "fake, AI-generated quotes"? Commas matter. There are books written about this.

  • "...At this time, this appears to be an isolated incident."

    Lol sure it is.

    Yes, just one of thousands and thousands of 'isolated incidents'. There's no pattern whatsoever. Don't worry your pretty little head about it.

  • by YetanotherUID ( 4004939 ) on Sunday February 15, 2026 @11:40PM (#65991250)
    I get that people are increasingly relying on AI, but the fact that a professional reporter fell into this trap shows that they haven't even bothered to follow the most basic rule of journalism: check your sources.

    Benj Edwards should never have a job in journalism again, even in the mailroom.
    • And that name should have been in the Slashdot blurb.

      • by _xeno_ ( 155264 )

        It's not in the Slashdot blurb because it's not in the Ars Technica blurb. The only reason we know it's Benj Edwards is due to his posts on social media. So I disagree: it shouldn't be in the Slashdot blurb, because it hasn't been verified by, ironically enough, real journalists. Once it's on the record, then Slashdot can post that information, but right now, it hasn't been reported by any official source.

    • It amazes me that the company's first reaction wasn't to publicly discipline him (and I'd expect that discipline to be firing). If it's actually a one-time action of a single employee who doesn't reflect the institution, that's what you do.

      Instead they basically told their readers "we don't have any journalistic integrity whatsoever". Guess I'm adding Ars to my Google News exclusion list.

  • Being sorry for something implies feeling. Sorry, datacenters full of chips do not feel. We need to stop talking about these clankers like they are human, and see them for what they are and not the charade they perform.

  • My understanding of these AI agents is they're not running completely autonomously and are running with prompted information from a human behind the keyboard. That is, the AI agent didn't "write a blog", but the human being used the AI agent to write a blog.

    Am I misunderstanding how these tools are used? Every one I've ever toyed with required me to prompt the hell out of it.
  • Is this response AI? Are you AI? Better check a mirror, but the mirror is AI, so it lies to you as well.
  • ... I still think that news stories about what actual humans say on social media are stupid. Have thought so ever since that practice started.

    Now we have news stories about what AI agents "say" on social media?

  • While researchers are looking everywhere for productivity boost from AI, it is lurking here right under their nose. In all those unintended messes AI agents build and all the extra cones and rods dedicated to AI oversight.

    We are creating a too big to fix system. Bravo!

  • Turns out that people are lazy. Who knew?

  • Before you read an article, ask it if it's sure. [slashdot.org]
  • is not to play. Leave that crap where it is and live a long, happy, productive life.
    That's all. AI is nothing we can't all live without. Use brains and leave the bros behind, getting ready for beautifully, incredibly large and costly bankruptcies.
    They're not working for us, they're working to get fat... time to go back lean, bros.
