Will 'AI-Assisted' Journalists Bring Errors and Retractions? (msn.com) 22
Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal.
"AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said...
Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers.
While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence," he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..."
"Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said the people are what makes journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava.
For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently....
Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue.
Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian." But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter. We must stem the idea, pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat, that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not...
Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave...
But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...
"AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said...
Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers.
While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence", he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks with he could be "a bellwether for where much of the media business is headed..."
"Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said the people are what makes journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava.
For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently....
Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue.
Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian." But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter. We must stem the idea being pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not...
Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave...
But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...
No (Score:4, Insightful)
Re: (Score:2)
AI slop articles that get summarized into yet more AI slop. I doubt many humans read this stuff.
Re: (Score:1)
Look on the bright side, it is no longer the sign of a dullard if you do not in fact rtfa. I certainly didn't.
Business Idea: (Score:3)
Build a company that runs AI to check legal cases for AI-created errors that could lead to retrials, acquittals, etc. - and lawsuits against negligent law firms that fumbled someone's case because they clearly used a lot of AI.
And check newspapers for AI-created errors that could lead to libel cases and lawsuits.
And lawsuits against police agencies that used AI to build a false and misleading case against someone.
There's money to be made I tell ye! THERE'S GOLD IN THEM HILLS!!!!
No, AI is Turning Journalism into Shit (Score:2)
Re: (Score:3)
You seem to imply that most Journalism wasn't *already* crap.
Re: (Score:2)
I am confident that it will be more likely to use correct grammar.
Unlikely to change anything (Score:3)
"Journalists" have been caught making bold false claims followed by unread retractions for years, now they will blame it on AI just as CEO's blame AI for job cuts.
Unfortunately the media is rewarded by clicks, not truth.
Of course it will (Score:3)
This is a genuine concern (Score:4, Funny)
Forget reporters. Support monks. (Score:3)
If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work.
Do you mean scribes? Because I'm pretty sure all other writing jobs involve working with machines.
Oddly, journalism doesn't exist to provide jobs to people who wanted to get writing degrees. It exists to report the news.
The loss of classified advertising and the rise of the internet have drastically reduced funding for traditional journalism. The shift from daily newspapers and the evening news to a 24/7 news cycle has also cut the amount of time journalists have to gather and vet information. If you want to take notes with a pad and pencil and have your photographer take their film to a darkroom, you're simply too inefficient in the 21st century.
The critical missing pieces are editors. In the local paper I subscribe to (San Jose Mercury News), I see obvious grammatical and spelling issues on a daily basis, which implies nobody but the original journalist even read the article before it got published in a print newspaper. And they didn't bother to use a spell-checker, either. That extends to the reporting, too; studies are misquoted, statistics misused, important facts left out, and assertions unsupported.
That's before the use of AI to write drafts. So I don't see how this makes it much worse.
In fact, AI might actually make it better, whether it's writing the draft so the journalist can take on the role of editor, or taking the role of editor after the journalist writes their own draft. Because clearly none of that is happening right now at my local paper. It doesn't matter if it isn't always right. In fact, it's probably better if it hallucinates 10% of the time, because that way the journalist can't rely too heavily on it.
(Yes, in an ideal world, we'd have real human journalists and editors, and enough time for vetting to take place. Are people willing to pay for that? Evidently not.)
Regurgitated press releases (Score:2)
NO! - As AI's just make stuff up and lie about it. (Score:2)
AI is barely OK if you let it rewrite a few paragraphs.
When you ask it historical questions, it makes stuff up, presents it as fact, and argues with you about the false information.
So the longer an AI-created article is, the more fact-checking has to be done. And that makes using it for anything important really not worth it.
A lot of the current data in AI models remains biased. [usc.edu]
And now we know that: Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing [arstechnica.com]
Probably... (Score:2)
What is the purpose of a journalist ? (Score:2)
It depends on the audience who will read the articles that s/he writes.
If it is clickbait chasing nonsense about pop singers or film stars latest affair or wardrobe 'malfunction' then the readers are unlikely to be too critical unless you do not have enough pictures of naked flesh. Your editor & publisher will be happiest if you write lots of articles and care little if it is slop.
If you are writing about something supposed to be factual, eg: science; finance; politics; ... then the articles should be w
Not a journalist. (Score:2)
Who knew that the defining characteristics of the "brave" new world would be abject laziness and cowardice?
No, because... (Score:3)
The AI editors and AI "fact checkers" will have been coded by the same people (or, eventually, the same stupid AI programming code) and trained on the same data and will therefore not SPOT the errors, not require the retractions, and almost certainly "fact check" the errors as "true", thereby becoming the obstruction to actual humans correcting things.
AI is likely to produce a new world in which people can believe NOTHING in electronic format, and in which they need to return to getting information, and doing transactions, on a handshake with a trusted human, face-to-face.
Congrats to all you people working on stupid large language models and lying to everybody by misrepresenting this form of "AI" to the general public as though it were Artificial General Intelligence. You are on the cusp of destroying modernity and forcing society to step backwards 80 years or so. Those of us who worked to bring about the computer revolution INTENDED to build a bright future where computers made everything better, faster, more efficient, more factual, etc., but you are in the process of flushing it all down the giant cosmic toilet. Oh, and before you ask: NO, no additional algorithm can fix this. Algorithms cannot fix human nature, and human nature defaults to abusing every new technology. The current generation of AI is the most powerful yet least-understood-by-the-public tech to come along. It's already misleading people by the millions - just look at the MOUNTAINS of AI slop ruining the YouTube experience already. It only gets worse from here...
What's the value then? (Score:2)
This is the problem: (Score:2)
asking it to write something based on a headline he comes up with.
What happened to "the headline writes itself"? It seems to me that articles should be written about substance first, with the headline written as a summary. Instead, it seems we live in a world where the headline is more important than the substance, and that says a lot about the current state of society.