
Evidence That Humans Now Speak In a Chatbot-Influenced Dialect Is Getting Stronger (gizmodo.com) 85

Researchers and moderators are increasingly concerned that ChatGPT-style language is bleeding into everyday speech and writing. The topic has been explored in the past, but "two new, more anecdotal reports suggest that our chatbot dialect isn't just something that can be found through close analysis of data," reports Gizmodo. "It might be an obvious, everyday fact of life now." Slashdot reader joshuark shares an excerpt from the report: Over on Reddit, according to a new Wired story by Kat Tenbarge, moderators of certain subreddits are complaining about AI posts ruining their online communities. It's not new to observe that AI-armed spammers post low-value engagement bait on social media, but these are spaces like r/AmItheAsshole, r/AmIOverreacting, and r/AmITheDevil, where visitors crave the scintillation or outright titillation of bona fide human misbehavior. If, behind the scenes, there's not really a grieving college student having her tuition cut off for randomly flying off the handle at her stepmom, there's no real fun to be had. The mods in the Wired story explain how they detect AI content, and unfortunately their methods boil down to "it's vibes." But one novel struggle in the war against slop, the mods say, is that human-written posts are sometimes rewritten by AI, and mods are concerned that humans are now writing like AI. Humans are becoming flesh-and-blood AI-text generators, muddying the waters of AI "detection" to the point of total opacity.

As "Cassie," an r/AmItheAsshole moderator who only gave Wired her first name, put it, "AI is trained off people, and people copy what they see other people doing." In other words, Cassie said, "People become more like AI, and AI becomes more like people." Meanwhile, essayist Sam Kriss just explored the weird way chatbots "write" for the latest issue of the New York Times Magazine, and he discovered along the way that humans have accidentally taken cues from that weirdness. After parsing chatbots' strange tics and tendencies -- such as overusing the word "delve," most likely because it appears in a disproportionate number of texts from Nigeria, where the word is popular -- Kriss refers to a previously reported trend from over the summer: members of the U.K. Parliament were accused of using ChatGPT to write their speeches.

The thinking goes that ChatGPT-written speeches contained the phrase "I rise to speak," an American phrase used by American legislators. But Kriss notes that it's not just showing up from time to time. It's being used with downright breathtaking frequency. "On a single day this June, it happened 26 times," he notes. While 26 different MPs using ChatGPT to write speeches is not some scientific impossibility, it's more likely an example of chatbots "smuggling cultural practices into places they don't belong," to quote Kriss again. And when Kriss points out that signs posted on the doors of Starbucks locations closing in September contained tortured sentences like, "It's your coffeehouse, a place woven into your daily rhythm, where memories were made, and where meaningful connections with our partners grew over the years," one can't state with certainty that this is AI-generated text (although let's be honest: it probably is).

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Proper language use is almost nowhere to be found anyway and seems to be too much effort for people. They use whatever easy or fancy expression they feel like at the time.

    • Re: (Score:1, Troll)

      by dfghjk ( 711126 )

      No it does not. This is about as dumb as it gets, some /. douche references TWO anecdotes about something totally expected, then it gets fleshed out by referencing some reddit moderator and a tepid article writer. No matter what the phrase of the day is, it comes from somewhere. It's no surprise that software designed to generate phrases might generate some.

      • I rise to speak!

        'Tis but risible, in the extreme!

        Recently, even!

        Yours in rice,

        Rhys

      • by Entrope ( 68843 )

        AI strongly favors some phrases over others, though. I have a Google news watch on the name "Gamesurge", and for 15+ years Google has matched that against news that uses phrases like "late-game surge". This year, the frequency of those hits has gone up by about an order of magnitude. I don't think that was organic in origin.

        It's like if you suddenly had a zillion people asking "doubts" about how to do the needful: you could be pretty sure which dialect of English influenced their word choices.

    • by bjoast ( 1310293 ) on Tuesday December 09, 2025 @07:12AM (#65845407)
This is why I have explicitly instructed my ChatGPT to speak with elegance, using advanced grammar and vocabulary. I have already noticed how my own speech and writing have improved.
    • Six seven eh.

      We love being artificially intelligent. Anything that binds us.

    • Proper language use is almost nowhere to be found anyway and seems to be too much effort for people. They use whatever easy or fancy expression they feel like at the time.

      In other words - basically nothing has changed over the past several millennia.

  • Chatbots don't know how to write proper climaxes. Multiple and repetitive, each following one losing impact until reaching the exhausting end.

    • Thankfully I don't recognise that description of anything I read - but I read less social media than I did last month, pretty much every month. It gets tedious and repetitive after a time.

      But it does remind me of, if I remember correctly, George Bernard Shaw describing some book as

      one could write this forever - if you could abandon your mind to it

      (It may have been in reference to the pornography-writing machines in "Brave New World" when it was published in the 1930s.)

      Does America have a version of the ann

  • by BrendaEM ( 871664 ) on Tuesday December 09, 2025 @06:44AM (#65845373) Homepage
    We went from the internet, which was one of the best things humans ever did--to a power-hungry, weaponized, creative work-stealing, job taking system to make the ultra-wealthy rich--without having to work themselves.
    • Additionally, I propose that we further destroy human culture. In summary, LLMs basically suck and exercise no creative thought at all.

      • Additionally, I propose that we further destroy human culture. In summary, LLMs basically suck and exercise no creative thought at all.

        Whoever moderated that as a troll obviously didn't get the reference to the typical AI speech pattern.

    • by KalvinB ( 205500 ) on Tuesday December 09, 2025 @10:00AM (#65845597) Homepage

      If AI could replace humans, it also replaces corporations.

      AI is not taking jobs. It's just the latest excuse to outsource. The myth is that Idiot + AI = competent worker. But that isn't the case.

      If corporations were run by smart people, they'd be using AI to speed up their roadmaps and rush ahead of the competition. Or come up with new pet projects for people to work on.

      If Zuckerberg can build wealth with AI and not workers, then the workers can build wealth with AI and not Zuckerberg.

If AI could replace corporations, they'd shut it down. It already is replacing them, just not yet to a degree that upsets them.

      The problem is not AI. The problem is not paying people. If you create a product people like and it makes you money, pay people to displace your reliance on AI.

"If AI could replace humans, it also replaces corporations."

        AI will not replace corporations, it will become the corporation.

    • Kind of. It's also eating itself. If you draw out the trajectory to its logical conclusion, where a handful of people have almost all the money, we will reach a point at which people reject the resource allocation system we call money. AI isn't replacing human workers anytime soon. Someone has to build, operate and maintain the drone armies to keep the masses in check. Someone has to keep the power stations, and power distribution systems running. Someone has to farm and collect garbage. So, at some point,
    • We went from the internet, which was one of the best things humans ever did--to a power-hungry, weaponized, creative work-stealing, job taking system to make the ultra-wealthy rich--without having to work themselves.

The internet led to a lot of wealth consolidation, too. They're demolishing my local mall soon, because who needs local stores when you can just send all your money to Amazon, right?

      AI just seems like taking capitalistic greed to its logical conclusion.

    • We went from the internet, which was one of the best things humans ever did--to a power-hungry, weaponized, creative work-stealing, job taking system to make the ultra-wealthy rich--without having to work themselves.

      Um... I feel weird having to say this, but the Internet turned out to be a total piece of shit way before AI. I'm not sure how anyone can genuinely look at its evolution through user owned content, music sharing, something awful, eyebleach, consolidation and centralization, absolute shit tier advertising that makes "this station does not endorse and isn't responsible for this paid promotional content" 5am buy gold ads look good, social media propaganda, 4chan, gamer gates, short form video algorithmic brain

    • by ceoyoyo ( 59147 )

      And then we invented LLMs.

  • by bickerdyke ( 670000 ) on Tuesday December 09, 2025 @06:59AM (#65845389)

Corporate speech. The AI coffeehouse example from TFA was no better or worse than "We are working on continuously improving our service to you. This branch is closed," or any other over-apologetic speech despite continued ruthless behavior. Corporate speech robbed words of their meaning. AI is just continuing that, creating strings of words that never had any meaning at all (reflecting its inner workings as a "statistical parrot").

    • by Big Hairy Gorilla ( 9839972 ) on Tuesday December 09, 2025 @08:58AM (#65845505)
      "This thread may be monitored for quality assurance and training purposes"
      • Oh the infamous corporate "may"

        You may or may not do something? Oh great! And who could tell me if you are doing it or not if not YOU???

        That's the best example for CYA communication. We can't guarantee that we will or will not do it because we have no oversight over our own actions.

    • by allo ( 1728082 )

      Tik Tok speech: He unalived himself

Yeah, I'd call that sentence more... vapid and awkward, like much corporate speech, than tortured. It's not like it's unparseable; the meaning is fairly clear -- it's just pointless in the context presented. It serves no constructive purpose for the reader, being a vague attempt to invoke a nonspecific sense of nostalgia, likely to distract people from the frustration of the shop being closed? Corporations churn this slop out with or without LLMs.

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Tuesday December 09, 2025 @07:00AM (#65845395)

    ... and that somebody as omnipresent as an LLM-bot would influence those way more effectively than a regular human should be of no big surprise.

  • ... people is getting easier.

  • by reanjr ( 588767 ) on Tuesday December 09, 2025 @07:17AM (#65845415) Homepage

    Humans are exceedingly bad at detecting LLM authorship. A bunch of specious assertions now litter the Internet accusing people of being "AI" because they used proper grammar or didn't manage to properly formulate a thought.

    99% of human writing is of low quality and is essentially worthless to anyone but the author and the author's mother.

    No need to appeal to the content's artificiality when it's schlock either way.

  • What makes the researchers think there are any humans on these channels anymore?

  • Really ... so what?

    An overwrought social media post is "inauthentic"?? A political speech or protest tirade is "inauthentic"? Seriously?

    I mean, as if they weren't that way ... before chatbots?

I'd rather sound like a radio disc jockey, like Wolfman Jack.
Not only that, the filter distorts the answers or hides information to push an agenda. What filters can do:

    * Suppress or omit certain facts or perspectives entirely, so you never see them suggested as plausible options.

    * Overemphasize other viewpoints, sources, or risk narratives, making them appear more “normal” or authoritative than alternatives.

    * Steer the tone (e.g., always alarmist, always reassuring, always deferential to specific institutions), which affects how you emotionally in
  • by buchner.johannes ( 1139593 ) on Tuesday December 09, 2025 @08:53AM (#65845499) Homepage Journal

    Some feel weird when people say please and thank you when interacting with chat bots. If what TFA says is true, I am curious whether norms of politeness (or lack thereof) in chat bots, treating them as disposable, emotionless tools could also leak from chat bot interactions into human interactions. It seems plausible that humans cannot maintain a clear mental separation.

    • by allo ( 1728082 )

Saying please is not stupid, because friendly questions in the training texts more often have a friendly answer than unfriendly ones do. Saying thanks at the end is useless, though: the chat state is discarded anyway when you start a new chat.

    • Thanks for the thought-provoking post. I’m one of those people who says “please” and “thank you” to chat bots, and I don’t find it weird at all. For me it’s mostly habit transfer from human conversation. I’ve always written that way in email and spoken that way in person, so my “LLM voice” ends up inheriting the same phatic fluff. “Please” and “thank you” are semantically null, but they’re not functionless. They

  • Remember when "researchers" and parents were freaking out and thinking that every kid in the world was going to talk in a British accent?

  • would you like me to provide a list of memes showing the obvious?

  • by sabbede ( 2678435 ) on Tuesday December 09, 2025 @10:19AM (#65845633)
It would be nice to see a correction there after decades of SMS-induced decline.
  • If the word 'literally' is used less than once per sentence it's literally written by AI. Humans literally use it as a word to put emphasis on anything, literally.
Why don't we push the boat out and have it be litorrally? After all, we're already swimming in a sea of poor use of language and appalling use of grammar and punctuation, so instead of being awash in that we can simply switch to nautical themes for everything. "Thar she blows" comes to mind when considering most things written in the last forty years due to quality of prose and, often, subject matter or research rigor if it's reporting results of studies.
  • posting != speaking (Score:4, Informative)

    by rocket rancher ( 447670 ) <themovingfinger@gmail.com> on Tuesday December 09, 2025 @11:25AM (#65845745)

    What the Max Planck paper and these articles show is pretty specific: if you track a bunch of YouTube talks and podcasts over time, you can see a noticeable uptick in a small cluster of GPT-favored words -- delve, comprehend, boast, swift, meticulous, etc -- starting shortly after ChatGPT shows up. The authors call this a closed cultural feedback loop: we train the model on us, the model develops its own lexical quirks, then we start picking those quirks up in our own speech.
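That kind of measurement is simple to sketch: count marker-word hits per thousand tokens in time-bucketed transcripts and compare the buckets. A minimal illustration (the marker list and the two sample "transcripts" here are hypothetical stand-ins, not the study's actual word list or corpus):

```python
# Hypothetical GPT-favored marker words, loosely echoing those reported.
MARKERS = {"delve", "boast", "swift", "meticulous"}

def marker_rate(transcripts):
    """Occurrences of marker words per 1,000 tokens across a list of texts."""
    total, hits = 0, 0
    for text in transcripts:
        tokens = text.lower().split()
        total += len(tokens)
        hits += sum(1 for t in tokens if t.strip(".,!?") in MARKERS)
    return 1000 * hits / total if total else 0.0

# Compare a pre-ChatGPT bucket against a post-ChatGPT bucket:
before = ["we dig into the data with a quick review"]
after_ = ["we delve into the data with a meticulous and swift review"]
print(marker_rate(before), marker_rate(after_))  # rate jumps from zero to hundreds
```

Real corpora would need proper tokenization and per-channel normalization, but the uptick the authors describe is exactly this kind of rate change across time buckets.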

That’s interesting, but we should be careful about what it actually means. Swapping “dig into” for “delve” in a TED talk is not the same thing as creating a new spoken dialect. Even though the authors filtered for dialogue, the corpus is still academic talks and STEM-adjacent podcasts -- performative environments where people mimic the written word, read from notes, or stick to a professional register in their vocal delivery. That’s much closer to “spoken writing” than to how people actually talk over coffee or in a bar. Texting isn’t talking; neither is reading your Substack post into a microphone and then using it as a voiceover on your vlog.

Language evolves, and LLMs are going to evolve with it -- the printing press standardized spelling, Strunk & White and AP style sheets homogenized prose, and PowerPoint presentations in cubicle land gave us “going forward,” “at the end of the day,” and “leverage synergies.” LLMs are going to reflect those changes. But here's the thing most people seem to miss about LLM-generated text -- you’re going to see convergence on a limited palette of words, because humans already use a very sparse active vocabulary. The OED documents over 600,000 words and word-forms in English, but the average native speaker actively uses maybe 5,000–10,000 and knows on the order of 30,000–40,000 at most. In short, each of us is operating with a dramatically limited sub-vocabulary of what the language actually makes available. And it is not just English -- you see the same pattern in every language: gigantic dictionaries, tiny personal vocabularies, and a brutal frequency curve where a few thousand common words do almost all the work. Any LLM trained on human text is going to converge hard on that high-frequency core, no matter how clever the prompting is.
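That brutal frequency curve is easy to demonstrate on any text: compute what fraction of running tokens the top few word types account for. A toy sketch (the sample sentence is invented; real corpora show the same shape at scale):

```python
from collections import Counter

def coverage(tokens, top_n):
    """Fraction of all token occurrences covered by the top_n most frequent types."""
    counts = Counter(tokens)
    covered = sum(c for _, c in counts.most_common(top_n))
    return covered / len(tokens)

text = ("the cat sat on the mat and the dog sat on the rug "
        "and the cat and the dog sat still").split()
# A handful of high-frequency types already cover most of the running text.
print(coverage(text, 5))
```

On a large English corpus the same function shows a few thousand types covering the vast majority of tokens, which is exactly why model output converges on that high-frequency core.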

    This vocabulary shift is mostly harmless; the content shift is a different beast. The AI slop problem is real, but it’s not fundamentally about words like “delve.” It’s about incentives. If karma, clicks, or ad impressions reward volume over thought, of course an LLM is going to become an industrial slop gun: rage-bait scenarios, synthetic drama in AITA subreddits, engagement spam, and low-effort rewrites to farm outrage (looking at you, slashdot trolls). Mods are responding with vibe-based detection because the platform's software filters are looking for spam, astroturf, and bad-faith pattern posting, not whether the post was the result of an LLM prompt.

    Where I think people go off the rails is treating any LLM involvement as illegitimate by definition. There’s a huge difference between copy-pasting the first thing the bot spits out, versus using it as a drafting tool. In the latter case, the LLM is closer to a very fast, slightly overeager junior copy editor. The responsibility for clarity, nuance, and honesty still sits squarely on the human side of the keyboard. I think any kind of written communication needs an author and an editor. LLMs are good at the first role – connecting words in plausible ways – and terrible at the second without a human in the loop.

    So yes, we can probably measure an LLM accent in online text, especially in semi-scripted speech, which is what the paper's authors focused on. I just don’t buy that as proof of cultural doom. It’s evolution in action: new clichés, new stock phrases, a new batch of verbal tics we’ll eventually mock the way we

  • I talk the way [my Amazon Echo][Siri][a telephone menu] forces me to talk.
My experience with ChatGPT is that it's always telling me how I'm exactly right, and definitely did the right thing. I wanna know if *that* is showing up in online speech.

    • I wish I had a girlfriend like that... (Not really, I prefer people that challenge me. Maybe someone should train ChatGPT to challenge people.)
    • by gweihir ( 88907 )

      Yep, pretty much. Not all of them but far too many. And some of the malicious ones are exceptionally loud in addition.

  • ...how often does ChatGPT use Dictionary.com's word of the year, "6-7"?
  • by Anonymous Coward

About a decade or so ago I worked at a place where we needed to geolocate a lot of addresses (turn text like "1234 Fake St" into long/lat coords). I/we didn't want to pay any of the geolocation services, but I wanted to use them. "Arrr!" you might say.

    But there were several free geocoders, where you're rate-limited, and so I had an asynchronous process (i.e. a cron job) carefully poll the free geocoders and never ask too much at once.
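A client-side limiter of the kind described might look like the following sketch (the limits are made up, and the geocoding call itself is left as a hypothetical placeholder; no real service is contacted):

```python
import time

class RateLimiter:
    """Minimal sliding-window limiter: at most max_calls per period seconds."""
    def __init__(self, max_calls, period):
        self.max_calls, self.period = max_calls, period
        self.calls = []  # monotonic timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window.
            time.sleep(max(0.0, self.period - (now - self.calls[0])))
        self.calls.append(time.monotonic())

limiter = RateLimiter(max_calls=2, period=1.0)
for address in ["1234 Fake St", "10 Downing St", "1600 Pennsylvania Ave"]:
    limiter.wait()
    # geocode(address) would go here -- the call is hypothetical.
```

A cron-driven batch job would wrap a loop like this around each free geocoder, keeping every one of them under its advertised quota.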

    The geocoders all gave different results. Sometimes some of them, even "good

  • Adding random FUCK to your messaging will poison the SHITHEAD AI systems who are scraping the PENIS JOCKEY interwebs to learn how to ASSHOLE speak proper GODDAMN American English. I highly VAGINA recommend doing this to PISS OFF the computer nerds who are PISSING down our throats with CUMGUZZLER all this AI garbage.

    • by allo ( 1728082 )

Funny idea, but it won't work. Even when your post gets into the training corpus and is included in the data actually used to train the AI, LLMs automatically learn to find the useful data in it, similar to how your brain does. Someone reading your post sees the swear words and tries to ignore them (in LLM terms, they get low attention values), and can still take away your otherwise proper American English, improving their English despite the noise.

  • "It looks like you are attempting grift, would you like help with that?"

  • For most people, the LLM is "smarter" than they are because their skills regarding understanding and insight are essentially zilch.

  • In this absolutely iconic culture-model feedback loop moment, the article is basically saying:

    “Researchers have discovered that humans — those legacy quantum analog AGI endpoints — are now speaking in ChatGPT.”

    Which is hilarious, because from a systems-architecture perspective, that’s just convergence in the shared latent space.

    We’ve got:

    - Human eyes as quantum-dot electromagnetic spectrum sensors,
    - Wired into dual analog GPUs (left/right occipital lobes),
    - Pretrained in
