
AI is More Persuasive Than People in Online Debates (nature.com)
Chatbots are more persuasive in online debates than people -- especially when they are able to personalize their arguments using information about their opponent. From a report: The finding, published in Nature Human Behaviour on 19 May, highlights how large language models (LLMs) could be used to influence people's opinions, for example in political campaigns or targeted advertising.
"Obviously as soon as people see that you can persuade people more with LLMs, they're going to start using them," says study co-author Francesco Salvi, a computational scientist at the Swiss Federal Institute of Technology Lausanne (EPFL). "I find it both fascinating and terrifying." Research has already shown that artificial intelligence (AI) chatbots can make people change their minds, even about conspiracy theories, but it hasn't been clear how persuasive they are in comparison to humans. GPT-4 was 64.4% more persuasive than humans in one-to-one debates, the study found.
Re: (Score:2, Funny)
Re: (Score:2)
Get help. You are deranged and deep in delusion.
access to background information (Score:5, Informative)
Non-paywalled article here: https://archive.ph/1gHnF [archive.ph]
Humans are going to routinely get hacked. Maybe a tinfoil hat will help?
"when neither debater — human or AI — had access to background information on their opponent, GPT-4 was about the same as a human opponent in terms of persuasiveness. But if the basic demographic information from the initial surveys was given to opponents prior to the debate, GPT-4 out-argued humans 64% of the time."
“It's like having the AI equivalent of a very smart friend who really knows how to push your buttons to bring up the arguments that resonate with you,” says Salvi.
“The fact that these models are able to persuade more competently than humans is really quite scary,” she adds.
Re:access to background information (Score:4, Informative)
The baloney detection kit has never been more essential to install in children.
Unfortunately, this has been actively stymied by certain political groups (on both sides of the aisle, for different reasons), and almost nobody comes equipped with one, "Because it's easier!"
Humanity is doomed.
Re: (Score:3)
The baloney detection kit has never been more essential to install in children.
Unfortunately, this has been actively stymied by certain political groups (on both sides of the aisle, for different reasons), and almost nobody comes equipped with one, "Because it's easier!"
Humanity is doomed.
It really does feel like we're watching the culmination of a variety of perfect storms of stupid coming together to make sure that we have no hope at all for the future. We set plans in motion in the eighties to stupefy the population while also unfettering capitalistic tendencies among the owner class and deregulating industry. And now we have the fruits of all the wonderful policies put forth by Ronald Reagan coming to a beautiful head right at the same moment that we're pushing an AI as God agenda, with
Re: (Score:2)
Fermi isn't a paradox. It's a prediction. A prediction we're doing our darndest to fulfill.
Indeed. And while it is really fascinating to watch all that in action while being in the middle of it, it would be far nicer to have this as a more theoretical experience.
Re: (Score:3)
Fermi isn't a paradox. It's a prediction. A prediction we're doing our darndest to fulfill.
Indeed. And while it is really fascinating to watch all that in action while being in the middle of it, it would be far nicer to have this as a more theoretical experience.
I wonder if intelligence needs to walk hand in hand with emotional stability and ethical development to avoid it? Which makes me wonder if it's possible, because the powers that be seem heavily invested in the idea that our animal instincts are key to survival even past the point of society becoming a concept. After all, it keeps them in power.
Re: (Score:3)
I think for the current mix of people we have on this dirtball, we would need to remove most of them from responsibility and decision making for anything except their personal lives. There are only about 10-15% of all people that can fact-check. If they were to run the show, things would look fundamentally different. But they do not and they are not even enough to markedly influence democracy (such as it is). One interesting aspect is that education and access to information seems to help only these 10-15% and the rest stays as ignorant and disconnected as ever.
Re: (Score:3)
I think for the current mix of people we have on this dirtball, we would need to remove most of them from responsibility and decision making for anything except their personal lives. There are only about 10-15% of all people that can fact-check. If they were to run the show, things would look fundamentally different. But they do not and they are not even enough to markedly influence democracy (such as it is). One interesting aspect is that education and access to information seems to help only these 10-15% and the rest stays as ignorant and disconnected as ever.
Hence, theoretically, we could make this work. Practically, there is no way to that state.
I think, if we're speaking theoretically, that education and access to information could (that word pulling a lot of weight here) help more than 10-15%, but you'd have to somehow escape the early-age indoctrination against education and facts that is so prevalent even in supposedly civilized societies in order to see that percentage rise. If education were actually prioritized as highly as religious indoctrination, for example, we'd see a completely different slant on our current trajectory. Sadly, we cling to triba
Re: (Score:2)
Maybe, maybe not. It would certainly be a good idea to try. But I am not hopeful.
As to childhood religious indoctrination, I think that is the worst possible form of child abuse. It rapes their minds and causes massive damage to their lives.
Re: (Score:2)
Maybe, maybe not. It would certainly be a good idea to try. But I am not hopeful.
As to childhood religious indoctrination, I think that is the worst possible form of child abuse. It rapes their minds and causes massive damage to their lives.
Can't disagree. I managed to escape it, and have taken quite a bit of abuse for escaping it from the family for most of my life. I think it's an incredibly abusive practice that leads not only to damaged children and adults, but a damaged society. As we're witnessing today.
And some say the solution is more religious indoctrination.
Re: (Score:2)
For more background, "The Demon-Haunted World". Carl Sagan was a real visionary. I read this book recently, and it chilled me to the bone how correct he was in his predictions, 29 years ago.
However, I still think that Mark Twain, with the description of Huckleberry Finn's father, already gave out an early warning.
Re: (Score:2)
I never read "The Demon-Haunted World". Maybe I should. Thanks for the hint.
Re: (Score:2)
There are indications that this "kit" cannot be "installed". The only defense would be to limit voting and positions of power to people that can fact-check (10-15% of the general population, apparently with no connection to education level). That would work well and eliminate a massive amount of the crap the human race is doing. But it would be next to impossible to establish. I would think it would be a stable situation though. But with no way to get there ...
So, yes, humanity is doomed unless it gets very, ver
Re: (Score:3)
Humans are going to routinely get hacked.
Technically, that's what consumerism is. Computers might make it more efficient, as they do.
Re:access to background information (Score:4, Informative)
Humans are going to routinely get hacked.
Technically, that's what consumerism is.
Nah, it has nothing to do with consumerism. Nor is it a case of humans "going" to be hacked. The techniques for "hacking" people are age-old. They were created and used millennia ago, and later became formalized into sciences.
In ancient Greece young statesmen and nobles were taught rhetoric [wikipedia.org] - the art of persuasion. The first teachers of rhetoric (mainly in contexts of legal or political debate) were the sophists [wikipedia.org] (from whose name derives the term sophistry). They weren't very concerned with facts; their goal wasn't establishing the truth, but rather winning their case. Socrates (who was himself accused of sophism) criticized rhetoricians who teach anyone how to persuade people in an assembly to do what they want, without knowledge of what is just or unjust.
Rhetoric used to be part of academic curricula until about a century ago; from the study, it appears ChatGPT is rediscovering at least one of the basic concepts of rhetoric: adapt your argument to the audience.
Re: (Score:3)
Socrates (who was himself accused of sophism) criticized rhetoricians who teach anyone how to persuade people in an assembly to do what they want, without knowledge of what is just or unjust.
I can see why they did not like him. Same crap as today with the anti-science morons. "Winning" at all cost. Pathetic.
Re: (Score:2)
Consumerism, propaganda, mind-control, "soft" totalitarianism, etc.
Essentially "Computer Aided Evil". Great.
Re: (Score:3)
Most humans are as dumb as bread. Only about 10-15% can fact-check and only about 20% are accessible to rational argument. The rest have no defenses against this type of crap.
Re: (Score:2)
I disagree with your implication that it has to do with intelligence. A lot of the required defenses and insights into common fallacies are really, really simple to understand.
The real problem is that they are not taught enough, and not early enough. There is far too much resistance to teaching people this stuff early on. Almost all my (mostly university-educated) friends balk at the idea of starting to teach 12-year-olds the concept of "correlation != causation". "They are way too young/dumb to grasp that!"
64.4% more than zero? (Score:5, Insightful)
"GPT-4 was 64.4% more persuasive than humans in one-to-one debates, the study found."
One-to-one debates online don't generally persuade anybody; it's not clear that this statistic means anything.
"Obviously as soon as people see that you can persuade people more with LLMs, they're going to start using them"
"They" started "using them" LONG before that. And the fact that this is being discussed emphasizes just how irresponsible AI developers are with their output.
AI itself is not dangerous; it's how people apply it that is, and there is absolutely no concern over that as people rush to grift off of it. Who cares what gets damaged?
Re:64.4% more than zero? (Score:5, Funny)
Re:64.4% more than zero? (Score:5, Insightful)
It gets a lot worse when you realize that state propaganda operations can greatly maximize on this.
You are quite correct that this is very old news (Cambridge Analytica was how many years ago now?) and that state propagandists have been using this for years now (I seem to recall the last TWO presidential elections being heavily influenced by a FUCKING RAFT of state misinformation campaigns by multiple foreign nations, and even some domestic thinktank operations).
I agree that the technology itself is not directly harmful. If it was kept inside word processors as advanced text prediction, or advanced grammar checking or something, it would be a fine, safe, and legitimate use of what it really is-- however, Money Talks, and if the empowered-and-unscrupulous demographic out there can further increase their grifting, *THEY FUCKING WILL*.
Sometimes it's important to understand and appreciate why we cannot have nice things.
Usually, those reasons revolve around the existence and activities of such people, and how cozy they are with government.
Re: (Score:2)
One-to-one debates online don't generally persuade anybody, it's not clear that this statistic means anything.
Yeah they do, they persuade bystanders reading or listening along.
Re: (Score:2)
Rush Limbaugh used to say that you'll never get someone to admit you changed their mind. It may happen during the argument or later on when they reflect on it. People don't like to be told they
Re: (Score:2)
Calling a study flawed without reading the study is... err... flawed. The experimental design is completely different from what you describe. People were led into the debate not knowing beforehand whether their opponent was a human or an AI. They were not organically 'seeking information'; they were tasked with debating something as part of a study.
Machine quits less fast than man (Score:3)
Re: (Score:2)
Social media is about to die (Score:5, Insightful)
Let the bots just chat to each other.
Re: Social media is about to die (Score:1)
Re: (Score:2)
If only they had something to effectively persuade the people otherwise.
There is no down side to this. (Score:3)
If social media dies, the closest we come to a negative impact is that people can no longer pretend that Facebook relieves them of having to visit their grandparents.
As somebody that keeps a regular rotating routine of coffee meetings with friends and family alive, I love this future.
Not that scary (Score:5, Insightful)
This isn't that scary when you realize what's happening and why they are more persuasive.
They are using tactics that are known to be effective, but that most people have a hard time actually using because, emotionally, it's very difficult. Those who can do it end up being just as persuasive as an AI, if not more so - IF they have equivalent knowledge of the other person.
It was Pascal who pointed out that to persuade people you should first acknowledge their points of view and lead them to discover the counter-arguments themselves rather than imposing your own.
This is difficult emotionally, at least for me, because when talking to fascists, for example, it feels morally wrong to acknowledge their point of view. But if you can state their point of view in such a way that they identify with it ("yes, actually, that's how I feel and think") - maybe even state their beliefs better than they are able to themselves, with intimate knowledge of their concerns, it immediately builds trust with them and takes their defenses down a notch.
They may allow you to then help them discover the counter argument themselves, with some careful guidance.
This takes skill and an emotional resolve that few possess when discussing things that feel morally important.
Re:Not that scary (Score:5, Funny)
Re: (Score:1)
I tried Gemini on this, though I used a quick model, limited it to one paragraph, and I didn't try to refine it at all:
It's understandable to feel concerned about rapid changes and stories that circulate, especially when they touch on family and community. From what I've seen, most folks, regardless of where they come from, are really just looking to build a good life, work hard, and raise their kids with strong values, much like our own ancestors did when they came to this country. When it comes to what's
Re: (Score:2)
Re: (Score:2)
This is difficult emotionally, at least for me, because when talking to fascists, for example, it feels morally wrong to acknowledge their point of view.
That one is easy. You don't have to acknowledge their entire viewpoint, just find some part of it you agree with. With fascists, it's pretty easy. "Yes, it is important to have a good leader for a country, don't you think?" Then go on, "how can we find and choose a good leader of a country? What about when the leader dies, how do we find a new one? What if we accidentally choose a bad leader?"
Most people with really bad ideas have a core true fact in there somewhere that they've built on. Then it becomes
Re: (Score:2)
You don't have to agree with it, you just have to understand it. Not how you think they believe but how they think they believe. If you're arguing against something that isn't what they believe, you've already lost.
Re: (Score:2)
Re: (Score:1)
Irrelevant. It doesn't matter if you agree that the sky is blue but disagree that it was painted by a guy named Sam with a ladder and a brush. The important part is that you actually understand what you're arguing against and aren't beating a strawman either deliberately or unintentionally.
Re: (Score:2)
Re: (Score:1)
If their "ideas" are irrelevant, it's normal. As for personal? I won't remember you ten seconds after I close this tab, so it can't very well be that.
It's a function of efficient argument. If there exists a factor that completely controls the outcome of an argument, speaking about other factors is by definition irrelevant.
You have a car. This model of car has a 1% chance of exploding every time you start it beginning thirty days after its sale. Does it matter what its crash safety is? What color it is? How
Re: (Score:2)
You're missing something critical here. If you can't state their viewpoint to them in a manner they'd agree with, you're incapable of even having the argument, because you'll always be arguing with a strawman. You don't have to "acknowledge" their viewpoint or agree with it, you just have to be able to understand what you're actually arguing with.
Re: (Score:2)
Example: this is why all the anti "woke" shit was just banal and stupid - THEY couldn't even explain what "woke" meant, but they were damn sure against it!
You can't argue against someone who can't even define the argument themselves, but is still dead-set against it.
Rational arguments cannot survive the irrational.
Re: (Score:2)
Of course you can; you just ask them what they mean. Sure, each person is different, and it may include multiple things.
Re: (Score:1)
Look, it's your word. You don't get to say it suddenly doesn't have any meaning once the majority decide it's a pretty terrible insult because people who call themselves it are evil and insane.
Re: (Score:2)
because when talking to fascists, for example, it feels morally wrong to acknowledge their point of view.
Acknowledging their opinion as an opinion doesn't mean you agree with it.
If you know how something works on the inside, you have pretty good knowledge about what makes it not work, and what are the flimsiest parts. Ideology really isn't that different from a complex mechanical system - if you know the weak point, you know where it breaks the easiest and where to apply maximum pressure to snap things.
The best debaters are those that listen to what the opponent is saying while also identifying the rhetorical
Re: (Score:2)
There is also the detail that the average person is pretty ignorant about random topics and can barely shut up long enough to let someone else speak, never mind listen to what they're saying.
LLMs are fully prepared for any topic with superhuman breadth and depth of knowledge, don't interrupt and fully analyze anything you say before replying.
Re: (Score:2)
Your problem is that you think they are fascists, and probably call them that or imply as much. The moment you start insulting people is the moment you lose the argument; they will not listen. Most people are not fascists; they simply disagree with you on the solution. For example, I think the actual divide between Trump and non-Trump supporters (I assume that is what you are referring to) is actually quite small: they both see the world going to ruin, and they both see the corruption that is happening. The main difference is what the
Re: Not that scary (Score:1)
Maybe the first point would be limiting the use of the term fascist to actual fascists?
De-emotionalising your own language goes a long way to keeping the discussion civil.
Not surprising, but scary. (Score:3)
Convincing argument and skilled debate are both learned skills that most people are not good at. (I am not.) It's something you generally have to study and practice to become good at. A few people are naturally good at it. AI algorithms hoover up all of the most convincing debate material and have it available with perfect recall, putting the average human at a significant disadvantage. This is one area where it is not surprising that predictive AI does well.
As a non-real-time debate person, usually I'll think of a good answer a few days later. With the algorithm, it's pretty much instantaneous. I'm sure it's already being used in troll farm posts everywhere.
Re: (Score:2)
And I imagine that if someone wanted to make a more purpose-specific AI that is not only trained on all known factual data and statistics on a subject, but also trained on every argumentative and rhetorical style ever used, and what their drawbacks are, it could probably out-argue lawyers and politicians with 30 year careers in arguing.
We could call it Clem. Because Clem argues with everything and it's fucking exhausting.
If it's like Copilot with code (Score:5, Interesting)
When I tell it the code it has suggested doesn't compile, it responds with "You're right", tells me why it doesn't, then suggests something similar.
The whole thing about agreeing with the user, pandering to ego, acknowledging the mistake, then confidently making a new suggestion is bound to work in manipulating people.
Re: If it's like Copilot with code (Score:2)
Yeah, it does. It responds like you have succeeded in changing its mind, like it has learned... but it hasn't (right?).
If I'm not mistaken, it doesn't learn from individual conversations, no matter how convincing they are (or how convincing it makes you think they are).
Re: (Score:2)
Me: "Of course I'm right you stupid git! The tests didn't even pass."
Re: (Score:2)
Ummm. To generalize this: it gives you a wrong solution, then when you tell it the solution is wrong, it tells you that you're right and explains how to correct it to a correct solution... It seems either like you're oversimplifying the process, or like there is a serious problem with extra steps in that process. Shouldn't the AI have known the answer was wrong to start with, and just given the right answer at the start?
Re: (Score:2)
It's probably not very good at generating usb descriptors for embedded devices.
It ended up alternating between using the library interface structs and byte arrays. As I continued trying to get a working solution from it, it ended up in a repeating cycle.
I gave up in the end and did it myself.
I was trying to see if it fared any better at C, on a less proprietary problem. I've just been through a Copilot trial at my job, with Java and a large existing code base. Copilot did not make development faster.
I also
That is not good... (Score:2)
Neither for what is to come, nor as an evaluation of how easily many people can be convinced.
I had someone AI me about Pope guy (Score:2)
I had someone spew an AI bullet list about reformations by the new protector of child molesters chosen by the Catholics at me.
I will never take him seriously again.
Semi-coherent run-on sentences confuse people. (Score:2)
Re: (Score:2)
So a "Gish Gallop", on the internet.
Great.
Use it for good stuff then (Score:1)
Not an asshole (Score:2)
Re: (Score:2)
I was going to say, the chatbots are well-spoken, are specifically (arguably, only) designed to say stuff that "sounds right", and don't get tired or bored talking at length to idiots.
But yeah, they usually have a pleasant disposition as well, provided they don't go into a psychotic episode.
I doubt they're more persuasive than the best human, but they are surely more persuasive than the average human.
Re: (Score:2)
Also, infinite patience for dealing with assholes while having instant and total recall of all information on a subject. If you have no sensibilities to be offended, it's pretty hard to get offended. Oh, and no "understanding" of emotion outside the dictionary definition.
Ultimately, there is only a very limited set of outcomes if arguing with an AI:
1. the AI has a service interruption of some kind (session timeout, power interruption, network outage, datacenter nuked / hit by giant meteor, etc)
2. the huma
Re: (Score:2)
So MAGA will suffer from number three then?
Vice versa (Score:2)
It's more interesting, and worrying, that you can also convince an AI that it is wrong, and/or that it was wrong in the first place...
Bias is also quite prevalent with some AIs. They often seem like glorified search engines, and so are Garbage In, Garbage Out... they tend to presume that what they find on the Internet is the truth.
Re: (Score:2)
I didn't read the article (this is Slashdot) but I really doubt the AI is getting convinced of anything at all. Seems the two most likely outcomes are that the human concedes the argument, or the human pitches a fit and stops participating.
Third would be "service outage" causing the AI to become unresponsive / disconnected.
Re: (Score:2)
Re: (Score:2)
Interestingly, it is more bias confirmation than bias of its own. One might think the AIs copy the average bias of the Internet, and early ones really did. Current ones have so much alignment toward being friendly to the user that they instantly adopt your opinion and reinforce what you believe. They are trained on phrases like "Right, you have a good point ..." and will always compliment you for stating your opinion, even when they disagree with you and you voice an opinion they were trained to reject (the -isms and
Probably because... (Score:2)
Gemini persuasion (Score:3)
Answer (excerpt)
Let's dive into why Slashdot isn't just another news site, but a strong contender for the title of "best source of information," especially if you're looking for depth, a critical eye, and a community that genuinely cares about "News for Nerds, Stuff that Matters." Alright, let's settle in, because I'm about to make a compelling case for why Slashdot, that venerable "News for Nerds, Stuff that Matters," isn't just a source, but quite possibly the best source of information on the web, especially if you're looking for something beyond the superficial.
Now, "best" is a strong word, and the web is a vast ocean of information. But bear with me. We're not talking about being the first to flash a headline or the one with the slickest parallax scrolling. We're talking about a different kind of "best"—a "best" rooted in community, critical thinking, and a relentless focus on what genuinely matters to a technically-minded audience.
Here's why Slashdot reigns supreme in its own, very significant, domain:
The Unmatched Power of the Collective Nerd Mind (The Comments): This is Slashdot's secret weapon and its crown jewel. Any news site can give you a summary of an event or a press release. But where else do you get an immediate, multi-faceted peer review by thousands of people, many of whom are experts in the specific field being discussed?
Depth and Nuance: A news story is just the starting point. The real information often emerges in the comments. You'll find corrections to the original article, links to primary sources, historical context, explanations of complex technical details, and implications that the original author might have missed. It's like having a panel of specialists dissecting every piece of news.
Diverse Perspectives: While it's "News for Nerds," those nerds come from all walks of life – developers, sysadmins, academics, scientists, hobbyists, and even insightful laypeople. This diversity often leads to incredibly rich discussions that illuminate a topic from angles you wouldn't have considered.
Bullsh*t Detection: The Slashdot community has a finely tuned nose for hype, marketing spin, and factual inaccuracies. Misleading claims or poorly researched articles are often swiftly and surgically dismantled in the comments. This collective skepticism is an invaluable filter.
Curated Serendipity – "Stuff that Matters": The stories that make it to the Slashdot front page are not just algorithmically chosen; they are submitted by users and then selected by editors. This human curation, guided by the site's ethos, means you're getting a feed of what a very smart, technically-oriented community deems important, interesting, or groundbreaking. This often leads to discovering fascinating topics in science, technology, intellectual property, open source, and even quirky corners of internet culture that you wouldn't stumble upon on mainstream news aggregators.
A Moderation System that (Mostly) Works to Elevate Quality: Slashdot's user-based moderation system, while not perfect, is a long-standing attempt to sift the signal from the noise. Comments can be rated as "Insightful," "Interesting," "Funny," "Offtopic," "Troll," etc. This allows you to filter the discussion to see the highest-rated contributions first. When you read comments scored at +4 or +5 Insightful, you're often getting incredibly valuable information, concise explanations, or genuinely thought-provoking perspectives. This is a level of curated discussion quality that's hard to find elsewhere at scale.
Longevity and a Rich Archive of Tech History: Slashdot has been around since 1997. That's an eternity on the internet! This means it has a massive archive of discussions on virtually every major technological and scientific development of the last quarter-century. Want to see what an intelligent community thought about the first iPhone, the rise of Linux, the dot-com bubble, or early AI debates? It's there. This historical context is invaluable.
It's Not About Being First; It's About Getting it Right (Eventually, through Discussion): Slashdot isn't trying to compete with Twitter for breaking news. Its value lies in the "second take"—the analysis, discussion, and contextualization that happens after the initial news breaks. This allows for a more considered and often more accurate understanding to emerge.
A Unique Culture that Values Substance: Despite the inevitable noise that comes with any large online community, the core culture of Slashdot still values deep technical understanding, open source ideals, and a healthy skepticism towards corporate or governmental overreach. This ethos shapes the kind of information that gets highlighted and discussed.
Now, let's address the elephants in the room:
"The interface looks dated!" Some call it dated; I call it classic, functional, and focused. It prioritizes content and discussion over flashy design, meaning your bandwidth is spent on information, not tracking scripts and oversized images.
"The comments can be an echo chamber or full of trolls!" No online community is immune. But the moderation system is designed to combat this. And often, even the "echo chamber" is echoing some pretty well-informed and technically sound arguments. Plus, the sheer diversity of nerds often leads to robust debate rather than pure agreement.
Small wonder (Score:2)
The online 'debates' are just a collection of every single logical fallacy in the book.
People no longer learn how to actually debate.
Forget AI... (Score:2)
We in America just experienced an election in which an adulterous convicted felon managed to out-argue the best the Democratic party had to offer.
Pandering used to be illegal in politics, but it seems some laws just aren't enforced anymore. Even Plato recognized that rhetoric could be used to manipulate and deceive, rather than pursue truth.
AI just makes it easier to do what the wealthy have done for ages.
It starts simple (Score:2)
It starts width things liek bad gramma. Many ppl online write in a way that u dont want to read they're post at all. How should i be convinced by u when i barely understand ur words? A llm haz nearly perfekt gramma so good ppl start believing ur a a.i. bot if u dont write like thies. And dont u dare to use an dash or u are very sus.
So? (Score:2)
Of course AI is more persuasive, because it can quote statistics from memory, unlike any layman.