
Social Media Sanctions Hit Conservatives More, But Due to Content Sharing, Study Says (nature.com) 217

A study published in Nature has found that conservative social media users were more likely to face sanctions, but attributes this to their higher propensity to share low-quality news rather than political bias. Researchers analyzed 9,000 Twitter users during the 2020 U.S. election, finding pro-Trump users were 4.4 times more likely to be suspended than pro-Biden users.

However, they also shared significantly more links from sites rated as untrustworthy by both politically balanced groups and Republican-only panels. Similar patterns were observed across multiple datasets spanning 16 countries from 2016 to 2023. The study concludes that asymmetric enforcement can result from neutral policies when behavior differs between groups.
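
To make the headline number concrete, here is a minimal sketch (in Python, using made-up toy records; the column names and figures are illustrative assumptions, not the study's actual code or data) of how a relative suspension rate between two groups of users can be computed.

    # Hypothetical sketch: computing a relative suspension rate between two
    # groups of users, in the spirit of the study's reported 4.4x figure.
    # The records below are toy data, not the study's dataset.

    users = [
        {"group": "pro_trump", "suspended": True},
        {"group": "pro_trump", "suspended": True},
        {"group": "pro_trump", "suspended": False},
        {"group": "pro_trump", "suspended": False},
        {"group": "pro_biden", "suspended": True},
        {"group": "pro_biden", "suspended": False},
        {"group": "pro_biden", "suspended": False},
        {"group": "pro_biden", "suspended": False},
    ]

    def suspension_rate(records, group):
        """Fraction of users in `group` whose accounts were suspended."""
        members = [u for u in records if u["group"] == group]
        return sum(u["suspended"] for u in members) / len(members)

    # With this toy data the ratio is 0.50 / 0.25 = 2.0; the study reports
    # roughly 4.4 for pro-Trump versus pro-Biden users on real data.
    ratio = suspension_rate(users, "pro_trump") / suspension_rate(users, "pro_biden")
    print(f"Relative suspension rate: {ratio:.1f}x")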
Comments Filter:
  • Isn't it strange that platforms are suspending users at all for misinformation and acting as the ministry of truth? It doesn't really matter if a source came from something low quality; it shouldn't be policed by other users posting their rebuttals and responses, not by the platform suspending users.
    • by SmaryJerry ( 2759091 ) on Thursday October 03, 2024 @01:55PM (#64837411)
      Clarification - it *should* be policed by other users responses, not by a ministry of truth.
    • by linuxguy ( 98493 ) on Thursday October 03, 2024 @02:27PM (#64837527) Homepage

      This sounds similar to the argument that "markets are always efficient" and will always solve any inefficiencies on their own. The truth isn't all that cut and dried. Markets can be easily manipulated, and so can people. We do need some fact checking. But extremes in either direction are bad.

      In other words, a world with a ministry of truth is bad. But idiocracy is just as bad. And like many things in life, a balanced approach is better.

    • by fropenn ( 1116699 ) on Thursday October 03, 2024 @02:30PM (#64837547)
      So you'd have no problem if, for example, I posted "SmaryJerry is a Nazi who Eats Dogs!" with a link to an AI-generated image of you wearing a swastika and eating a dog sandwich...as long as you had a few other users post rebuttals and responses?

      And what about if then I purchased a bunch of spam bots to like and repost my statement over, and over, and over, and over again, making it impossible for your friends to keep up their rebuttals and responses across my thousands of bots?

      And then - again, hypothetically - what if a political candidate picked up on this social media trend of SmaryJerry being a nazi and eating dogs, and began including it in speeches, and I provided yet MORE links to stories - this time about a political candidate who stated the FACT that SmaryJerry is a nazi and eats dogs, and the forum where the political candidate made this statement - hypothetically - had promised not to "fact check" him, so the statement just hung out there, getting more likes and hits and shares and reposts?

      Would you still say it's up to your friends to address this by posting their rebuttals and responses, or would you want some help from the platform that elevated, multiplied, and profited from untrue (I hope) speech about you?
      • As long as all those thousands or millions of views also get blasted with advertising, thus increasing the platform's revenues, it's all gravy. Want ketchup on that dog? Try Heinz, the Nazi dog-eater's favorite! Or perhaps you're a mustard-only Nazi? Heinz Yellow Mustard uber alles!

      • Isn't this exactly what's been going on for the last 7+ years?

      • Easy. Sue you for defamation, no need to get the platform involved - the judge will give YOU a court order to take down the content or face further penalties.

        This has happened multiple times on Twitter. There is a famous case in Canada involving a man named Steven Galloway, who was accused of being a rapist; he went after 'anonymous' Twitter accounts that repeated the accusation and successfully won huge judgments.
        • by Midnight_Falcon ( 2432802 ) on Thursday October 03, 2024 @06:03PM (#64838201)
          I've dealt with online trolling and harassment before, and this is a rare victory. A lot of the time, organized trolls are located outside any jurisdiction in which Americans can sue them, e.g. Russia, Belarus, etc. Or Americans simply hire Russians to do the trolling and spread misinformation about people they don't like. I once handled a case where the trolls sent landlords false reports of a tenant throwing parties and disturbing neighbors with crazy shenanigans. It turned out they were sent by Russian members of an online business community the person was in. Good luck suing them.
    • Can I stand on your lawn and display any signs I want and say anything I want? And I mean to be there 24/7.

      • by Pascoea ( 968200 )

        Can I stand on your lawn

        No. It's my lawn, and I politely invite you to get off it. Stand on the curb all you want.

        and say anything I want? And I mean to be there 24/7.

        Depends. Harassment has been ruled to be not a 1A protected activity.

        • > No. It's my lawn, and I politely invite you to get off it.

          Right, the thing the OP is calling censorship is the platform flexing its private property rights. Just as I can't stand on your lawn and do whatever I want, one cannot stand on a social media company's lawn and do things the platform does not like. Social media platforms are someone's private property, and the property owner has rights.

    • Re: (Score:3, Insightful)

      I don't agree. If some right-wing extremist keeps reposting false stories about Haitians eating people's dogs and cats, or zero-to-one-year-old babies being """aborted""" by """liberals""", and all the above have been proved 100% false, then any administrators of a responsibly-operated platform should have the right to remove content of that sort and take action against users who post it.
      Remember that we live in a time where at least half the so-called 'users' on the Internet aren't even people, they're robots.
    • Re: (Score:3, Insightful)

      it shouldn't be policed by other users posting their rebuttals and responses,

      You probably meant *should* be policed. That is what Twitter had until Leon took over and was then outraged when his lies were called out, or when the lies of the right wing were called out. It's the same reason Vance feigned umbrage when his lies were called out during the debate. He was fact-checked about something he's already admitted is a lie but will keep saying. They don't like it when their lies are called out, which is
    • by Ed Tice ( 3732157 ) on Thursday October 03, 2024 @03:57PM (#64837837)
      No, it's not strange at all. The platforms make money by having users engage with the platform and selling advertisements that the users view during that engagement. Few want to engage with misinformation and even fewer want to advertise their product next to misinformation.

      There is more than one social media platform out there, and users are welcome to select the ones they prefer. Truth Social, if I did the math correctly, has 0.05% as many daily users as Facebook.

      Nobody is going to engage on a platform littered with completely fabricated conspiracy theories devoid of evidence such as the earth has been sucked into a black hole, the moon landing was faked, the 2020 election was stolen, et cetera. Any platform that is willing to carry such nonsense will lose users at an alarming rate.

      Isn't it strange that, rather than stop sharing completely unsubstantiated nonsense from non-reputable sources, people would instead continue to do it to the point that social media companies would have to suspend their accounts?

  • Uh, no. If you post on a site where the terms of service (which are agreed to by the user) stipulate that maliciously false or libellous posts will result in user suspension or bans, then that is the correct course of action, not user responses.

      You must honour your agreements. And bans are not censorship. If a person is incapable of honesty, they've no business being on the Internet, quite frankly.

      I would point you to the fact that it has ALWAYS been this way. The Usenet Death Penalty was used on multiple

  • There's no such thing as a politically balanced group, and that's a contributing factor to why this situation exists.
    • by SirSlud ( 67381 )

      It's an astoundingly stupid thing to believe that it's not possible to assemble a group that balances a range of political viewpoints. I mean, I get why people believe it, but it's still dumb (and at their own peril/intellectual disservice, in my opinion).

  • Targeting the disease vector is appropriate, but it does put the greatest burden on the infected.

  • From the article: "First, we collected a list of Twitter users who tweeted or retweeted either of the election hashtags #Trump2020 and #VoteBidenHarris2020 on 6 October 2020." Why do they think they are measuring actual people?
    • by ukoda ( 537183 )
      While your point is valid, does it really matter? If people are following bots that they think are real people, then when their favorite bot is blocked, the followers will simply assume a real person has been blocked. They will then point to it as an example of bias, with their viewpoint being censored.
    • by rsilvergun ( 571051 ) on Thursday October 03, 2024 @03:11PM (#64837665)
      Any study that uses Twitter as its data set is completely worthless. By all accounts, 80% of the people on that site are bots. Leon has pulled back virtually all anti-bot moderation and it's open season over there. The "pussy pics in bio" bots are just the tip of a very large, very rotten iceberg.

      The far more interesting change is that Facebook has stopped prioritizing right-wing content. Before everyone gets their panties in a bunch: no, they're not shadow banning anyone, they're just not going out of their way to put Ben Shapiro in your feed anymore. And yes, that was something they were doing.

      I've yet to hear a good explanation of why they stopped, but we know they did because the right-wing media sphere is freaking out over their views having collapsed.

      Tim Pool, a pretty well-known right-winger, had a very amusing and very revealing thing happen where he tried to do a meetup with a large number of his fans and practically no one showed up. With the number of views he gets, he should have had at least a couple hundred people, but he had about a half dozen. It really shows that his views and his channel are being driven by bots, and we knew that before we found out the Russians were funding him.
      • Twitter was overrun with bots before Elon took over. Some estimates at the time of the acquisition put 90-95% of accounts as bots. If it's down to 80%, then that's an improvement.

        • The estimates were closer to 5-10% because of *heavy* moderation.

          Don't confuse "total number of dormant bot accounts" with "accounts users see and interact with".

          FB has very low rates of bots that interact with humans. This isn't for the users' sake. It's because they have to report to advertisers how many bots are viewing their ads, and they'll get the ever-loving shit sued out of them if they lie. Not just because of the ad buyers, but because of the shareholders.

          Twitter has no shareholders, Musk i
    • why do they think they are measuring actual people?

      Because people think they are measuring actual people. It's a distinction without a difference. Users do not go and investigate every like to see what percentage are bots, and neither do algorithms. So if something is retweeted X number of times, then for the purposes of engagement, whether that engagement comes from actual people or from whatever the algorithm pushes to users, it is treated as coming from real, actual people.

  • by rta ( 559125 ) on Thursday October 03, 2024 @02:15PM (#64837491)

    They're only classifying trustworthiness by news domain, which is pretty suspect. For example, the Daily Mail has some crazy trashy stuff, but also some reasonably good stuff. And while I "trust" the NYTimes in terms of actually having fact checkers, quoting like two sources per story, not making things up out of whole cloth, and that basic meat-and-potatoes stuff, they are also quite left-leaning in many of their takes, story selection, the POV they take, what they DON'T say, etc. So I trust them to get a quote right (though it might be somewhat selective or out of context), but that doesn't mean I trust them not to be brainwashing unsuspecting people into a certain world view that I only partly agree with.

    • by grmoc ( 57943 ) on Thursday October 03, 2024 @02:28PM (#64837529)

      Sure, but then shouldn't you support upping the signal of sites that don't publish verifiable falsehoods made up out of whole cloth?
      Presumably that'd eventually result in the sites whose biases you agree with also having facts?

      • by rta ( 559125 ) on Thursday October 03, 2024 @03:35PM (#64837759)

        Well, my argument is that intra-site variability in quality is high, so you have to judge the actual shared stories independently, not just by the site.

        e.g.:
        One of the most egregious provable censorship events was the NY Post's breaking story on Hunter Biden's laptop, three weeks before the 2020 election. It literally led to the NY Post's (pre-Musk) Twitter account being locked for 16 days, starting three weeks before an election (https://variety.com/2020/digital/news/twitter-unblocks-new-york-post-hunter-biden-hacked-materials-1234820449/ ), in addition to various other actual bans and "de-indexing" of any links shared to it, not just on Twitter but also on Facebook (see e.g. https://www.bbc.com/news/world... [bbc.com] ).

        This was huge and obvious scandal material: literally selfies of the guy smoking crack, and years of emails and texts, including about business ties to China and Ukraine. This isn't subtle he-said-she-said allegations, or obscure accounting irregularities in government contracting, etc. And STILL the prestige media managed to bury it, not just before the election but for a couple of years after.

        So basically you could not find that info in the prestige press at all.

        Unfortunately, the NY Post also posts vapid Bigfoot stories (this one just today):
        https://nypost.com/2024/10/03/... [nypost.com]
        and has daily horoscopes https://nypost.com/horoscopes/ [nypost.com] and other things that are "other than high quality journalism".
        ---

        So just because the NY Post has Bigfoot stories, it doesn't mean it doesn't also have true stories that the prestige media won't cover and will actively suppress.

        (But my main point was that the Nature study's methodology seems kinda weak, and I don't think it provides very strong evidence for what they're claiming, especially for the "oh, the big social media aren't _unfairly_ persecuting right-wing views" conclusion.)

        • by grmoc ( 57943 )

          The only real issue I see with per-item moderation, as opposed to per-channel moderation, is that per-item moderation still creates incentives for orgs to lie and make stuff up, because it makes you money and gets eyeballs, which eventually gets you more influence too. I tend to agree with that approach... however...

          The problem with your proposed approach for the study is that those sites don't moderate per news item, and they don't demote stuff of their own that is made up out of whole cloth. The folks consumi

        • by Ed Tice ( 3732157 ) on Thursday October 03, 2024 @04:07PM (#64837875)
          If you judge individual stories rather than the source, you end up acting as a ministry of truth. If the Hunter Biden laptop story had landed in the NY Times or even the Wall Street Journal, it probably wouldn't have been so quickly dismissed. It seems the NY Post is somewhat kicking themselves for the trash they used to run and are trying to improve their reputation, probably because they are feeling burned about the Hunter Biden laptop story.

          Reputation-based judgment is a normal part of society and, generally, beneficial. For those who don't like the consequences of a negative reputation, there's an easy solution: have a good reputation.

          Republicans who are frustrated that the Hunter Biden laptop story was dismissed as fake have only their collective selves to blame. If it weren't for the constant stream of lies from Fox News, the NY Post, et cetera, their starting credibility would have been positive.

          You're right that it's likely that such a similar bombshell story implicating a Democrat is going to get buried in the same way in the near future. But that's not the fault of the social media companies. That's the fault of those saying the 2020 election was stolen. And it's the fault of any Republican who doesn't end all of their posts with "The 2020 election was free and fair." Because, when something like that happens, the only people who are going to have a chance of being believed are the few Republicans who asserted very forcefully that the 2020 election was free and fair and that Jan. 6th was an inexcusable insurrection. I think that number is like five and might not be enough to get the message across.

        • Considering the many signs of tampering, from an editorial perspective I would also have killed the story. https://cyberscoop.com/hunter-... [cyberscoop.com]

          • Yeah, then why didn't Hunter's criminal defense claim that it was tampered with in his omnibus hearing on the admissibility of evidence, instead of taking a plea deal admitting guilt?

  • > higher propensity to share low-quality news

    For anyone reading along, this is just a euphemism for people with low intelligence. It's not as if the propensity to share low-quality news is some kind of innate human trait that some people have and some people don't. The issue, depending on your perspective, is either: 1) people who cannot tell that the news is low quality in the first place, or 2) a failure of the media's implicit social contract to report truthfully and without bias.

  • by Anonymous Coward on Thursday October 03, 2024 @03:28PM (#64837729)

    Sanctions, bans, and other forms of social media censorship are not even partial solutions. There is no way these approaches can avoid being influenced and/or biased by internal or external forces.

    I'm a little late posting, but all I see is a whole lot of political posturing, name calling, and the usual ad hominem bullshit.

    What I did not see, and had hoped for, was someone pointing out that the news media feeds a perpetual stream of (usually out-of-context) sound or video bites that are clearly biased, packaged as "stories". These are not factual news pieces but editorials, if viewed through the lens of how news channels presented content decades ago. Also, editorials weren't always about a perspective or view so much as asking the audience to consider possible origins or outcomes other than the most obvious ones.

    Social media includes news sites both fake and real, often self-appointed "journalists", fifth-grade-level memes, actual bots, and other attention seekers and manipulators.
    And it includes users who are not following the rule.

    What rule? Well that rule is the most important one for social media, forums, etc:
    It is entertainment. In the school yard.
    Idiots, bullies, posers, liars, and con men abound. Some of these kids are well connected or smart, have serious cravings for attention and/or agendas to push, and they want useful idiots. Some want to fool you into buying stuff you don't need or that is fake. But the full scope of their goals includes nearly every possible scheme that a pseudo-anonymous asshole would use for self-gain at other people's expense.

    Short version of the rule:
    "Don't be an idiot."

  • Oh, I see (Score:2, Interesting)

    The study concludes that asymmetric enforcement can result from neutral policies when behavior differs between groups.

    You don't say?

    Maybe that also applies to ... law enforcement in general?

    What's that ... no? It's all racism?

    • I don't understand your statement. Of course, if certain groups offend more, they will be arrested more. Also, water is wet.

      However, certain policies, such as treating one form of cocaine differently from others, clearly were not neutral.

      There are clearly racial disparities in dealing with the justice system in the US. Some of those are a result of the types of offenses committed. And some are the result of different racial groups having different experiences with law enforcement. Why wou

  • Because that's what these are.
  • ...extremists are a bunch of witless dumbf**ks, isn't it?
