Twitter Social Networks

Twitter Begins To Show Prompts Before People Send 'Mean' Replies (nbcnews.com) 93

Nasty replies on Twitter will require a little more thought to send. From a report: The tech company said it is releasing a feature that automatically detects "mean" replies on its service and prompts people to review the replies before sending them. "Want to review this before Tweeting?" the prompt asks in a sample provided by the San Francisco-based company. Twitter users will have three options in response: tweet as is, edit or delete. The prompts are part of wider efforts at Twitter and other social media companies to rethink how their products are designed and what incentives they may have built in to encourage anger, harassment, jealousy or other bad behavior. Facebook-owned Instagram is testing ways to hide like counts on its service.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • But now they don't seem to, and just suspend you without warning. I do remember on several occasions getting warned about posts on FB before posting, but almost always they were misunderstood by the AI or involved me quoting someone else.
    • Facebook used to block me regularly; they don't seem to like you calling someone a "racist", even though when I used it the target was making openly racist statements.

      Once you get blocked once, you go on a "naughty list" and they'll ban you more and more regularly - in the end I found it amusing and considered it a "Badge of Honour".

      It matters nothing to me now; I closed my Facebook account three months ago and am happier and feel more liberated now that Zuckerberg doesn't track me any more.

    • Hey, they banned a sitting president of the US. Do you really think they won't come for you at some point???
    • twitter is yo momma now ! lets all get an account again (ha-lolll l l l....)
  • by Robert Goatse ( 984232 ) on Thursday May 06, 2021 @11:24AM (#61354778)
    Who decides what's mean? I wonder if they spun up a whole new department to detect levels of mean-ness.
    Also, what might be mean to you might be perfectly fine to me. How do you judge what's not nice? Maybe there's a snowflake meter where anything posted that could potentially be construed as "wah, my feelings are hurt" gets flagged?
    • Re: (Score:3, Insightful)

      That's what makes my skin crawl every time something like this comes up. Who decides what's fair, what's hate speech, what's mean, what's insensitive, what's dishonest, what's true, etc. In an ideal world society as a whole should come together and agree on certain standards for these things, but now that job seems to have been outsourced to a select few. I don't know about you, but that scares the crap out of me. I may not always agree with what society decides in these cases, but I'd trust them over a
      • That's what makes my skin crawl every time something like this comes up. Who decides what's fair, what's hate speech, what's mean, what's insensitive, what's dishonest, what's true, etc.

        Uhh, in this case? Twitter. *shrugs*

      • When it comes down to it, the phrasing is intentionally vague. Here's why:

        At their heart, Facebook, Google, and Twitter are ad agencies. Their revenue comes from convincing companies to place ads with them.

        It just so happens that 80% of all consumer decisions are made by women. This partnership of men making the money and women spending it has a long history in western culture. Thus, the advertiser who wins over women sells more.

        American companies, instead of making a better product, typically pe

      • If something hurts your feelings, you have the right and freedom to exert your will on that person through the block button.

    • by hey! ( 33014 ) on Thursday May 06, 2021 @11:33AM (#61354822) Homepage Journal

      You decide what's mean. They just use an algorithm to prompt you to make the decision rather than post on autopilot.

    • by invid ( 163714 )
      Hey, you said "snowflake"! That's mean!
      • by gmack ( 197796 )

        Given how easy it is to make people who call others "snowflake" absolutely enraged, maybe flag any account that uses that word so that any replies to it get a "this person will be offended by any disagreement" warning.

    • Re: (Score:2, Flamebait)

      If it's just keywords and context, it might not be a bad feature. In other words, if it's a generic: "Consider the tone of your message as it may come across a little strong". As long as they don't include the entire Wokepedia: "Your post may be offensive to the LGBTQ community, parts may be misunderstood in a bad way by certain groups in southwest Asia, Turks will take exception to your choice of words to describe a certain historical event, and the tone is rather colonialist. Please check your privilege
      • parts may be misunderstood in a bad way by certain groups in southwest Asia, Turks will take exception to your choice of words to describe a certain historical event

        Wow, I would love to have an AI that told me "people may interpret your tweet incorrectly." Whether a subculture or not, it would be wonderful to find out that people are misinterpreting me.

      • "Consider the tone of your message as it may come across a little strong"

        "Piss off."
    • by Anonymous Coward on Thursday May 06, 2021 @11:37AM (#61354852)

      Who decides what's mean?

      Mean means 'average'. Nobody wants to read a bunch of average tweets, so it gives you a warning and suggests some ways to spice it up a bit.

    • Who decides what's mean? "AI", of course. It will be fed some garbage, biased, culture-specific and context-specific data, and then they'll let it go all inbred with itself and extrapolate. Viva la AI revolution!

    • Re: (Score:3, Funny)

      by Anonymous Coward

      All the autists here wondering how to tell a mean comment

    • by eepok ( 545733 )

      Not everything is SJW white knight committees. This can be a very simple program.

      Posts can be checked for really, really common insults in the same exact way that "bad word filters" are used. Moreover, it would be wise for Twitter to implement a system wherein someone can tag a post as "offensive, but don't report" so that the posts can be crosstabulated for words or phrases in common.
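A minimal sketch of the kind of filter this comment describes, assuming a hand-maintained word list plus the suggested "offensive, but don't report" tag. Every name here (COMMON_INSULTS, tag_offensive, and so on) is made up for illustration and is not anything Twitter has published:

```python
# Hypothetical sketch: a plain "bad word filter" plus an "offensive, but
# don't report" tag whose posts are cross-tabulated to find common words.
from collections import Counter
import re

COMMON_INSULTS = {"idiot", "moron", "loser"}  # stand-in word list

def needs_review(post: str) -> bool:
    """True if the post contains one of the really common insults."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return bool(words & COMMON_INSULTS)

# Posts that readers tagged "offensive, but don't report".
flagged_posts: list[str] = []
word_counts: Counter = Counter()

def tag_offensive(post: str) -> None:
    """Record a reader-tagged post and tally its words for later review."""
    flagged_posts.append(post)
    word_counts.update(re.findall(r"[a-z']+", post.lower()))

# word_counts.most_common(20) could then suggest candidate additions to
# COMMON_INSULTS, after a human looks at them.
```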

      • Moreover, it would be wise for Twitter to implement a system wherein someone can tag a post as "offensive, but don't report" so that the posts can be crosstabulated for words or phrases in common.

        Yeah, that won't get gamed.

        • by eepok ( 545733 )

          Twitter itself has simply become a tool for manipulating (or gaming) the perceived zeitgeist. The fact that you can purchase likes and followers, have more than one account, etc., all leads to gaming every aspect of the system.

          My solution is easy to implement and would have the same flaws that already exist, but no additional ones.

    • Re: (Score:1, Troll)

      Comment removed based on user account deletion
    • It's an NN, further trained by people who select "Tweet As Is" vs "Edit" vs "Don't Send". Starting out, it is almost certainly too aggressive in flagging, because that gives it more data with which to train the NN. It's a good idea in general to have a double-check on angry emails/tweets/texts. Maybe you need a moment to reflect.

      But I have no idea why you would be upset about being asked if you are sure before doing something. It's pretty common. Heck, for a while /. forced you to preview posts before you could submit.
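If this comment is right about it being a neural network tuned by users' responses to the prompt, the feedback loop might look roughly like the sketch below. The threshold, field names, and labels are assumptions made for illustration, not anything Twitter has documented:

```python
# Hypothetical sketch of the "prompt aggressively, learn from the response"
# loop: each "Tweet As Is" / "Edit" / "Don't Send" choice becomes a weak
# label for retraining the classifier.
from dataclasses import dataclass

FLAG_THRESHOLD = 0.3  # deliberately low at launch, so more data gets gathered

@dataclass
class PromptOutcome:
    text: str
    score: float      # model's "meanness" score in [0, 1]
    user_choice: str  # "tweet_as_is", "edit", or "dont_send"

training_log: list[PromptOutcome] = []

def should_prompt(score: float) -> bool:
    """Show the 'Want to review this before Tweeting?' dialog?"""
    return score >= FLAG_THRESHOLD

def record_choice(text: str, score: float, choice: str) -> None:
    """Log the user's decision: edits and deletes suggest the flag was
    justified; 'tweet as is' suggests a possible false positive."""
    training_log.append(PromptOutcome(text, score, choice))
```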

    • I haven't encountered the feature. If it's like the /. lameness filter, well, it's kind of lame. But if an AI can say "Hey, this tweet might be construed as mean," that gives the author a chance to either rethink their wording, decide that the AI is confused, or decide that they really want to be mean. If we had a FlameBait AI for /., we could take away more Karma for things that were flagged as flamebait but posted anyway!
      • You do realize that people flag things they simply disagree with as FlameBait, right?

        Oops, was that mean?
        • Yes, and in theory this gets picked up in meta-moderation. But if the AI determined "not Flamebait," the post could be put higher in the meta-moderation queue to adjust the awarding of future mod points. The idea is that the AI becomes a meta-moderation contributor.
  • by Erioll ( 229536 ) on Thursday May 06, 2021 @11:32AM (#61354818)

    There's always the xkcd comic called "Listen to Yourself" [xkcd.com], which did this by hacking YouTube so that it would automatically read your comment back and require you to listen to it before allowing you to post it.

  • Always remember: (Score:4, Insightful)

    by BAReFO0t ( 6240524 ) on Thursday May 06, 2021 @11:34AM (#61354826)

    Those who are only ever nice are scheming and hating just as much. They just hide it, so you really don't see that backstab coming. Or, even worse, they gaslight you when you notice they're conspiring against you.

    An honest hater is always preferable to a dishonest lover. At least you know where you're at.

    What Twitter does here will be a psychopath fest. And a bully fest for all the bullies who have learned to say the meanest things in the most "I'm nice and innocent and you're the meanie" way. Aka the most vicious, dangerous and evil haters in existence.

    • by invid ( 163714 ) on Thursday May 06, 2021 @11:46AM (#61354888)
      You know, there are actually nice people out there who give sincere compliments. Not all good things are done out of scheming, hypocritical self-interest.
    • I wonder if slashdot has been testing a similar feature but instead asks "This post appears to be too coherent. Do you want to fix it?", and somehow BAReFO0t has wound up as the person they test it on.

    • The majority of Twitter users are only interested in hearing what they want to hear, anyway. This is just another move to make their target audience feel right at home.

    • Or, it will keep me from just posting some instinctive post before my coffee that I regret later.

    • I think it will be a helpful tool for the "honest haters."

      Twitter Dialog: "Please review this message as we detected some hateful language. Are you sure you really mean the following?"

      Honest Hater: "You're right. I can squeeze in an extra 'fuck off and die' on line 3. Thanks!"

    • by invid ( 163714 )
      This is how the Man keeps his power. Have everyone else in the world hate and distrust each other. Have everyone think that everyone else is only working for their own selfish interests. Keep all of the little people fighting each other, while the Man sits back and smirks.
  • Finally it can verify the spelling of my cuss rants.

  • by Joce640k ( 829181 ) on Thursday May 06, 2021 @11:39AM (#61354860) Homepage

    Let's hope it works better than Slashdot's "ASCII Art" detector.

    (facepalm)

    • by eepok ( 545733 )

      HOLY CRAP! THIS!

      I enjoy writing long insightful posts when I think I have something useful to share. But after writing a couple only to get "ASCII art" warnings and not being able to figure out what could be triggering them, I just delete the post and move on.

      • Ellipses often trigger it. You can write a thousand-word essay, but put three periods in a row somewhere in there and you get flagged. It's both laughable and horribly depressing that someone hired to code a "tech news" website had the coding skills of a toddler.

    • "You can't sit here. You make Ascii art"
  • by nwaack ( 3482871 ) on Thursday May 06, 2021 @11:39AM (#61354862)
    "We've noticed you're posting on Twitter and are therefore a fool. Are you sure you want to proceed?"
  • Really confuse them, use this prompt.
  • by george14215 ( 929657 ) on Thursday May 06, 2021 @11:55AM (#61354928)
    To provide a platform to say mean things?
  • Man, if Slashdot did that it would light up like a Christmas tree!
  • How about you just block the mean tweets altogether?

    I mean seriously, we all feel strongly about certain things and we can have those opinions, but that doesn't mean we should be allowed to go around attacking other people.

    I know this comment, for example, will draw the ire of the freedom of speech crowd. And while I welcome their opinion and opposition, it is not productive if they just want to insult or degrade me. Certainly we can have a conversation yet have those insults filtered out...

    • Because often, being perceived as "mean" has a lot to do with opinions and not just the choice of words.
      • Isn't the line where it gets personal?

        For example, I could have started with "You're wrong JaredOfEuropa" targeting you specifically, and even though nothing harsh was said, it would have been personal.

        Instead if I leave the personal aspect out and just speak to the opinion... If we both do this we can have an honest discussion of opposing views, maybe agreeing, maybe not, but it takes out all of the animosity and attacks and perhaps the entertainment value of seeing two people try to fight it out on social

    • The AI can't determine with 100% certainty whether a tweet is mean. The idea is to avoid inadvertent meanness, which seems pretty reasonable. Plenty of comments are misconstrued, especially in short formats where nuance takes a back seat to conciseness. But it does help those who don't want to be mean recognize that they might be misunderstood. Those who do want to be mean can feel good that the AI has validated their intent!
  • This is the internet, so a significant part of the user base will see this as a personal challenge / badge of honor and try to see how many 'your post appears to be mean' screenshots they can grab.

  • Coming to a platform near you.

  • There should also be a timer before you can send very aggressive messages.
    Maybe an hour or so. Enough that you can cool down when you are writing in the heat of the moment, but short enough that the message is still part of the discussion when it needs to be said.

    After the timer has expired you can go to your quarantined messages, and see if you still want to say the same thing, and then send it.
    This might also give a boost to politely written responses, since they will arrive first.
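A rough sketch of the cooling-off queue proposed above, assuming the suggested one-hour delay. This is the commenter's idea rather than an existing Twitter feature, and every name in the code is hypothetical:

```python
# Hypothetical sketch: aggressive replies sit in a quarantine queue and can
# only be reviewed and sent (or deleted) once the timer has expired.
import time

COOL_DOWN_SECONDS = 60 * 60  # the suggested one-hour timer

quarantine: list[tuple[float, str]] = []  # (time queued, message text)

def queue_aggressive(message: str) -> None:
    """Hold an aggressive reply instead of sending it right away."""
    quarantine.append((time.time(), message))

def ready_for_review() -> list[str]:
    """Messages whose timer has expired; the author can now re-read each
    one and decide whether to send it unchanged, edit it, or drop it."""
    now = time.time()
    return [msg for queued_at, msg in quarantine
            if now - queued_at >= COOL_DOWN_SECONDS]
```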

    • And if the AI has misconstrued your message, what? An innocent party is put in time out for an hour? I'd rather scroll by hate-spew than have that happen.
      • Obviously it's only a good idea if the AI works reasonably well.

        If it happens rarely, it's not likely to have any real negative consequences for the debate. What post is so urgent that it cannot wait an hour?

  • "It looks like you're thinking of posting something so stupid it looks like Clippy suggested it. Do you want to proceed?"

  • I am so glad that I deleted my account there and removed the app from my devices. Let the site devolve into an echo chamber of like-minded people for all I care.

    • Well, if by like-minded you mean people who prefer not to communicate in mean ways, yeah, that sounds nice. I might start using Twitter again!
  • Slashdot has obtained an exclusive preview of the proposed prompt:

    "This program posts news to thousands of machines throughout the entire civilized world. Your message will cost the net hundreds if not thousands of dollars to send everywhere. Please be sure you know what you are doing."

  • Everything gets more unusable every day.

    Do I get to add things I care about to this list? Like people writing long religious drivel? Or are only their (read: slave-morality) interests enforced?

    • You can trust that it will be only a select few with a select view who get to put their finger in the pudding.
  • ... from Nineteen Eighty-Four.
  • ...get to see my Mean Tweets fix on Jimmy Kimmel?!
