Instagram To Notify Users Comments Might Be Offensive Before They Are Posted (thehill.com) 132

In an effort to curb cyberbullying, Instagram is rolling out a new AI feature that will automatically detect whether comments are offensive and notify users before they are posted. The Hill reports: In an example included in the company's release, Instagram shows a user trying to comment "You are so ugly and stupid." Instagram follows up with a message asking the user "Are you sure you want to post this?" with an "undo" button. "From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect," Instagram said.

To further help protect users from unwanted interactions, Instagram said it will start testing a new "restrict" feature. Restricting a user makes that user's comments visible only to themselves; the account holder can choose whether to make the restricted person's comments available to others by approving them. Restricted users also will not be able to see when an account is active or when a person has read their direct messages.
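The warn-before-post flow described above can be sketched in a few lines. This is a hypothetical illustration only: Instagram's actual model and API are not public, so a trivial keyword check stands in for the classifier, and the function names (`looks_offensive`, `submit_comment`) are invented.

```python
# Invented stand-in for the AI classifier: flag a comment if it contains
# any term from a small blocklist.
OFFENSIVE_TERMS = {"ugly", "stupid", "idiot"}

def looks_offensive(comment: str) -> bool:
    """Return True if the comment contains a known offensive term."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & OFFENSIVE_TERMS)

def submit_comment(comment: str, confirm) -> bool:
    """Post the comment unless it is flagged; if flagged, ask the user
    to confirm first (the "Are you sure you want to post this?" prompt).
    Returns True if the comment ends up posted."""
    if looks_offensive(comment) and not confirm(comment):
        return False  # user pressed "undo"
    return True

# The example from the article: the user is warned and presses undo.
posted = submit_comment("You are so ugly and stupid.", confirm=lambda c: False)
```

Note the design the article describes: the check only gates posting behind a confirmation, it never blocks outright, which is why `submit_comment` still posts when `confirm` returns True.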

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    if you're a little whiny bitch

    • by Anonymous Coward

      Instagram shows a user trying to comment "You are so ugly and stupid." Instagram follows up with a message asking the user "Are you sure you want to post this?" with an "undo" button.

      A better message would be: "Are you sure you want to post this you whiny little bitch?"

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      You sound triggered

  • by Tablizer ( 95088 ) on Monday July 08, 2019 @07:55PM (#58893750) Journal

    Somebody will just make an app to get around it. Example:

    Original:

    "You are full of horse-shit, you clueless idiot!"

    Filter-Workaround:

    Your ability to perceive the world around you accurately is clearly suboptimal, in a way comparable to the end product of the equine digestive tract.
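The filter-workaround app Tablizer imagines could be as simple as a substitution table. A minimal sketch, assuming an invented euphemism dictionary (the `EUPHEMISMS` mapping and `soften` function are hypothetical, chosen to mirror the example above):

```python
import re

# Invented mapping from blunt insults to verbose euphemisms that a
# keyword-based offensiveness filter would no longer match.
EUPHEMISMS = {
    "horse-shit": "the end product of the equine digestive tract",
    "idiot": "person of suboptimal perceptiveness",
}

def soften(comment: str) -> str:
    """Replace each blunt term with its verbose euphemism, case-insensitively."""
    for blunt, verbose in EUPHEMISMS.items():
        comment = re.sub(re.escape(blunt), verbose, comment, flags=re.IGNORECASE)
    return comment

softened = soften("You are full of horse-shit, you clueless idiot!")
```

The point of the joke survives the sketch: the insult's meaning is unchanged, only its surface form moves outside whatever vocabulary the classifier was trained on.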

    • by Z80a ( 971949 )

      Unless they go ahead and do some evil shit, the point is just to alert you instead of blocking you, so the workaround is to press the "yep, I'm posting this" button.

      • ... and it never occurred to you that they might gather stats on your "yep" clicks and use them in some way, either now or in the future? Do you even database?
    • Also someone will write a script to show the bias in it.

      Once they automate the bias, it can be mined to reveal a bias that exists with very high statistical certainty.
    • To be fair, your use of polysyllabic words means you just lost 90% of the population that would be affected by your comment, rendering the whole thing moot.
    • Not a bad analysis (from Tablizer), which is why I have long suggested that this sort of problem should be approached from the perspective of helping the offensive person (AKA troll AKA OTI) render himself less and less visible. Let him insist on posting the offensive comment, but reduce his visibility for it, preferably without telling him.

      The version I've been advocating is slightly different. It would be based on enhanced karma (AKA MEPR) that would be used to let people more easily ignore people who haven't earned positive reputations.

      • The version I've been advocating is slightly different. It would be based on enhanced karma (AKA MEPR) that would be used to let people more easily ignore people who haven't earned positive reputations.

        Dunno. I'm not sure how well karma really works. I kind of suspect that it's often assigned on impulse by people who hold strong views one way or another, so they can't or don't assess the post on its merits.

        e.g. Here's a video that I think is very good, but which has (at the time of writing) slightly more dislikes than likes.

        • by AmiMoJo ( 196126 )

          Standard conspiracy nut tactic. Pile in with pages of irrelevant copy/paste crap, throwing so many arguments at them that they can't possibly hope to respond meaningfully. Ideally have a mob/botnet do it so it looks like there is widespread support for your BS.

          • Standard conspiracy nut tactic. Pile in with pages of irrelevant copy/paste crap, throwing so many arguments at them that they can't possibly hope to respond meaningfully.

            Yes, although I think these kinds of tactics are actually often used by people on both sides of heated arguments. As a social experiment, try finding an online argument that you have a position on, and respectfully querying an unsupported claim made by someone who holds the same position as you, without disclosing that you hold that position.

            • Adding to this, I kind of suspect that for ratings to be fair, they would have to be impartial--they would have to be assigned by people who don't have an interest in the discussion, so probably wouldn't bother reading it.
          • by shanen ( 462549 )

            I don't know. I thought his [james_gnz's] comment was probably supposed to be some kind of joke, and I just didn't get it. My primary options when I first saw it were to ask about his transition of ideas or wait to see if the moderation would clarify it. I decided to wait, but your [AmiMoJo's] reply doesn't really clarify it.

            However, I can respond by reverting to the original context of my comment, where I referred to MEPR (Multidimensional Earned Public Reputation). Improved (and symmetric) karma (AKA MEPR)

            • I don't know. I thought his [james_gnz's] comment was probably supposed to be some kind of joke, and I just didn't get it.

              I am being serious, although I'm not sure I explained it very well.

              I've tried to clarify my position in other posts, but basically I think people who choose to follow a discussion (whether they post in it themselves or not) probably have an interest in the discussion, and aren't impartial, and I think impartiality is a crucial factor in fair assessment of arguments.

              • by shanen ( 462549 )

                I am being serious, although I'm not sure I explained it very well.

                Well, I am sure that you did not explain it well. My new vague theory is that you tried to reference a controversial video to show that some people agree with it.

                If you are actually attempting to challenge some part of my earlier comment on grounds of my personal bias, then you need to make it clear what part you were challenging and what bias you think I have. I certainly would NOT claim to be any sort of neutral observer, though I try to make appropriate allowances for my own biases. In that context, I w

                • Well, I am sure that you did not explain it well.

                  I'll cede that.

                  My new vague theory is that you tried to reference a controversial video to show that some people agree with it.

                  More or less. It was to show that (although only by a small margin) the majority of voters downvoted it, unfairly, I think, because I think it's a very good video. (It's a video debunking an aspect of World Trade Center conspiracy theory, not promoting it.) My point was just that I think online discussion ratings are often very unfair.

    • by dcw3 ( 649211 )

      Reminds me of a phrase from the show Outlander that could be used...

      "Sheep fucker" translates to "Shagger of wee creatures"

  • Good. (Score:5, Insightful)

    by Z80a ( 971949 ) on Monday July 08, 2019 @07:55PM (#58893752)

    Good to see someone trying something other than just banning shit until there's no one left.
    It will also save the butts of a few people who trust autocorrect too much.

    • by Anonymous Coward

      Internet trolling is a serious problem. From the stats I have read, nearly all of the vitriol comes from a very small percentage of users, who just post prolifically (and use bots to post). These people seriously ruin the forums for everyone else.

      The problem is hard to solve technologically, but that doesn't mean we shouldn't try. No solution has to be perfect in order to be good enough.

      • by Z80a ( 971949 )

        Giving the banhammer to random hired people is just as bad as, if not worse than, the trolls, because you're quite possibly giving a troll the power to remove anyone they dislike.

      • Re: (Score:3, Insightful)

        by Dunbal ( 464142 ) *

        Internet trolling is a serious problem.

        How serious? Let's have some stats. In fact, let's start with a concrete definition of what a "trolling" comment actually is. Other than "something I don't like", that is.

      • Trolling has been around for decades now. It did exist before the dawn of Eternal September, but back then it was just funny (and in some instances it was considered a high art form - that is, the art of luring people into doing and saying things that are ridiculous or humorous). Nowadays, it's just crude and lame, and way out of hand (most of the political sites are infested with it to the point of uselessness, but the sites don't care - each visit is an ad impression regardless.)

        It's a hard nut to crack.

      • I suggest doing nothing. Let people fight if they want, let others skip the posts they don't like. Maybe give people tools to be able to not see certain user's posts. If the comment thread turns into a flame fest, so be it. It's just a comment thread on an instagram post. Who cares, really?
    • by lgw ( 121541 )

      There is an aggressive patent troll around this sort of thing (or at least there used to be - maybe the patent has expired). I guess Instagram will find out.

      I want a filter that adds insults to everything I post!

    • Solipsism: where everyone's been shadow banned and everyone can only see their own posts.
  • by Anonymous Coward

    So the future of technology is to treat everyone like an imbecile that can't make their own decisions? No need for anyone to have any responsibility. Big Brother will keep you in line at all times. It will dictate to you what's acceptable and not. It will ignore sarcasm and assume the worst.

    Rather than teaching people that words are just words and to not be so offended by everything, we're going to bubble wrap the internet and censor. We might as well remove 90% of the dictionary while we're at it.

    • by Anonymous Coward

      And to add to this, once you start removing ways for people to vent verbally, they start venting in more violent ways, as their pent-up anger stays bottled up until it all ruptures.

    • So the future of technology is to treat everyone like an imbecile that can't make their own decisions?

      I think you're massively overgeneralizing. This is the future—no wait: this is the short-term possible future—of some website. (And I suspect the website in question is one that most people on this site aren't even familiar with.)

      Technology gives no fucks about some suit's confirmation-dialog-idea-of-the-day, unless it just happens to somehow turn out to be an amazing success. Is that really yo

  • Now I'll be able to post offensive comments that are approved by Instagram.

  • by malkavian ( 9512 ) on Monday July 08, 2019 @08:24PM (#58893858)

    Get back to teaching people that it's not about what rights they can claim to speak how they want to whomever they want, and crowing about abusing that privilege.
    It's about interacting with people and getting on with people (like most people do in everyday life). One thing I've always stood by is framing in my mind that what I put down in type is actually going to be read at a screen, by a human, who has the same kind of feelings I have (give or take; we're all different, some edge cases quite radically so).
    That "humanising" in the abstract puts me in the same frame of mind I'd be in if I were to wander round in public. Most people out in public have a good natter, and by and large send positive signals around (making eye contact and giving a quick smile is a great 'message' that I find endemic while walking round). Occasionally, someone will chat, and that can be nice too.

    Reason: People have been taught how to behave in public. We don't need some "big brother" arbitrarily watching everything, and pointing out every possible incorrectness.

    Why should online be different? The only way to successfully interact with large bodies of people is to use etiquette. There are sometimes things I really would love to say, but I know full well that it's only because I'm irritated, and when I'm back to my usual self, I'd hate having said them. So I don't. There are things that, I'm aware, would breach an amiable social contract if I said them. So I don't say those either, as I like amiable conversation. If the mental self-discipline to be courteous is not taught, then no amount of prods and "This may be offensive" is going to stop the behaviour.
    And honestly, the offloading of that self awareness of civility and ethics, and assuming it no longer needs to be taught is highly likely to mean that people become less ethical and reserved (relying on filters to tell them something they should know implicitly).

    Unless this etiquette comes willingly from individuals, trained to think rationally, critically and with empathy without having to rely on completely arbitrary 'warning filters', I don't see a way for it to ever be foisted on people in a way that doesn't truly scare me. If you can train a mass to obey when an arbitrary entity tells them "this is incorrect thought", then I'm pretty sure at some point, some bright spark will use that exact (now trusted) mechanism to change the narrative. Orwell would really have been proud.

    Like I keep telling people; free speech is really only free if you're sure it's your idea you're conveying.

  • by DNS-and-BIND ( 461968 ) on Monday July 08, 2019 @08:40PM (#58893930) Homepage

    These three misguided principles ("Great Untruths") form the foundation of the new moral culture we are seeing Big Tech enforce on the rest of us:

    • The Untruth of Fragility: What doesn't kill you makes you weaker.
    • The Untruth of Emotional Reasoning: Always trust your feelings.
    • The Untruth of Us Versus Them: Life is a battle between good people and evil people.

    Read more on the topic here [quillette.com]. Those who are easily triggered, consider this your trigger warning. The rest of you, keep on truckin'.

  • I predict there will be lots and lots of testing taking place. Just to be able to give the middle finger to Instagram.

    "Your mother was a hamster and your father smelt of elderberries, you vagg1nelle bl00d pharrt!"

  • Because the problem with trolls is they don't know they're trolls.

    I can't decide at this point if Instagram honestly isn't aware of the scope and cause of the problem, or if they're well aware and just want to be seen as doing "something".
    • by radja ( 58949 )

      Having talked to some trolls face to face, some of them are definitely aware they are trolling.

  • It would be better if they notified people when their comments are probably boring or stupid.
  • Religion is a lie [are you sure you really want to post this] YES!
  • by Anonymous Coward

    Instagram told me my post was offensive and I said I want a second opinion. Instagram said okay, you're ugly too.

  • by Anonymous Coward

    Somebody, somewhere, will take offense at pretty much anything.

  • by Anonymous Coward

    https://xkcd.com/481/

  • by Chas ( 5144 ) on Tuesday July 09, 2019 @01:31AM (#58894502) Homepage Journal

    GOOD!
    POST IT TEN MORE TIMES FOR EFFECT!

    The only proper response to "I'm offended." Is "fuck off".

    • Re: (Score:2, Insightful)

      by AmiMoJo ( 196126 )

      Except that if you go around offending everyone it will be you who has to fuck off, because no-one will want to be near you.

      That's basically what the whiney freeze peach crowd have found. They get booted from Twitter and demonetized on YouTube and then get offended that no-one wants to listen to or fund them. Guess what: if you are a guest at someone else's venue, it's a bad idea to piss them off!

      • by Chas ( 5144 )

        Sorry, that's assuming that I'm TRYING to go around being offensive.

        But, more and more these days, things like a SIMPLE STATEMENT OF SCIENTIFIC FACT have people falling over in conniptions and speaking in tongues, bleeding from the ears.

        Or simple blunt honesty to cut someone's bullshit.

        It's not like I'm going around referring to everyone as "Hey! You! Dickface!"

        There IS a difference.

        One is deliberate offensiveness.
        The other is a deliberate attempt to crybully someone who doesn't agree with you.

    • GOOD!
      POST IT TEN MORE TIMES FOR EFFECT!

      "be the best you can be" technically applies to anything including being an arsehole and this will certainly be a valuable tool to help you in that particular quest. I have to ask though, why?

      The only proper response to "I'm offended." Is "fuck off".

      Indeed, if you've offended your partner/SO/friend/boss/HR department then telling them to "fuck off" will certainly solve the problem of having an offended partner/SO/friend/boss/HR department.

      • by AmiMoJo ( 196126 )

        Indeed, if you've offended your partner/SO/friend/boss/HR department then telling them to "fuck off" will certainly solve the problem of having an offended partner/SO/friend/boss/HR department.

        Exactly. They will know you are a free-thinking, classical liberal badass straight out of the intellectual dark web, and will respect you for it. Well, your boss might demonetize your job, but only because those damn non-classical liberals forced him to.

      • by Chas ( 5144 )

        Girlfriend?

        Dude! I'm a basement-dwelling, life-free NERD!

        What is this "girlfriend" of which you speak?

        =)

      • by Chas ( 5144 )

        Seriously though, there's a world of difference between being socially polite and some artificial interface going "Do you REALLY wanna do that! You might hurt someone's fee-fees! That wouldn't be nice now would it?"

        Especially when what's being said isn't being said with intent to offend.
        Remember, we have people who are offended by simple, factual TRUTH these days.
        And I refuse to self-censor simply because they're going to scream and cry and call me names.

        • Seriously though, there's a world of difference between being socially polite and some artificial interface going "Do you REALLY wanna do that! You might hurt someone's fee-fees! That wouldn't be nice now would it?"

          So the correct response to "I'm offended" isn't to tell the person to fuck off? Make up your mind!

          Remember, we have people who are offended by simple, factual TRUTH these days.

          People have always been offended by facts that contradict their worldview, today is nothing new.

          And I refuse to self-censor simply because they're going to scream and cry and call me names.

  • Wait until the lawyers get their hands on the data. It'll be used as evidence in "hate crime" cases. Sorry, no Instagram for me...ever.

  • Did it.
  • shit, piss, fuck, cunt, cocksucker, motherfucker, and tits
  • Can we take it easy towards complete thoughtcrime regulation?

  • Except that Qualcomm Eudora had this exact feature for email back in the '90s. It was called 'MoodWatch', depicted by varying amounts of chili peppers.

    It would either warn you or delay sending of the message when queueing/sending.

    1 Chili: Message seems it might be offensive.
    2 Chilis: Message is probably offensive.
    3 Chilis: Message is on fire!
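The MoodWatch-style scale above is easy to sketch: a numeric offensiveness score gets bucketed into 0-3 "chili" levels with the corresponding warning. Note this is an illustration, not Eudora's actual algorithm; the thresholds and the `chili_rating` function are invented, and Eudora's real scoring was proprietary.

```python
# Hypothetical bucketing of a classifier score (0.0-1.0) into chili levels,
# mirroring the three MoodWatch warnings described above. Thresholds invented.
def chili_rating(score: float) -> tuple:
    """Map an offensiveness score to (chili_count, warning_text)."""
    if score >= 0.9:
        return 3, "Message is on fire!"
    if score >= 0.6:
        return 2, "Message is probably offensive."
    if score >= 0.3:
        return 1, "Message seems it might be offensive."
    return 0, "No warning."
```

The interesting design choice, then as now, is graduated feedback: rather than a binary block, the sender sees how strongly the filter reacted and decides for themselves.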
  • I've had one occurrence of Instagram immediately telling me my comment could not be posted because it looks like it violates their guidelines.

    I was disagreeing with someone about something but the comment itself was very respectful and had no insults or profanity whatsoever.

    So this is interesting. Perhaps it was an experiment that got too many false positives.

    Political bias aside, this is a big problem with AI tools like auto-mod (Twitch). Like every single chat of mine that has been auto-modded on Twitch was a false positive.

  • Instagram could EASILY monetize that kind of data ("what percentage of the time does someone TRY to post something offensive, but we warn them and stop (or don't stop) them?")

    They'd just have to come up with some marketing term for it - "potential employee toxicity score" (PETS) - and package it up for job screeners, dating sites, etc. (who would pay a lot to avoid undesirable candidates.)
