Researchers Developing An Algorithm That Can Detect Internet Trolls
An anonymous reader writes: Researchers at Cornell University claim to be able to identify a forum or comment-thread troll, with more than 80% accuracy, within the first ten posts after the user joins, opening the way to methods that automatically ban persistently anti-social posters. The study observed 10,000 new users at cnn.com, breitbart.com, and ign.com. It characterizes an FBU (Future Banned User) as someone who enters a new community with below-average literacy or communication skills, a standard that tends to drop further shortly before a permanent ban. It also observes that higher rates of community intolerance are likely to foster the anti-social behavior and speed the ban.
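A minimal sketch of the kind of classifier the summary describes: score a new user from statistics over their first ten posts and flag likely Future Banned Users. The feature set, the library choice (scikit-learn), and every name below are illustrative assumptions, not the paper's actual model.

# Hedged sketch: features and names are guesses, not the Cornell paper's.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class EarlyActivity:
    """Summary statistics over a user's first ~10 posts (illustrative)."""
    readability: float        # e.g. an automated readability score of their text
    fraction_deleted: float   # share of their posts removed by moderators
    downvote_ratio: float     # how negatively the community reacts
    replies_received: float   # how much back-and-forth they provoke

def to_vector(a: EarlyActivity) -> list[float]:
    return [a.readability, a.fraction_deleted, a.downvote_ratio, a.replies_received]

def train_fbu_model(histories: list[EarlyActivity], was_banned: list[int]) -> LogisticRegression:
    """Fit a simple classifier on past users whose eventual ban status is known."""
    model = LogisticRegression()
    model.fit([to_vector(h) for h in histories], was_banned)
    return model

def ban_risk(model: LogisticRegression, activity: EarlyActivity) -> float:
    """Probability-like score that a new user will eventually be banned."""
    return model.predict_proba([to_vector(activity)])[0][1]

Whether a score like this should trigger anything more than a moderator alert is exactly what the comments below argue about.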
This, if true, will utterly destroy (Score:2, Redundant)
Disqus, and the comment section at The Atlantic.
Re: (Score:3)
Re: (Score:3, Funny)
Yes, well, they should definitely ban people who can't point a fucking camera, and probably have them arrested
Re: (Score:2, Insightful)
TURN
YOUR PHONE
SIDEWAYS
I've refrained from formatting this post (any more) obnoxiously vertical to emphasize.
Re: (Score:2)
The camera doesn't cut anything off.
Re: (Score:2)
Curse you rectangle, you win again! That's it, just put square fucking sensors on all the cameras from now on.
Re: (Score:2)
Circular would be even better. That way you can rotate the camera to any angle.
Re: (Score:2)
>> The world does not exist solely in a horizontal plane.
Exactly - which is why you should hold your phone vertically when photographing a vertically-oriented scene and horizontally the rest of the time to avoid having most of the picture be of empty ground and sky. The camera's long axis must be aligned with the screen's for optimal image preview, and the phone must be held vertically to properly operate as a phone. When used as a camera it should be held like a camera - horizontally. How often do
Re: (Score:2)
It would really be interesting to see where it would take us, but I worry about false positives in high-profile issues.
Re:This, if true, will utterly destroy (Score:5, Funny)
Nonsense. Next you'll be suggesting that the technology might be intentionally abused to silently bias "unmoderated" conversations about [REDACTED], which would be a frightful step toward &^%- - -
[REMAINDER OF COMMENT DISCARDED AS TROLLING]
Re: (Score:2)
But still: why would CBS corporate want that to persist, for years, regardless of the reason why their forum is a hangout for intolerant morons?
Because their advertisers don't mind paying to show their products to ignorant racists?
Re: (Score:2)
because advertisers never have a problem associating themselves with behavior that might damage their image, of course
Re: (Score:2)
Have you seen the ads here on /.? Trolls have money too, and you are arguing that "...modern capitalism isn't that ruthlessly profit-focused." -Randall Munroe
Re: (Score:2)
are you honestly trying to tell me that advertisers don't work hard to keep their image clean?
Re: (Score:2)
Sure, but how does it harm their image to be seen by trolls in a forum de-facto dedicated to trolling? Maybe if a substantial portion of reasonable people were present they might get somewhat besmirched by association, but the reasonable people mostly read the articles and depart, knowing that the forums are dedicated to pointless vitriol, so the advertisers will only be besmirched by their association with CBS, whose reputation is scarcely that much worse than pretty much every other mass-media outlet.
Re: (Score:2)
any advertiser would rightfully not want their ad to appear alongside a low iq hate filled racist screed
and CBS News is not "a forum de-facto dedicated to trolling"
Re: (Score:3)
Right, CBS news proper is not. So where's the problem? You go into an unmoderated forum on their site, see a bunch of user-generated vitriol posted alongside an ad - are you really going to hold it against the advertised brand/product?
Now it would be different if we were talking about an ad on HateFilledRacism.com, but even there the advertiser would get a substantial benefit from the echo-chamber effect - how often is anyone who would hold their choice of advertising venue against them going to actually
Re:This, if true, will utterly destroy (Score:5, Insightful)
https://en.wikipedia.org/wiki/Internet_troll
Re: (Score:3)
yes, and now we get into the same sort of pointless useless territory as arguing about what "hacking" means
no one owns the definition of a word, and meaning changes over time. the common perception of the term "hacking" and the technically and historically more accurate usage of the term "hacking" are separate and equally valid. not because i say so, but because of the authority of common use
likewise, the strict historical definition of "troll" and the more common meaning of any asocial hate filled speech b
Re:This, if true, will utterly destroy (Score:4, Insightful)
yes, and now we get into the same sort of pointless useless territory as arguing about what "hacking" means
Because both "troll" and "hacking" have been made into pointless useless words through the magic of "common use" by common people who have no clue what they were supposed to mean.
"An algorithm that can detect trolls" is a meaningless statement. If it is an algorithm, it needs a definition to work from. That definition is not going to be based on historical or accurate usage of the term. In fact, the summary gives you a good idea what it will be based on:
So, the "definition" of "troll" is going to be "people who display unpopular or angry behavior when confronted by an intolerant social media environment." Gee, anyone slashdotted recently? "Community intolerance" is not the problem, I guess, it's the reaction of people in a supposedly open forum to that intolerance.
There will be no direct definition as such. It will be an empirical model based on correlation between use of angry or unpopular phrases and the subsequent ban of the poster. That's the new "troll". Say enough stuff that people don't like, you're a troll.
In other words (Score:5, Insightful)
Automated censorship. Eh, saves us the trouble, I guess
Re:In other words (Score:5, Informative)
The original paper doesn't seem to be about automatic banning at all; that seems to have been added to the headline and the article linked to here (and therefore the summary). The paper says this: "automatic, early identification of users who are likely to be banned in the future."
While that identification could be used for automatic banning, I think it would be more likely to be used to flag potential problem users, which could be very useful in determining which reported posts to investigate first rather than dealing with all of the "I don't like this post so I'm reporting it" instances.
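How that "flag, don't ban" reading might look in practice, purely as a sketch (the data shapes and names are assumptions, not anything from the paper): reported posts go into a queue ordered by the poster's risk score, so moderators look at likely problem users first instead of wading through every "I don't like this post" report.

import heapq

def build_review_queue(reports, risk_score):
    """reports: iterable of (post_id, user_id); risk_score: user_id -> float."""
    queue = []
    for post_id, user_id in reports:
        # Negate the score: heapq is a min-heap and we want the highest risk first.
        heapq.heappush(queue, (-risk_score(user_id), post_id, user_id))
    return queue

def next_report(queue):
    """Pop the highest-risk report, or None if the queue is empty."""
    if not queue:
        return None
    neg_score, post_id, user_id = heapq.heappop(queue)
    return post_id, user_id, -neg_score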
Re: (Score:2)
Yeah, pretty sure they already made a movie [imdb.com] about the societal failure represented in our trust of an automated system to pre-recognize deviant behavior and how those systems break down.
Re: (Score:2)
Re: (Score:2)
Funny side note: I mentioned a similar system to Reddit because they have huge problems with mod abuse now. I said, "Hey, just make mod deletions pseudo-deletions, so they're hidden unless you want to see them, so people can check mods and report abuses to admins."
They didn't even reply back. No politically-cor
Re: (Score:3, Insightful)
It would be much better to have a system that HIDES users' content by default than to delete it. Then, people scrolling through all posts (including hidden ones) would be able to report mistakes in the system.
From my experience if you delete content or ban a troll, it just encourages them to troll more using a different account, usually from a different IP address.
The most effective way I found to deal with problem users is to make their bad comments only visible to them. That way it appears to them that they've had their say and no one responded to it. Without feedback to encourage them, trolls either quickly get bored and go elsewhere or sometimes they'll surprise you and produce better quality comments.
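A minimal sketch of the hiding approach described above (all names invented for illustration): a flagged user's comments stay visible to that user but are filtered out of what everyone else sees.

def visible_comments(comments, viewer_id, shadow_hidden):
    """comments: iterable of dicts with 'author_id' and 'text';
    shadow_hidden: set of user ids whose posts only they themselves can see."""
    return [
        c for c in comments
        if c["author_id"] not in shadow_hidden or c["author_id"] == viewer_id
    ]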
This is fucking stupid. (Score:5, Insightful)
Trolls are usually above average literacy and trying to skilfully cause a fight. It's easy to identify "illiterate" people and humans are way too quick to judge someone who cannot spell as having nothing to contribute or (worse) malicious, but these are not trolls. This is just another classist meme where the person is judged positively by the overcomplexity of their language and convolution of their sentences, as this must mean they have been educamated right.
BTW I went to a £30k/year British boarding school, so I have no axe to grind, nor insecurity about describing things as they are.
Re: (Score:3)
As you note here, a sophisticated troll is not easily detectable via AI.
Re:This is fucking stupid. (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I suppose you run your email servers without a spam filter, too? I mean, they're never 100% accurate.
Re: (Score:2)
It is stupid to me because it does not solve a problem. Detecting trolls is certainly not the problem; dealing with them is. They need to work on an algorithm for that.
How about an algorithm for developing thicker skin?
Internet trolls only have the power you give them; many sites have an "ignore this douchebag" button anyway, so it's really a moot point.
Re:This is fucking stupid. (Score:5, Insightful)
While I believe that people who are less sensitive tend to thrive more than others, I don't agree that "thicker skin" is a workable solution. Too many people have fragile emotional states and simply don't have the psychological capacity required to dismiss the hate and insults that often happen online. There have been some high-profile suicides among teens who were attacked online, and who knows how many people remove themselves from public comment because of the hate they've received? For safety reasons I don't think society should completely cede the forums to the trolls.
Does that not mean some people are overly sensitive? Sure. But just as we shouldn't velour-line the internet to cater to absolutely every person with a psychological disorder, we also don't have to tolerate the diarrhea that spews forth from the trolls. We don't have to draw a hard-and-fast line on the ground, either, and define "these words are always 100% bad in 100% of situations." Instead, we should welcome humans in the loop, asking them to pass judgment when needed. That gets us to a more fluid state than full automation. It also lets the user choose: don't like the judgment process on Slashdot? Don't hang out on Slashdot.
I know fully automated filtering is the holy grail of internet forum moderation, but as soon as you deploy a filter it becomes a pass/fail test for the trolls, who quickly learn to adapt and evade it. Human judges can adapt too, and are about the only thing that can; there are simply too few of them for the volume of trolls out there. A tool like this might help them scale that effort to YouTube volumes.
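One way to read that human-in-the-loop point as code, strictly as a sketch with made-up thresholds and names: the model acts only on its most confident calls, routes uncertain ones to a moderator, and is periodically refit on the moderators' decisions so it can adapt as the trolls do.

def triage(post, score, human_queue, auto_hidden,
           hide_above=0.95, review_above=0.60):
    """Route one post based on the model's troll score."""
    if score >= hide_above:
        auto_hidden.append(post)      # confident enough to act automatically
    elif score >= review_above:
        human_queue.append(post)      # uncertain: ask a human moderator
    # below review_above: leave the post alone

def retrain(fit_model, labeled_posts):
    """Refit on (features, moderator_label) pairs so the filter keeps up."""
    features = [f for f, _ in labeled_posts]
    labels = [label for _, label in labeled_posts]
    return fit_model(features, labels)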
Re: (Score:3)
Thickness of skin has nothing to do with it, I'm pretty much impossible to offend or seriously piss off. The real problem with trolls is that they're a huge waste of everyone's time, even if you can ban/ignore them, you still have to read their posts at least once first.
Re:This is fucking stupid. (Score:5, Funny)
No need. Remove anonymity from said sites and the problem is solved. Better yet, don't bother having user comments everywhere.
posted by an AC. LOL.
"Old" vs "new" trolling (Score:5, Interesting)
Your mistake is in using the "classic" definition of "troll" - somebody who sets out to deliberately cause fights on a forum. Trawl through the archives of Slashdot and you will find many instances of this kind of trolling - and yes, the people doing it are often highly literate (and, when they do it right, sometimes very funny with hindsight).
But the term "trolling" has gone political these days and is routinely used to describe any form of online behaviour that the speaker doesn't approve of. So everything from outright criminal behaviour (eg. threats of immediate violence) at one end of the scale through to disagreeing with a forum's established groupthink (however respectfully) at the other.
And yes, it has become a favourite term of the intellectually insecure, whenever they want to shout down an opposing point of view without engaging with it. In fact, conflating those two extremes I mention above under the same term is outright beneficial for the easily offended, as it allows them to group polite dissenters together with the mouth-foaming loons.
Re: (Score:2)
But the term "trolling" has gone political these days and is routinely used to describe any form of online behaviour that the speaker doesn't approve of. So everything from outright criminal behaviour (eg. threats of immediate violence) at one end of the scale through to disagreeing with a forum's established groupthink (however respectfully) at the other.
Bravo sir! You have summed it up perfectly.
Re:"Old" vs "new" trolling (Score:5, Interesting)
Back in my day, trolling meant something!
Ten plus years ago I used to troll /. as "Fux the Penguin" (some [slashdot.org] of [slashdot.org] my [slashdot.org] favs [slashdot.org]) and it was great fun. The system was:
1) Get in early on a new story. You don't want to get buried under 100 comments. Pick a topic /.ers would hate, like running Windows, or requiring government approval for encryption technologies.
2) Lists and quotes are good. Everybody stops to read something with HTML formatting.
3) Start reasonable. The first paragraph should sound rational.
4) The next paragraph should include minor errors of fact or logic, but still be mostly reasonable. Just...wrong.
5) The minor errors of fact and logic in the middle paragraphs should lead to a completely ridiculous conclusion that
6) Watch the post go to +5 insightful because mods don't actually read comments.
7) lulz at people who write 8 paragraphs dissecting all my mistakes.
8) -1 Troll.
9) +5 Funny.
Today the media conflates "trolling" with "abusive asshole." I think they misunderstand the word "troll." "Trolling" meant "fishing." To dangle bait for newbs to take and work themselves into a lather, and then laugh at those who don't get the joke. It was performance art. Today they think "troll" is referring to monsters who live under bridges. But no, people who stalk others on the Internet and hurl insults at them (or worse) are not "trolls." They are abusive assholes. It's sad.
And it requires no skill. Trolling is a art.
Re: (Score:2)
I see this in car forums a lot; the troll will post seemingly naive but insidiously wrong posts, usually with a username and pic of a woman. The old guys in the forum hit that bait like a largemouth bass.
Re: (Score:2)
[quote] I think they misunderstand the word "troll." "Trolling" meant "fishing."[/quote]
This must be the part of the post where you start to introduce factual errors. I think the word you're thinking of is "trawling" which though similar isn't even a homophone.
Re:"Old" vs "new" trolling (Score:4)
Nice try, troll.
http://en.wikipedia.org/wiki/T... [wikipedia.org]
Re: (Score:3, Interesting)
Back in my day, trolling meant nothing!
Twenty plus years ago, I used to hang out on alt.religion.kibology, where trolling was invented. Someone would post bait (hence the word troll derived from "trolling for newbies") to a newsgroup, adding an "audience" group such as alt.religion.kibology to the newsgroups header line. Stuff like mentioning "Majel Barrett Shatner" on a star-trek group, or intentional misspellings of whatever the group was obsessed with. Then you just sit back and enjoy your popcorn while
Re: (Score:2)
Later, cross-group trolling was added, where a message would be posted to two or more groups plus the audience group. If you picked your groups right, they would flame each other quite nicely, and it would be time to get another bag of marshmallows.
Which is why the gamergate thing will never end. It's too easy to troll.
1) Pretend to be neckbeard.
2) Say something to piss off SJWs.
3) lulz.
4) Goto 1.
Re: (Score:2)
*Sigh*. For as literate and educated (generally speaking, though not as much as they believe) as Slashdot is, one basic concept seems to continually elude them - words and their meanings do change. In this case, the so-called "classical
Re: (Score:3)
The "new" meaning (an individual that's deliberately abusive or deliberately fans the flames)
Actually, the new meaning is: someone who holds an unpopular opinion.
Re: (Score:2)
I completely understand that words change. But what word would you use today for traditional trolling? That is, "non-malicious posts meant to tease and entertain?"
Re: (Score:2)
The last one about centralized email is prophetic, if you change "penis enlargement" to "terrorism".
Trolling vs. Different Viewpoint (Score:4, Interesting)
Unfortunately, many people think that if you express a viewpoint or opinion different from the masses, you're trying to start an argument or a fight. Why is society so hell-bent on crushing dissenting opinions? And not merely silencing them, but vilifying them?
I've often been tagged as "trolling" because I don't agree with the crowd. If you knew me personally, you'd know very well that I'm not trying to start a fight, just expressing my opinion. Just because it is not the popular viewpoint doesn't mean my views aren't valid.
Here on Slashdot, I often see people flagged as being trolls just because they don't follow the masses. You'd think a site full of outcasts and oddballs like programmers and technologists would be more accepting of alternative views, but the exact opposite seems to be the case.
Re:This is fucking stupid. (Score:5, Funny)
> Trolls are usually above average literacy.
Your right.
Re: (Score:2)
> Trolls are usually above average literacy. Your right.
Yore wright also.
Re: (Score:2)
Re:This is fucking stupid. (Score:4, Funny)
Douché.
Re: (Score:2)
humans are way too quick to judge someone who cannot spell as having nothing to contribute
I'd sooner judge grammar than spelling. Poor grammar indicates a fundamental misunderstanding of the language. Poor spelling only indicates carelessness or memorization issues.
For example, I've noticed a recent trend (in American English) where native speakers are confusing adverbs with adjectives -- namely, by dropping the -ly suffix that defines most adverbs in English. For example, "I responded appropriate." This is
Dear algorithm (Score:5, Funny)
I don't want to talk to you no more, you empty-headed animal food trough wiper! I fart in your general direction! Your mother was a hamster and your father smelt of elderberries!
Re:Dear algorithm (Score:4)
[beep] [boop] [churn] [beep] User 2766669 identified as Python quoter. All further posts automatically accepted. Add automatic +1 Funny for ID ending in 69. [beep] [whistle]
Unicorns, skittles, rainbows, etc. (Score:5, Insightful)
So, this algorithm only needs nine more posts than a troll will actually make per throwaway account, then?
That's some mighty fine police work there, Lou!
Comment removed (Score:4, Funny)
just what we need (Score:2)
Internet pre-crime.
Poof! (Score:5, Funny)
There goes Gawker.
Re: (Score:2)
Lets hope they don't try to automate it (Score:2)
It also observes that higher rates of community intolerance are likely to foster the anti-social behavior and speed the ban.
If automated, an intolerant core could try to get users banned for expressing opinions that they don't like. The fact that those users are subjected to intolerance would make the algorithm more likely to ban them.
Re: (Score:2)
Personally I'm curious how it would function on a site like foxnews or huffpo - in the case of the latter, would it flag the one person posting pro-2nd Amendment comments, or would it flag everyone else when they pile on the aforementioned poster with mountains of venomous hatred?
Can't Fight the Future (Score:2)
It might be useful to inform an admin to look at suspicious postings, especially if they can get the accuracy higher. BUT I hope no one uses such algorithms to automatically stop suspected trolls. This can only lead to unforeseen consequences and stifling of free speech (unless of course stifling is not an unforeseen consequence, but an intended one).
Many Slashdotters already complain about the Lameness-Filter, this has the potential to be a hundred times worse.
The technology will of course be developed,
Re: (Score:2)
It might be useful to inform an admin to look at suspicious postings, especially if they can get the accuracy higher. BUT I hope no one uses such algorithms to automatically stop suspected trolls. This can only lead to unforeseen consequences and stifling of free speech (unless of course stifling is not an unforeseen consequence, but an intended one).
Moderation at a privately owned/operated site can be freely used to filter anything they don't want their users to see, even if it creates a slant. However, the odds that they will start filtering specifically subversive content are pretty low, since it's those kinds of posts that generate hundreds of follow-ups of disagreement, driving even more traffic. More likely, they will filter the truly atrocious (bland death threats, etc.) that adds little in terms of desirable content.
troll? (Score:2)
The article defines a troll as someone who has been banned from an online group.
You can be banned from a website such as redstate for being an Obama supporter. People are often banned from websites solely for having minority viewpoints.
Re: (Score:2)
My question would be: How would they identify this?
Say I sign up to Red State as ObamaForever2016 and post heavily pro-Obama links/comments. I quickly get banned. Now, I sign up to Pro Tea Party Forums as BObamaFan and post different pro-Obama links/comments. How would the algorithm determine that those two accounts were the same person (banned from one site) and not two different people with similar political views?
Research (Score:5, Funny)
Re: (Score:2)
Yo dawg!
it takes two to troll (Score:2)
What they identify isn't people who "troll", it's people who get mobbed and ostracized by a community. There's a big difference between the two. That's not a question of "false positives", it's a question of whether people lose themselves completely in group think.
Of course, in practice, there is little chance this will actually go anywhere. Although content creators and ideologically biased readers frequently denounce as "trolls" anybody who disagrees with them, sites actually like controversy because it i
Internet precrime (Score:2)
Does the algorithm account for the fact that the Troll designation is applied by some specific person who (a) has mod points, (b) strongly disagrees with a given post, and (c) is in many cases part of a group looking for antagonists to some cause that group really believes in?
Hmmm .... (Score:2)
If (internet) then troll_present = true;
Done, just that easy.
Oblig XKCD (Score:3, Funny)
https://xkcd.com/810/
Seems relevant.
Uh Oh... (Score:4, Funny)
marking category cannot be used properly (Score:3)
The word troll is a pointless word which is misused by people who mainly want to vilify those who disagree with them, and an excuse for people who do not want anyone else to be able to express opinions except the ones they approve of, to censor anyone else's opinions they do not like. Thus, the marking in almost all cases is abused and has no real purpose except censorship. Obviously, since a message board should be a place for discussion and the expression of differing views and opinions, such use is contrary to the purpose of message boards to begin with: to express one's views and to debate subjects.
The fact is, expressing a view someone else disagrees with is not something we should censor, and the tr*** accusation is just an excuse for censorship. As long as the poster honestly believes in what they are posting, it's not a tr***; they are posting their view to take a position on the issue itself, rather than to annoy anyone. Maybe a tr*** might be someone who posts things they do not agree with for the sole purpose of annoying. However, since it is impossible for anyone to know whether or not someone posting a message honestly believes in what they say, it is impossible to determine if a message is a tr*** or not. It is also impossible to know if someone is posting a view just because they are interested in a subject and have a view on it, rather than to annoy anyone.
The fact is, if someone is annoyed by something, the person responsible for being annoyed is the person who is annoyed; it's all in the eye of the beholder. Some people will agree with something and others will disagree; you have to allow for a difference of opinions and views. It is always the case that someone will disagree with what someone else says. That does not mean the message was posted with the sole intent to annoy, but the reader of a message may still misconstrue or assume that, even though it is impossible for them to truly know it. It is okay and important for people to be able to post messages they know will annoy others, because anything can annoy anyone; it's impossible to post a view or position on anything if one has to fear annoying someone.
The tr*** label could only apply to messages written with the sole intent to annoy. But as I said, it's impossible for anyone to know if that was the sole intent; for it to be the sole intent, the person would have to not honestly believe in what they say, otherwise they are posting because they believe in what they say and think that it's important.
That is why the marking on a message cannot be used legitimately and fairly; it is impossible for anyone to know if a message is a tr***. That's why we should remove the marking from messaging and bulletin board systems. As I said before, in 100% of cases the marking is abused; it cannot be used in any proper, fair way, because it is a fundamentally flawed feature.
The best policy on these matters would be for bulletin boards to have a rule against computer-generated and mass-posted advertising, but that's about it.
I run a feminist forum (Score:4, Insightful)
and volunteer to help test. We have a steady stream of trolls available for review, a truly endless supply.
Reinforcing Echo Chambers (Score:2)
So, posters on message boards who are deemed "anti-social" or who hold views the community doesn't tolerate are now the definition of a troll? Wow, that's a good way to make sure opposing viewpoints never get heard. The "algorithm" will just drop any message that goes against the "party line".
I'd imagine there are plenty of places where if you stand up for your individual rights and privacy you'd be marked a subversive and the community wouldn't tolerate your presence. How about speaking of the value of
How about state-sponsored trolling? (Score:4, Insightful)
I can see how this may defeat (ab)users who troll for fun and don't suspect automated detection before it hits them (though, with only 80% accuracy, I dread the thought of the methods expanding out of the virtual realm).
But what about people "trolling" professionally — paid and/or otherwise compelled into it by a state or corporate actor [forbes.com] pretending there to exist some kind of "grass-roots" movement? How would it deal with thousands of fake accounts [dailymail.co.uk] mounting a coordinated assault, posting (while "liking" and "following" each other)?
Sometimes you may be able to catch accounts posting identical things at the exact same time [pp.vk.me] (and ban them all in bulk), but the Russians seem to have fixed that bug in their bots now...
This is turning into another battle like the one in which spammers have fought the best Information Technology minds to a standstill [itsecurity.com]. I doubt progress against forum-spammers will be much better, not when mere technology, however clever, is up against the interests of a reasonably powerful state.
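The bulk-detection trick mentioned above can be sketched like this (window size, normalization, and threshold are arbitrary assumptions): group posts by identical normalized text within the same short time window, and flag groups of accounts that are suspiciously large.

from collections import defaultdict

def coordinated_accounts(posts, window_seconds=60, min_group=5):
    """posts: iterable of (user_id, text, unix_timestamp).
    Returns sets of user ids that posted identical text in the same window."""
    groups = defaultdict(set)
    for user_id, text, ts in posts:
        key = (" ".join(text.lower().split()), int(ts) // window_seconds)
        groups[key].add(user_id)
    return [users for users in groups.values() if len(users) >= min_group]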
I'd probably start with... (Score:2)
... something like this:
int is_troll( const char* username ) {
    /* Anyone not on the whitelist is presumed to be a troll. */
    if( !whitelisted(username) ) {
        return 1;
    }
    return 0;
}
Similar problem to spam filtering (Score:2)
Comment removed (Score:3)
I've been called a "very talented troll before".. (Score:2)
Simplified DFTT algorithm (Score:2)
article = new NonsenseFilledStory();
article.addStrife();
article.addControversy();
article.stokeTribalism();
article.allowAnonymousComments(true);
stack_of_trolls *users = article.create();
foreach (users as user) {
    if (!user.isTroll() && user.respondsToTrolls()) {
        globalBanList.addUser(user);
    }
}
Re:What is a 'troll'? (Score:5, Interesting)
Anybody who tells the truth that the scum in power don't want you to hear, apparently...
In days when someone can be arrested for quoting from a published book by Winston Churchill [dailymail.co.uk] I have to agree.
Re: (Score:2, Insightful)
Re: (Score:2)
That was religious disparaging, not racial.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
i would describe a troll as the opposite:
1. bullying of common folk
2. and carefully anonymous, as a rule
there's nothing wrong with whistleblowers, criticism, against leaders of industry, government bureaucrats and police, politicians, etc. but this isn't trolling. this is hiding to avoid *unfair* repercussions. there's nothing wrong with going after power as you allude to
but hiding and attacking a regular guy is trying to avoid *fair* repercussions
they spew venom only when they know there are no repercussio
Re: (Score:2)
i defined my terms clearly as bullying common people from hiding. bullying someone is not a vague mystical concept, it's pretty blatant and obvious, and objectively apparent: "i disagree with your opinion and here's why..." vs "u fag, get back to sucking cock"
i have no problem with attacking common people and using your real name: you stand behind your words
i have no problem with attacking authority and using a fake name: you're avoiding *unfair* repercussions
but if you hide and hate on regular people, you'
Re: (Score:2)
Where did they say anything about censorship? They said only that attacking common people from behind a cloak of anonymity is dishonorable trolling.
And personally I agree with the dishonorable part at least, and that it's potentially worthy of being silenced - if you want to shout vitriol at common folk you can damned well do so without hiding behind a mask, the same way your great-grandparents and everyone that came before did. Permitting the anonymous persecution of common individuals tends to lead to the so
Re: (Score:2)
If I were to rank the publicly-accessible online forums I participate in these days, from most civil to least civil, Slashdot would be top of the pack by a long, long way. Seriously, that's how bad it is now.
The unholy trinity of culture wars, console wars and overbearing admins have ruined many other discussion sites that were perfectly good 3 years or so ago.
Re: (Score:2)
I have a personal rule not to read comment sections. There are a small number of exceptions, but, in general, whenever I ignore the rule and browse the comments, I invariably encounter some insanely stupid comments that make me want to bang my head into my desk repeatedly. Too many people seem to be able to operate their brain or their fingers/mouth, but not both at once.
Re: (Score:2)
Re: (Score:2)
Me too!
Re: (Score:2)
A persistent "whoosh"ing noise accompanying Slashdot comments seems to be getting worse. I wonder what it is?
Re: (Score:2)
I can't wait til the anti-bullying crowd lobby for something like this...
... not once realizing the irony of their actions.
Re: (Score:2)
In other words, it's not so much a "troll detector," but rather a groupthink protection mechanism.
Better patent that bad boy, gonna be in high demand...
Re: (Score:2)
Hmm, phone decided to post the response to the wrong comment...I blame Verizon. Maybe Google.
Re: (Score:2)