Can Tech Firms Prevent Violent Videos Circulating on the Internet? (theguardian.com) 116
This week New York's attorney general announced they're officially "launching investigations into the social media companies that the Buffalo shooter used to plan, promote, and stream his terror attack." Slashdot reader echo123 points out that Discord confirmed that roughly 30 minutes before the attack a "small group" was invited to join the shooter's server. "None of the people he invited to review his writings appeared to have alerted law enforcement," reports the New York Times, "and the massacre played out much as envisioned."
But meanwhile, another Times article tells a tangentially-related story from 2019 about what ultimately happened to "a partial recording of a livestream by a gunman while he murdered 51 people that day at two mosques in Christchurch, New Zealand." For more than three years, the video has remained undisturbed on Facebook, cropped to a square and slowed down in parts. About three-quarters of the way through the video, text pops up urging the audience to "Share THIS...." Online writings apparently connected to the 18-year-old man accused of killing 10 people at a Buffalo, New York, grocery store Saturday said that he drew inspiration for a livestreamed attack from the Christchurch shooting. The clip on Facebook — one of dozens that are online, even after years of work to remove them — may have been part of the reason that the Christchurch gunman's tactics were so easy to emulate.
In a search spanning 24 hours this week, The New York Times identified more than 50 clips and online links with the Christchurch gunman's 2019 footage. They were on at least nine platforms and websites, including Reddit, Twitter, Telegram, 4chan and the video site Rumble, according to the Times' review. Three of the videos had been uploaded to Facebook as far back as the day of the killings, according to the Tech Transparency Project, an industry watchdog group, while others were posted as recently as this week. The clips and links were not difficult to find, even though Facebook, Twitter and other platforms pledged in 2019 to eradicate the footage, pushed partly by public outrage over the incident and by world governments. In the aftermath, tech companies and governments banded together, forming coalitions to crack down on terrorist and violent extremist content online. Yet even as Facebook expunged 4.5 million pieces of content related to the Christchurch attack within six months of the killings, what the Times found this week shows that a mass killer's video has an enduring — and potentially everlasting — afterlife on the internet.
"It is clear some progress has been made since Christchurch, but we also live in a kind of world where these videos will never be scrubbed completely from the internet," said Brian Fishman, a former director of counterterrorism at Facebook who helped lead the effort to identify and remove the Christchurch videos from the site in 2019....
Facebook, which is owned by Meta, said that for every 10,000 views of content on the platform, only an estimated five were of terrorism-related material. Rumble and Reddit said the Christchurch videos violated their rules and they were continuing to remove them. Twitter, 4chan and Telegram did not respond to requests for comment.
For what it's worth, this week CNN also republished an email they'd received in 2016 from 4chan's current owner, Hiroyuki Nishimura. The gist of the email? "If I liked censorship, I would have already done that."
But Slashdot reader Bruce66423 also shares an interesting observation from The Guardian's senior tech reporter about the major tech platforms. "According to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren't financially motivated to invest resources into developing it." Farid's work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded...
Farid: It's not as hard a problem as the technology sector will have you believe... The core technology to stop redistribution is called "hashing" or "robust hashing" or "perceptual hashing". The basic idea is quite simple: you have a piece of content that is not allowed on your service either because it violated terms of service, it's illegal or for whatever reason, you reach into that content, and extract a digital signature, or a hash as it's called.... That's actually pretty easy to do. We've been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.
So if we had a technology that satisfied all of those criteria, Twitch would say, we've identified a terror attack that's being live-streamed. We're going to grab that video. We're going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded with the hash, the signature is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.
It's a problem of collaboration across the industry and it's a problem of the underlying technology. And if this was the first time it happened, I'd understand. But this is not, this is not the 10th time. It's not the 20th time. I want to emphasize: no technology's going to be perfect. It's battling an inherently adversarial system. But this is not a few things slipping through the cracks.... This is a complete catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and as it was the one before then, it is inexcusable from a technological standpoint.
"These are now trillion-dollar companies we are talking about collectively," Farid points out later. "How is it that their hashing technology is so bad?"
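Farid's three criteria (a signature that is easy to extract, stable under edits, and fast to compare) can be sketched with a toy "difference hash." Everything here is invented for illustration: the pixel grids, the hash size, and the distance threshold. Production systems such as PhotoDNA or PDQ are far more robust.

```python
def dhash(pixels):
    """Toy perceptual hash: one bit per horizontally adjacent pixel pair,
    set when the left pixel is brighter than its right neighbor."""
    return [1 if a > b else 0
            for row in pixels
            for a, b in zip(row, row[1:])]

def hamming(h1, h2):
    """Count of differing bits; a small distance suggests the same content."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A tiny 4x5 grayscale "frame" (values invented for illustration).
original = [
    [10, 20, 30, 40, 50],
    [50, 40, 30, 20, 10],
    [10, 30, 20, 40, 30],
    [60, 50, 70, 40, 80],
]

# A simple edit (raising brightness) preserves the relative ordering
# of neighboring pixels, so the signature is unchanged.
brightened = [[p + 25 for p in row] for row in original]
assert hamming(dhash(original), dhash(brightened)) == 0

# Unrelated content lands far away in Hamming distance, so comparing
# an upload's hash against a shared database is a fast lookup.
other = [[(r * 7 + c * 13) % 97 for c in range(5)] for r in range(4)]
assert hamming(dhash(original), dhash(other)) > 4
```

In the scheme Farid describes, each platform would compute such a signature at upload time and check it against an industry-shared database of known footage, flagging anything within a small Hamming distance of a known hash.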
The Bitter Irony (Score:5, Insightful)
That there are violent video games, violent movies, and violent war footage, and even plenty of violent crime footage on the internet.
So why is it that only one type of violent content is denied while all the rest is allowed? And ultimately, isn't it up to the electorate to see the true, unsugarcoated reality of the world? For example, when violent jihadists decide to execute innocent people, or when white supremacists kill innocent people, are we to be saved from the disgust, or from understanding the grievances of the other side that is lashing out?
Everyone has seen pictures of 9/11, but nobody has bothered to learn the lesson of 9/11.
Re: (Score:3)
Even Rock music and many other things have been blamed for whatever. So there's not really a double standard here (just seemingly equally shitty scapegoating).
Over time, behavioural science looked into these issues and found no significant links between these forms of media and behavioural changes, even though in individual cases they might ha
Re: (Score:3)
I don't have the answer in this case.
Me neither, but the burden of proof is on those advocating censorship, not those defending free speech.
Re: The Bitter Irony (Score:3, Insightful)
Re: (Score:2)
The answer is self-censorship. If you do not want to be exposed to that kind of content, then do not seek it out.
It's not so easy though. Even if you don't explicitly seek some type of content, you may still be exposed to it - via social pressure, family education, circle of friends, maybe as part of some unrelated content you do seek out, and so on. For example, it's easy to say "If you're not religious, do not go to church", but if you're a kid in a religious family you don't have much choice. Or, if you're a Russian in Russia, you don't need to seek out Putinist propaganda to be drenched in it.
Re: The Bitter Irony (Score:2, Troll)
Re: (Score:2)
The answer is self-censorship.
The censors don't want to control what they see. They want to control what YOU see.
Re: (Score:2)
Because words cannot be weaponized? The pen is not, as they say, mightier than the sword?
"But if thought corrupts language, language can also corrupt thought." --George Orwell, Politics and the English Language (1946)
That sounds like a form of mental or psychological damage. In other words, an assault.
We saw this same kind of mental assault happen during WWII [wikipedia.org] and we see it happen now on platforms such as 8chan [vox.com].
There's your proof. But I suspect you will
Re: (Score:3)
It's not that words can't be weaponized; sure they can. But if everyone has the ability to speak, then you can weigh the arguments and decide for yourself. Sure, there will be people who take an example from violence, but the question is whether they weren't already on the brink to begin with.
Take your example of the Nazis: that is on an entirely different scale. It was the government spreading its lies and stopping everybody else from speaking. This is exactly why I oppose censorship, it is too easy for a gover
Re: (Score:2)
Re:The Bitter Irony (Score:5, Insightful)
"understanding the grievances of the other side"
We don't need to "understand" a racist. We already know it's a learned condition: they hate people of a different color. It can be unlearned, but that requires them to want to unlearn it. Education is the key to defeating racism; however, it seems Republicans are hellbent on stopping education from discussing racism.
Re: (Score:1, Insightful)
You're wrong. Everybody is not "racist". Everybody has a xenophobic instinct that is the result of evolution, stranger danger and all that. Anyone who applies a minimum of rational thought knows that skin colour, ethnicity, religion, sexual orientation and all do not matter; what matters is the character of the person. It's no different from many other instincts that have been made completely irrelevant by civilization. People like C.P. Ellis managed to understand this, and he was no Nobel Prize winner. Hatred and spite directed towards a group only because that group is not like our own is unscientific and illogical. Except if directed toward nerds, but that is another matter, because nerds are not humans.
Xenophobic = racist. And races even segregate by choice. So if people are segregating by choice, the fact that someone is "whatever" and a person of "character" within a segregated community is trivial.
Re: Leftist claptrap (Score:3)
Re: (Score:3)
I agree when you say culture tells us who to consider our foes, but I'm not sure there's an evolutionary basis for that. In the context of interspecies conflict, yes, but intraspecies I'm not seeing it, not anyway beyond our ability to have culture, education and beliefs.
Re: (Score:2)
It's an evolutionary necessity. I treat and favor my children much more than my neighbors' because I want them to succeed more; if I didn't, then the children of the parents who did favor their children would have an advantage. Same with groups: you favor your group because if you didn't, your group would immediately be at a disadvantage to groups that did favor themselves.
The nature-vs-nurture question is irrelevant and hard to untangle, since nurture can override nature to some extent. The fact is any group that tru
Re: (Score:2, Insightful)
Where are Democrats "Pushing" CRT?
Re: (Score:3, Informative)
The story misses the bigger picture here. There are good reasons to limit the use of your platform for violent and quite possibly illegal videos, but the real danger is them becoming part of a pipeline that draws people into extremism.
The Christchurch terrorist told viewers of his live stream to "subscribe to PewDiePie". Mr. PewDiePie isn't a terrorist and he doesn't promote the terrorist's far right ideology, so why say that? It's because PewDiePie's channel is the gateway, the entrance to the rabbit hole t
Re: (Score:1)
Lacking a technical solution, the only option is trusting individuals to regulate their own behavior, and that is not going to happen anywhere but in very small groups that instantly become segregated and abused by the ignorant majority.
Re: (Score:2)
How reasonable are these automatic recommendation algorithms? Since that is an open question that hasn't seen sufficient discussion yet, I'd much rather talk about that than rehash the free speech issue again under yet another garbled framing.
How "reasonable" the recommendation algorithms are depends on what you think their purpose should be. Obviously, Google thinks their purpose is to keep people on the site - and subjected to advertising - as long as possible, regardless of social harm. And since it's Google's platform, that's what we get.
This is why I keep saying that when a new technology becomes so pervasive and inescapable that it's effectively part of societal infrastructure, it needs to be taken from private hands. Otherwise, democracy
Re: (Score:1)
Re: (Score:2, Insightful)
H. Bomberguy made a great video about it: https://youtu.be/GjNILjFters [youtu.be]
PewDiePie flirts with fascism to be edgy. It even lost him business in the past. It's all a joke to him, but actual fascists like that terrorist love it because it normalizes their ideology and gets people interested in it.
Re: (Score:2)
Well, if you are going to chant slogans while rioting and buying expensive crap with other people's money, then that is just what you believe in. If people are going to chant slogans while peacefully protesting, then that is what they believe in. This country, the US, allows people to differentiate based solely on intent, likely more than any other country.
That's why we need to ban BLM and CRT: They are gateway theories to rioting and arson and insurrection.
See how easy this is?
This is an action of class privilege and people's insistence that they get to define purpose and intent for everyone else.
Re: (Score:1)
That's why we need to ban BLM and CRT: They are gateway theories to rioting and arson and insurrection.
See how easy this is?
This is an action of class privilege and people's insistence that they get to define purpose and intent for everyone else.
You do realise that expressing support for BLM and CRT themselves is just as much "an action of class privilege and people's insistence that they get to define purpose and intent for everyone else", don't you?
What makes their privilege special?
And of course how is that different from deciding that "violent" videos must be banned through technical means? That too would be "an action of class privilege and people's insistence that they get to define purpose and intent for everyone else."
The point was exac
Re: (Score:2)
That's not a great video about it. This is: The PewDiePipeline: how edgy humor leads to violence [youtube.com]
Re: (Score:2)
Thanks, I'll take a look at that one.
Re: (Score:1)
Re: The Bitter Irony (Score:2)
Re: The Bitter Irony (Score:1)
The violence of the world isn't a part of life, any more than Sesame Street is. It's a choice. A choice made because of graphic violence.
I'd argue that videos of real-life killing should not be on any social media, whether it's by law enforcement, a military or a wannabe Manson.
With one and only one exception. If the facts are denied by the perp, all restrictions should be lifted for that video. It should be whitelisted.
What about violence in games? There is currently no evidence it dehumanises, the way re
Re: (Score:2)
There's a difference between glorifying media which creates actual victims vs. fake entertainment (however sick it may be). There's a reason that in many countries child pornography is illegal but drawing pornographic cartoons of children is not.
The best way of fighting these extremist fucks is to make sure they will be forgotten rather than immortalised.
Re: (Score:2)
The best way of fighting these extremist fucks is to make sure they will be forgotten rather than immortalised.
Meanwhile, there's a 130-foot tall statue of Genghis Khan in Mongolia.
Re: (Score:2)
Because there are real victims to the real violence. This shit isn't entertainment, it's a tragedy.
My best guess... (Score:1, Flamebait)
Uhm, no.
Any more stupid questions?
robust hashing? (Score:2)
Re: robust hashing? (Score:1)
You're right. Known snuff porn (because that's what it is, let's be real about motives) should be removed and purveyors psychologically tested. Not treated, just tested.
Since upbringing and society are overwhelmingly responsible for a person's behaviour, identifying and eliminating the reasons for seeking snuff would seem much more credible as an approach.
Re: (Score:2)
Re: (Score:2)
I agree with much of what you say. Off-hand, I would say all of it. If there's a way to use such horror educationally, then it would fit with my less complete understanding of how to deal with such material, in that that's obviously a legitimate case and any rules should allow for it.
In the end, it's all about looking for ways to benefit individuals and society, hence my idea of an exception for fighting falsehoods. Education is a benefit and if there's a way to use it so that it helps people to become bett
Something is wrong with you (Score:5, Insightful)
If you see a problem and the first solution you think of is "censorship!", then something is wrong with you.
Re: (Score:2)
There's plenty of censorship on the internet: no one's complaining about naked schoolgirls or online prostitutes. If anything, countries that accept every sexual identity are complaining about the cultural imperialism of Google, Facebook, Twitter, etc.
Re:Something is wrong with you (Score:5, Insightful)
So, basically, what this "professor of computer science at UC Berkeley" is advocating for is that all the tech companies which host content should get together and form a cartel to censor/ban/suppress information they don't like, such that if one of them identifies a piece of information via a hash, they can quickly and effectively get everyone else to ban it as well.
Yeah, can't see how that could possibly go wrong...
Re: (Score:2)
How did this guy end up as a professor? He thinks "tech companies" form the internet, and that they could control the internet if only they cooperated. That's not how the internet works.
Re: (Score:2)
What makes you think that just because we're talking about censorship that it is the "first solution" that people thought of?
Re: (Score:2)
If it isn't the first solution you thought of, then you're braindead because there are plenty of other options.
Pure BS (Score:5, Insightful)
Violent content on the screen doesn't make you violent or make you commit hideous crimes; your genes, upbringing, peers, society, and primary religion/indoctrination do. It's so easy to blame the Internet while forgetting about the real deal. There are now multiple psycho-sociological studies which prove that beyond reasonable doubt, yet here we are at it again. The only question is who is profiting from this fear mongering, and how. I guess it comes down to control/power, especially in authoritarian states.
People must be taught critical thinking, the scientific method, (the dangers and pitfalls of) groupthink, the psychology of masses/tribes (we are still extremely tribal by our nature) - this is the only way to reduce the amount of violence, hatred, etc. but this is not good for politicians and authorities because this makes people a lot less gullible and controllable.
Re: (Score:3)
Re: (Score:3)
If you see those videos and think "oh boy! that's great" instead of getting horrified, you're probably beyond saving.
War footage didn't exactly make people more prone to war; if anything, we got stuff like Vietnam.
Also, most people from "those sites" think this shooter is some sort of CIA operation, because he could very well have gone after things they actually hate, like the Soros foundation, which was actually closer to him than the random grocery store.
Re: (Score:2)
I don't think anyone is arguing that monkey see, monkey do is the issue.
White supremacists like the Buffalo murderer believe that a race war is coming, and that in time they will be seen as heroes who saw it coming and helped make the masses aware of it. It's a little bit ironic that they complain about people being woke, while simultaneously wanting people to wake up to the "real threat".
Anyway, for people like that getting their videos and manifestos out is very important. If they can no longer do that then it
Re: (Score:1)
What, like your neighbour lynched a black man and no-one complained, so you can do it too? That's an argument for reducing the violence we see.
Just like everyone should be taught calculus or coding. Abstraction is a high-level skill that most people can't exercise very often, and having to analyze every message from some anonymous "Paul Revere" only causes exhaustion, making nothing better.
The first problem is childhood education: It is purely a download, there is no thinking beyond deciding which rule-set wi
Re: (Score:2)
upbringing, peers, society, peers and primary religion/indoctrination do.
But you contradict yourself. There's a reason violent media content is used in indoctrination by peers and society. There's also plenty of scientific evidence that exposure to endless amounts of violent *real* content makes you desensitized.
You sound like you're bringing the whole violent-video-games argument in. In that case you would be correct. We have plenty of evidence that *fake* violence does not desensitize people to violence or make them violent.
Real content is different. You did after all acknow
Re: (Score:2)
Desensitizing you to violence and making you more violent are not the same thing.
No (Score:2)
No, they can't, unless the Internet becomes a single platform, aka becomes cable TV.
Case in point:
I run my own XMPP server. No way they could stop that there, if it were happening.
Everything is OMEMO-encrypted, even uploads. Even if I wanted to comply, hashing would have to be done on the client. I'd just switch to a different client...
I run my own mail server. They have no power there.
Re: (Score:2)
Do you also provide your own internet access and power supply?
Depends on the platform (Score:2)
Something that might work for Twitch is auditing a person's machine setup. Most of these live streamers are sitting in front of a computer & webcam. It should be possible to run software that fingerprints that setup and the backdrop and only enable
The technology is already there (Score:2)
The capability is already there on these platforms; it's being used to identify audio and video for the purposes of protecting IP and generating and directing advertising revenue.
So the question is not whether it's technically possible but whether these providers can be made to behave responsibly.
Re: (Score:2)
Simple hashing is easy, but only detects identical files. Hashing that matches through editing is not actually already there. (Unless you have a source I haven't seen?)
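The distinction can be shown concretely: a cryptographic hash such as SHA-256 identifies only byte-identical files, because the smallest edit scrambles the digest completely. (The byte string standing in for a video file below is invented for illustration.)

```python
import hashlib

video = b"frame data " * 1000      # stand-in bytes for a video file
edited = b"X" + video[1:]          # a single-byte "edit"

h_orig = hashlib.sha256(video).hexdigest()
h_edit = hashlib.sha256(edited).hexdigest()

# The two digests share nothing recognizable, so an exact-match
# blocklist misses every re-encoded, cropped, or watermarked copy.
# That is the gap perceptual ("robust") hashing is meant to close.
assert h_orig != h_edit
```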
Re: (Score:2)
I should say, for video. Audio matching is pretty much a thing (e.g. Shazam app, Youtube audio matching, etc.)
Re: (Score:2)
I don't agree with tech company censorship.. (Score:2)
...but the idea that YouTube can pretty consistently identify/flag copyrighted material within seconds of uploading belies the claim that this is technologically impossible, or even hard, in 2022.
Is it perfect? No. But I agree with him that it doesn't seem to be "a few heavily edited versions making it through."
I lost my appetite for watching people die on video after seeing about 5 mins of a Faces Of Death video when I was an 8th grader around 1983. Watching, sharing, and promoting that shit is a sign you are fucked in
Re: (Score:2)
Just because we can, does it mean we should? (Score:3, Insightful)
Just because we can, does it mean we should?
Building this infrastructure allows for non-benign parties to rapidly shut down any content they desire, for any reason, without public disclosure.
Tyranny is always first dressed up as something 'for our own good'.
Re: (Score:3)
Oh horseshit. All these companies already block media they don't like at other companies' request, constantly. If you're worried about the liberty of uploading a video, that ship sailed long ago, when you outsourced the hosting to a for-profit company that doesn't give a shit about freedom of speech and has marketing teams and shareholders to appease.
It's like asking, in 2022, given the level of violence, whether we should be inventing guns or investing any R&D in them at all: completely irrelevant at this point.
Re: (Score:2)
No, they block pre-existing videos that violate copyright, not a new live stream that is totally original footage. You would have to distinguish this from, say, a movie where the killing is fake, since that is totally appropriate.
Re: (Score:2)
No they block pre-existing videos that violate copyright, not a new live stream which is totally original footage.
False. You clearly have never seen or heard of the phenomenon of people live streaming themselves watching a football game, or live streaming a game with licensed music content (literally there's an option in some games now that disables licensed content so that their Twitch streamers don't get cut-off mid stream). This is to say nothing of those people attempting to rebroadcast sport in realtime on Facebook or other platforms.
No sorry, there is very much enforcement and blocking occurring of *live* footage
Re: (Score:3)
It's just a question of adding to the hash table of CSAM. Less ubiquitous but still common are copyright filters.
So any arguments based on principles are right out the window, which is something a lot of people are missing. We've *already* decided that if a particular type of picture/video is harmful enough, it can be banned and major companies forced into screening uploads for it. There's some
Re: (Score:2)
Which is why CSAM should not be illegal either. It's fine if a subset of tech companies remove it voluntarily, but in a free society, no idea or thought should be illegal, nor the ability to communicate those from one person to the next as long as they're both willing participants. At first it's CSAM, then it's hate speech, then it's fake news, then it's real news that disagrees with the ruling party's narrative. Pretty soon it'll be "we have always been at war with Eurasia".
Some might think this is a slippe
Censorship is treated as damage. (Score:3, Insightful)
The internet was built to hold up to all kinds of attacks to the system, and censorship is seen as damage and is routed around.
Take your pick of videos that someone decides others should not see; people find ways to get these videos out. I'm thinking of videos that were critical of government-approved treatments for COVID-19. When a video of a couple of physicians talking about the effectiveness of some drugs on the treatment of symptoms of COVID-19 got blocked, people complained, and the noise about the blocked videos brought more viewers than it would likely have gotten if they had done nothing.
People are still figuring out how the internet works. Big companies thought they could collude to control what information was available but that brought out people with money and a desire for freedom of communication to buy existing communication platforms or build new ones.
If tech firms control information in ways that people don't like then they will stop using their services and go elsewhere. Some people like to live in an information bubble. Some like to create an information bubble for their children. Most adults prefer that they not be lied to about world events. Private companies may have the right to remove information that they don't like from their communication systems but then that should open them up to lawsuits if they abuse this right. They have an obligation to allow people to communicate freely, even if they don't like what people say. If a company acts on behalf of the government to hide or distort information then they are an agent of the government, and the government is prohibited from restricting people from communicating.
Elon Musk was right when he pointed out that free speech is people you don't like saying things you don't like. If people are only free to say what someone else approves of then there is no free speech.
Can tech firms stop videos from circulating? Maybe they have the ability but to do so is ethically and/or legally questionable. The harder they try to keep something quiet the harder people work to find out what is being hidden from them.
Re: (Score:3)
Oh this meme again. Sorry kid, but the internet doesn't route around shit. Not since we concentrated hosting in a few megacorps, not since countries took control of the puppies in and out of their borders.
Sure, technically you're free to host anything you want, but what's the point? The world won't see it. The fundamental protocols of the internet don't route around your Twitter ban or get you your 500,000 subscribers back.
Re: (Score:2)
Lol autocorrect. Countries took control of pipes not puppies.
False positives are inescapable for lossy hashing. (Score:2)
The real problem is there will be innocent victims of the censorship too, as we've already seen rampantly across YouTube. There's no practical way to do this without either missing some or hitting extras - the likely outcome is doing some of both.
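The parent's point about lossy hashing can be made concrete with a toy average hash: one bit per pixel, set when that pixel is above the image's mean. The 8x8 "images" below are synthetic flat arrays, but they show both behaviors at once: a brightened copy of the original still matches, and a second, distinct pixel array collides with it, i.e. a false positive:

```python
def ahash(pixels):
    """Average hash: 1 bit per pixel, set if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 8x8 "image" as a flat list of grayscale values.
original = list(range(64))
brightened = [p + 40 for p in original]  # same scene, uniformly brighter
different = [p * 2 for p in original]    # a distinct pixel array

# The edited copy is still caught (distance 0)...
print(hamming(ahash(original), ahash(brightened)))  # 0
# ...but so is a different array entirely: a false positive.
print(hamming(ahash(original), ahash(different)))   # 0
```

Production perceptual hashes (PDQ, PhotoDNA) are far more sophisticated, but the underlying trade-off is the same: any hash loose enough to survive edits will also match things it shouldn't.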
Re: (Score:3)
Definitions (Score:2)
Define "violent". In an unmistakable, concise, logical, machine describable way that leaves no room for interpretation.
Of course no such description is possible, as such no perfect way to block it exists.
The definitions and vocabulary changing every year doesn't help matters either. "Silence is violence" - yeah, code that into a Bayesian filter.
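The joke about the Bayesian filter lands. A toy naive Bayes classifier (the five-word training sets and uniform priors below are entirely made up) flags the metaphor as violent, because it sees only word counts, never meaning:

```python
from collections import Counter
import math

# Hypothetical toy training data: a handful of words per label.
violent = "shooting attack gun massacre violence".split()
benign = "football game music stream chat".split()

counts = {"violent": Counter(violent), "benign": Counter(benign)}
vocab = set(violent) | set(benign)

def classify(text):
    """Naive Bayes with Laplace smoothing and uniform class priors."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

# The metaphor trips the filter: "violence" is counted as a violent word.
print(classify("silence is violence"))  # violent
```

A real moderation model is vastly larger, but the failure mode is the same in kind: statistical filters classify surface features, not the slippery definitions the parent comment is asking for.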
Re: (Score:2)
No, we don't even need to define "violent" to know such censorship is wrong. I would say it is more wrong than banning "anti-vax" content just because someone expresses distrust of recent jabs whose side effects haven't been sufficiently and objectively tested over a long enough period.
Why? Because there are various kinds of videos of violence. Some may be self-promotion by terrorists, but many more could be whistleblowing of crimes committed by the rich and powerful or by tyrannical states. A proper handling of violent vid
Is that guy really a CS Prof? (Score:2)
It sounds like he is more a BS Prof. It is so simple, just account for all those variations and lossy things and stuff.... So simple. I am sure that if it were so simple, he would have already produced and patented such a simple and totally real hashing algorithm?
No (Score:2)
Re: (Score:2)
circulating "on the internet" (Score:3)
"Right Think" Will Not Perfect "Wrong People" (Score:3)
Inherent in the idea of suppressing violent video content is the belief that evil, bad people can be prevented from doing evil, bad things by not being able to look at evil, bad images.
This, my friends, is the philosophical conceit of our age. We believe that if you control the symbol, you've controlled the reality.
For every problem... (Score:2)
I believe the old saying is, "For every problem there is a solution that is simple, cheap and wrong"
I watched an autopsy on youtube (Score:2)
Now every night I go out and find someone to turn inside out.
No (Score:2)
The world has the right to see the ugly side of the world. The most repressive regimes of the world fill their peasants with visions of unicorns and fluffy clouds and then truly gruesome things happen.
The only exception would be the case of a shooter livestreaming his massacre, but this is a very hard thing to stop. It could be over with well before the Net Cops arrive at the scene.
Re: (Score:2)
It's just more of the asymmetry between how we view sex vs violence. Personally, I don't see a good argument for allowing one but not the other. We're talking about videos of victims who don't necessarily want others seeing that happen to them in both situations and in
No, they can't. (Score:2)
It's not as hard a problem as .... (Score:2)
Two problems and the solution.
Problem 1: fingerprinting videos
...across colour shifts and sizing, yes, that's easy.
...across adding [small amounts of] text, yes, that's feasible.
...across mirroring the entire video, tough but yes, that's feasible.
...across all three, that's not even close to feasible.
...across the six more commonly-used IP-dodging techniques, it's impossible.
...across the fifty new techniques invented the day after you solve the first nine, what a waste of effort.
Problem 2: the global
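Mirroring, the third transform in the list above, defeats a naive fingerprint outright: flipping every row changes the bit pattern, so the defender must separately index each transformed variant, and the variants multiply with every new dodge. A minimal sketch using a simple average hash and a made-up 4x4 frame:

```python
def ahash_grid(grid):
    """Average hash over a 2-D grayscale frame: 1 bit per pixel above the mean."""
    flat = [p for row in grid for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

# A made-up 4x4 frame; to a human viewer a mirrored re-upload is the same video.
frame = [[0, 1, 2, 9],
         [0, 1, 2, 9],
         [0, 1, 2, 9],
         [0, 1, 2, 9]]
mirrored = [row[::-1] for row in frame]

print(ahash_grid(frame) == ahash_grid(mirrored))  # False: the fingerprint misses it

# Defending means indexing every variant as well, and each new transform
# (mirror, crop, speed change, overlay, ...) multiplies the index size.
known = {ahash_grid(frame), ahash_grid(mirrored)}
print(ahash_grid(mirrored) in known)  # True, but only for transforms you anticipated
```

This is the asymmetry the comment describes: the uploader needs one new transform, while the platform needs the product of all of them.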
why? (Score:1)
Why would they want to?
Would we even want to, if we could? (Score:2)
Twitter under Musk (Score:1)