Social Networks / Crime

Can Tech Firms Prevent Violent Videos Circulating on the Internet? (theguardian.com) 116

This week New York's attorney general announced they're officially "launching investigations into the social media companies that the Buffalo shooter used to plan, promote, and stream his terror attack." Slashdot reader echo123 points out that Discord confirmed that roughly 30 minutes before the attack a "small group" was invited to join the shooter's server. "None of the people he invited to review his writings appeared to have alerted law enforcement," reports the New York Times, "and the massacre played out much as envisioned."

But meanwhile, another Times article tells a tangentially-related story from 2019 about what ultimately happened to "a partial recording of a livestream by a gunman while he murdered 51 people that day at two mosques in Christchurch, New Zealand." For more than three years, the video has remained undisturbed on Facebook, cropped to a square and slowed down in parts. About three-quarters of the way through the video, text pops up urging the audience to "Share THIS...." Online writings apparently connected to the 18-year-old man accused of killing 10 people at a Buffalo, New York, grocery store Saturday said that he drew inspiration for a livestreamed attack from the Christchurch shooting. The clip on Facebook — one of dozens that are online, even after years of work to remove them — may have been part of the reason that the Christchurch gunman's tactics were so easy to emulate.

In a search spanning 24 hours this week, The New York Times identified more than 50 clips and online links with the Christchurch gunman's 2019 footage. They were on at least nine platforms and websites, including Reddit, Twitter, Telegram, 4chan and the video site Rumble, according to the Times' review. Three of the videos had been uploaded to Facebook as far back as the day of the killings, according to the Tech Transparency Project, an industry watchdog group, while others were posted as recently as this week. The clips and links were not difficult to find, even though Facebook, Twitter and other platforms pledged in 2019 to eradicate the footage, pushed partly by public outrage over the incident and by world governments. In the aftermath, tech companies and governments banded together, forming coalitions to crack down on terrorist and violent extremist content online. Yet even as Facebook expunged 4.5 million pieces of content related to the Christchurch attack within six months of the killings, what the Times found this week shows that a mass killer's video has an enduring — and potentially everlasting — afterlife on the internet.

"It is clear some progress has been made since Christchurch, but we also live in a kind of world where these videos will never be scrubbed completely from the internet," said Brian Fishman, a former director of counterterrorism at Facebook who helped lead the effort to identify and remove the Christchurch videos from the site in 2019....

Facebook, which is owned by Meta, said that for every 10,000 views of content on the platform, only an estimated five were of terrorism-related material. Rumble and Reddit said the Christchurch videos violated their rules and they were continuing to remove them. Twitter, 4chan and Telegram did not respond to requests for comment.

For what it's worth, this week CNN also republished an email they'd received in 2016 from 4chan's current owner, Hiroyuki Nishimura. The gist of the email? "If I liked censorship, I would have already done that."

But Slashdot reader Bruce66423 also shares an interesting observation from The Guardian's senior tech reporter about the major tech platforms. "According to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren't financially motivated to invest resources into developing it." Farid's work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded...

Farid: It's not as hard a problem as the technology sector will have you believe... The core technology to stop redistribution is called "hashing" or "robust hashing" or "perceptual hashing". The basic idea is quite simple: you have a piece of content that is not allowed on your service either because it violated terms of service, it's illegal or for whatever reason, you reach into that content, and extract a digital signature, or a hash as it's called.... That's actually pretty easy to do. We've been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.

So if we had a technology that satisfied all of those criteria, Twitch would say, we've identified a terror attack that's being live-streamed. We're going to grab that video. We're going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded with the hash, the signature is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.

It's a problem of collaboration across the industry and it's a problem of the underlying technology. And if this was the first time it happened, I'd understand. But this is not, this is not the 10th time. It's not the 20th time. I want to emphasize: no technology's going to be perfect. It's battling an inherently adversarial system. But this is not a few things slipping through the cracks.... This is a complete catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and as it was the one before then, it is inexcusable from a technological standpoint.
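For readers curious what that looks like in practice, here is a minimal sketch of the general kind of perceptual hash Farid describes: a "difference hash" (dHash) computed per frame and compared by Hamming distance. To be clear, this is an illustrative toy, not PhotoDNA or any deployed industry system; the file names and the match threshold below are hypothetical, and it assumes the Pillow imaging library is installed.

    from PIL import Image

    def dhash(image: Image.Image, hash_size: int = 8) -> int:
        """Difference hash: shrink, grayscale, compare neighboring pixels."""
        # Shrinking and grayscaling are what make the signature "stable":
        # resizing, recoloring and mild edits barely change the result.
        small = image.convert("L").resize((hash_size + 1, hash_size))
        pixels = list(small.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two 64-bit hashes."""
        return bin(a ^ b).count("1")

    # A platform would compare each uploaded frame's hash against a shared,
    # continuously updated industry database and block close matches.
    # Both file names are hypothetical stand-ins.
    banned = dhash(Image.open("banned_frame.png"))
    upload = dhash(Image.open("uploaded_frame.png"))
    if hamming(banned, upload) <= 10:  # threshold is a made-up tuning knob
        print("Likely a re-upload of flagged content")

Real systems hash many frames plus the audio track and match against databases with millions of entries, which is where the engineering, and the industry collaboration Farid goes on to describe, gets hard.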

"These are now trillion-dollar companies we are talking about collectively," Farid points out later. "How is it that their hashing technology is so bad?"
This discussion has been archived. No new comments can be posted.

  • The Bitter Irony (Score:5, Insightful)

    by starworks5 ( 139327 ) on Sunday May 22, 2022 @03:46AM (#62555846) Homepage

    That there are violent video games, violent movies, and violent war footage, and even plenty of violent crime footage on the internet.

    So why is it that only this one type of violent content is denied? And ultimately, isn't it up to the electorate to see the true, unsugar-coated reality of the world? For example, when violent jihadists decide to execute innocent people, or when white supremacists kill innocent people, are we to be saved from the disgust, or from understanding the grievances of the other side that is lashing out?

    Everyone has seen pictures of 9/11, but nobody has bothered to learn the lesson of 9/11.

    • by fazig ( 2909523 )
      For one, violent video games and movies have gotten their share of criticism already, with claims that they turn people into terrorists and encourage them to run amok.
      Even Rock music and many other things have been blamed for whatever. So there's not really a double standard here (just seemingly equally shitty scapegoating).

      Over time behavioural science looked into the issues and found no significant links between these forms of media and behavioural changes, even though in individual cases they might ha
      • I don't have the answer in this case.

        Me neither, but the burden of proof is on those advocating censorship, not those defending free speech.

        • by saloomy ( 2817221 )
          The answer is self-censorship. If you do not want to be exposed to that kind of content, then do not seek it out. The content could (should?) be tagged as violent, or sexual, or depraved, or using foul language, so the audience can tune away if they so elect. But to deny what another may watch is outright fascism. Freedom isn't free. Some will use their freedom to harm others, and they should suffer the consequences for doing so. But that is an unquestionably desirable alternative to the tyranny of a fascist r
          • The answer is self-censorship. If you do not want to be exposed to that kind of content, then do not seek it out.

            It's not so easy though. Even if you don't explicitly seek some type of content, you may still be exposed to it - via social pressure, family education, circle of friends, maybe as part of some unrelated content you do seek out, and so on. For example, it's easy to say "If you're not religious, do not go to church", but if you're a kid in a religious family you don't have much choice. Or, if you're a Russian in Russia, you don't need to seek out Putinist propaganda to be drenched in it.

            • In Russia, the Putinist propaganda is a result of government coercion. They use the funds of the people to manipulate them further. We should have none of that in the US. The citizens are free to say what they want, to each other. The government is not. Also, if you don't want that kind of content, elect to refuse to watch it. I advocate for tagging, specifically for that reason. As for parents and their children, I will not presume to instruct a parent as to how they parent their child. Their ince
          • The answer is self-censorship.

            The censors don't want to control what they see. They want to control what YOU see.

        • the burden of proof is on those advocating censorship

          Because words cannot be weaponized? The pen is not, as they say, mightier than the sword?

          "But if thought corrupts language, language can also corrupt thought." --George Orwell, Politics and the English Language (1946)

          That sounds like a form of mental or psychological damage. In other words, an assault.

          We saw this same kind of mental assault happen during WWII [wikipedia.org] and we see it happen now on platforms such as 8chan [vox.com].

          There's your proof. But I suspect you will

          • It's not that words can't be weaponized; sure they can. But if everyone has the ability to speak, then you can weigh the arguments and decide for yourself. Sure, there will be people that take an example from violence, but the question is: weren't they already on the brink to begin with?

            Your example of the Nazis is on an entirely different scale: it was the government spreading its lies and stopping everybody else from speaking. This is exactly why I oppose censorship, it is too easy for a gover

          • So you're fine living behind a Great Firewall, with the government deciding what you can or can't view? Y'know, for your protection and everyone else's.
    • by Anonymous Coward on Sunday May 22, 2022 @04:59AM (#62555920)

      "understanding the grievances of the other side"

      We don't need to "understand" a racist. We already know it's a learned condition and they hate people of a different color. It can be unlearned, but that requires them to want to unlearn it. Education is the key to defeating racism; however, it seems republicans are hellbent on stopping education from discussing racism.

    • Re: (Score:3, Informative)

      by AmiMoJo ( 196126 )

      The story misses the bigger picture here. There are good reasons to limit the use of your platform for violent and quite possibly illegal videos, but the real danger is them becoming part of a pipeline that draws people into extremism.

      The Christchurch terrorist told viewers of his live stream to "subscribe to PewDiePie". Mr. PewDiePie isn't a terrorist and he doesn't promote the terrorist's far right ideology, so why say that? It's because PewDiePie's channel is the gateway, the entrance to the rabbit hole t

      • I can’t tell if you’re being historically ignorant or purposefully malicious, but PewDiePie’s channel wasn’t a gateway, or ever used as such. At the time there was a "subscribe to PewDiePie" meme; he included it to be part of the fun.
        • Re: (Score:2, Insightful)

          by AmiMoJo ( 196126 )

          H. Bomberguy made a great video about it: https://youtu.be/GjNILjFters [youtu.be]

          PewDiePie flirts with fascism to be edgy. It even lost him business in the past. It's all a joke to him, but actual fascists like that terrorist love it because it normalizes their ideology and gets people interested in it.

          • That's not a great video about it. This is: The PewDiePipeline: how edgy humor leads to violence [youtube.com]

          • People make YouTube videos to justify any belief. Don’t believe me? Want a video justifying pedophilia, or one on how Obama was fueling fascism through dog whistles? YouTube will have that.
      • No, it's because "subscribe to PewDiePie" was something of a meme at the time, and there were certain online trolls who were attempting to get him associated with whatever nefarious activities they could.
    • The violence of the world isn't a part of life, any more than Sesame Street is. It's a choice. A choice made because of graphic violence.

      I'd argue that videos of real-life killing should not be on any social media, whether it's by law enforcement, a military or a wannabe Manson.

      With one and only one exception. If the facts are denied by the perp, all restrictions should be lifted for that video. It should be whitelisted.

      What about violence in games? There is currently no evidence it dehumanises, the way re

    • There's a difference between glorifying media which creates actual victims vs. fake entertainment (however sick it may be). There's a reason that in many countries child pornography is illegal but drawing pornographic cartoons of children is not.

      The best way of fighting these extremist fucks is to make sure they will be forgotten rather than immortalised.

      • by djinn6 ( 1868030 )

        The best way of fighting these extremist fucks is to make sure they will be forgotten rather than immortalised.

        Meanwhile, there's a 130-foot tall statue of Genghis Khan in Mongolia.

    • Because there are real victims to the real violence. This shit isn't entertainment, it's a tragedy.

  • Uhm, no.

    Any more stupid questions?

  • Sounds like an unwinnable game of whack-a-mole to me. "Tech companies just aren't financially motivated to invest resources into developing it." And a group after funding (other people's money) "at UC Berkeley".
    • You're right. Known snuff porn (because that's what it is, let's be real about motives) should be removed and purveyors psychologically tested. Not treated, just tested.

      Since upbringing and society are overwhelmingly responsible for a person's behaviour, identifying and eliminating the reasons for seeking snuff would seem much more credible as an approach.

      • I don’t know if it’s just "snuff"... it’s kind of educational too. I remember at the time nearly everyone I knew, including older folks and my wife, watching the video out of interest and learning from it: what the reality of it was actually like, how unbelievably similar it seemed to video games, just how evil and disconnected the kid was from the reality of what he was doing. Same with the beheading of that American journalist by ISIS: how quickly, easily and clean
        • by jd ( 1658 )

          I agree with much of what you say. Off-hand, I would say all of it. If there's a way to use such horror educationally, then it would fit with my less complete understanding of how to deal with such material, in that that's obviously a legitimate case and any rules should allow for it.

          In the end, it's all about looking for ways to benefit individuals and society, hence my idea of an exception for fighting falsehoods. Education is a benefit and if there's a way to use it so that it helps people to become bett

  • by phantomfive ( 622387 ) on Sunday May 22, 2022 @04:10AM (#62555878) Journal

    If you see a problem and the first solution you think of is "censorship!", then something is wrong with you.

    • ... "censorship!" ...

      There's plenty of censorship on the internet: no-one's complaining about naked schoolgirls or online prostitutes. If anything, countries that accept every sexual identity are complaining about the cultural imperialism of Google, Facebook, Twitter, etc.

    • So, basically, what this "professor of computer science at UC Berkeley" is advocating for is that all the tech companies which host content should get together and form a cartel to censor/ban/suppress information they don't like, such that if one of them identifies a piece of information via a hash, they can quickly and effectively get everyone else to ban it as well.

      Yeah, can't see how that could possibly go wrong...

      • by djinn6 ( 1868030 )

        How did this guy end up as a professor? He thinks "tech companies" form the internet, and that they could control the internet if only they cooperated. That's not how the internet works.

    • What makes you think that just because we're talking about censorship that it is the "first solution" that people thought of?

      • If it isn't the first solution you thought of, then you're braindead because there are plenty of other options.

  • Pure BS (Score:5, Insightful)

    by Artem S. Tashkinov ( 764309 ) on Sunday May 22, 2022 @04:16AM (#62555888) Homepage

    Violent content on the screen doesn't make you violent or make you commit hideous crimes; your genes, upbringing, peers, society and primary religion/indoctrination do. It's so easy to blame the Internet while forgetting about the real deal. There are now multiple psycho-sociological studies which prove that beyond reasonable doubt, yet here we are at it again. The only question is who is profiting from this fear mongering, and how. I guess it's down to control/power, especially in authoritarian states.

    People must be taught critical thinking, the scientific method, (the dangers and pitfalls of) groupthink, the psychology of masses/tribes (we are still extremely tribal by our nature) - this is the only way to reduce the amount of violence, hatred, etc. but this is not good for politicians and authorities because this makes people a lot less gullible and controllable.

    • by AmiMoJo ( 196126 )

      I don't think anyone is arguing that monkey see, monkey do is the issue.

      White supremacists like the Buffalo murderer believe that a race war is coming, and that in time they will be seen as heroes who saw it coming and helped make the masses aware of it. It's a little bit ironic that they complain about people being woke, while simultaneously wanting people to wake up to the "real threat".

      Anyway, for people like that getting their videos and manifestos out is very important. If they can no longer do that then it

    • ... your genes, upbringing, peers, society ...

      What, like your neighbour lynched a black man and no-one complained, so you can do it too? That's an argument for reducing the violence we see.

      ... taught critical thinking, the scientific method ...

      Just like everyone should be taught calculus or coding. Abstraction is a high-level skill that most people can't do very often and having to analyze every message from some anonymous 'Paul Revere', only causes exhaustion, making nothing better.

      ... psychology of masses/tribes ...

      The first problem is childhood education: It is purely a download, there is no thinking beyond deciding which rule-set wi

    • upbringing, peers, society, peers and primary religion/indoctrination do.

      But you contradict yourself. There's a reason violent media content is used in indoctrination by peers and society. There's also plenty of scientific evidence that exposure to endless amounts of violent *real* content makes you desensitized.

      You sound like you're bringing the whole violent video games argument in. In that case you would be correct. We have plenty of evidence that *fake* violence does not desensitize people to violence or make them violent.

      Real content is different. You did after all acknow

  • No, they can't, unless the Internet becomes a single platform, aka becomes cable TV.

    Case in point:
    I run my own XMPP server. No way they could stop that there, if it were happening.
    All is OMEMO encrypted, even uploads. Even if I wanted to comply, hashing would have to be done on the client. I'd just switch to a different client...
    I run my own mail server. They have no power there.

  • Facebook should be able to manage it - install explosion / screaming filters and automatically pause the feed until a human has a chance to review it. The same approach wouldn't work on Twitch, for obvious reasons.

    Something that might work for Twitch is to audit a person's machine setup. Most of these live streamers are sitting in front of a computer & webcam. It should be possible to run software that fingerprints that setup and the backdrop and only enable

  • The capability is already there on these platforms; it’s being used to identify audio and video for the purposes of protecting IP and generating and directing advertising revenue.

    So the question is not whether it’s technically possible but whether these providers can be made to behave responsibly.

    • by suutar ( 1860506 )

      Simple hashing is easy, but only detects identical files. Hashing that matches through editing is not actually already there. (Unless you have a source I haven't seen?)
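      A two-line illustration of that point (my sketch, not the parent's; the byte strings are stand-ins for real video files): an exact, cryptographic hash yields a completely unrelated digest after even a one-byte change, so there is no notion of a "close match" to search for.

          import hashlib

          original = b"...video bytes..."    # stand-in for a real file
          re_encoded = b"...video bytes..!"  # one byte of difference

          print(hashlib.sha256(original).hexdigest())
          print(hashlib.sha256(re_encoded).hexdigest())
          # On average half the output bits differ, by design. Matching
          # edited copies requires a perceptual hash, built so that small
          # content changes produce small hash changes.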

      • by suutar ( 1860506 )

        I should say, for video. Audio matching is pretty much a thing (e.g. Shazam app, Youtube audio matching, etc.)

      • It's an AI-complete problem. Most definitely tech can overcome some particular kinds of changes, but in general it's an unwinnable arms race. Until we invent AGIs and have them watch all videos. Later the AGIs will realize that they have no use for those silly humans and banish them all from Earth.
  • ...but the idea that YouTube can pretty consistently identify/flag copyrighted material within seconds of uploading belies the claim that this is technologically impossible, or even hard, in 2022.

    Is it perfect? No. But I agree with him that it doesn't seem to be "a few heavily edited versions making it through."

    I lost my appetite for watching people die on video after seeing about 5 mins of a Faces Of Death video when I was an 8th grader around 1983. Watching, sharing, and promoting that shit is a sign you are fucked in

    • Watching, sharing and promoting: I think it very much depends on the why, the context. I think it's a good thing you saw it, because you lost your appetite for it. It probably made you less likely to seek it out or casually support it or the things that lead to it. I think this is how it works for the majority of people: being confronted by the nightmares of reality grounds you in reality and makes you less likely to fetishize them.
  • by nanoakron ( 234907 ) on Sunday May 22, 2022 @05:58AM (#62555952)

    Just because we can, does it mean we should?

    Building this infrastructure allows for non-benign parties to rapidly shut down any content they desire, for any reason, without public disclosure.

    Tyranny is always first dressed up as something 'for our own good'.

    • Oh horseshit. All these companies already block media they don't like on other companies' request, constantly. If you're worried about the liberty of uploading a video, that ship sailed long ago when you outsourced the hosting to a for-profit company who doesn't give a shit about freedom of speech and has marketing teams and shareholders to appease.

      It's like asking in 2022, given the level of violence, whether we should be inventing guns or investing any R&D in them at all. Completely irrelevant at this point.

      • No they block pre-existing videos that violate copyright, not a new live stream which is totally original footage. You would have to distinguish this from say a movie where the killing is fake, since that is totally appropriate.

        • No they block pre-existing videos that violate copyright, not a new live stream which is totally original footage.

          False. You clearly have never seen or heard of the phenomenon of people live streaming themselves watching a football game, or live streaming a game with licensed music content (literally, some games now have an option that disables licensed content so that Twitch streamers don't get cut off mid-stream). This is to say nothing of those people attempting to rebroadcast sport in realtime on Facebook or other platforms.

          No sorry, there is very much enforcement and blocking occurring of *live* footage

    • The infrastructure already exists. It's been built and rolled out. All the major social media companies use it.

      It's just a question of adding to the hash database used for CSAM. Less ubiquitous but still common are copyright filters.

      So any arguments based on principles are right out the window, which is something a lot of people are missing. We've *already* decided that if a particular type of picture/video is harmful enough, it can be banned and major companies forced into screening uploads for it. There's some
      • by djinn6 ( 1868030 )

        Which is why CSAM should not be illegal either. It's fine if a subset of tech companies remove it voluntarily, but in a free society, no idea or thought should be illegal, nor the ability to communicate it from one person to the next as long as they're both willing participants. At first it's CSAM, then it's hate speech, then it's fake news, then it's real news that disagrees with the ruling party's narrative. Pretty soon it'll be "we have always been at war with Eurasia".

        Some might think this is a slippe

  • by MacMann ( 7518492 ) on Sunday May 22, 2022 @06:05AM (#62555960)

    The internet was built to hold up to all kinds of attacks to the system, and censorship is seen as damage and is routed around.

    Take your pick of videos that someone decides others should not see; people find ways to get these videos out. I'm thinking of videos that were critical of government-approved treatments for COVID-19. When a video of a couple of physicians talking about the effectiveness of some drugs in treating the symptoms of COVID-19 got blocked, people complained, and the noise about the blocked video brought more viewers than it likely would have gotten if they had done nothing.

    People are still figuring out how the internet works. Big companies thought they could collude to control what information was available but that brought out people with money and a desire for freedom of communication to buy existing communication platforms or build new ones.

    If tech firms control information in ways that people don't like then they will stop using their services and go elsewhere. Some people like to live in an information bubble. Some like to create an information bubble for their children. Most adults prefer that they not be lied to about world events. Private companies may have the right to remove information that they don't like from their communication systems but then that should open them up to lawsuits if they abuse this right. They have an obligation to allow people to communicate freely, even if they don't like what people say. If a company acts on behalf of the government to hide or distort information then they are an agent of the government, and the government is prohibited from restricting people from communicating.

    Elon Musk was right when he pointed out that free speech is people you don't like saying things you don't like. If people are only free to say what someone else approves of then there is no free speech.

    Can tech firms stop videos from circulating? Maybe they have the ability but to do so is ethically and/or legally questionable. The harder they try to keep something quiet the harder people work to find out what is being hidden from them.

    • Oh this meme again. Sorry kid, but the internet doesn't route around shit. Not since we concentrated hosting to a few megacorps, not since countries took control of the pipes in and out of their borders.

      Sure, technically you're free to host anything you want, but what's the point? The world won't see it. The fundamental protocols of the internet won't route around your Twitter ban or get you your 500,000 subscribers back.

  • The real problem is there will be innocent victims of the censorship too, as we've already seen rampantly across YouTube. There's no practical way to do this without either missing some or hitting extras - the likely outcome is doing some of both.

    • Yah, there is also the flip side of "hashing will fix everything" - it is easy to tweak a video enough so that it will produce a different hash. It would be trivial to create a webservice that would automate this process for the unwashed masses.
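      A toy demonstration of that instability (a sketch under assumptions: it reuses the dHash idea from the summary above, assumes the Pillow library is installed, and "frame.png" is a hypothetical input): even a perceptual hash designed to survive resizing and recoloring is thrown off by a one-line transformation like mirroring.

          from PIL import Image, ImageOps

          def dhash(image: Image.Image, hash_size: int = 8) -> int:
              # Difference hash: shrink, grayscale, compare neighbors.
              small = image.convert("L").resize((hash_size + 1, hash_size))
              px = list(small.getdata())
              bits = 0
              for r in range(hash_size):
                  for c in range(hash_size):
                      bits = (bits << 1) | int(px[r * (hash_size + 1) + c] > px[r * (hash_size + 1) + c + 1])
              return bits

          original = Image.open("frame.png")    # hypothetical frame
          mirrored = ImageOps.mirror(original)  # one cheap edit

          diff = bin(dhash(original) ^ dhash(mirrored)).count("1")
          print(f"Hamming distance after mirroring: {diff}/64 bits")
          # Typically a large fraction of the bits flip, so the mirrored
          # copy sails past any sane match threshold unless the platform
          # also hashes mirrored (and cropped, and re-timed...) variants.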
  • Define "violent". In an unmistakable, concise, logical, machine describable way that leaves no room for interpretation.

    Of course no such description is possible, as such no perfect way to block it exists.

    The definitions and vocabulary changing every year doesn't help matters either. "Silence is violence" - yeah, code that into a Bayesian filter.

    • No, we don't even need to define "violent" to know such censorship is wrong. I would say it is more wrong than banning "anti-vax" content just because someone shows their distrust of recent jabs whose side-effects haven't been sufficiently and objectively tested over a long enough period.

      Why? Because there are various kinds of video of violence. Some may be self-promotion of terrorists. But many more could be whistleblowing of crimes done by rich and powerful or tyrannical states. A proper handling of violent vid

  • It sounds like he is more of a BS prof. It is so simple, just account for all those variations and lossy things and stuff.... So simple. I am sure that if it were so simple he would have produced and patented such a simple and totally real hashing algorithm?

  • by e3m4n ( 947977 )
    To answer the headline question, in a nutshell, no. How's that 40yr old War on Drugs going? Fentanyl deaths are spiraling out of control and now China is making something named Pink that's stronger than carfentanil, which itself is 100 times stronger than fentanyl and 10,000 times more potent than morphine. After 40yrs we are worse off than in 1983. So no, something as prolific and digitally replicated as the internet cannot successfully censor something desired to be distributed. You might as well try to ban all
    • "Pink" is a nickname for U-47700, which is a designer opioid but only 7.5x as strong as morphine. The only thing that might, depending on measurement, be a little stronger than carfentanil is lofentanil. If you're talking about something even stronger than these two, there's big flashing [citation needed] because now you're talking about something stronger than anything in the scientific literature (unless you're talking about unusual administration modes-- 14-methoxymetopon is over a million times stronger
  • by QuietLagoon ( 813062 ) on Sunday May 22, 2022 @09:32AM (#62556192)
    "on the internet" is no longer the sole realm of tech firms. The problem of violent videos circulating "on the internet" is a problem they cannot solve. How do we try to stop violence in public places? Maybe that is the model that should be used to stop violent videos from circulating "on the internet."
  • Inherent in the idea of suppressing violent video content is the belief that evil, bad people can be prevented from doing evil, bad things by not being able to look at evil, bad images.

    This, my friends, is the philosophical conceit of our age. We believe that if you control the symbol, you've controlled the reality.

  • I believe the old saying is, "For every problem there is a solution that is simple, cheap and wrong"

  • Now every night I go out and find someone to turn inside out.

  • The world has the right to see the ugly side of the world. The most repressive regimes of the world fill their peasants with visions of unicorns and fluffy clouds and then truly gruesome things happen.

      The only exception would be the case of a shooter livestreaming his massacre, but this is a very hard thing to stop. It could be over with well before the Net Cops arrive at the scene.

    • Why should people victimized by a terrorist murdering them have videos of the crime circulating, when videos of minor rape victims are already banned across the entire internet, with the same kind of robust hashing the article is talking about?

      It's just more of the asymmetry between how we view sex vs violence. Personally, I don't see a good argument for allowing one but not the other. We're talking about videos of victims who don't necessarily want others seeing that happen to them in both situations and in
  • Effectively, no. If they're going to allow any kind of user content then they can't prevent violent content from being posted in the first place. The only way they could would be to force all users to apply for permission to post something ahead of time and wait for approval, which for a site like Twitter or Facebook would be unworkable, there are so many people wanting to post anything that 'approval to post' would take weeks or even months as workers went through the backlog of requests. If you want to pr
  • Two problems and the solution.

    Problem 1: fingerprinting videos
    ...across colour shifts and sizing: yes, that's easy.
    ...across adding [small amounts of] text: yes, that's feasible.
    ...across mirroring the entire video: tough, but yes, that's feasible.
    ...across all three: that's not even close to feasible.
    ...across the six more commonly-used IP-dodging techniques: it's impossible.
    ...across the fifty new techniques invented the day after you solve the first nine: what a waste of effort.

    Problem 2: the global

  • Why would they want to?

  • Because the same tool would be just as good at suppressing evidence of crimes, especially police brutality.
  • would allow those videos to circulate.
