Bennett Haselton has another article on offer for us today, this time looking at the implications of a Canadian initiative to protect children online. Bennett writes: "Cybertip.ca, a Canadian clearinghouse for providing information to law enforcement about online child luring and child pornography, has announced that a group of major ISPs will begin blocking access to URLs on Cybertip's list of known child pornography sites. A Cybertip spokesperson says that the list fluctuates between 500 and 800 sites at any given time." Read on for the rest of his analysis.

The system, dubbed Cleanfeed Canada, is named after the similar "Cleanfeed" filtering system used by service provider BT in the UK. It is also reminiscent of a law passed in Pennsylvania in 2002 requiring ISPs to block URLs on a list of known child pornography sites; the law was struck down in 2004 on First Amendment grounds. Although child pornography is of course not protected by the First Amendment, the law was struck down partly because the ISPs were blocking entire servers and IP address ranges, which meant hundreds of thousands of non-child-pornography sites were also being blocked.
Under the Cleanfeed system, representatives from SaskTel, Bell Canada, and Telus claim that only exact URLs will be filtered, not other sites hosted at the same IP address. (Conventional Internet filtering programs sold to parents and schools have made the same claim, only to turn out to be filtering sites by IP address after all, so we'll have to wait until the filtering is implemented before we know for sure.) The other difference, of course, is that the Cleanfeed system is not a law, so there's nothing to "strike down" in court. Cybertip did acknowledge that this means customers can get around the filtering for now by switching to a non-participating service provider, although they are encouraging more providers to sign up. Cybertip declined to say whether any providers had simply refused to participate. But of course it's much easier than that to get around the filter, since filter circumvention sites like Anonymouse and StupidCensorship will not be blocked.
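To make the exact-URL versus IP-address distinction concrete, here is a minimal sketch (all hostnames and addresses are invented for illustration) of why IP-based blocking overblocks, as it did under the Pennsylvania law, while exact-URL matching does not. With virtual hosting, many unrelated sites can share one server IP:

```python
# Hypothetical illustration only -- the blocklist entries, hostnames, and
# IP address below are invented, not taken from any real filtering system.

BLOCKLIST_URLS = {"http://badhost.example/illegal/page.html"}
BLOCKLIST_IPS = {"203.0.113.7"}  # the IP that badhost.example resolves to

# Virtual hosting: several unrelated sites served from the same IP address.
HOSTED_AT = {
    "http://badhost.example/illegal/page.html": "203.0.113.7",
    "http://innocent-blog.example/": "203.0.113.7",
    "http://hobby-forum.example/": "203.0.113.7",
}

def blocked_by_url(url: str) -> bool:
    """Exact-URL filtering: only the specific listed URL is blocked."""
    return url in BLOCKLIST_URLS

def blocked_by_ip(url: str) -> bool:
    """IP-based filtering: everything on the shared server is blocked."""
    return HOSTED_AT.get(url) in BLOCKLIST_IPS

for url in HOSTED_AT:
    print(f"{url}: url-filter={blocked_by_url(url)}, ip-filter={blocked_by_ip(url)}")
```

Under URL matching only the listed page is unreachable; under IP matching the innocent blog and forum sharing the server go dark too, which is the overblocking that helped sink the Pennsylvania law.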
So, if it's that easy to circumvent, does it do any good? Even respected Canadian academic and columnist Michael Geist, hardly a friend of censorship in other forms, has spoken out in favor of the plan. I'm going to go out on a limb and say that it doesn't accomplish anything meaningful, and may set a horrible precedent that could make it much easier to block other content in the future.
First of all, it almost certainly won't stop anyone who is deliberately looking for child porn. Empirically there's no way to tell -- we don't know whether systems like Cleanfeed in the UK have prevented people from accessing child pornography on purpose. Even if the providers are counting the number of blocked accesses to known child porn sites, nobody knows what people have been looking at instead through proxy sites like Anonymouse. All we can do is ask, logically, whether it is likely to work. I think purely logical arguments are frustrating when there is no empirical data to act as a referee, but let's face it: users are not going to self-report on their success at finding child pornography, and there's no way to see what users are accessing through encrypted circumvention sites. Logic is all we have.
So, consider people who are deliberately looking for child pornography. Such people are likely to be resourceful to begin with (since real child porn -- remember, non-sexual pictures of naked children do not count -- is vastly less common than regular porn; Cybertip claims after all that they "only" have about 800 sites on their list, compared to millions of regular porn sites). Virtually all such people would be aware of circumvention sites like Anonymouse, or of peer-to-peer networks, which Cybertip says they have no plans to block. So nothing is blocked from people who want to get around the filter.
The only scenario where the filters could make a difference is the case where someone accidentally accesses a child porn site. Now when I first read the Cybertip press release announcing that the filter would aim to stop "accidental" exposure to child porn, I thought that was just a tactfully sarcastic way of referring to the people who get caught accessing child porn and claim it was just a mistake. But Cybertip.ca claims they've received over 10,000 reports since January 2005 from people who accessed child porn by accident. Even though that works out to only about 15 per day, I have to concede that in those cases it almost certainly was a bona fide mistake, for the simple reason that nobody would voluntarily report a child pornography URL that they had visited on purpose. But even so, there's still the question: what have you accomplished by blocking accidental exposure?
I would argue that the harm done by child pornography is to the minors coerced into the production of it, not to the people who view it. (This, by the way, corresponds with current U.S. jurisprudence; the U.S. Supreme Court ruled in 2002 that a law banning fake child porn was unconstitutional, even when the viewer can't tell the difference.) Obviously you prevent the most damage by stopping child porn at the production stage, but if it's too late for that, you can try to stop people from obtaining it willfully. This lowers the demand and decreases the incentive for people to produce more in the future.
But how would it lower demand if you block people from accessing it accidentally? If those people weren't going to proceed to buy or download more pictures anyway, then they're not fueling the demand. You can block them from accessing the pictures, but the pictures are still out there, and the people who really are fueling the demand can still access them.
So it seems that by blocking someone from accidentally viewing child porn, all you've really accomplished is to avoid offending their sensibilities. Now I don't mean that mockingly, I'm certainly not disagreeing with anyone whose sensibilities are offended by child porn. But there are lots of graphic pictures on the Internet that could offend someone's sensibilities, which are outside of Cleanfeed's mandate. Consider a photo of a 16-year-old having sex, versus a photo of an adult woman fellating a horse; even though the former is illegal to possess and the latter isn't, I think most people would be more grossed out by the second one. (I would even argue that there was more harm to the participants in the making of the second one, and in this case the law's priorities are a bit screwed up. Poor horse!)
So, why block the 1% of content that would offend someone's sensibilities, when the other 99% that would offend that same person is still out there? The fact that the 1% is illegal doesn't answer the question; even if it's illegal, you don't have to block it, so what have you accomplished if you do?
Possibly law enforcement is sick of people using the "I accidentally clicked on it" excuse when they get caught accessing child pornography, and wants to remove that as a defense. But couldn't someone just as easily claim that they "accidentally" accessed child pornography through a circumvention site like Anonymouse? They could claim that they thought they were accessing a regular porn site, they were using a circumventor to protect their privacy, and they didn't know that the site carried child porn and didn't find out until they'd already accessed it. So it doesn't seem like the filtering would remove the "accidental" defense.
So, I don't think the filtering accomplishes much at all, but it could set a very bad precedent once the filters are in place. Once Internet users have accepted the precedent that ISPs should block content that is "probably" illegal, what's to stop organizations and lawmakers from demanding that ISPs block access to overseas sites that violate copyright, for example, as the RIAA did in 2002? The technical means will already be in place, and more importantly, people will have gotten used to the idea that legally "questionable" content should be blocked. And with lobbyists claiming that 90% of content on peer-to-peer networks violates copyright laws, wouldn't it follow logically to block peer-to-peer traffic as well?
In a legislative climate where lawmakers have proposed everything from jail time for p2p developers to letting the RIAA hack people's PCs for distributing copyrighted files, we should resist any kind of content-based blocking that would let them get their foot in the door. That includes even well-intentioned efforts like Cleanfeed.