Lauren Weinstein writes:

Recent claims by some (mostly nontechnical) observers that it would be "simple" for services like YouTube to automatically block "terrorist" videos, in the manner that major services currently detect child porn images, are nonsensical.

One major difference: those still images are detected via data "fingerprinting" techniques that are quite effective for known still images compared against a database of known material, but are relatively useless outside the realm of still images, especially for videos of varied origins that uploaders routinely manipulate specifically to avoid detection. Two completely different worlds.

So are there practical ways to at least help limit the worst of the violent videos, the ones that most directly portray, promote, and incite terrorism or other violent acts? I believe there are.
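As a rough illustration of the distinction above, here is a toy perceptual "difference hash" sketch. It is not any service's actual algorithm, and the pixel data is invented; it only shows why hash matching catches near-identical copies of a known image (small re-encoding changes barely move the fingerprint) yet fails once content is heavily manipulated, as uploaded videos typically are.

```python
# Toy perceptual fingerprint (dHash-style): one bit per horizontally
# adjacent pixel pair. Illustrative only; real systems differ.
# An "image" here is just a 2D list of grayscale values.

def dhash(pixels):
    """Return a bit list: 1 where the left pixel is darker than the right."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Invented 4x5 "image", a re-encoded near-copy, and a heavy edit.
original = [[10, 20, 30, 40, 50],
            [50, 40, 30, 20, 10],
            [ 5, 15, 25, 35, 45],
            [90, 80, 70, 60, 50]]
recompressed = [[p + 2 for p in row] for row in original]  # mild brightness shift
mirrored = [row[::-1] for row in original]                 # heavy manipulation

h0 = dhash(original)
print(hamming(h0, dhash(recompressed)))  # near-duplicate: distance 0, still matches
print(hamming(h0, dhash(mirrored)))      # manipulated: distance 16, no match
```

A database lookup with a small Hamming-distance threshold thus matches known stills reliably, while deliberate manipulation of video frames pushes fingerprints far outside that threshold.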