Google's Cache Ruled Fair Use 213

jbarr writes "An EFF Article states that: 'A district court in Nevada has ruled that the Google Cache is a fair use ... the Google Cache feature does not violate copyright law.' Notable is the basis that 'The Google Cache qualifies for the DMCA's 512(b) caching 'safe harbor' for online service providers.'" From the article: "The district court found that Mr. Field 'attempted to manufacture a claim for copyright infringement against Google in hopes of making money from Google's standard [caching] practice.' Google responded that its Google Cache feature, which allows Google users to link to an archival copy of websites indexed by Google, does not violate copyright law."
  • by imoou ( 949576 ) on Thursday January 26, 2006 @04:24PM (#14571981) Homepage
    So if someone created a search engine which automatically, randomly and non-volitionally searches and caches MP3 files from websites which do not have "no archive" metatag, it's not breaking the law?

    When those searched websites disappeared, this search engine may still serve those cached MP3 files for archival purposes?
    • by Ninwa ( 583633 ) <jbleau@gmail.com> on Thursday January 26, 2006 @04:26PM (#14572007) Homepage Journal
      Maybe... but there's a difference. That difference is that the items cached were most likely already in violation of copyright law. Interesting though... and doesn't archive.org archive files? I know they've archived several small programs I've written that were linked on my site at one point in time.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Thursday January 26, 2006 @04:34PM (#14572125)
      Comment removed based on user account deletion
  • Most browsers have a built-in cache. They don't violate copyright law, do they?
  • Good news (Score:4, Insightful)

    by Eightyford ( 893696 ) on Thursday January 26, 2006 @04:25PM (#14571996) Homepage
    This is good news for the Wayback Machine at archive.org. I think the case against Google Images, and especially Google Video, is a little stronger, however.
    • I was going to say the same thing. Then I read the last sentence of TFA: "The decision is replete with interesting findings that could have important consequences for the search engine industry, the Internet Archive, the Google Library Project lawsuit, RSS republishing, and a host of other online activities." It should be interesting to see how far 512(b) goes.
    • I don't think Google Video caches videos from the WWW. I think it just allows copyright holders to upload their own videos to Google Video.

      Google Images is just a version of the normal Google SE that only brings up images. It doesn't do any more caching than Google does anyway (when caching both text and images), not that whether the content is stored as an image or text is relevant.

      • Google Images is just a version of the normal Google SE that only brings up images. It doesn't do any more caching than Google does anyway (when caching both text and images), not that whether the content is stored as an image or text is relevant.

        But Google Images serves up pictures without the advertisements that support the content creators. This is the same argument as PVRs and skipping commercials. Of course, an argument can be made for using a robots.txt file, but it could be argued that this shouldn
        • Re:Good news (Score:3, Informative)

          by T-Ranger ( 10520 )
          HTTP explicitly allows for caches - it's part of the protocol. Associated standards have been published to restrict what can be cached and indexed by, e.g., search engines.

          You can no more complain that something is caching, or indexing, pages you have published with HTTP than you can complain that someone is accessing them at all.
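The protocol-level caching controls this comment refers to are the `Cache-Control` response directives from the HTTP caching standards. A minimal sketch, assuming a publisher who wants to opt out of shared caching; the `is_cacheable` helper is purely illustrative, not part of any real cache:

```python
# Sketch: how an HTTP response opts out of caching via Cache-Control.
# The directive names are the standard HTTP ones; this helper is
# illustrative only, not a real cache implementation.

def is_cacheable(headers):
    """Return False when Cache-Control forbids a shared cache storing a copy."""
    cc = headers.get("Cache-Control", "").lower()
    directives = {d.strip() for d in cc.split(",") if d.strip()}
    return not ({"no-store", "private"} & directives)

# A publisher who does not want caches keeping a copy sends no-store;
# a response with no header at all is left cacheable by default rules.
print(is_cacheable({"Cache-Control": "no-store"}))  # False
print(is_cacheable({}))                             # True
```

A real cache also honours `max-age`, validators, and heuristics, but the opt-out mechanism itself is this simple.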
          • You can no more complain that something is caching, or indexing, pages you have published with HTTP than you can complain that someone is accessing them at all.

            I can complain about whatever I want, but that's not the point. Google Image Search is different than mere caching. It's like giving away taped television episodes but replacing the commercials with your own ads. I, personally, don't think Google's Image Search should be illegal, but that doesn't mean the issue is as clear-cut as you make
            • have you actually used GIS before?

              it gives you a page of tiny thumbnails which link back to a framed version of the source page and a link to the original image.
            • Actually, it's more like handing out the covers to the relevant DVDs and an index saying what shelf you can find them on.
      • Google Video allows anyone to upload, not just the copyright holder. Pretty sure Top Gear didn't upload this gem [google.com].
    • by Anonymous Brave Guy ( 457657 ) on Thursday January 26, 2006 @05:55PM (#14573148)
      This is good news for the Wayback Machine at archive.org.

      I'm not sure about that. Although the result of this case seems fair and clearly indicated on several counts, there's a lot that might not apply to archives more generally, so I'm not sure how much of a precedent has been set.

      In particular, the case was brought by someone who practically admitted trying to set Google up: he knew about mechanisms like META tags and robots.txt, knew that Google was caching his site, made no attempt to stop them, and indeed actually set up robots.txt explicitly to allow bots to crawl his site. This supports Google's first two defences here, having an implied licence and estoppel.

      The most interesting discussion, IMHO, is on the fair use defence. The court considered in a lot of detail whether the use made by Google qualifies as fair use. On the first factor (how the material is being used), it was found significant that the material was being used for different purposes in the cache than on the original site: the latter was presumed artistic, while the former allowed access to the material when the original site was down, historical comparisons of the site content, highlighting of search terms that made a page relevant to the user's search, etc. Hence the court concludes as follows:

      Because Google serves different and socially important purposes in offering access to copyrighted works through "Cached" links and does not merely supersede the objectives of the original creations, the Court concludes that Google's alleged copying and distribution of Field's Web pages containing copyrighted works was transformative.

      The court also noted that Google made no attempt to profit from the display of the material, did not attach advertisements, made clear that the copy could be out of date, and linked clearly to the original source. (I wonder whether that non-profit, no-ads observation will come back to kick Google later...)

      The other fair use discussion is less interesting, although the fact that the plaintiff had made his works available for free and not made any other attempt to profit from them was important, because this meant the market value of the original hadn't been damaged. One interesting tidbit is that apparently the SCOTUS has ruled that the fourth fair use factor (any damage to the market/value of the original work) can't be used to argue that the copyright holder could have licensed an otherwise fair use (such as the caching here) and thus the use can't be fair.

      Some of the DMCA defence stuff could have quite significant implications. In particular, the fact that Google caches material only for a fairly short time (14-20 days is mentioned) is relevant, since a prior ruling about Usenet servers could be used.

      In summary, Google would basically have won out on four different defences here, even without the fact that the original use might not qualify as direct copyright infringement (since the plaintiff went after the downloading done automatically in response to users; he didn't go after GoogleBot's initial copying process that caches the site on Google's system). It doesn't seem at all clear that a lot of the arguments would apply to other caching services, though: amongst other things, Google's cache in this case is temporary; known to the plaintiff, who had not tried to stop it and actually encouraged it; not for direct profit nor carrying any advertising; and clearly not damaging the market value of the original works.

  • by gasmonso ( 929871 ) on Thursday January 26, 2006 @04:26PM (#14572001) Homepage

    Google's cache is often the only way to read an article posted on here. It's also a good resource for people behind firewalls who can only pull up a cached version. It's a good win for Google.

    http://religiousfreaks.com/ [religiousfreaks.com]
  • Good judgement (Score:3, Insightful)

    by karolgajewski ( 515082 ) on Thursday January 26, 2006 @04:26PM (#14572009) Journal
    Finally, a frivolous lawsuit that got its just desserts. We can only hope that this will herald a new age, where the insanely stupid lawsuits finally meet the death they so rightly deserve.
    • "Finally, a frivolous lawsuit that got its just desserts. We can only hope that this will herald a new age, where the insanely stupid lawsuits finally meet the death they so rightly deserve."

      They could have said he was right, but then not awarded any monetary damages. This sets a bad precedent. Copyright used to be automatic. Now, if I don't put the right tag in my html, I forfeit copyright to search engines.
      • Now, if I don't put the right tag in my html, I forfeit copyright to search engines.

        How so? The search engine just invokes delay in the exposure of the data you already exposed publicly. It's an information-distribution detail for something you've already distributed without qualification. What's been forfeited? You can't control the other delays or other modifications involved in distribution, either. For example, people record all kinds of broadcast media for review at a later time that's conven
    • http://www.snopes.com/language/notthink/deserts.htm [snopes.com]

      Slashdot requires you to wait at least 15 seconds before blablabla :| *eyes watch*
  • by Jim in Buffalo ( 939861 ) on Thursday January 26, 2006 @04:26PM (#14572011)
    The judge then left the bench, walked over, and whacked the plaintiff and his counsel on the head with a salami.
  • by digitaldc ( 879047 ) * on Thursday January 26, 2006 @04:26PM (#14572013)
    avoid a lawsuit
  • by grungebox ( 578982 ) on Thursday January 26, 2006 @04:27PM (#14572024) Homepage
    So who has a link to the Google cache of the article?
  • by mumblestheclown ( 569987 ) on Thursday January 26, 2006 @04:31PM (#14572081)
    Intellectually, I don't like this ruling one bit. "Fair Use" is broadly supposed to have minimal to nil financial effects on the copyrightholder and in general the "fair user" is doing the using for personal reasons. Google's cache is basically a large-scale financial transfer from the copyrightholders (who serve to benefit from the ads they serve and other interaction they get from end-users visiting their site) to google, who benefits directly by keeping people longer on google's site and thus, basically, shucks them more ads. Remember folks, in terms of the cache here, we're referring to google's ability to serve content IN ITS ENTIRETY to end-users - we're not talking about those tiny snippets needed to make search engine results useful.

    Those of you who do the "yesbutNOCACHEtag" dance have got it backwards too: it's not the responsibility of the copyrightholder to sing to the tune of whatever the latest fad is. Rather, it's the other way around - google should convince people that it's in their interest to put a "CACHEME!" tag.

    • It seems a stretch to argue that Google is providing this service for financial gain. For starters, when a user pulls up a Google cached page, they don't get Google's ads on that page. They get, as you noted, the actual site as it was when Google cached it, complete with that site's ads, if there are any.

      In addition, the cached version is never anything more than a poor substitute for the actual site. Text comes through, but much of the site's look and feel is lost. If Google wanted to hijack this site,
      • It seems a stretch to argue that Google is providing this service for financial gain. For starters, when a user pulls up a Google cached page, they don't get Google's ads on that page.

        Nonsense. Google gains by drawing attention to itself and its other offerings. Let me ask you this simple question: if google doesn't gain financially, even if indirectly, why do they do it? Goodwill? Poppycock.

        They get, as you noted, the actual site as it was when Google cached it, complete with that site's ads, if t

        • The moment one decides to put something on the Internet, he loses a large chunk of control over that content. Caching is an inherent, and necessary, component of Internet technology. Searching as a whole does not work without it.

          Your original post was that the original site owner was entitled to relief because of lost financial gain (due to users viewing Google's ads rather than his own). You now present a new argument: content control. However, posting any content on the Internet entails a conscious
        • Well said. I can't believe that Google have been getting away with it for as long as they have and this ruling seems totally back to front.

        • explain to me again why, if this is fair use, i can't make photocopies of every book in the bookstore

          I'd say let's not be ridiculous but it looks like we're far too late for that. Books are "Products," aka items sold in exchange for access to intellectual or artistic property. If you don't own the book you can't give others access to it in the first place.

          Maybe you didn't get the memo, but the ENTIRE INTERNET was designed to promote the free dissemination of publicly available information. Sites are n
        • by dasil003 ( 907363 ) on Thursday January 26, 2006 @05:33PM (#14572847) Homepage
          Allow me to quote your original message:

          Google's cache is basically a large-scale financial transfer from the copyrightholders (who serve to benefit from the ads they serve and other interaction they get from end-users visiting their site) to google, who benefits directly by keeping people longer on google's site and thus, basically, shucks them more ads.

          While I don't think it's an open-and-shut case, you won't convince anyone by spouting such hyperbole.

          First of all, you're ignoring the fact that the main link goes to the actual website, and the only way to get to the cache is through a smaller link labeled "cache", which is unlikely to be clicked by someone who doesn't already know what it means. Furthermore, once they do click they get a very clear and prominent message in a frame stating that it is in fact a cache, and even explains what the cache is and how it works. So to say that it is a 'large-scale financial transfer' is just silly. At worst it's an extremely small-scale 'financial transfer', and that's a very dubious point to begin with.

          So then in your response you say:

          Nonsense. Google gains by drawing attention to itself and its other offerings. Let me ask you this simple question: if google doesn't gain financially, even if indirectly, why do they do it? Goodwill? Poppycock.

          The same could be said about search in general. They are profiting off of other people's content, because without it there'd be nothing to search. I'm sorry, but there's no evidence that the cache hurts websites financially. And there is certainly plenty of anecdotal evidence where it helps (e.g. site goes down, customers go to cache to get phone # or contact email). Note that I'm not using that as support of Google's right to cache, merely as a counterargument against your baseless assertions.

          You are basically saying that GOOGLE has the right to display YOUR content as GOOGLE sees fit.

          Again with the hyperbole. No one believes Google has the right to display your content any way they want. What some people are arguing is that Google has the right to display your content as they do, which is quite reasonable, and not deceptive in the least.

          In the google cache situation, the owner effectively loses that right.

          No, the page will disappear from Google once it is permanently down, it just takes some time. This may be a valid point about the internet archive, but then again information that finds its way on to the internet anywhere is likely to stick around for a long time.

          In other news, explain to me again why, if this is fair use, i can't make photocopies of every book in the bookstore (for example, without photos) and offer them for free reading in my coffeeshop?

          Um, because books cost money?
        • bah, your logic is flawed, and the copyright laws are outdated. Once you release something into the wild you lose control. The laws need to be updated to reflect the value of the source, the value of the information, and not the value of the copy.

          Claiming authorship of information is the only real grievance one can have. Distribution of copies should be a separate issue. Inherently owning the rights of distribution of information you publish is hogwash. Exclusivity of distribution is artificial value that dese
          • authorship and distribution rights are inextricably linked. otherwise, authorship has no value other than for the ego.

            You might think that copyright laws are outdated, but the judge should be bound by them. remember that bit about the role of the judicial branch?

    • by schon ( 31600 ) on Thursday January 26, 2006 @04:58PM (#14572453)
      "Fair Use" is broadly supposed to have minimal to nil financial effects on the copyrightholder

      Before we address this (false) assumption, here's what's happening:

      Copyright holder makes a web page available *FOR FREE* to the general public. Google caches it. Please explain how Google's cache financially hurts the copyright holder. Providing something *FOR FREE* that is available *FOR FREE* would seem to have a "nil financial effect on the copyright holders", no?

      Google's cache is basically a large-scale financial transfer from the copyrightholders

      Sorry, WHAT ?!!??!

      Financial == monetary matters. I haven't checked in the past 5 minutes, but prior to that, someone visiting your (free, publicly accessible) website doesn't move money from your bank account to theirs.
      • Copyright holder makes a web page available *FOR FREE* to the general public. Google caches it. Please explain how Google's cache financially hurts the copyright holder. Providing something *FOR FREE* that is available *FOR FREE* would seem to have a "nil financial effect on the copyright holders", no?

        This is a really interesting topic, for me at least. I mirror a lot of files for Australian users on various websites over here. One thing I've found is that the vast majority of sites (at least the big ones

    • by pthisis ( 27352 ) on Thursday January 26, 2006 @05:09PM (#14572583) Homepage Journal
      "Fair Use" is broadly supposed to have minimal to nil financial effects on the copyrightholder and in general the "fair user" is doing the using for personal reasons.

      This is not true. Ebert & Roeper can show a movie clip, do some scathing commentary, decrease the film's box office take, and make money in the process, and that's clearly fair use. Protecting negative criticism is one of the core philosophical reasons behind fair use (originally, anyway) and it clearly has the potential to have devastating financial effects on the copyright holder.
      • "This is not true. Ebert & Roeper can show a movie clip, do some scathing commentary, decrease the film's box office take, and make money in the process, and that's clearly fair use."

        you've misinterpreted the issue. Ebert and Roeper can decrease the box office's take by making a negative review. The basic balancing test required to ensure that a use is fair use says otherwise to what you state.

        I encourage you to read the following: Definition of Fair Use [auburn.edu] (pay special attention to points 3 and 4 ther

          • 1. That has little to do with the post I responded to, which claimed that "Fair Use" is broadly supposed to have minimal to nil financial effects on the copyrightholder and in general the "fair user" is doing the using for personal reasons.

          Maybe you can dance around and try to claim that actually the use itself didn't affect the film's box office take but the reviews did (though in many cases of parodies the fair use and the criticism are intimately connected). It's certainly not for personal reasons. And,
      • Ebert & Roeper can show a movie clip
        ...but they can't show the whole movie to point out which parts they (dis)like.
        • ...but they can't show the whole movie to point out which parts they (dis)like.

          Maybe, maybe not. Depends on how it's presented. There are complete critical copies of copyrighted works along the lines of Roland Barthes' S/Z that have been protected. While they incorporate literally the entire work, the amount of critical material vastly outweighs the amount of source material. Reproducing a painting along with criticism thereof is commonplace and sometimes protected. There is no bright-line standard for
    • One thing I've noticed about Google Cache is it still loads the original images from the original server, so the ads are still there for those people, and with the highlighted crap it's not efficient or pretty to look at the google cache most of the time as well.
      • One thing I've noticed about Google Cache is it still loads the original images from the original server, so the ads are still there for those people

        This is one of the ethical concerns I have for the whole Google Cache system. If the original site relies on advertising revenue, then caching it without generating the equivalent behaviour for the ads will be inherently damaging. Google's cache still hits the original site for ads, which might avoid this problem as long as the original site is up, but nega
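The observation that a cached page still pulls images (and therefore ads) from the origin server follows from how URL references resolve: relative references in the cached HTML resolve against the page's original base URL, not the cache's host. A minimal illustration; the URLs below are hypothetical:

```python
# Sketch: why a cached copy of a page can still hit the origin server for
# images. Relative references resolve against the original base URL.
# The URLs here are hypothetical examples.
from urllib.parse import urljoin

original_page = "http://example.com/articles/post.html"
image_ref = "/ads/banner.gif"  # as it appears in the page's HTML

# A browser rendering the cached copy (with the original base preserved)
# resolves the reference like this, so the request goes to example.com,
# not to the cache's host:
print(urljoin(original_page, image_ref))  # http://example.com/ads/banner.gif
```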

    • I'm sure you're right. I'm sure you're much smarter than the judge, and have a much better grasp of the law. What a pity that you weren't the one judging the case!

      Sorry, mumbles, but I don't buy it. My money's on the judge being right, and you being a loudmouth with too much time to post over and over, as you have in reply to everybody who argued with you.

      And to try to actually contribute something to the debate: So you don't like the "fair use" part of the decision. Fair enough; though as I said,

      • you're right. judges are always right and always make the smartest decisions. not only that, but all legal scholars agree with all decisions that judges make, because, again, judges are always right.

        I type faster than you. get over it. total time spent on this thread 10 minutes while doing other work. i happen to think it's an interesting topic needing debate - i will call bad arguments bad (and there have been several bad arguments here), but i am willing to listen to any good ones that come up.

        • Valid point. In fact, you could say it a lot stronger: Judges have handed down some pretty wacked-out decisions. Still, in the choice between some random judge and some random ./ poster...

          But, other than pointing out that my sarcasm may be misplaced, you didn't answer my questions at all.

          • And what about the judge's assessment that Field was "attempting to manufacture a suit against Google"? That is, this wasn't about actual injury to Field, it was about Field actively looking for a chance to sue someone? Does any of that matter to you?

            To me, this is like the Newdow case. In my view, Newdow is 100% right in that he wants the phrase "under God" struck from the pledge. However, his case was dismissed recently because he didn't have standing - something about his not having legal custody of

            • the guy might be a self-promoting fortune seeker in trying to sue google, but, well, isn't that a key component of our adversarial justice system anyway?

              No, the key component is the adversarial judicial system. Fortune seekers, "lawsuit as lottery", no, those most definitely are not a key component of our legal system. They're a bug, not a feature.

              That's partly why the standing rules are the way they are. If you are legitimately injured, you can sue. If you don't like what somebody is doing but it

    • If the kind of damage you're talking about were actually happening here, I'd agree with you, but note that the judgement relies on (among other things) Google not displaying ads with the cached page or otherwise profiting from it, the originals not generating any income for the copyright holder, and the fact that the plaintiff was well aware of the conventions that could be used to prevent his site being copied and in fact used robots.txt to request quite the opposite.

    • It's funny; ironically, the "opt-in" crowd here will shoot you down when it comes to Google. They may have a point, but at its core Google is a for-profit company making off-site replicas of content. That's a little bothersome. What's even more bothersome is that the cache ignores deleted pages. So if I take something down, it's still in the cache for a long time (forever?). So to truly delete something from the web you need to make a blank page. So now we have to jump through two hoops just for Google.

      If
    • Those of you who do the "yesbutNOCACHEtag" dance have got it backwards too: it's not the responsibility of the copyrightholder to sing to the tune of whatever the latest fad is.

      Ah, but it is. IANAL, but my understanding is that the copyrightholder is required to take steps to protect his work, by getting a copyright in the first place, for example. Also, trademark owners can lose trademark protection by not trying to prevent infringement. So I'd say the plaintiff's case is silly, given that a NOCACHE tag
      • but my understanding is that the copyrightholder is required to take steps to protect his work, by getting a copyright in the first place

        your understanding is wrong.

        Also, trademark owners can lose trademark protection by not trying to prevent infringement.

        Your understanding on trademarks is correct, but irrelevant. This story has no more to do with trademarks than it does with elephants.

  • by account_deleted ( 4530225 ) * on Thursday January 26, 2006 @04:35PM (#14572130)
    Comment removed based on user account deletion
    • Actually, that was Rep. Howard Coble*'s head exploding

      *(R-North Carolina) Chairman of the House Judiciary subcommittee on Intellectual Property & main sponsor of the DMCA
  • NOARCHIVE (Score:2, Interesting)

    by DarkClown ( 7673 )
    wonder if the guy bothered with a robots.txt or used the meta NOARCHIVE - not that actually preventing that was his intent.
    i don't mind the google cache at all; what drives me up a wall is what jeeves and other engines do with external pages by sticking them in a frame. so, if you put code in the page to force it out of frames, engines like yahoo penalize you (or drop you from the index entirely) for messing with the user navigation.....
    • I was going to say the same thing, then I saw your post. I contacted Google and asked them not to cache my pages years ago, and they said to add a meta tag and my pages would not be cached. It works. Why are these guys so into suing someone that they can't code their pages right?
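The meta tag being discussed is the robots NOARCHIVE directive, placed in a page's head. A minimal sketch of how a crawler might detect it; the tag itself is the de-facto standard search engines honour, while the small checker class here is only an illustration:

```python
# Sketch: detecting the NOARCHIVE robots meta tag that asks search engines
# not to serve a cached copy of a page. The tag is the de-facto standard;
# this checker class is an illustration, not any engine's real code.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Records whether a page carries a robots meta tag containing noarchive."""

    def __init__(self):
        super().__init__()
        self.noarchive = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noarchive" in (a.get("content") or "").lower():
                self.noarchive = True

page = '<html><head><meta name="robots" content="noarchive"></head><body></body></html>'
parser = RobotsMetaParser()
parser.feed(page)
print(parser.noarchive)  # True
```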
  • by cfulmer ( 3166 ) on Thursday January 26, 2006 @05:05PM (#14572528) Journal
    Forget the fair use analysis, the most important thing here is the success of the "Implied License" claim. Basically, it goes like this: You operate a website. The web was created specifically with the idea that "robots" would crawl across it, and there is a standard well-known way to prevent them from crawling your site. Even more specifically, there's another standard well-known way to keep search engines from caching your content. Being on the web but not using these techniques means that you give search engines permission to cache your content.

    It's sort of like what happens when you leave a potful of candy at your front-door on Oct. 31st. In theory, you could claim that all those kids who come to your door and help themselves are stealing. But, because everybody knows how Halloween works, you've implicitly given permission for them to do it.

    In this opinion, the Fair Use analysis was basically just used as a stopgap of "what little infringement that's left after you account for the implied license is a fair use." If the website had included a robots.txt file, the fair use case would have been much harder to make.

    The Implied License is a stake in the ground for "This is the Internet. The rules are different here." IMO, that's a good thing -- there are a bunch of things that just couldn't happen if you had to get explicit permission from every content owner.
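The "standard well-known way" the implied-license argument rests on is the robots.txt exclusion protocol. A minimal sketch using Python's standard library; the rules and URLs shown are hypothetical:

```python
# Sketch: the robots.txt exclusion mechanism, using Python's stdlib parser.
# The rules and URLs below are hypothetical examples.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

# A crawler that honours robots.txt checks before fetching each URL:
print(rp.can_fetch("Googlebot", "http://example.com/index.html"))  # True
print(rp.can_fetch("Googlebot", "http://example.com/private/x"))   # False
```

An empty robots.txt (or none at all) allows everything, which is the behaviour the implied-license reasoning treats as permission.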
    • Your summary is rather misleading. The court also knew that the plaintiff was well aware of those preventative mechanisms and opted not to use them. In fact, he deliberately set up robots.txt so that his content would be considered. The same might not be true at all for Ma and Pa AOL's family homepage.

      The Implied License is a stake in the ground for "This is the Internet. The rules are different here." IMO, that's a good thing -- there are a bunch of things that just couldn't happen if you had to get exp

    • The web was created specifically with the idea that "robots" would crawl across it, ...

      Uhh, no. The web was created specifically with the idea that humans would crawl across it. It wasn't until the web grew beyond easy comprehension by humans that robots were created to crawl it.

      Were your statement correct, the robots.txt exclusion protocol would have been part of the CERN webserver documentation from day one. It wasn't. My web pages were up for a very long time before there were robots wandering the web

  • by rjonesx2 ( 947289 ) on Thursday January 26, 2006 @05:05PM (#14572534) Homepage
    The Google cache is absolutely ridiculous. As an individual who has had quite a bit of experience on both sides of the white hat / black hat search engine industry, the cache is NOT a webmaster's friend.

    1. The cache takes content control away from the author. For example, a site like EzineArticles.com prevents scraping by using an IP blocking method based on the speed at which pages are spidered by that IP. It is absurdly easy to circumvent this by simply spidering the Google cache of that article instead of spidering the site. Google's IP blocking is far less restrictive, and combined with the powerful search tool, it allows for easy, anonymous contextual scraping of sites whose Terms of Service explicitly refuse it.

    2. The cache extends access to removed content, often for months if not years at a time. Google rarely replaces 404 pages (perhaps because of their wish to have the largest number of indexed pages). I have clients who have nearly 48,000 nonexistent pages still cached in Google that have not been present in over 14 months. Despite using 404s, 301s, etc., these pages have not yet been removed. Furthermore, Google's frequent mishandling of robots.txt, nocache, and nofollow leaves webmasters dependent upon search traffic hesitant to force removal of these pages using the supposedly standardized methods of removal.

    3. The cache allows Google to serve site content anonymously. Don't want the owner of a site to know you are looking at their goods (think of companies grepping for competitor IPs), just watch the cache instead.

    The list goes on and on. But I think the point is this...

    Why should a web author have to be technologically savvy to keep his or her content from being reproduced by a multi-billion dollar US company? Content control used to be as simple as "you write it, it's yours." It got a little more complicated over time, to the point where it might be useful to have a Terms of Service; even a novice could write "No duplication allowed without express consent." Now, a web author must know how to manipulate HTML meta tags and/or a robots.txt file.

    Fair use is for users, for people, not multi-billion dollar companies.
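    For reference, the opt-outs being argued about here are short -- the hard part is knowing they exist. A minimal sketch (the robots.txt rule keeps compliant crawlers out entirely; the noarchive value is the one Google honors to suppress its "Cached" link):

```text
# robots.txt -- placed at the site root; blocks all compliant spiders
User-agent: *
Disallow: /
```

```text
<!-- or, in a page's <head>: stay in the index, but out of the cache -->
<meta name="robots" content="noarchive">
```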
    • I think the commenter right above you has hit the nail on the head, though: there is a well-known, standard way to prevent these things from happening (robots.txt and meta tags), so if you choose not to use those tools, you're granting people an implicit license to index your stuff. As he said: if you leave a bowl of candy on the front lawn on the 31st of October, are you going to sue kids who help themselves to it? Probably not, and even if you did, you likely wouldn't win. Everybody knows that these things
    • Why shouldn't fair use apply to corporations? What's the cutoff point? Can someone quote someone else in a book? That book is making a multi-billion dollar corporation money - but that would be considered fair use.

      Or consider that the document isn't really reproduced until someone (an individual) requests the page, then it is being reproduced for that individual. Wow - that's too philosophical.

      Fair use applies just as much to old fashioned printed documents as it does to web documents. If you had a lin
    • Huh, and "fixing" those "problems" is good for me how?

      The google cache is very nice precisely because of the things you don't like. I want to screen scrape, see old stuff removed for no good reason, and visit sites anonymously. But that last part could be done without google cache anyway.

      As a user, I don't give a damn about your interest in keeping control, sorry.
      • You, with your liberal anything-goes views, contribute zero value to society by using the web. The GPP, with his desire to prevent certain detrimental activities, contributes content that is presumably of value to some people, even allowing for his wish to control certain behaviour. Which of you do you think the law should support here?

        • Huh? And using a heavily restricted site contributes to society how?

          Take slashdot for instance. Is my contribution any greater because I have to jump through hoops to make this post (slashdot bans my ISP's anonymous proxy)? I sent quite a lot of mail about it already, but it all seems to go to the bit bucket.

          In the end, I can still post, but as I can't remain logged in, I have to load each page twice, once for the page itself, the second time to change to nested mode.

          But back ontopic. Why are they detriment
          • Huh? And using a heavily restricted site contributes to society how?

            The things we're talking about are hardly heavily restricted. More to the point, the alternative isn't necessarily an unrestricted site; it could well be no site at all. This is the point a lot of people forget in their haste to tell us how information wants to be free, you can't stop distribution, etc: by default, only the author of the work has it. If society wants the author to share, it has to make it worth his while, or he'll simpl

    • 1. The cache takes content control away from the author.

      So does any browser. It's what HTML and HTTP were designed to do.

      2. The cache extends access to removed content, often for months if not years at a time.

      So does my personal copy of your site, thanks again to HTTP.

      3. The cache allows Google to serve site content anonymously.

      So does any proxy.

      If you want absolute control over your work, don't publish it.
  • by Godeke ( 32895 ) * on Thursday January 26, 2006 @05:11PM (#14572597)
    After reading the actual opinion granting summary judgment: if this same logic is applied to the scanning and offering of search on "real world" materials, Google may be able to withstand lawsuits over the book-scanning effort quite well. There are some differences that could produce a different result, but this outcome was 100% favorable to Google, and the idea of indexing and caching materials to allow such search and reference was solidly defended by the judge.
  • by LesPaul75 ( 571752 ) on Thursday January 26, 2006 @05:18PM (#14572661) Journal
    Ok, no more excuses Slashdot... It's time to start caching pages and preventing the Slashdot effect.
    • Maybe this ruling will encourage our fearless leader CmdrTaco to move the /. caching discussion higher up on his list of things to talk about.

      However, imagine the word "Slashdotted" fading into the past, while a new overload rises up in its place: Digg

      "You got Digged"

      Is that something we really want to happen?
  • Web caches too! (Score:2, Interesting)

    What about web caches? They violate copyright law too! My Firefox does it too. Should I use 0 Mbytes of disk space for browser caching?
  • by Kazoo the Clown ( 644526 ) on Thursday January 26, 2006 @05:31PM (#14572821)
    By now someone must have created a search engine that only indexes sites whose robots.txt tells it not to. I'm surprised I haven't heard of one. Bet it would raise a few hackles, though...
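    A contrarian engine like that could reuse the standard robots.txt parser and simply invert the check. A minimal sketch in Python -- the bot name and the robots.txt content are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

def is_disallowed(robots_txt: str, url: str, agent: str = "ContraBot") -> bool:
    """Return True when the given robots.txt forbids `agent` from fetching `url`."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    # A normal crawler skips disallowed URLs; the contrarian engine keeps them.
    return not rp.can_fetch(agent, url)

# A site that asks every spider to stay out...
SHY_SITE = "User-agent: *\nDisallow: /\n"
# ...is exactly what the contrarian engine would index.
print(is_disallowed(SHY_SITE, "http://example.com/secret.html"))  # True
```

    (urllib.robotparser ships with CPython; a real engine would of course fetch each site's /robots.txt rather than take it as a string.)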
  • What a pity that Chinese citizens will never know if the archived material discusses Tibet or Tiananmen Square. But that's alright, greed trumps decency and ethics anyway. Oh, and I have every expectation that the Google apologists will mod me flamebait, so go on, I've got karma to burn and a deep abiding hatred of evil corporations that aid tyrants that are scared of simple words.
  • So if I use Google's cache to locate torrents for say I don't know ... King Kong (2005) ... I'm all good?
  • What did you expect from a guy who is also a lawyer? Shakespeare was right 500 years ago, and it hasn't changed yet.
  • Field sued Google after the search engine automatically copied and cached a story he posted on his website.

    And so did everybody else's browser that ever visited that site. I'm sure he'll want to sue us all next.

  • The idea just got legitimized. Sure, let appeals pass to solidify the ruling, and perhaps get some loyal slashdotter lawyer to do a cheap verification on some disclaimer/license. A nice archive of the past few days' stories and their links would be VERY nice.

    Mirrordot [mirrordot.org] and company do a decent job, but too often they don't cache enough (like Pages 2-5 of a story), and having it official would be great for users and would-be slashdotting victims. ... though this does bring potential advertising revenue int

What is research but a blind date with knowledge? -- Will Harvey
