
Google's Cache Ruled Fair Use 213

jbarr writes "An EFF article states that: 'A district court in Nevada has ruled that the Google Cache is a fair use ... the Google Cache feature does not violate copyright law.' Notable is the basis that 'The Google Cache qualifies for the DMCA's 512(b) caching 'safe harbor' for online service providers.'" From the article: "The district court found that Mr. Field 'attempted to manufacture a claim for copyright infringement against Google in hopes of making money from Google's standard [caching] practice.' Google responded that its Google Cache feature, which allows Google users to link to an archival copy of websites indexed by Google, does not violate copyright law."
This discussion has been archived. No new comments can be posted.

  • Archive.org (Score:1, Insightful)

    by wbechard ( 830613 ) on Thursday January 26, 2006 @04:25PM (#14571990)
    Google wasn't really copying so much as archiving the past... Look at Archive.org's Wayback Machine. Same principle.
  • by scharkalvin ( 72228 ) on Thursday January 26, 2006 @04:25PM (#14571994) Homepage
    Most browsers have a built-in cache. They don't violate copyright law,
    do they?
  • Good news (Score:4, Insightful)

    by Eightyford ( 893696 ) on Thursday January 26, 2006 @04:25PM (#14571996) Homepage
    This is good news for the Wayback Machine at archive.org. I think the case against Google Images, and especially Google Video, is a little stronger, however.
  • by gasmonso ( 929871 ) on Thursday January 26, 2006 @04:26PM (#14572001) Homepage

    Google's cache is oftentimes the only way to read an article posted here. It's also a good resource for people who are behind firewalls and can only pull up a cached version. It's a good win for Google.

    http://religiousfreaks.com/ [religiousfreaks.com]
  • Good judgement (Score:3, Insightful)

    by karolgajewski ( 515082 ) on Thursday January 26, 2006 @04:26PM (#14572009) Journal
    Finally, a frivolous lawsuit that got its just deserts. We can only hope that this will herald a new age, where insanely stupid lawsuits finally get the death they so rightly deserve.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Thursday January 26, 2006 @04:34PM (#14572125)
    Comment removed based on user account deletion
  • by Routerhead ( 944388 ) on Thursday January 26, 2006 @04:40PM (#14572197)
    It seems a stretch to argue that Google is providing this service for financial gain. For starters, when a user pulls up a Google cached page, they don't get Google's ads on that page. They get, as you noted, the actual site as it was when Google cached it, complete with that site's ads, if there are any.

    In addition, the cached version is never anything more than a poor substitute for the actual site. Text comes through, but much of the site's look and feel is lost. If Google wanted to hijack this site, as you seem to suggest, they'd want to incorporate the subordinate links and images as well.

    Finally, given that users tend to resort to the cached version only when the actual site itself is down, it is hard to argue that the site itself is taking a financial hit. The user can't get there in the first place.
  • by metternich ( 888601 ) on Thursday January 26, 2006 @04:45PM (#14572270)
    Browsers don't make their caches publicly available. You're comparing apples and oranges.
  • by schon ( 31600 ) on Thursday January 26, 2006 @04:58PM (#14572453)
    "Fair Use" is broadly supposed to have minimal to nil financial effects on the copyright holder

    Before we address this (false) assumption, here's what's happening:

    Copyright holder makes a web page available *FOR FREE* to the general public. Google caches it. Please explain how Google's cache financially hurts the copyright holder. Providing something *FOR FREE* that is available *FOR FREE* would seem to have a "nil financial effect on the copyright holders", no?

    Google's cache is basically a large-scale financial transfer from the copyright holders

    Sorry, WHAT ?!!??!

    Financial == monetary matters. I haven't checked in the past 5 minutes, but prior to that, someone visiting your (free, publicly accessible) website doesn't move money from your bank account to theirs.
  • by cfulmer ( 3166 ) on Thursday January 26, 2006 @05:05PM (#14572528) Journal
    Forget the fair use analysis; the most important thing here is the success of the "Implied License" claim. Basically, it goes like this: you operate a website. The web was created specifically with the idea that "robots" would crawl across it, and there is a standard, well-known way to prevent them from crawling your site. Even more specifically, there's another standard, well-known way to keep search engines from caching your content. Being on the web without using these techniques means that you give search engines permission to cache your content.

    It's sort of like what happens when you leave a potful of candy at your front-door on Oct. 31st. In theory, you could claim that all those kids who come to your door and help themselves are stealing. But, because everybody knows how Halloween works, you've implicitly given permission for them to do it.

    In this opinion, the Fair Use analysis was basically just used as a stopgap of "what little infringement that's left after you account for the implied license is a fair use." If the website had included a robots.txt file, the fair use case would have been much harder to make.

    The Implied License is a stake in the ground for "This is the Internet. The rules are different here." IMO, that's a good thing -- there are a bunch of things that just couldn't happen if you had to get explicit permission from every content owner.
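The two opt-out mechanisms the comment above refers to are plain-text conventions rather than APIs: the robots exclusion protocol for crawling, and a robots meta tag for caching. A minimal sketch (the path is illustrative; the effect depends entirely on each crawler honoring the directives):

```
# robots.txt, served from the site root — asks compliant crawlers
# to skip everything under /private/
User-agent: *
Disallow: /private/
```

To stay in the search index but opt a page out of the cached copy, a page can carry a robots meta tag in its HTML head; Google documents the noarchive value for exactly this purpose:

```html
<meta name="robots" content="noarchive">
```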
  • by rjonesx2 ( 947289 ) on Thursday January 26, 2006 @05:05PM (#14572534) Homepage
    The Google cache is absolutely ridiculous. As an individual who has had quite a bit of experience on both sides of the white hat / black hat search engine industry, the cache is NOT a webmaster's friend.

    1. The cache takes content control away from the author. For example, a site like EzineArticles.com prevents scraping by using an IP-blocking method based on the speed at which pages are spidered by that IP. It is absurdly easy to circumvent this by simply spidering the Google cache of the article instead of the site itself. Google's IP blocking is far less restrictive, and combined with the powerful search tool, it allows easy, anonymous contextual scraping of sites whose Terms of Service explicitly forbid it.

    2. The cache extends access to removed content, often for months if not years at a time. Google rarely purges 404 pages (perhaps because of its wish to have the largest number of indexed pages). I have clients with nearly 48,000 non-existent pages still cached in Google that have not existed in over 14 months. Despite using 404s, 301s, etc., these pages have not yet been removed. Furthermore, Google's frequent mishandling of robots.txt, nocache, and nofollow leaves webmasters who depend on search traffic hesitant to force removal of these pages using the supposedly standardized removal methods.

    3. The cache allows Google to serve site content anonymously. If you don't want the owner of a site to know you are looking at their goods (think of companies grepping their logs for competitor IPs), just read the cache instead.

    The list goes on and on. But I think the point is this...

    Why should a web author have to be technologically savvy to keep his or her content from being reproduced by a multi-billion-dollar US company? Content control used to be as simple as "you write it, it's yours." It got a little more complicated over time, to the point at which it might be useful to have, say, a Terms of Service. Even a novice could write "No duplication allowed without express consent." Now, a web author must know how to manipulate HTML meta tags and/or a robots.txt file.

    Fair use is for users, for people, not multi-billion dollar companies.
  • by pthisis ( 27352 ) on Thursday January 26, 2006 @05:09PM (#14572583) Homepage Journal
    "Fair Use" is broadly supposed to have minimal to nil financial effects on the copyright holder and in general the "fair user" is doing the using for personal reasons.

    This is not true. Ebert & Roeper can show a movie clip, do some scathing commentary, decrease the film's box office take, and make money in the process, and that's clearly fair use. Protecting negative criticism is one of the core philosophical reasons behind fair use (originally, anyway) and it clearly has the potential to have devastating financial effects on the copyright holder.
  • by prSpectiv2 ( 450950 ) on Thursday January 26, 2006 @05:32PM (#14572838) Homepage
    explain to me again why if this is fair use, why i can't make photocopies of every book in the bookstore

    I'd say let's not be ridiculous but it looks like we're far too late for that. Books are "Products," aka items sold in exchange for access to intellectual or artistic property. If you don't own the book you can't give others access to it in the first place.

    Maybe you didn't get the memo, but the ENTIRE INTERNET was designed to promote the free dissemination of publicly available information. Sites are not "Products," they are frameworks that allow access to content, which may or may not be free. When FREE CONTENT is sent unchanged from one computer to the next that means the web is working. If one party wants to restrict access to their content, they slap a password and encryption over it. It's common knowledge that this is how the web works: free until stated otherwise.

    Your argument boils down to restricting the redistribution of content that is explicitly intended to be freely sent through dozens of computer systems before it's burned onto my monitor. By your argument, google should not only have to remove its cache, but all links to content that owners might not want to be accessible at any given moment, which adds up to every page on the net.
  • by dasil003 ( 907363 ) on Thursday January 26, 2006 @05:33PM (#14572847) Homepage
    Allow me to quote your original message:

    Google's cache is basically a large-scale financial transfer from the copyright holders (who stand to benefit from the ads they serve and other interaction they get from end-users visiting their site) to Google, who benefits directly by keeping people longer on Google's site and thus, basically, shucks them more ads.

    While I don't think it's an open-and-shut case, you won't convince anyone by spouting such hyperbole.

    First of all, you're ignoring the fact that the main link goes to the actual website, and the only way to get to the cache is through a smaller link labeled "cache", which is unlikely to be clicked by someone who doesn't already know what it means. Furthermore, once they do click they get a very clear and prominent message in a frame stating that it is in fact a cache, and even explains what the cache is and how it works. So to say that it is a 'large-scale financial transfer' is just silly. At worst it's an extremely small-scale 'financial transfer', and that's a very dubious point to begin with.

    So then in your response you say:

    Nonsense. Google gains by drawing attention to itself and its other offerings. Let me ask you this simple question: if google doesn't gain financially, even if indirectly, why do they do it? Goodwill? Poppycock.

    The same could be said about search in general. They are profiting off of other people's content, because without it there'd be nothing to search. I'm sorry, but there's no evidence that the cache hurts websites financially. And there is certainly plenty of anecdotal evidence where it helps (e.g. a site goes down and customers go to the cache to get a phone number or contact email). Note that I'm not using that as support of Google's right to cache, merely as a counterargument against your baseless assertions.

    You are basically saying that GOOGLE has the right to display YOUR content as GOOGLE sees fit.

    Again with the hyperbole. No one believes Google has the right to display your content any way they want. What some people are arguing is that Google has the right to display your content as they actually do, which is quite reasonable and not deceptive in the least.

    In the google cache situation, the owner effectively loses that right.

    No, the page will disappear from Google once it is permanently down; it just takes some time. This may be a valid point about the Internet Archive, but then again, information that finds its way onto the internet anywhere is likely to stick around for a long time.

    In other news, explain to me again why if this is fair use, why i can't make photocopies of every book in the bookstore (for example, without photos) and offer them for free reading in my coffeeshop?

    Um, because books cost money?
  • by rewt66 ( 738525 ) on Thursday January 26, 2006 @05:50PM (#14573065)
    I'm sure you're right. I'm sure you're much smarter than the judge, and have a much better grasp of the law. What a pity that you weren't the one judging the case!

    Sorry, mumbles, but I don't buy it. My money's on the judge being right, and you being a loudmouth with too much time to post over and over, as you have in reply to everybody who argued with you.

    And to try to actually contribute something to the debate: So you don't like the "fair use" part of the decision. Fair enough; though as I said, between you and the judge, my money's on the judge as the one who correctly groks what fair use is all about. But that aside, what about the other three points? Most interestingly, what about the "implied license" point?

    And what about the judge's assessment that Field was "attempting to manufacture a suit against Google"? That is, this wasn't about actual injury to Field, it was about Field actively looking for a chance to sue someone? Does any of that matter to you?

  • by Anonymous Brave Guy ( 457657 ) on Thursday January 26, 2006 @05:55PM (#14573148)
    This is good news for the Wayback Machine at archive.org.

    I'm not sure about that. Although the result of this case seems fair and clearly indicated on several counts, there's a lot that might not apply to archives more generally, so I'm not sure how much of a precedent has been set.

    In particular, the case was brought by someone who practically admitted trying to set Google up: he knew about mechanisms like META tags and robots.txt, knew that Google was caching his site, made no attempt to stop them, and indeed actually set up robots.txt explicitly to allow bots to crawl his site. This supports Google's first two defences here, having an implied licence and estoppel.

    The most interesting discussion, IMHO, is on the fair use defence. The court considered in a lot of detail whether the use made by Google qualifies as fair use. On the first criterion (how the material is being used), it was found significant that the material was being used for different purposes in the cache than on the original site: the latter was presumed artistic, while the former allowed access to the material when the original site was down, historical comparisons of the site content, highlighting of search terms that made a page relevant to the user's search, etc. Hence the court concludes as follows:

    Because Google serves different and socially important purposes in offering access to copyrighted works through "Cached" links and does not merely supersede the objectives of the original creations, the Court concludes that Google's alleged copying and distribution of Field's Web pages containing copyrighted works was transformative.

    The court also noted that Google made no attempt to profit from the display of the material, did not attach advertisements, made clear that the copy could be out of date, and linked clearly to the original source. (I wonder whether that non-profit, no-ads observation will come back to kick Google later...)

    The other fair use discussion is less interesting, although the fact that the plaintiff had made his works available for free and not made any other attempt to profit from them was important, because this meant the market value of the original hadn't been damaged. One interesting tidbit is that apparently the SCOTUS has ruled that the fourth fair use factor (any damage to the market/value of the original work) can't be used to argue that the copyright holder could have licensed an otherwise fair use (such as the caching here) and thus the use can't be fair.

    Some of the DMCA defence stuff could have quite significant implications. In particular, the fact that Google caches material only for a fairly short time (14-20 days is mentioned) is relevant, since a prior ruling about Usenet servers could be used.

    In summary, Google would basically have won out on four different defences here, even without the fact that the original use might not qualify as direct copyright infringement (since the plaintiff went after the downloading done automatically in response to users; he didn't go after GoogleBot's initial copying process that caches the site on Google's system). It doesn't seem at all clear that a lot of the arguments would apply to other caching services, though: amongst other things, Google's cache in this case is temporary; known to the plaintiff, who had not tried to stop it and actually encouraged it; not for direct profit nor carrying any advertising; and clearly not damaging the market value of the original works.

  • by imthesponge ( 621107 ) on Thursday January 26, 2006 @07:28PM (#14574088)
    Or when sites provide content to Google's spiders but deny it to everyone else *cough*expertsexchange*cough*.

    http://www.experts-exchange.com/Storage/Q_21272116.html [experts-exchange.com]

    http://www.google.com/search?q=cache:GyTRLbavSIgJ:www.experts-exchange.com/Storage/Q_21272116.html&hl=en&gl=us&ct=clnk&cd=1 [google.com]

  • by Obfuscant ( 592200 ) on Thursday January 26, 2006 @07:50PM (#14574259)
    The web was created specifically with the idea that "robots" would crawl across it, ...

    Uhh, no. The web was created specifically with the idea that humans would crawl across it. It wasn't until the web grew beyond easy comprehension by humans that robots were created to crawl it.

    Were your statement correct, the robots.txt exclusion protocol would have been part of the CERN webserver documentation from day one. It wasn't. My web pages were up for a very long time before there were robots wandering the web.

    It's sort of like what happens when you leave a potful of candy at your front-door on Oct. 31st.

    Yes, I know what happens. Been there, done that. The first person to visit saw an entire pot full of candy and took it all. After I refilled the pot, thinking that the selfishness of the first visitor was an aberration, the second visitor took the entire pot full of candy, and the pot. Ethics are what you do when you think people aren't looking.

  • by trawg ( 308495 ) on Thursday January 26, 2006 @08:51PM (#14574763) Homepage
    Copyright holder makes a web page available *FOR FREE* to the general public. Google caches it. Please explain how Google's cache financially hurts the copyright holder. Providing something *FOR FREE* that is available *FOR FREE* would seem to have a "nil financial effect on the copyright holders", no?


    This is a really interesting topic, for me at least. I mirror a lot of files for Australian users on various websites over here. One thing I've found is that the vast majority of sites (at least the big ones) have a standard Terms and Conditions page that generally says something along the lines of "you are not allowed to reproduce any of the content on this page".

    I always respect this and don't mirror anything from any sites that have this in their T&Cs (without asking for permission first). Unfortunately this means that any Australian that wants to download something has to go through the international site and can't use a local mirror.

    Now, my reckoning is that it definitely doesn't cost these companies anything if I mirror their files and make them available. If anything, it is a) saving them money on bandwidth costs and b) increasing exposure of their products and services, as we always link back to the parent site.

    However, I assume that from their perspective, they don't want people mirroring or distributing awesome_application.exe because a malicious website owner could put up a trojaned, virus-infected, or otherwise bad version of their file. Users then cry and go to them for support, and they have to try to clean up the mess. We approached Microsoft to mirror their files for our users but were denied; while they never said this was the reason, I think it is a safe assumption.

    There are ways to limit these problems, like MD5 checksums or PGP signing, but I assume these are well out of the reach of regular users who just want to Click, Download, and Run.

    Anyway, a little off-topic for your post, but I thought it might be relevant as one of the downsides to allowing people wholesale access to mirror/reproduce/redistribute/copy your works.
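The checksum idea in the comment above can be scripted in a few lines. A hedged sketch in Python (function names are hypothetical; SHA-256 stands in for the MD5 the comment mentions, since MD5 is no longer considered collision-resistant):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in streaming chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KiB chunks so large mirrored files don't load into memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_digest(path, expected_hex):
    """True if the file's digest equals the one published by the original site."""
    return sha256_of(path) == expected_hex.lower()
```

A mirror would publish the digest alongside the file; the user re-computes it after downloading and compares the two strings.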
  • by reve ( 59221 ) on Thursday January 26, 2006 @09:08PM (#14574862)
    Okay, let's say someone is scraping your site. If they are scraping your site and redistributing the information, you have a very, very clear violation of copyright and I urge you to contact your lawyer.

    If it's just some guy scraping your site for his own edification, it's probably fair use -- regardless of your crazy terms of service that no one read.

    Ultimately, an LWP::Simple based "scraper" is just a specialized browser -- unless they're redistributing your work.

    Someone using Google to violate your rights is not Google's fault. It's an opportunity to ask for punitive damages.
  • by tv_dinners ( 938936 ) on Thursday January 26, 2006 @11:30PM (#14575687)
    ...since it means that I can now create a search engine that caches and displays Google's results.

    I wonder how long it would take for Google to cry foul over someone doing exactly what they do?
