Google Loses Cache-Copyright Lawsuit in Belgium 340

acroyear writes "A court in Belgium has found that Google's website caching policies are a violation of that nation's copyright laws. The finding is that Google's cache offers effectively free access to articles that, while free initially, are archived and charged for via subscriptions. Google claims that they only store short extracts, but the court determined that's still a violation. From the court's ruling: 'It would be up to copyright owners to get in touch with Google by e-mail to complain if the site was posting content that belonged to them. Google would then have 24 hours to withdraw the content or face a daily fine of 1,000 euros ($1,295 U.S.).'"
This discussion has been archived. No new comments can be posted.

  • Ridiculous (Score:5, Insightful)

    by brunes69 ( 86786 ) <[slashdot] [at] [keirstead.org]> on Tuesday February 13, 2007 @10:59AM (#17996998)
    If you can't cache content, then you can't search it.

    You have to copy content to your local machine to index it, and to be able to select results with context. Hell, you have to copy it to *VIEW* it.

    The courts and the law need to wake up and realize you can't do anything with a computer without copying data a dozen times. 25% or more of what your computer does is copy things from one place (network, hard drive, memory, external media) to another.
  • Re:$1,295 per day? (Score:3, Insightful)

    by ceejayoz ( 567949 ) <cj@ceejayoz.com> on Tuesday February 13, 2007 @10:59AM (#17997006) Homepage Journal
    I suspect that's per-site, though.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday February 13, 2007 @11:01AM (#17997036)
    Comment removed based on user account deletion
  • Re:Ridiculous (Score:5, Insightful)

    by aussie_a ( 778472 ) on Tuesday February 13, 2007 @11:02AM (#17997052) Journal
    There's a difference between keeping a local copy and distributing it.
  • by Mr. Underbridge ( 666784 ) on Tuesday February 13, 2007 @11:03AM (#17997058)
    If I'm Google, I turn the morons off and see how fast they come screaming back when their ad revenue plummets. Seriously, IT'S FREE FREAKING ADVERTISING. Google should be charging *them*.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday February 13, 2007 @11:05AM (#17997090)
    Comment removed based on user account deletion
  • by kimvette ( 919543 ) on Tuesday February 13, 2007 @11:07AM (#17997118) Homepage Journal
    Personal Responsibility

    Google caching is a free service which is optional. Web site owners have total control over it. Note the following:

    <META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">


    If this is in place the site does not get cached.
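
    For search-engine caches specifically, there is also the robots meta tag, which Google documents for keeping a page out of its cache; a minimal example, placed in the page's <head>:

    <META NAME="ROBOTS" CONTENT="NOARCHIVE">

    With that in place the page can still be crawled and indexed, but no cached copy is offered to searchers.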

    I hope Google is responding to such frivolous complaints and lawsuits by completely removing those sites from their index. If they do not remove those companies, they are doing evil through omission by allowing companies that do evil to remain in business.
  • Public Domain (Score:1, Insightful)

    by C_Kode ( 102755 ) on Tuesday February 13, 2007 @11:12AM (#17997164) Journal
    The finding is that Google's cache offers effectively free access to articles that, while free initially, are archived and charged for via subscriptions.

    The way I see it, once you release media free of charge to the general public, its content becomes public domain.
  • More stupidity (Score:1, Insightful)

    by Anonymous Coward on Tuesday February 13, 2007 @11:14AM (#17997206)
    If a publisher doesn't want their page cached, there are technical measures they can and should take. The legal system isn't a crutch for idiots who can't tie their own shoelaces or wipe their own assholes. If an organization lacks the technical proficiency to publish on the web, they should stop publishing on the web. Search engine caches are an important and useful feature that is being ruined for everyone because some stupid twat sees a payoff from Google.

  • by Anonymous Coward on Tuesday February 13, 2007 @11:15AM (#17997214)
    If you don't want it cached, then don't make it publicly available on your website.

    If you must make it publicly available on your website, then don't complain when it gets cached.

    If your business model requires that everyone else in the world do absurd things that don't make sense (like fail to cache and redistribute publicly available information when the cost to do so is virtually zero), then go find a better business model.

    Our laws should not make us pretend that reality is other than it is, or that the technological landscape has failed to take on a new shape.

    Current copyright law is producing these sorts of absurd contradictions. The law, not the basic principles of human behavior, should be changed.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday February 13, 2007 @11:15AM (#17997218)
    Comment removed based on user account deletion
  • by suv4x4 ( 956391 ) on Tuesday February 13, 2007 @11:21AM (#17997302)
    If they don't like it, they can very easily "opt out" by using robots.txt to disallow Googlebot. I fail to see where the problem is here.

    Problem is.... newspapers wanna have their cake and eat it too.
    Solution.... it's Google's fault.
    Result.... news dinosaurs go extinct and news mammals come to rule Earth.
    Moral.... don't be greedy beyond survival.
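
    For reference, the opt-out really is that small: a minimal sketch of a robots.txt, served from the site root, that keeps Google's crawler out of everything (directives per the de facto robots exclusion standard; example.com is a stand-in):

    # http://www.example.com/robots.txt
    # Keep Google's crawler out of the entire site
    User-agent: Googlebot
    Disallow: /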
  • Extend robots.txt? (Score:4, Insightful)

    by 140Mandak262Jamuna ( 970587 ) on Tuesday February 13, 2007 @11:24AM (#17997362) Journal
    Can't Google propose an extension to the robots.txt file format that would allow the original publishers to set a time limit after which search engines should expire the cache?
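
    Something like the following, perhaps -- note that the Cache-expires directive below is purely hypothetical, invented here for illustration; no such field exists in the robots exclusion standard:

    User-agent: Googlebot
    Disallow:
    # Hypothetical extension: ask engines to drop cached copies after 30 days
    Cache-expires: 30d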
  • by Anonymous Coward on Tuesday February 13, 2007 @11:25AM (#17997378)
    Well, if the rightsholders don't want people/robots to access their "jewels", then maybe they shouldn't fucking publish them on a public net in the first place?
  • Really? (Score:5, Insightful)

    by gillbates ( 106458 ) on Tuesday February 13, 2007 @11:27AM (#17997400) Homepage Journal

    If that is true, then why do I see copyright statements at the beginning of books and DVDs? It would seem the publishers are being hypocritical - they post their content publicly, refuse to use the robots.txt file, and then go on a litigation rampage when someone actually makes use of their web site. They're little different than the kid who takes his ball and goes home when he starts losing the game.

    Furthermore, I would argue that posting to a web page is implied permission because the owners do so expecting their work to be copied to personal computers. In an interesting turn of events, private individuals are allowed to copy and archive web pages, but Google is not.

  • by 91degrees ( 207121 ) on Tuesday February 13, 2007 @11:29AM (#17997442) Journal
    It's basically about established practice. We've pretty much established right and wrong when copying a book. As a rule, you don't do it. In many countries, libraries and schools have a licencing agreement that allows photocopying. With TV shows it's considered perfectly acceptable to copy an entire show. Audio mix tapes are usually considered acceptable or explicitly legal.

    On the web, caching search engines have been around far longer than expiring content. It's established that search engines are a necessity, and that robots.txt is the way to opt out. When you do business in a new arena, it makes sense that the existing rules of the arena should apply.
  • by pinky99 ( 741036 ) on Tuesday February 13, 2007 @11:34AM (#17997510)
    Wow, I didn't notice that the EU was conquered by Belgium overnight...
  • Just Pull Out (Score:5, Insightful)

    by Nom du Keyboard ( 633989 ) on Tuesday February 13, 2007 @11:37AM (#17997536)
    Google ought to just pull out of indexing anyone who complains about their methods. You effectively disappear off the Internet w/o Google, and these whiny complainers deserve exactly that. Maybe after they've lived in a black hole for a while they'll realize the benefit of having their free material easy for web users to find and view.
  • Caching is Copying (Score:3, Insightful)

    by Nom du Keyboard ( 633989 ) on Tuesday February 13, 2007 @11:40AM (#17997586)
    If caching is copying, then every user who isn't watching a streaming feed -- which isn't the way text and single-image pages are rendered -- is guilty of copyright infringement every time they view a page. Your browser makes a copy of the page on your own hard drive. Watch out!! Here come the lawyers now.
  • by inviolet ( 797804 ) <slashdot@@@ideasmatter...org> on Tuesday February 13, 2007 @11:42AM (#17997632) Journal

    Good answer.

    This ruling doesn't significantly hurt Google. Alas, it hurts everyone else instead -- all billion or so of Google's users. Having quick access to (at least a chunk of) a piece of content, especially when that content has expired or is temporarily unreachable, is convenient and valuable. Many times in my own searches, the piece of data I anxiously sought was available only in the cache.

    Let's hope that Google does not respond to the ruling by reducing or removing the cache feature across the board.

  • by McDutchie ( 151611 ) on Tuesday February 13, 2007 @11:49AM (#17997744) Homepage
    Google offers free access to a complete cached copy of your site by default. You should not have to opt out of having your copyright violated, any more than you should have to opt out of getting spammed, getting mugged in the street, etc. That is turning the world upside down. The violator should not have committed the violation to begin with. Offering complete cached/archived copies of websites should only happen with explicit permission.
  • by scorpionsoft.be ( 994417 ) on Tuesday February 13, 2007 @11:58AM (#17997902)
    Well, in this country, you don't win in court because you have 100 good lawyers
  • Re:Public Domain (Score:3, Insightful)

    by grimwell ( 141031 ) on Tuesday February 13, 2007 @12:00PM (#17997934)

    The way I see it, once you release media free of charge to the general public, its content becomes public domain.


    Wouldn't that undermine the GPL? If the Linux kernel were in the public domain, companies could use it freely without having to give back.

    Or what about street-performers performing their own material?
  • by jandrese ( 485 ) <kensama@vt.edu> on Tuesday February 13, 2007 @12:02PM (#17997954) Homepage Journal
    Which is not only completely impractical (very few sites would set the "cacheme" flag because almost nobody would know about it), but counter to the way the internet works. By default you have to assume that anything you post on the internet will be tracked by search engines, blogged about, cached, etc... That happens to _everything_ on the internet; it's the nature of the beast. That's also why the internet works so well. If you want to make your page behave differently than all of the other pages on the internet, then you need to look into setting some very easy-to-use flags (robots.txt and the meta tags listed above) to change the behavior. You can't assume that just because it's yours it will be treated specially. If you're really worried about it then don't post on the internet, plain and simple.
  • Re:Ridiculous (Score:5, Insightful)

    by jandrese ( 485 ) <kensama@vt.edu> on Tuesday February 13, 2007 @12:06PM (#17998022) Homepage Journal
    So the answer is obvious: Just delist these guys from Google entirely and configure the webcrawler to ignore them. Problem solved, and you won't have to worry about them coming back later and claiming that your locally stored copy is also a copyright violation.
  • by nstlgc ( 945418 ) on Tuesday February 13, 2007 @12:08PM (#17998056)
    Being the devil's advocate:

    Spam is a free service which is optional. Email address owners have total control over it. Use the unsubscribe link at the bottom of the email.

    Assuming those unsubscribe links worked (we all know they don't), would you consider this a logical way of thinking? If tomorrow some other caching company comes along and introduces another way in which website owners have 'total control', will that clear them of copyright violation? What if I want my content to be cached on proxies, but I don't want it accessible from a massive, publicly accessible and searchable cache?

    Personal opinion:

    To be honest, I don't think Google needs to stop caching anything automatically. The ruling states copyright owners need to contact Google and Google needs to respond by taking the content offline within 24 hours. That doesn't seem completely impossible to do, and that way they can keep caching those who don't contact them.
  • by PeterBrett ( 780946 ) on Tuesday February 13, 2007 @12:15PM (#17998202) Homepage

    If I'm Google, I turn the morons off and see how fast they come screaming back when their ad revenue plummets. Seriously, IT'S FREE FREAKING ADVERTISING. Google should be charging *them*.
    You suck at teh internets. This is about the "google cache" link supplied on Google's search results page.

    No, he makes a good point. If someone files a lawsuit against Google, all Google would have to do to stop them would be to suspend their site from all indexing and search results. There's no God-given right to be indexed by a search engine. Bad analogy; imagine you sell hot meaty pies, and some random guy walks around the town carrying a board with the words, "Eat Anonymous Coward's Hot Meaty Pies Today!!!". Now imagine that guy does it for free. Suing Google is somewhat like taking the guy to court because "Anonymous Coward" is your trademark and he didn't pay for a license to use it.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday February 13, 2007 @12:16PM (#17998214) Homepage Journal

    If caching is copying, than every user who isn't watching a streaming feed -- which isn't the way text and single image pages are rendered -- is guilty of copyright infringement every time they view a page.

    I have news for you. When you stream, your browser makes a local copy of portions of the stream, decodes them, and displays them.

    If sampling is illegal (without permission), then clearly copying a portion of a video stream without permission would be illegal. However, since you can give permission to anyone you like, there's no crime being committed, as making a stream publicly available is granting permission.

  • Re:Public Domain (Score:4, Insightful)

    by kramer ( 19951 ) on Tuesday February 13, 2007 @12:28PM (#17998404) Homepage
    The way I see it, once you release media free of charge to the general public, its content becomes public domain.

    Then perhaps it's good that the rest of the world doesn't see it the way you do.

    Because if the world were the way you see it, the entire web content industry would immediately go pay-per-view or subscription-only to avoid all their work becoming public domain. Yes, what you propose would literally destroy the useful and open environment of the Internet.

    Servers, bandwidth, and writers don't pay for themselves. If these sites can be copied wholesale and put up elsewhere without the original author having a say in the matter, you've just destroyed any monetary incentive to create. Much as many people like to think otherwise, money is important, and a strong incentive to create.

  • by poot_rootbeer ( 188613 ) on Tuesday February 13, 2007 @12:29PM (#17998412)

    "Abstract" and "extract" are not interchangeable terms.

    An abstract is a meta-description of a document, giving an overview of its content but usually not using any of the document content itself. An extract, on the other hand, is a literal subset of the document.

  • Re:Ridiculous (Score:2, Insightful)

    by Xichekolas ( 908635 ) on Tuesday February 13, 2007 @12:29PM (#17998416)
    Then the crybabies would sue claiming that Google was unfairly censoring their content... at which point some retard would equate that with violating 'net neutrality' and suddenly Congress would be involved... and if I have learned anything in my short 23 years on this planet, it's that when Congress gets involved in anything, it takes 10 years, $10 billion, and 10,000 pages of law to resolve.
  • Well, in this country, you don't win in court because you have 100 good lawyers

    Yeah, you have to bribe your way to a victory just like everywhere else!

  • by MightyYar ( 622222 ) on Tuesday February 13, 2007 @12:58PM (#17998850)
    Google gets permission, at least for the initial copy: when their Googlebot sends an HTTP GET request to the copyright holder's server, that server either makes a copy and sends it to Google or denies the request.

    You are right that determining what is moral is subjective. However, I will point out that most people would probably not envision that their moral framework would change with time. That is, someone opposed to human slavery would presumably find the behavior repugnant whether it was done by people in the 18th century or in the present day. Someone opposed to abortion would not find it permissible just because it was legal somewhere, or at another time. Apply this to copyright, and things fall apart. Is it okay to copy something after 30 years if it was published in the 1800s? But now it's 90 years? Why was it okay to copy something after 30 years then, but 90 years now? Am I morally corrupt if I still use the old 30-year rule? Or am I corrupt simply because I broke the law, irrespective of what the law actually says? My point is that copyright changes constantly over time and varies from country to country. It is impossible to have a consistent moral view if you include copyright - its basis has nothing to do with morals.
  • robots.txt (Score:4, Insightful)

    by Skadet ( 528657 ) on Tuesday February 13, 2007 @01:14PM (#17999090) Homepage
    Isn't this what robots.txt is for?
  • by drawfour ( 791912 ) on Tuesday February 13, 2007 @01:41PM (#17999562)
    Plus they actually authorized Google (and anyone else) to get the local copy.

    Google: Hey, what's that page? Can I see? (HTTP GET)
    Them:   Sure, here you go! (200 OK HTTP response)
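
    Spelled out at the wire level, the exchange looks roughly like this (the URL, host, and headers are illustrative stand-ins, not anyone's real site):

    GET /article.html HTTP/1.1
    Host: www.example.com
    User-Agent: Googlebot/2.1 (+http://www.google.com/bot.html)

    HTTP/1.1 200 OK
    Content-Type: text/html

    ...page body follows, freely handed over for indexing...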

  • by Kadin2048 ( 468275 ) <slashdot.kadin@xox y . net> on Tuesday February 13, 2007 @01:52PM (#17999762) Homepage Journal
    Well if they want to be assholes about it, why not just drop them off of the database completely?

    It seems to me that Google is in a good position now to offer a deal to sites; they can either agree to be crawled, and thus end up in a cache for 30 days or whatever, or they can just not end up in the index at all. Their option.

    Get rid of the "oh we want to be in the index and get traffic, but not be cached" option, which is basically web sites wanting to have their cake and eat it too.

    I think these sites have an inflated opinion of their own relevance to the world. They can sue Google, but Google can effectively remove them from the Internet, at least as far as 70-90% [skrenta.com] (depending on who's doing the counting) of users are concerned.
  • by MightyYar ( 622222 ) on Tuesday February 13, 2007 @01:53PM (#17999798)

    Getting a book from a library or buying it in a shop or indeed if Penguin Publishing gives you a copy of the book it does not grant you the right to republish the text.
    Agreed.

    I think that the problem is that copyright law is largely based on physical media, and electronic distribution is a headache for the courts to sort out. For instance, with a book there is very little problem in just saying "Don't make a copy." You can use a book without making a copy. Electronic distribution is different - several copies are needed to make the information usable. Let's use the scenario where you download an ebook while sitting in Starbucks. Starting with the copyright holder's server, you have copies made by internet routers on the way to the Starbucks. These are commercial copies - the routers are routing data for money, not for personal fair use. Many routers will cache popular data to cut down on bandwidth. Then you have the T-Mobile access point that you are hooked into at Starbucks - another commercial service. You make a copy on your local hard drive, and then another copy to your screen so that you can read this ebook.

    So no one seems to be attacking the idea of caching copyrighted material for purposes of making money - it is done for network efficiency all the time. So what line has Google crossed? They make the cache directly accessible, so that it is obvious that you are looking at the cache. That's kind of an odd line, though, because it implies the court would have been fine with a Google-like service as long as the caching stayed invisible to the end user.

    Another problem is archival. In the physical media world, there is little danger of our culture disappearing. A book can be archived indefinitely. So can a DVD or CD. Web sites, on the other hand, are by their nature transient. We risk losing a historical record of our culture if we disallow caching. We would need a new law to allow this, unfortunately.

    Copyright law is fun!
  • Re:Public Domain (Score:3, Insightful)

    by geekoid ( 135745 ) <dadinportland&yahoo,com> on Tuesday February 13, 2007 @02:10PM (#18000088) Homepage Journal
    No, it wouldn't destroy it, it would certainly change it.

    Copyright has destroyed more than it has helped. I refer to what was happening before the Revolutionary War.
    This effect was curtailed by the 14-year limitation, but now that there isn't a real expiration date on copyright* it is happening again. Corporations are getting so much power that they are controlling culture.

    Now, I don't agree with the original post about public domain, because by his logic every book in a book store is public domain. I also believe a limited copyright is a good thing (14 years, with a 6-year extension). But if it came down to no copyright or unlimited copyright, I'd choose no copyright.

    *Yes, there is a limit, but for all intents and purposes it's meaningless.

  • by mrchaotica ( 681592 ) * on Tuesday February 13, 2007 @02:32PM (#18000426)

    Why should we have to opt out from being cached, why can't we opt in instead?

    You did "opt in," by broadcasting your shit on the Internet in the first place!

    Don't like it? Don't upload it! Why is that simple concept so fucking hard to understand?!

    I mean, jeez -- don't you realize that what you're saying is equivalent to yelling in my ear and then complaining that I heard you?

  • by Chryana ( 708485 ) on Tuesday February 13, 2007 @02:41PM (#18000592)

    Why should we have to opt out from being cached, why can't we opt in instead?
    Because you already "opt in" when you publish a web page. Most content providers are very happy to be indexed by Google. Why should the majority suffer because of a few fools who don't know what is best for them?

    I think the phone calls made by marketers are a perfect example of this.
    No, it's actually a terrible analogy. Marketers are not providing a service, like Google is. (You agree that indexing services are necessary on the Internet, right?) The vast majority of websites want to be indexed by Google. No one wants the "service" provided by telemarketers.
  • Re:Really? (Score:4, Insightful)

    by Anonymous Brave Guy ( 457657 ) on Tuesday February 13, 2007 @03:44PM (#18001594)

    Furthermore, I would argue that posting to a web page is implied permission because the owners do so expecting their work to be copied to personal computers.

    But this isn't just copying to a personal computer, it's copying and redistributing in a modified form while passing on some of the expense to the original host site and concealing information that the original host site would otherwise have received.

    In an interesting turn of events, private individuals are allowed to copy and archive web pages, but Google is not.

    Individuals aren't, in general, allowed to redistribute entire works subject to others' copyright either.

    As an aside, I also don't have a problem with a commercial corporation not automatically having the same rights as a private citizen. The world would be a better place if more legal systems understood that they are not the same.

  • by MyLongNickName ( 822545 ) on Tuesday February 13, 2007 @04:04PM (#18001992) Journal
    Microsoft's fault? Care to elaborate on how they could set things up differently to take into account all the time zones and special rules?

    And then, send a carbon copy to IBM and Sun and thousands of other companies that pretty much do things the same way (and have their own patches)?

  • by jmorris42 ( 1458 ) * <jmorris&beau,org> on Tuesday February 13, 2007 @04:20PM (#18002248)
    > but as far as I know, robots.txt has no special status in law anywhere.

    Long-accepted custom counts in most jurisdictions' court systems, especially in light of the default: everyone is permitted. By making content available on a public web server you are obviously OK with anyone looking at it, Google included. If you don't want the big G looking, the accepted custom is to place a line in robots.txt telling that search engine to stay out. Of course no sane business would willingly disappear themselves from the net like that, so these guys want to dictate the TERMS under which Google indexes and presents their content.

    Google should start making examples of some of these cases. Simply delete them. And ask for a declaratory judgement as to whether any other entity in that country can sue on similar grounds. If the court gives the wrong answer, announce a near date when they will delete the entire ccTLD from their results until such time as the local laws are corrected. Provided they exercised some good judgement on the selected date, the local laws would get fixed.

    Governments will continue to try extending their tentacles into the network until the major stakeholders start kneecapping 'em at the first hint of interference.
