Google Loses Cache-Copyright Lawsuit in Belgium 340

acroyear writes "A court in Belgium has found that Google's website caching policies are a violation of that nation's copyright laws. The finding is that Google's cache offers effectively free access to articles that, while free initially, are archived and charged for via subscriptions. Google claims that they only store short extracts, but the court determined that's still a violation. From the court's ruling: 'It would be up to copyright owners to get in touch with Google by e-mail to complain if the site was posting content that belonged to them. Google would then have 24 hours to withdraw the content or face a daily fine of 1,000 euros ($1,295 U.S.).'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Har Har (Score:5, Informative)

    by N8F8 ( 4562 ) on Tuesday February 13, 2007 @09:53AM (#17996922)
    The ruling basically reiterates the current Google policy.
  • Waffles (Score:5, Funny)

    by bostons1337 ( 1025584 ) on Tuesday February 13, 2007 @09:53AM (#17996926)
    Don't they have anything better to do....like make us Americans some waffles.
  • by Anonymous Coward
    I thought the whole EU had some sort of "fair dealing" exemptions. If they do, I can't believe that Google's lawyers lost this.
    • Re: (Score:3, Informative)

      by radja ( 58949 )
      The EU has a copyright directive, but it's up to the individual countries to transpose it into national law, so copyright law still differs across countries in the EU.
    • Re: (Score:2, Insightful)

      Well, in this country, you don't win in court because you have 100 good lawyers
      • Re: (Score:3, Insightful)

        by drinkypoo ( 153816 )

        Well, in this country, you don't win in court because you have 100 good lawyers

        Yeah, you have to bribe your way to a victory just like everywhere else!

    • I'm currently writing up my thesis, and to be frank, without the google cache I'd have to pay a small fortune just to gain access to the abstracts of some papers I need. It would be very difficult to do what I need to do.

      I even found that some papers I've published are locked behind these pay per view portals. Ok I have copies, but given a choice I'd insist they be available free.

      The Google cache lets me find papers stored outside these portals, often on people's university home space. Without it I simply co
  • That's unfortunate (Score:3, Interesting)

    by aussie_a ( 778472 ) on Tuesday February 13, 2007 @09:54AM (#17996952) Journal
    That is unfortunate, but I'm amazed caching is even legal in some (most?) countries. It's always seemed like rampant copyright infringement to me, except of course where the law in certain countries makes an exception for it.
    • Ridiculous (Score:5, Insightful)

      by brunes69 ( 86786 ) <slashdot@nOSpam.keirstead.org> on Tuesday February 13, 2007 @09:59AM (#17996998)
      If you can't cache content, then you can't search it.

      You have to copy content to your local machine to index it, and to be able to select results with context. Hell, you have to copy it to *VIEW* it.

      The courts and the law need to wake up and realize you can't do anything with a computer without copying it a dozen times. 25% or more of what your computer does is copy things from one place (network, hard drive, memory, external media) to another.
      • Re:Ridiculous (Score:5, Insightful)

        by aussie_a ( 778472 ) on Tuesday February 13, 2007 @10:02AM (#17997052) Journal
        There's a difference between keeping a local copy and distributing it.
      • Re:Ridiculous (Score:5, Insightful)

        by jandrese ( 485 ) <kensama@vt.edu> on Tuesday February 13, 2007 @11:06AM (#17998022) Homepage Journal
        So the answer is obvious: Just delist these guys from Google entirely and configure the webcrawler to ignore them. Problem solved, and you won't have to worry about them coming back later and claiming that your locally stored copy is also a copyright violation.
        • Re: (Score:2, Insightful)

          by Xichekolas ( 908635 )
          Then the crybabies would sue claiming that Google was unfairly censoring their content... at which point some retard would equate that with violating 'net neutrality' and suddenly Congress would be involved... and if I have learned anything in my short 23 years on this planet, it's that when Congress gets involved in anything, it takes 10 years, $10 billion, and 10,000 pages of law to resolve.
        • robots.txt (Score:4, Insightful)

          by Skadet ( 528657 ) on Tuesday February 13, 2007 @12:14PM (#17999090) Homepage
          Isn't this what robots.txt is for?
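          For what it's worth, that is the standard mechanism. A minimal robots.txt along these lines (the path is only an illustration) keeps Google's crawler out of a paid-archive section entirely:

              User-agent: Googlebot
              Disallow: /archives/

          Pages blocked this way are never crawled at all, so they can't show up in search results or in cached copies in the first place.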
      • 25% or more of what your computer does is copy things from one place (network, hard drive, memory, external media) to another.

        I guess that explains why computers still seem so slow. 50% of the time they're deciding whether or not to make a jump (and making one) and 25% of the time they're shoveling bytes, which only leaves 25% of the time to actually do work :D

      • Maybe, but you don't have to show it to your site visitors in order to index it. I can see from my weblogs that people view my webpages from the google cache instead of going to the website. So they're looking at my webpages but Google gets the page views. It doesn't hurt me as much because I don't have a commercial site, but for somebody who was selling content, Google would be stealing their content and the money they would have gotten from people who really want to see it.
  • by Rude Turnip ( 49495 ) <valuationNO@SPAMgmail.com> on Tuesday February 13, 2007 @09:56AM (#17996978)
    That's $472,675 per year, or, in Google's accounting terms, $0 after rounding to the nearest million.
    • Re: (Score:3, Insightful)

      by ceejayoz ( 567949 )
      I suspect that's per-site, though.
    • According to the article in the (quality) newspaper I read (http://www.standaard.be/Artikel/Detail.aspx?artikelid=DMF13022007_023, Dutch only), the stated fine is €25,000 per day, which amounts to a lot more than $472,675 per year: €9,125,000 using your method of calculation (you should probably only count business days, but that doesn't really matter now). (Note: €9,125,000 is approximately $11.8 million.) Also, this is the second ruling on the matter. In the first ruling the fine was €1,000,000 per day the articles were on
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday February 13, 2007 @10:01AM (#17997036)
    Comment removed based on user account deletion
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Tuesday February 13, 2007 @10:05AM (#17997090)
      Comment removed based on user account deletion
      • Comment removed (Score:5, Insightful)

        by account_deleted ( 4530225 ) on Tuesday February 13, 2007 @10:15AM (#17997218)
        Comment removed based on user account deletion
        • Re: (Score:2, Informative)

          That doesn't matter. Publishers of those free urban tabloids still retain copyright on the articles and graphics given away for free in the tabloids.
        • by inviolet ( 797804 ) <slashdotNO@SPAMideasmatter.org> on Tuesday February 13, 2007 @10:42AM (#17997632) Journal

          Good answer.

          This ruling doesn't significantly hurt Google. Alas, it only hurts everyone else -- all billion or so of Google's users. Having quick access to (at least a chunk of) a piece of content, especially when that content has expired or is temporarily unreachable, is convenient and valuable. Many times in my own searches, the piece of data I anxiously sought was available only in the cache.

          Let's hope that Google does not respond to the ruling by reducing or removing the cache feature across the board.

          • by tedrlord ( 95173 )
            I'd kind of like to see them respond by disabling caching of any site in the .be tld. Suddenly Belgium's news and information sites stop getting any hits and their media industry freaks out. Normally I'm against corporations throwing their weight around but I'm even more against countries tossing around poorly planned regulation.

            *grumbling about all the wonderful daylight savings patches*
          • I suggest it hurts a slightly smaller subset - the ones who use Google who need access to news created in, by, and for Belgium and Belgians.

            Which cuts down the numbers a bit.

            I'm all in favor of just letting Belgium do this completely stupid thing and then letting them rot until they change their minds. Cut these publishers off until they die out.
      • Really? (Score:5, Insightful)

        by gillbates ( 106458 ) on Tuesday February 13, 2007 @10:27AM (#17997400) Homepage Journal

        If that is true, then why do I see copyright statements at the beginning of books and DVDs? It would seem the publishers are being hypocritical - they post their content publicly, refuse to use the robots.txt file, and then go on a litigation rampage when someone actually makes use of their web site. They're little different than the kid who takes his ball and goes home when he starts losing the game.

        Furthermore, I would argue that posting to a web page is implied permission because the owners do so expecting their work to be copied to personal computers. In an interesting turn of events, private individuals are allowed to copy and archive web pages, but Google is not.

        • Re:Really? (Score:4, Insightful)

          by Anonymous Brave Guy ( 457657 ) on Tuesday February 13, 2007 @02:44PM (#18001594)

          Furthermore, I would argue that posting to a web page is implied permission because the owners do so expecting their work to be copied to personal computers.

          But this isn't just copying to a personal computer, it's copying and redistributing in a modified form while passing on some of the expense to the original host site and concealing information that the original host site would otherwise have received.

          In an interesting turn of events, private individuals are allowed to copy and archive web pages, but Google is not.

          Individuals aren't, in general, allowed to redistribute entire works subject to others' copyright either.

          As an aside, I also don't have a problem with a commercial corporation not automatically having the same rights as a private citizen. The world would be a better place if more legal systems understood that they are not the same.

      • by 91degrees ( 207121 ) on Tuesday February 13, 2007 @10:29AM (#17997442) Journal
        It's basically about established practice. We've pretty much established right and wrong when it comes to copying a book: as a rule, you don't do it. In many countries, libraries and schools have a licensing agreement that allows photocopying. With TV shows it's considered perfectly acceptable to copy an entire show. Audio mix tapes are usually considered acceptable or explicitly legal.

        On the web, caching search engines have been around for a lot longer than expiring content has. It's established that search engines are a necessity, and that robots.txt is the way to opt out. When you do business in a new arena, it makes sense that the existing rules of the arena should apply.
      • No. Google broke the law. The law assigns no responsibility to copyright holders to protect their property from those who would copy it.

        TFS says:

        It would be up to copyright owners to get in touch with Google by e-mail to complain if the site was posting content that belonged to them. Google would then have 24 hours to withdraw the content
      • That argument makes no sense before the law. If publishing companies don't like me photocopying their books and passing them on to people, laden with ads for profit, could I say "No, the companies should have printed them on special anti-photocopying paper"?

        Anti-photocopying paper would be the equivalent of some sort of technical means of preventing web spiders from accessing the page. The 'robots.txt' file is simply a machine-readable notification of the page owner's limits on how the content can be used

    • by petabyte ( 238821 ) on Tuesday February 13, 2007 @10:08AM (#17997132)
      Or, even better, use the META tag to set NOARCHIVE:

      <meta name="ROBOTS" content="NOARCHIVE" />

      All of my website (quaggaspace.org) shows up in google, but you'll notice there is no "cached" button.
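      For completeness, the robots meta values distinguish between staying out of the cache and staying out of the index entirely, so a site can remain searchable while dropping the cached copy (take the exact combination shown as an illustration):

      <meta name="robots" content="noarchive" /> <!-- indexed, but no "Cached" link -->
      <meta name="robots" content="noindex, nofollow" /> <!-- keep the page out of the index altogether -->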
      • Here is the problem (Score:4, Interesting)

        by roman_mir ( 125474 ) on Tuesday February 13, 2007 @11:11AM (#17998128) Homepage Journal
        Why should we have to opt out of being cached? Why can't we opt in instead? I think the phone calls made by telemarketers are a perfect example of this. If you need your page to be found on Google or other search engines, add a meta tag which explicitly allows a search engine to collect the page for indexing/caching. In fact, make these permissions explicit: let search engines either index, or cache, or both.
        • Why should we have to opt out from being cached, why can't we opt in instead?

          Here's an idea: if you have a Belgian domain, Google should NOT cache or index your website unless you provide a robots.txt saying what can and can't be indexed and cached, just to be safe so Google doesn't do something it doesn't have permission to do. Then Google will probably get sued for unfair business practices or whatever for not indexing the websites of people who are too lazy to write a robots.txt, or who find it easier (and cheaper?) to just hire some lawyers than to edit a text file.

          The GP is right. Th

        • by mrchaotica ( 681592 ) * on Tuesday February 13, 2007 @01:32PM (#18000426)

          Why should we have to opt out from being cached, why can't we opt in instead?

          You did "opt in," by broadcasting your shit on the Internet in the first place!

          Don't like it? Don't upload it! Why is that simple concept so fucking hard to understand?!

          I mean, jeez -- don't you realize that what you're saying is equivalent to yelling in my ear and then complaining that I heard you?

    • by suv4x4 ( 956391 ) on Tuesday February 13, 2007 @10:21AM (#17997302)
      If they don't like it, they can very easily "opt out" by using Robots.txt to disallow Googlebot. I fail to see where the problem is here.

      Problem is.... newspapers, wanna have their pie and eat it too.
      Solution.... it's Google's fault.
      Result.... news dinosaurs go extinct and news mammals come to rule Earth
      Moral.... don't be greedy beyond survival.
    • I can "opt out" of having my stuff stolen by putting locks on my doors and windows.
      But I don't see why, if I forget to lock my door or choose not to bother, it should be legal for someone to take all my stuff.
      • I appreciate the need for analogy since intellectual property law is so... well, complicated and obtuse. However, analogies involving physical objects will always fail when applied to intellectual property. This is because taking someone's physical property is almost always morally wrong, whereas morality generally does not apply to intellectual property.

        In this case, the court said that it is fine for Google to copy, but the copyright holders have a right to have any offending content taken down within 24
          I certainly do not understand where exactly British law says you can assume permission to copy non-public-domain material. I would hazard a guess that what Google does is illegal here. However, as you say, copyright law is complex; perhaps they fall under a *specific* exclusion for caching.

          Under most jurisdictions the law does recognise that copying non-public-domain material without permission is illegal.
          Whether you think copying material without permission or stealing someone's stuff is "moral" or n
          • Re: (Score:3, Insightful)

            by MightyYar ( 622222 )
            Google gets permission, at least for the initial copy: when their Googlebot sends an HTTP GET request to the copyright holder's server, they either make a copy and send it to Google or they deny the request.

            You are right that determining what is moral is subjective. However, I will point out that most people would probably not envision that their moral framework would change with time. That is, someone opposed to human slavery would presumably find the behavior repugnant whether it was done by people in
  • 24 hours! (Score:3, Funny)

    by loconet ( 415875 ) on Tuesday February 13, 2007 @10:03AM (#17997056) Homepage
    "Google would then have 24 hours to withdraw the content or face a daily fine of 1,000 euros ($1,295 U.S.).'""

    I think it is safe to say they can afford to take their time...
  • by Mr. Underbridge ( 666784 ) on Tuesday February 13, 2007 @10:03AM (#17997058)
    If I'm Google, I turn the morons off and see how fast they come screaming back when their ad revenue plummets. Seriously, IT'S FREE FREAKING ADVERTISING. Google should be charging *them*.
    • Most of these newspapers are at least 60 years old. Some are more than 100 years old. Nobody is really interested in Belgian news except Belgians themselves, and Belgians already know these newspapers' names and URLs. I really doubt that Google has any significant impact on their traffic.

      Their market is 4.2 million French-speaking Belgians, not the whole world.

      They are stupid and I don't share their point of view, but I really doubt that this will hurt their business.

  • by jvkjvk ( 102057 ) on Tuesday February 13, 2007 @10:14AM (#17997212)
    ...in the foot.

    I don't believe that Google currently is mandated to show users any particular results. The simplest technological solution for Google might be to drop indexing the sites that send these takedown notices entirely. No index, no cache; dump it all and don't look back.

    They are in no way legally bound to come up with a more advanced solution that would cost more money and add more complexity to the codebase.

    Now, because there very well may be information that is unavailable anywhere else (although it seems relatively unlikely - yes, they might have copyrighted articles that are unavailable otherwise, but I can't imagine the information contained in them is, unless you're talking about creative works), Google may try to work something out. Oh, that and they are remarkably non-evil for the amount of power they currently wield.

    Imagine how many takedown notices they would receive after the first few rounds of companies that complained cannot be found through Google...
  • by Hoi Polloi ( 522990 ) on Tuesday February 13, 2007 @10:23AM (#17997342) Journal
    "Well now, the result of last week's competition when we asked you to find a derogatory term for the Belgians. Well, the response was enormous and we took quite a long time sorting out the winners. There were some very clever entries. Mrs Hatred of Leicester said 'Let's not call them anything, let's just ignore them.' and a Mr St John of Huntingdon said he couldn't think of anything more derogatory than Belgians. But in the end we settled on three choices: number three, the Sprouts, sent in by Mrs Vicious of Hastings, very nice; number two, the Phlegms, from Mrs Childmolester of Worthing; but the winner was undoubtedly from Mrs No-Supper-For-You from Norwood in Lancashire, Miserable Fat Belgian Bastards!"
    • by Bazman ( 4849 )
      Python? I was expecting the HHGTTG reference! It's the rudest word in existence, dontcha know. Belgium!

  • by mshurpik ( 198339 ) on Tuesday February 13, 2007 @10:23AM (#17997344)
    >Google claims that they only store short extracts, but the court determined that's still a violation.

    Abstracts are generally a) uninformative and b) free. Seems like a huge overreaction on the EU's part.
    • Re: (Score:2, Insightful)

      by pinky99 ( 741036 )
      Wow, I didn't notice that the EU was conquered by Belgium over night...
      • Oh, you mean your version of Slashdot shows the *article* (not post) you're replying to?

        Damn, Malda must have fixed that in the last five minutes.
    • Re: (Score:3, Insightful)


      "Abstract" and "extract" are not interchangeable terms.

      An abstract is a meta-description of a document, giving an overview of its content but usually not using any of the document content itself. An extract, on the other hand, is a literal subset of the document.

  • Google just makes a policy that they don't index any site that even once sends such a request. Problem solved. More seriously, maybe an extension to robots.txt that defines cache lifespan would be reasonable.
  • Extend robots.txt? (Score:4, Insightful)

    by 140Mandak262Jamuna ( 970587 ) on Tuesday February 13, 2007 @10:24AM (#17997362) Journal
    Can't google propose an extension of the robots.txt file format to allow the original publishers to set a time limit on when the search engines should expire the cache?
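    Something along those lines could work, though to be clear the directives below are entirely made up; no search engine reads them today, and the syntax is only a sketch of what such an extension might look like:

        User-agent: *
        Allow: /
        Cache-lifespan: 14d # hypothetical: drop cached copies after 14 days
        Snippet-length: 200 # hypothetical: limit displayed extracts to 200 characters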
  • by l2718 ( 514756 ) on Tuesday February 13, 2007 @10:25AM (#17997380)
    What does this say about proxy services, then? These also store content that may be subject to copyright and serve it to users.
  • Am I misremembering, or wasn't it also Belgium that ruled against Lindows in the trademark lawsuit that Microsoft brought? (After a US court said essentially that since "windows" was an English word, MSFT didn't stand much chance of winning the US suit.)

    If so, perhaps there's good reason that in "The Hitchhiker's Guide to the Galaxy", Belgium is a swear word.
  • by Heddahenrik ( 902008 ) on Tuesday February 13, 2007 @10:28AM (#17997434) Homepage
    I often get irritated when I find stuff with Google and then can't read it. Who wants to find a short text describing exactly what they're searching for, only to find out they have to pay or go through some procedure to actually read the thing?

    I hope Google removes these sites totally. Then, as others have written, we need a law that says that anyone putting stuff on the web has to write correct HTML and robots.txt files if they don't want their content cached. Google can't manually go through every site on the web, and it would be even more impossible for Google's smaller competitors.
  • We call it a "Belgian Dip."
  • Just Pull Out (Score:5, Insightful)

    by Nom du Keyboard ( 633989 ) on Tuesday February 13, 2007 @10:37AM (#17997536)
    Google ought to just pull out from indexing anyone who complains about their methods. You effectively disappear off the Internet w/o Google, and these whiny complainers deserve exactly that. Maybe after they've lived in a black hole for a while they'll realize the benefit of having their free material easy for web users to find and view.
  • Caching is Copying (Score:3, Insightful)

    by Nom du Keyboard ( 633989 ) on Tuesday February 13, 2007 @10:40AM (#17997586)
    If caching is copying, then every user who isn't watching a streaming feed -- which isn't the way text and single-image pages are rendered -- is guilty of copyright infringement every time they view a page. Your browser makes a copy of the page on your own hard drive. Watch out!! Here come the lawyers now.
    • Re: (Score:3, Insightful)

      by drinkypoo ( 153816 )

      If caching is copying, then every user who isn't watching a streaming feed -- which isn't the way text and single-image pages are rendered -- is guilty of copyright infringement every time they view a page.

      I have news for you. When you stream your browser makes a local copy of portions of the stream, decodes them, and displays them.

      If sampling is illegal (without permission) then clearly copying a portion of a video stream without permission would be illegal. However, since you can give permission to anyo

    • by McDutchie ( 151611 ) on Tuesday February 13, 2007 @02:55PM (#18001818) Homepage

      You are confused. Caching is fine. Searching is fine. Wholesale republication of cached pages without prior permission (i.e. Google's "cached version" link) is not fine.

      Want proof? Try "caching" a prominent website on your own site and see how fast you get sued. What's good for the goose is good for the gander. If Google can republish cached pages and mere mortals cannot, that's class justice.

  • Sounds Good To Me (Score:2, Interesting)

    by Imaria ( 975253 )
    If Google is not allowed to have any cache of these sites, then wouldn't that mean they would have nothing to index for their searches? If you send Google that email, and suddenly don't show up on any of their searches, congrats. On the plus side, no-one has access to your content anymore. On the downside, NO-ONE has any access to your content anymore, because no-one can find you.
  • Simple really (Score:3, Interesting)

    by RationalRoot ( 746945 ) on Tuesday February 13, 2007 @11:14AM (#17998184) Homepage
    If someone does not want their extracts cached, remove them ENTIRELY from Google.

    I don't believe that anyone has added "being indexed" to human rights yet.

    D
  • I wonder what measures are in place to prevent abuse of this by non-owners of the materials. For example, say I don't like what you wrote about me - could I tell Google that I own the content and have them take it out of their cache? I'm fine with the idea that people should be able to say who does what with their original web content - but there are simple technical ways for them to prevent caching. So really this seems to just open the door to abuse à la the DMCA and Michael Crook [boingboing.net].
  • I often search for stuff, and Google lists some very promising results, with a lot of relevant text in the description, but no cached version available. So I click on the link, and I get a "register/subscribe" page with totally NO SIGN of any of the text that previously appeared. This happens especially with journals.

    I thought Google had a policy that a site was not allowed to show Google one thing and a normal user something else?

    Or that policy has "Unless Google is paid off by said site" some
  • HTTP offers several standard headers that are meant to be interpreted by caches. The question is, does Google honour the instructions in those headers? On the other hand, content providers serving content that is sensitive to caching problems are encouraged to use those headers correctly. Unfortunately, some (most?) content management systems provide no means to control them.
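    For reference, those are the standard HTTP/1.1 caching headers. A publisher that wanted its archive pages treated as uncacheable could have its server send something like the following (values illustrative) and then argue that any cache keeping a long-lived copy is ignoring its stated wishes:

        Cache-Control: no-store, no-cache, must-revalidate
        Pragma: no-cache
        Expires: Thu, 01 Jan 1970 00:00:00 GMT

    Whether a search-engine crawler is obliged to honour these the way a shared proxy would is exactly the open question raised above.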
  • by spasm ( 79260 ) on Tuesday February 13, 2007 @12:16PM (#17999136) Homepage
    I keep waiting for Google to respond to one of these idiotic 'copyright' cases by simply removing service to the IP address space associated with the country, as an object lesson.

    I can't imagine the Belgian public putting up with completely losing access to Google for long simply because their copyright laws were written in another century.
  • by Impy the Impiuos Imp ( 442658 ) on Tuesday February 13, 2007 @02:12PM (#18001058) Journal
    Here's how to fix the problem. When such a page would be linked to in the cache, instead put up the following:

    This page is cached, but your government officials will not let you read it. Here are their names and addresses, the date of the next election, and the challengers who have signed a document saying that they will reverse this ruling if elected:

    Censor: Hercule Poirot
    Free Speech Challenger: Agatha Christie
    Next election for them: 18 Aug 2007

    Censor: Phinneas d'Satay
    Free Speech Challenger: Mannequin Pisse
    Next election for them: 18 Aug 2007

    etc.

    Tailor it per local region if that can be determined from the IP.

    9) Wait a few years

    10) Profit!

"Being against torture ought to be sort of a multipartisan thing." -- Karl Lehenbauer, as amended by Jeff Daiell, a Libertarian

Working...