Google Loses Cache-Copyright Lawsuit in Belgium
acroyear writes "A court in Belgium has found that Google's website caching policies are a violation of that nation's copyright laws. The finding is that Google's cache offers effectively free access to articles that, while free initially, are archived and charged for via subscriptions. Google claims that they only store short extracts, but the court determined that's still a violation. From the court's ruling: 'It would be up to copyright owners to get in touch with Google by e-mail to complain if the site was posting content that belonged to them. Google would then have 24 hours to withdraw the content or face a daily fine of 1,000 euros ($1,295 U.S.).'"
Har Har (Score:5, Informative)
Re:Har Har (Score:5, Informative)
"To prevent all search engines from showing a "Cached" link for your site, place this tag in the <HEAD> section of your page:
<META NAME="ROBOTS" CONTENT="NOARCHIVE">
To allow other search engines to show a "Cached" link, preventing only Google from displaying one, use the following tag:
<META NAME="GOOGLEBOT" CONTENT="NOARCHIVE"> "
Waffles (Score:5, Funny)
So no "fair dealing" or "fair use" in Belgium? (Score:2, Interesting)
Re: (Score:3, Informative)
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
Yeah, you have to bribe your way to a victory just like everywhere else!
Re: (Score:2)
I've even found that some papers I've published are locked behind these pay-per-view portals. OK, I have copies, but given the choice I'd insist they be available for free.
The Google cache lets me find papers stored outside these portals, often on people's university home space. Without it I simply couldn't find them.
That's unfortunate (Score:3, Interesting)
Ridiculous (Score:5, Insightful)
You have to copy content to your local machine to index it, and to be able to select results with context. Hell, you have to copy it to *VIEW* it.
The courts and the law need to wake up and realize you can't do anything with a computer without copying data a dozen times. 25% or more of what your computer does is copy things from one place (network, hard drive, memory, external media) to another.
Re:Ridiculous (Score:5, Insightful)
Re: (Score:2)
Then a "totally legal" search for flying spaghetty monster would look like this:
http://www.venganza.org/ [venganza.org] http://www.venganza.org/games/index_large.htm [venganza.org] http://en.wikipedia.org/wiki/Flying_Spaghetti_Monster [wikipedia.org] http://flyingspaghettimonster.org/ [flyingspag...onster.org] http://uncyclopedia.org/wiki/Flying_Spaghetti_Monster [uncyclopedia.org] http://blog.pietrosperoni.it/2005/08/28/duck-and-cover-and-the-flyin [pietrosperoni.it]
Re: (Score:3, Informative)
Re:Not in terms of copyright law (Score:5, Insightful)
Google: Hey, what's that page? Can I see? (HTTP GET)
Them
Re: (Score:3, Insightful)
Long-accepted custom counts in most jurisdictions' court systems, especially in light of the default: everyone is permitted. By making content available on a public web server you are obviously OK with anyone looking at it, Google included. If you don't want the big G looking, the accepted custom is to place a line into robots.txt telling that search engine to stay out. Of course, no sane business would willingly disappear themselves.
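As a rough sketch of that custom (the site and paths here are hypothetical), a robots.txt at the web root that tells only Googlebot to stay out of a paid archive while leaving every other crawler alone could look like this:

# http://example.com/robots.txt -- a request to well-behaved crawlers, not enforcement
User-agent: Googlebot
Disallow: /archive/

# everyone else: no restrictions
User-agent: *
Disallow:

A crawler that honours the standard fetches this file before anything else on the site and skips the disallowed paths.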
Re:Ridiculous (Score:5, Insightful)
Re: (Score:2, Insightful)
robots.txt (Score:4, Insightful)
Give them a choice. (Score:4, Insightful)
It seems to me that Google is in a good position now to offer a deal to sites; they can either agree to be crawled, and thus end up in a cache for 30 days or whatever, or they can just not end up in the index at all. Their option.
Get rid of the "oh we want to be in the index and get traffic, but not be cached" option, which is basically web sites wanting to have their cake and eat it too.
I think these sites have an inflated opinion of their own relevance to the world. They can sue Google, but Google can effectively remove them from the Internet, at least as far as 70-90% [skrenta.com] (depending on who's doing the counting) of users are concerned.
Re: (Score:2)
I guess that explains why computers still seem so slow. 50% of the time they're deciding whether or not to make a jump (and making one) and 25% of the time they're shoveling bytes, which only leaves 25% of the time to actually do work :D
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
$1,295 per day? (Score:5, Funny)
Re: (Score:3, Insightful)
Re:$1,295 per day? (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Comment removed (Score:5, Insightful)
Comment removed (Score:5, Insightful)
Comment removed (Score:5, Insightful)
Re: (Score:2, Informative)
Re:What's the problem? (Score:5, Insightful)
Good answer.
This ruling doesn't significantly hurt Google. Alas, it only hurts everyone else -- all billion or so of Google's users. Having quick access to (at least a chunk of) a piece of content, especially when that content has expired or is temporarily unreachable, is convenient and valuable. Many times in my own searches, the piece of data I anxiously sought was available only in the cache.
Let's hope that Google does not respond to the ruling by across-the-board reducing or removing the cache feature.
Re: (Score:2)
*grumbling about all the wonderful daylight savings patches*
Re: (Score:3, Insightful)
And then, send a carbon copy to IBM and Sun and thousands of other companies that pretty much do things the same way (and have their own patches)?
Re: (Score:2)
Which cuts down the numbers a bit.
I'm all in favor of just letting Belgium do this completely stupid thing and then letting them rot until they change their minds. Cut these publishers off until they die out.
Really? (Score:5, Insightful)
If that is true, then why do I see copyright statements at the beginning of books and DVDs? It would seem the publishers are being hypocritical - they post their content publicly, refuse to use the robots.txt file, and then go on a litigation rampage when someone actually makes use of their web site. They're little different than the kid who takes his ball and goes home when he starts losing the game.
Furthermore, I would argue that posting to a web page is implied permission because the owners do so expecting their work to be copied to personal computers. In an interesting turn of events, private individuals are allowed to copy and archive web pages, but Google is not.
Re:Really? (Score:4, Insightful)
But this isn't just copying to a personal computer, it's copying and redistributing in a modified form while passing on some of the expense to the original host site and concealing information that the original host site would otherwise have received.
Individuals aren't, in general, allowed to redistribute entire works subject to others' copyright either.
As an aside, I also don't have a problem with a commercial corporation not automatically having the same rights as a private citizen. The world would be a better place if more legal systems understood that they are not the same.
Re:What's the problem? (Score:5, Insightful)
On the web, caching search engines have been around for a lot longer than expiring content has. It's established that search engines are a necessity, and that robots.txt is the way to opt out. When you do business in a new arena, it makes sense that the existing rules of the arena should apply.
Re: (Score:2)
TFS says:
flawed analogy (Score:2)
Anti-photocopying paper would be the equivalent of some sort of technical means of preventing web spiders from accessing the page. The 'robots.txt' file is simply a machine-readable notification of the page owner's limits on how the content can be used
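To make that distinction concrete, a (hypothetical) robots.txt entry like the one below is only a published notice that well-behaved spiders are expected to honour; an actual technical barrier would be something like requiring a login before the pages are served at all:

# notice only -- a misbehaving spider can simply ignore this
User-agent: *
Disallow: /subscribers-only/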
Re:What's the problem? (Score:5, Interesting)
<meta name="ROBOTS" content="NOARCHIVE"
All of my website (quaggaspace.org) shows up in google, but you'll notice there is no "cached" button.
Here is the problem (Score:4, Interesting)
Re: (Score:2)
Why should we have to opt out of being cached? Why can't we opt in instead?
Here's an idea: if you have a Belgian domain, Google should NOT cache or index your website unless you provide a robots.txt saying what can & can't be indexed & cached, just to be safe, so Google doesn't do something it doesn't have permission to do. Then Google will probably get sued for unfair business practices or whatever for not indexing the websites of people who are too lazy to write a robots.txt, or who find it easier (and cheaper??) to just hire some lawyers than to edit a text file.
The GP is right. Th
THE INTERNET DOES NOT WORK THAT WAY!!! (Score:5, Insightful)
You did "opt in," by broadcasting your shit on the Internet in the first place!
Don't like it? Don't upload it! Why is that simple concept so fucking hard to understand?!
I mean, jeez -- don't you realize that what you're saying is equivalent to yelling in my ear and then complaining that I heard you?
Re:What's the problem? (Score:4, Insightful)
Problem is.... newspapers wanna have their pie and eat it too.
Solution.... it's Google's fault.
Result.... news dinosaurs go extinct and news mammals come to rule Earth
Moral.... don't be greedy beyond survival.
Re:What's the problem? - Desired Outcome/Wet (Score:2)
Re: (Score:2)
Re: (Score:2)
But I don't see why, if I forget to lock my door or choose not to bother, it should be legal for someone to take all my stuff.
Re: (Score:2)
In this case, the court said that it is fine for Google to copy, but the copyright holders have a right to have any offending content taken down within 24 hours
Re: (Score:2)
Under most jurisdictions, the law does recognise that copying non-public-domain material without permission is illegal.
Whether you think copying material without permission or stealing someone's stuff is "moral" or not
Re: (Score:3, Insightful)
You are right that determining what is moral is subjective. However, I will point out that most people would probably not envision that their moral framework would change with time. That is, someone opposed to human slavery would presumably find the behavior repugnant whether it was done by people in
Re: (Score:3, Insightful)
Getting a book from a library, buying it in a shop, or indeed having Penguin Publishing give you a copy of the book does not grant you the right to republish the text.
Agreed.
I think that the problem is that copyright law is largely based on physical media, and electronic distribution is a headache for the courts to sort out. For instance, with a book there is very little problem in just saying "Don't make a copy." You can use a book without making a copy. Electronic distribution is different - several copies are needed to make the information usable. Let's use the scenario where you download an ebook while sitting in Starbucks. Starting with the copyright holder's server
Re: (Score:2)
I don't want Google to delist. That's the easy way and Google obeys the 10 million ways to not have your site indexed/cached/traversed/whatever. Let Google drop those sites to forced pagerank zero. Which is known to cause some interesting side effects, actually. If they complain that their traffic
24 hours! (Score:3, Funny)
I think it is safe to say they can afford to take their time...
Why are newspapers retarded? (Score:4, Insightful)
Re: (Score:2)
Their market is 4.2 million French-speaking Belgians, not the whole world.
They are stupid and I don't share their point of view, but I really doubt that this will hurt their business.
Re: (Score:2)
Is it grassy? Are there three shooters?
Back, and to the left. Back, and to the left. Back... and to the left.
Re: (Score:3, Insightful)
If I'm Google, I turn the morons off and see how fast they come screaming back when their ad revenue plummets. Seriously, IT'S FREE FREAKING ADVERTISING. Google should be charging *them*.
You suck at teh internets. This is about the "google cache" link supplied on Google's search results page.
No, he makes a good point. If someone files a lawsuit against Google, all Google would have to do to stop them would be to suspend their site from all indexing and search results. There's no God-given right to be indexed by a search engine. Bad analogy; imagine you sell hot meaty pies, and some random guy walks around the town carrying a board with the words, "Eat Anonymous Coward's Hot Meaty Pies Today!!!". Now imagine that guy does it for free. Suing Google is somewhat like taking the guy to court
Content providers may shoot themselves... (Score:3, Interesting)
I don't believe that Google currently is mandated to show users any particular results. The simplest technological solution for Google might be to drop indexing the sites that send these takedown notices entirely. No index, no cache; dump it all and don't look back.
They are in no way legally bound to come up with a more advanced solution that would cost more $$ and add more complexity to the codebase.
Now, because there very well may be information that is unavailable anywhere else (although it seems relatively unlikely - yes, they might have copyrighted articles that are unavailable otherwise, but I cannot imagine the information contained therein is, unless you're talking about creative works), Google may try to work something out. Oh, that, and they are remarkably not evil compared to the power they currently wield.
Imagine how many takedown notices they would receive after the first few rounds of companies that complained cannot be found through Google...
Oblig Monty Python Reference (Score:5, Funny)
Re: (Score:2)
Abstracts are illegal? (Score:3, Interesting)
Abstracts are generally a) uninformative and b) free. Seems like a huge overreaction on the EU's part.
Re: (Score:2, Insightful)
Re: (Score:2)
Damn, Malda must have fixed that in the last five minutes.
Re: (Score:3, Insightful)
"Abstract" and "extract" are not interchangeable terms.
An abstract is a meta-description of a document, giving an overview of its content but usually not using any of the document content itself. An extract, on the other hand, is a literal subset of the document.
Re: (Score:2)
Simple Answer... (Score:2)
Extend robots.txt? (Score:4, Insightful)
Implications for proxies (Score:3, Informative)
Belgium! (Score:2)
If so, perhaps there's good reason that in "The Hitchhiker's Guide to the Galaxy", Belgium is a swear word.
Good, I don't want to find that! (Score:3, Interesting)
I hope Google removes these sites totally. Then, as others have written too, we need a law that says that those putting stuff on the web have to write correct HTML and robots.txt files if they don't want their content cached. Google can't manually go through every site on the web, and it would be even more impossible for Google's smaller competitors.
That fine... (Score:2)
Just Pull Out (Score:5, Insightful)
Caching is Copying (Score:3, Insightful)
Re: (Score:3, Insightful)
I have news for you. When you stream, your browser makes a local copy of portions of the stream, decodes them, and displays them.
If sampling is illegal (without permission) then clearly copying a portion of a video stream without permission would be illegal. However, since you can give permission to anyone
Re:Caching is Copying (Score:4, Interesting)
You are confused. Caching is fine. Searching is fine. Wholesale republication of cached pages without prior permission (i.e. Google's "cached version" link) is not fine.
Want proof? Try "caching" a prominent website on your own site and see how fast you get sued. What's good for the goose is good for the gander. If Google can republish cached pages and mere mortals cannot, that's class justice.
Sounds Good To Me (Score:2, Interesting)
Simple really (Score:3, Interesting)
I don't believe that anyone has added "being indexed" to human rights yet.
D
Wait until DCMA style takedown attacks start... (Score:2)
What I find annoying (Score:2)
I thought Google had a policy that a site was not allowed to show Google one thing and a normal user something else?
Or does that policy have "Unless Google is paid off by said site" somewhere
Cache-Control (Score:2)
waiting for google to *switch off* a country.. (Score:4, Interesting)
I can't imagine the Belgian public putting up for long with completely losing access to Google simply because their copyright laws were written in another century.
Here's how to fix the problem (Score:4, Interesting)
This page is cached, but your government officials will not let you read it. Here are their names and addresses, and the date of the next election, and the challengers to them who have signed a document that they will reverse this ruling if elected:
Censor: Hercule Poirot
Free Speech Challenger: Agatha Christie
Next election for them: 18 Aug 2007
Censor: Phinneas d'Satay
Free Speech Challenger: Mannequin Pisse
Next election for them: 18 Aug 2007
etc.
Tailor it per local region if that can be determined from the IP.
9) Wait a few years
10) Profit!
Re:Personal Responsibility (Score:4, Informative)
THIS is the correct tag:
<META NAME="ROBOTS" CONTENT="NOARCHIVE">
Sorry about the brain fart. I wish we could edit posts (preview, I know, but that would not have made me catch this one)
Re: (Score:2)
Not that copyright law doesn't need improvement in this area, but blaming the rights holders is off
Re: (Score:3, Insightful)
Re: (Score:2)
When they publish their work on a public net, that does not by any stretch mean they are relinquishing copyright to the work.
Re: (Score:2)
Re: (Score:2, Insightful)
Re:Personal Responsibility (Score:5, Insightful)
Re: (Score:2)
Yes, tracking, caching, being blogged about, etc. is normal, natural, and okay. But just because your website gets tracked, cached
Re: (Score:2, Insightful)
Spam is a free service which is optional. Email address owners have total control over it. Use the unsubscribe link at the bottom of the email.
Assuming those unsubscribe links would work (we all know they don't), would you consider this a logical way of thinking? If tomorrow some other caching company comes along and introduces another way in which website owners have 'total control', will that clear them of copyright violation? What if I want my content to be cached on proxies
Re:What about MY memory, is that a cache? (Score:4, Informative)
Re: (Score:2, Funny)
Re: (Score:2)
Re: (Score:3, Insightful)
Wouldn't that undermine the GPL? If the Linux kernel were in the public domain, companies could use it freely without having to give back.
Or what about street-performers performing their own material?
Re:Public Domain (Score:4, Insightful)
Then, perhaps its good that the rest of the world doesn't see it the way you do.
Because if the world were to be the way you see it, the entire web content industry would immediately go pay-per-view or subscription only to avoid all their work becoming public domain. Yes, what you propose would literally destroy the useful and open environment of the Internet.
Servers, bandwidth, and writers don't pay for themselves. If these sites can be copied wholesale and put up elsewhere without the original author having a say in the matter, you've just destroyed any monetary incentive to create. Much as many people like to think otherwise, money is important, and a strong incentive to create.
Re: (Score:3, Insightful)
Copyright has destroyed more than it has helped. I refer to what was happening before the Revolutionary War.
This effect was curtailed by the 14-year limitation, but now that there isn't a real expiration date on copyright* it is happening again. Corporations are getting so much power that they are controlling culture.
Now, I don't agree with the original post about public domain, because by his logic every book in a bookstore would be public domain. I also b
Re:Public Domain--Recind (Score:2)
I think you mean rescind. Resending means you'd send it to them again, even if you didn't want them to have it any longer.