Internet Archive Says It Has Restored 9 Million Broken Wikipedia Links By Directing Them To Archived Versions in Wayback Machine (archive.org)
Mark Graham, the Director of the Wayback Machine at the Internet Archive, announces: As part of the Internet Archive's aim to build a better Web, we have been working to make the Web more reliable -- and are pleased to announce that 9 million formerly broken links on Wikipedia now work because they go to archived versions in the Wayback Machine.
For more than 5 years, the Internet Archive has been archiving nearly every URL referenced on close to 300 Wikipedia sites as soon as those links are added or changed, at a rate of about 20 million URLs per week. And for the past 3 years, we have been running a software robot called IABot on 22 Wikipedia language editions looking for broken links (URLs that return a '404', or 'Page Not Found'). When broken links are discovered, IABot searches the Wayback Machine and other web archives for copies to replace them with. Restoring links ensures Wikipedia remains accurate and verifiable and thus meets one of Wikipedia's three core content policies: 'Verifiability.'
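To make the mechanism concrete, here is a minimal Python sketch of the kind of check IABot performs, using the Wayback Machine's public availability API. The endpoint is real, but the simplified flow below is an illustration, not IABot's actual code:

    import requests

    def find_archive(url):
        # Ask the Wayback Machine's availability API for the closest archived snapshot.
        resp = requests.get("https://archive.org/wayback/available",
                            params={"url": url}, timeout=30)
        snap = resp.json().get("archived_snapshots", {}).get("closest")
        return snap["url"] if snap and snap.get("available") else None

    def repair_link(url):
        # Simplified stand-in for IABot's dead-link check: if the page returns
        # a 404 (or the host is gone entirely), fall back to an archived copy.
        try:
            status = requests.head(url, allow_redirects=True, timeout=30).status_code
        except requests.RequestException:
            status = 404
        if status == 404:
            return find_archive(url) or url
        return url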
Archive.org is precious! (Score:5, Insightful)
Archive.org is precious! For a long time now I have preferred to send my students the archived version of web pages. If a page is not there, I upload it myself. That way I can reuse it many years later and trust it is still there. Phenomenally simple!
robots.txt to block Wayback Machine (Score:5, Informative)
Until the domain's new owner sets up a robots.txt, causing Wayback Machine to retrospectively block public access to the archived copy of a document. See debate about this a year and a half ago [slashdot.org].
Re:robots.txt to block Wayback Machine (Score:5, Interesting)
Exactly what I was thinking. A site posts something that stirs up controversy; they take the page down and engage in PR spin; Wikipedia links to the archived copy of the page to demonstrate what content had been there; and then the site modifies its robots.txt, retroactively clearing the content from the IA.
I understand IA's policy of abiding by robots.txt, but when someone needs to be held accountable for what they said, having a single source that can serve as a living embodiment of "the Internet never forgets" would be quite nice.
Re: (Score:2)
Can I archive the archived page?
Re:robots.txt to block Wayback Machine (Score:5, Informative)
Until the domain's new owner sets up a robots.txt, causing Wayback Machine to retrospectively block public access to the archived copy of a document. See debate about this a year and a half ago.
Except they don't do that any more [bit-tech.net], unless the domain's new owner explicitly blocks the Internet Archive's user agent. A wildcard disallow policy is now ignored.
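For illustration, the difference looks like this in robots.txt; ia_archiver is the user agent commonly associated with the Archive's crawler, though treat that detail as an assumption:

    # Ignored by the Wayback Machine since the policy change:
    User-agent: *
    Disallow: /

    # Still honored: a rule that explicitly names the Archive's crawler
    User-agent: ia_archiver
    Disallow: /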
Re: (Score:2)
Thank you for the update. The Daily Pangram 1-550 is saved.
Re: (Score:1)
Back in the day (when things were just getting built), I would bookmark interesting sites like crazy, never thinking that they would disappear. So when, sometime later, I'd try to revisit a site: Boom! It was gone (yes, I made a sound like that). So now I scrape the pages I'm interested in, and they're there "forever" on my local HD. Except now even that's getting difficult, as some sites are built on demand in JS, so "save page as" doesn't really save everything I think it saves. So if the site's backend
Re: (Score:1)
In the early days of the internet (mid 1990s to 2000), I used AT&T's web browser. This had the wonderful feature of closing the web browser the minute the dial-up connection was lost. So to avoid losing downloads, I'd just save every web page first, then read it.
There are command-line utilities to download a web page from a URL, e.g. wget.
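For example, this standard wget invocation saves a page together with the images and stylesheets it needs, and rewrites its links for offline reading (the URL is a placeholder):

    wget --page-requisites --convert-links --adjust-extension https://example.com/article.html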
Re: (Score:2)
That's why Chrome's "Save to PDF" is invaluable. Because sooner or later the website / web page WILL disappear.
Just watch, people will ruin it (Score:4, Insightful)
You just know people will file DMCA takedowns for their content archived on Wayback, breaking the links yet again.
Because people are petty and obsessed with controlling their content even though they're not making money from it anymore and they would have otherwise forgotten about it completely.
Re: (Score:2)
What makes you think this will happen? These 404s are usually the result of people reorganizing a site, retiring a blog, etc. They probably don't even know about it.
Re:Just watch, people will ruin it (Score:5, Interesting)
Except the Internet Archive is a recognized library, which means they actually have the power to ignore DMCA takedowns. In fact, as a library they get a lot of exemptions from the DMCA. It's why they host a lot of copyrighted material for free.
It's one of the few positives of the DMCA.
Re: (Score:2)
One of the interesting protections lets you upload files, and even if the link is DMCA'd, all that does is hide the link from the site's search feature and browsing sections. The files stay up if you know the direct link. There are a lot of legally gray files like ROMs on there because of that.
how are these links broken in the first place? (Score:1)
How are these 9 million links broken in the first place?
Wikipedia has a useful and seemingly complete archive of every version and edit of every article. I'm curious how these broken links originate, and how they differ from those that are available in the Wikipedia revision history.
Re: (Score:2, Informative)
It's not like somebody broke the links by editing Wikipedia. Websites disappear all of the time.
Re:how are these links broken in the first place? (Score:5, Informative)
Link rot.
Even websites that have been around for decades experience it, because they change the structure of their site, breaking links to articles that might even still be available.
If you follow a CNN link from 15 years ago, it probably won't work.
It's a bit scary to think how much of our history we're losing to link rot and archive.org is doing their best to fight it. They are awesome people.
Re: (Score:2)
Link rot is all too familiar to me. Local newspapers, such as nola.com in New Orleans, are impossible to search because of site updates that wiped out their entire history.
I had not realized there were nine million such links on Wikipedia, as they tend to mind such matters more closely than the commercial media companies. (Exceptions are the NYT and the Washington Post, which in my experience do pretty well at keeping old links working, or at least redirected to the same content.)
I donate to archive.org partly be
Re: (Score:2)
Link rot.
Even websites that have been around for decades experience it, because they change the structure of their site, breaking links to articles that might even still be available.
If you follow a CNN link from 15 years ago, it probably won't work.
It's a bit scary to think how much of our history we're losing to link rot and archive.org is doing their best to fight it. They are awesome people.
Or it's like how Web searches frequently ended up back in the still-adolescent days of the Web.
You search on a topic, and the first couple dozen pages are all different sites that link to the same page, which links to the same page, which links to the same page, which links to what has long since become a 404.
I don't know what was more frustrating: the fact that no one could be arsed to create backup sources for info, or the searches where you have an important question about something (perhaps a tech problem), and
Re: (Score:2)
Or more simply, they have reorganised their website directory structure: site.com/developersupport/demos/mainindex.html suddenly becomes site.com/presentations/demos/mainindex.html
Re: (Score:2, Informative)
This is about external links, not wikilinks.
Accurate? (Score:1)
Those articles were deleted for a reason!
These Nazis are trying to plug up the memory hole!
Shut it down!
So now we link to outdated information (Score:1)
and that's okay? Web sites go down for a variety of reasons, and one of them is to delete outdated information, or just information that the site owner no longer wants to display. So with this system, if Wikipedia has ever cited a page, it never goes away. Now maybe the site owner is just lazy and is being "protected" from his laziness by this project. Or maybe the site owner eliminated the information because he legitimately wanted to. In that case this project is contrary to his desires. It's just another
Re: (Score:2)
Most academic links go down because the student no longer works there and the research lab has a clear-out of old documents and web pages.
Re: So now we link to outdated information (Score:2)
Since you're assigning responsibility for updating outdated content, why isn't it the responsibility of the cited website's author to update their page, rather than taking it down?
In my experience with Wikipedia dead links, it's almost always a case of a server no longer existing or a site changing their CMS without setting up redirects.
You can do this yourself... (Score:1)