Internet Archive Says It Has Restored 9 Million Broken Wikipedia Links By Directing Them To Archived Versions in Wayback Machine (archive.org)

Mark Graham, the Director of Wayback Machine at Internet Archive, announces: As part of the Internet Archive's aim to build a better Web, we have been working to make the Web more reliable -- and are pleased to announce that 9 million formerly broken links on Wikipedia now work because they go to archived versions in the Wayback Machine.

For more than 5 years, the Internet Archive has been archiving nearly every URL referenced in close to 300 Wikipedia sites as soon as those links are added or changed, at a rate of about 20 million URLs per week. And for the past 3 years, we have been running a software robot called IABot on 22 Wikipedia language editions looking for broken links (URLs that return a '404', or 'Page Not Found'). When broken links are discovered, IABot searches the Wayback Machine and other web archives for copies to replace them with. Restoring links ensures Wikipedia remains accurate and verifiable and thus meets one of Wikipedia's three core content policies: 'Verifiability.'
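The repair loop the summary describes can be sketched against the Wayback Machine's public availability API. The endpoint is real; the function names and simplified logic below are illustrative, not IABot's actual code:

```python
import urllib.parse
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available"

def is_broken(url, timeout=10):
    """Return True if the URL answers with an HTTP error (e.g. 404)
    or cannot be reached at all."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except Exception:
        # HTTPError (404 etc.), DNS failure, timeout -- all count as broken.
        return True

def availability_query(url):
    """Build the request URL for the Wayback availability API."""
    return WAYBACK_API + "?" + urllib.parse.urlencode({"url": url})

def pick_snapshot(api_response):
    """Pull the closest archived snapshot URL out of the API's JSON
    reply, or return None if nothing was archived."""
    closest = api_response.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None
```

A bot would then rewrite the citation to point at the returned snapshot, which has the form `https://web.archive.org/web/<timestamp>/<original-url>`.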

  • by Anonymous Coward on Tuesday October 02, 2018 @09:58AM (#57410754)

    Archive.org is precious! For a long time now I've preferred to send my students the archived version of web pages. If a page is not there, I upload it. That way I can reuse a web page many years later and trust it is still there. Phenomenally simple!

    • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Tuesday October 02, 2018 @10:06AM (#57410806) Homepage Journal

      Until the domain's new owner sets up a robots.txt, causing the Wayback Machine to retroactively block public access to the archived copy of a document. See the debate about this from a year and a half ago [slashdot.org].

    • by Anonymous Coward

      Back in the day (when things were just getting built), I would bookmark interesting sites like crazy, never thinking that they would disappear. So, when sometime later I'd try to revisit a site, boom! It was gone (yes, I made a sound like that). So now I scrape the pages I have an interest in, and they're there "forever" on my local HD. Except now even that's getting difficult, as some sites are built on demand in JS, so "save page as" doesn't really save everything I think it saves. So if the site's backend

      • by mikael ( 484 )

        In the early days of the internet (mid-1990s to 2000), I used AT&T's web browser. It had the wonderful feature of closing itself the minute the dial-up connection was lost. So to avoid losing downloads, I'd just save every web page first, then read it.

        There are command-line utilities to download a web page from a URL, e.g. wget.
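        In the same spirit, here is a minimal sketch using only Python's standard library. It grabs just the raw document, with the same JS limitation the parent comment mentions; wget itself can additionally pull in a page's assets with flags like --page-requisites:

```python
import urllib.request
from pathlib import Path

def save_page(url, dest):
    """Fetch a URL and write the raw response bytes to a local file.
    Note: this saves only the document itself -- no images, CSS, or
    anything rendered client-side by JavaScript."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = resp.read()
    Path(dest).write_bytes(data)
    return len(data)
```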

      • That's why Chrome's "Save to PDF" is invaluable: sooner or later the website or web page WILL disappear.

  • by ZorinLynx ( 31751 ) on Tuesday October 02, 2018 @09:59AM (#57410764) Homepage

    You just know people will file DMCA takedowns for their content archived on Wayback, breaking the links yet again.

    Because people are petty and obsessed with controlling their content even though they're not making money from it anymore and they would have otherwise forgotten about it completely.

    • by MobyDisk ( 75490 )

      What makes you think this will happen? These 404s are usually the result of people reorganizing a site, retiring a blog, etc. They probably don't even know about it.

    • by tlhIngan ( 30335 ) <slashdot&worf,net> on Tuesday October 02, 2018 @01:00PM (#57412148)

      You just know people will file DMCA takedowns for their content archived on Wayback, breaking the links yet again.

      Because people are petty and obsessed with controlling their content even though they're not making money from it anymore and they would have otherwise forgotten about it completely.

      Except the Internet Archive is a recognized library, which means they actually have powers to ignore DMCA takedowns. In fact, as a library they get a lot of exceptions to the DMCA. It's why they host a lot of copyrighted material for free.

      It's one of the few positives of the DMCA.

      • One of the interesting protections lets you upload files and even if the link is DMCA'd, all that would do is just hide the link from its search feature and browsing sections. The files stay up if you know the direct link. There's a lot of legally gray files like ROMs on there because of that.

  • How are these 9 million links broken in the first place?

    Wikipedia has a useful and seemingly complete archive of every version and edit for every article. I'm curious how these broken links originate, and how they differ from what is available in the Wikipedia revision history.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      It's not like somebody broke the links by editing Wikipedia. Websites disappear all of the time.

    • by ZorinLynx ( 31751 ) on Tuesday October 02, 2018 @10:16AM (#57410880) Homepage

      Link rot.

      Even websites that have been around for decades experience it, because they change the structure of their site, breaking links to articles that might even still be available.

      If you follow a CNN link from 15 years ago, it probably won't work.

      It's a bit scary to think how much of our history we're losing to link rot and archive.org is doing their best to fight it. They are awesome people.

      • by wistlo ( 1416337 )

        Link rot is all too familiar to me. Local newspapers, such as nola.com in New Orleans, are impossible to search because site updates wiped out their entire archives.

        I had not realized there were nine million such links on Wikipedia, as Wikipedia tends to mind such matters more closely than the commercial media companies do. (Exceptions are the NYT and the Washington Post, which in my experience do pretty well at keeping old links working, or at least redirected to the same content.)

        I donate to archive.org partly be

      • Link rot.

        Even websites that have been around for decades experience it, because they change the structure of their site, breaking links to articles that might even still be available.

        If you follow a CNN link from 15 years ago, it probably won't work.

        It's a bit scary to think how much of our history we're losing to link rot and archive.org is doing their best to fight it. They are awesome people.

        Or it's like how Web searches frequently ended up back in the still-adolescent days of the Web:

        You search on a topic, and the first couple dozen pages are all different sites that link to the same page that link to the same page, that link to the same page, that link to what has long since become a 404.

        I don't know what was more frustrating: the fact that no one could be arsed to create backup sources for info, or the searches where you have an important question about something (perhaps a tech problem), and

    • Re: (Score:2, Informative)

      by Anonymous Coward

      This is about external links, not wikilinks.

  • Do they provide a health plan? The workers or their children might need hospital care.
  • by Anonymous Coward

    Those articles were deleted for a reason!

    These nazis are trying to plug up the memory hole!

    Shut it down!

  • and that's okay? Web sites go down for a variety of reasons, and one of them is to delete outdated information, or just information that the site owner no longer wants to display. So with this system, if Wikipedia has ever cited a page, it never goes away. Now maybe the site owner is just lazy and is being "protected" from his laziness by this project. Or just maybe the site owner eliminated information because he legitimately wanted to. In that case this project is contrary to his desires. It's just another

    • by mikael ( 484 )

      Most academic links go down because the student no longer works there, and the research lab has a clean out of old documents and web pages.

    • Since you're assigning responsibility for updating outdated content, why isn't it the responsibility of the cited website's author to update their page, rather than taking it down?

      In my experience with Wikipedia dead links, it's almost always a case of a server no longer existing or a site changing their CMS without setting up redirects.

  • The Amber project, http://amberlink.org/ [amberlink.org] provides a plugin for various content management systems to do the same thing on your own site.
