Fixing Broken Links With the Internet Archive
eggboard writes "The Internet Archive has copies of Web pages corresponding to 378 billion URLs. It's working on several efforts, some of them quite recent, to help deter link rot, or to assist when links go bad. Through an API for developers, WordPress integration, a Chrome plug-in, and a JavaScript lookup, the Archive hopes to help people find at least the most recent copy of a missing or deleted page. More ambitiously, they instantly cache any link added to Wikipedia, and want to become integrated into browsers as a fallback rather than showing a 404 page."
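The developer API mentioned in the summary is presumably the Wayback Machine Availability API (`archive.org/wayback/available`), which returns JSON describing the archived snapshot closest to a requested date. A minimal sketch of using it from Python; the `sample` response below is illustrative of the JSON shape, not a live result:

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

def closest_snapshot(page_url, timestamp=None):
    """Query the Availability API for the archived copy closest to
    `timestamp` (YYYYMMDD); returns its URL, or None if unarchived.
    Makes a live network call."""
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    api = "https://archive.org/wayback/available?" + urlencode(params)
    with urlopen(api) as resp:
        return extract_snapshot_url(json.load(resp))

def extract_snapshot_url(payload):
    """Pull the closest snapshot's URL out of the API's JSON response."""
    closest = payload.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None

# Illustrative response shape (not a live result):
sample = {
    "url": "example.com",
    "archived_snapshots": {
        "closest": {
            "available": True,
            "status": "200",
            "timestamp": "20140101000000",
            "url": "http://web.archive.org/web/20140101000000/http://example.com/",
        }
    },
}
print(extract_snapshot_url(sample))
```

An empty `archived_snapshots` object means no copy exists, so `extract_snapshot_url` returns None and the caller can fall back to the normal 404 behavior.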
Re:No. 404 is important! (Score:5, Informative)
Chillax, dude, it's simply a matter of implementation and preferences.
While archive.org might think this is a new idea, I've been using Errorzilla mod [jaybaldwin.com] for the good part of a decade. When a 404 is encountered, you get the regular error page, and then it adds some buttons that let you look at the Google cache, Coral cache, Wayback archive, etc.
Quite useful and non-harmful.
Re:No. 404 is important! (Score:4, Informative)
The only way this can be implemented without causing problems for others is to make it a browser option, so that only those who want the additional lookup get it.
That is the proposal. The browser does it. The web server still returns 404, so your code does not have to work around anything. This is not the NXDOMAIN redirection fiasco.
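The division of labor the parent describes can be sketched as client-side logic: the server keeps returning 404 untouched, and the browser (or an extension) decides whether to offer an archived copy. A hypothetical sketch in Python; `lookup` stands in for an Availability-API client and is injected so the logic is testable offline:

```python
def resolve(url, status, lookup):
    """Browser-side fallback: the server's 404 stays intact; we only
    decide what to show the user.  `lookup` is any callable mapping a
    dead URL to an archived copy's URL, or None if there isn't one."""
    if status != 404:
        return url                        # page is fine; never rewrite it
    archived = lookup(url)
    return archived if archived else url  # no copy: show the 404 as usual

# Stub lookup standing in for a real Wayback query:
def fake_lookup(url):
    return "http://web.archive.org/web/2014/" + url if "gone" in url else None

print(resolve("http://example.com/gone", 404, fake_lookup))
print(resolve("http://example.com/ok", 200, fake_lookup))
```

Because the rewrite happens only after the server has answered 404, site owners' error handling and monitoring see nothing different, which is what distinguishes this from DNS-level NXDOMAIN redirection.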