Inside the Internet Archives

blackbearnh writes "O'Reilly Media is running an interview with Gordon Mohr, Chief Technologist for the Internet Archive (archive.org). If you've ever wondered how pages are selected for archiving, or just how they manage such a huge quantity of data, the answers are here. The interview also touches on the problems of intellectual property in archives, archiving the Internet in a post-Web 2.0 world, and the potential vulnerabilities exposed by archiving web sites that may include security exploits."
  • by Anonymous Coward on Wednesday June 18, 2008 @09:51AM (#23839215)
    My God, it's full of ones and zeros!
  • by Itninja ( 937614 ) on Wednesday June 18, 2008 @09:53AM (#23839251) Homepage
    The Interviewer: And I'm not sure I want to think about what posterity is going to think about a recording of my Twitter feed.

    If Twitter becomes so mainstream as to be more than a 'remember when?' to posterity, I will kill myself.
    • Not only that, but surely it points out how stupid and pointless some of the stuff he posts to Twitter is?

      Hopefully they've got a reasonable enough algorithm that it can pick the useful sites from the random blog crap.
    • Re: (Score:2, Funny)

      by GregNorc ( 801858 )
      I really like Penny Arcade's comic about twitter [penny-arcade.com].

      Twitter seems useless to me. Maybe if my friends used it I might, but for now an away message or facebook status does the job just fine.
      • Re: (Score:2, Funny)

        by negRo_slim ( 636783 )

        Maybe if my friends used it I might,
        That's what I always thought, but then I realized how few shits I'd give about what my friends would write... and it all became clear: all those sites are pointless!
    • And in the unlikely event that Twitter does become more than that, then thanks to the Internet Archive, we will be able to remind you of your previous commitments.
    • Re: (Score:3, Funny)

      by Chyeld ( 713439 )

      If Twitter becomes so mainstream as to be more than a 'remember when?' to posterity, I will kill myself.
      --
      I am ten ninjas.
      Let's be realistic here, you are ten ninjas. You will be killing yourself regardless.
    • Can we expand upon this thread? I have a vehement hatred of Twitter (never used it myself) but I can't quite put my finger on why. I usually like to know why I hate something, but when it comes to Twitter I am sputtering for words...vapid...inane...blonde.
      • by Gulthek ( 12570 )
        Why waste time hating something you don't use? Especially something you don't have to use?

        Twitter is just a website that allows people to post snippets of text that other people can subscribe to. That's it.

        Do you also have an unreasonable hatred of some book genres you don't read? Movies you don't watch? Videogames you don't play? Sports you don't care about? If so, why?
    • When you chamber the bullet, can you let us all know via Twitter?
  • by AliasMarlowe ( 1042386 ) on Wednesday June 18, 2008 @10:03AM (#23839413) Journal
    and does archive.org record google's cache?
  • I'm going to have to RTFA! I keep wondering why my old Quake site the Springfield Fragfest (abandoned years ago) is in the Wayback Machine, while "Kneel" Harriot's Yello There has seemingly disappeared from the entire internet.

    The only page from Yello There I can find is one that was linked from my site (the aforementioned Quake site). That particular page was wonderfully recursive because of Yello's frame.
  • by jacquesm ( 154384 ) <j@wwAUDEN.com minus poet> on Wednesday June 18, 2008 @10:07AM (#23839457) Homepage
    I keep running into bookmarks that have gone AWOL, then find that archive.org also doesn't have the pages anymore.

    Combining a bookmarking / caching service would be really handy. (A minimal sketch of the idea appears at the end of this thread.)
    • by blhack ( 921171 ) on Wednesday June 18, 2008 @10:20AM (#23839671)

      Combining a bookmarking / caching service would be really handy.
      I heard that Lexmark makes one, it's called a "printer".
      • Re: (Score:3, Insightful)

        by jacquesm ( 154384 )
        hehe, yes, so true, but then you can't access it electronically any more.

        I really think the bookmark + cache would be a nice thing to have without resorting to 'dead tree' format.

        But it's a good point, a printer would be an easy way to collect stuff that you really want / need to keep.
        • by cffrost ( 885375 )

          hehe, yes, so true, but then you can't access it electronically any more.
          I heard that Canon makes an automatic printer>scanner>shredder.
    • by RareButSeriousSideEf ( 968810 ) on Wednesday June 18, 2008 @10:44AM (#23840077) Homepage Journal
      Yeah, how exactly do pages go AWOL from archive.org? I've encountered that, plus pages suddenly acquiring META refresh tags (maybe through an external script or iframe?) that now redirect to some domain squatter's site. Extremely annoying. I'm going to have to mess around with wget to see what's in the markup, unless someone can suggest an easier way to get at such content.

      Combining a bookmarking / caching service would be really handy.
      Furl [furl.net] fits that bill, doesn't it?

      • Thank you!

        I'd never even heard of them.
      • Re: (Score:3, Informative)

        by TTK Ciar ( 698795 ) *

        New material is always being added to The Archive's web archive, and (afaik) unlike the collections archive it is never deliberately deleted. Most of what appears to be "pages going AWOL" is indexing errors. In order for newly archived stuff to become visible to the wayback machine interface, the entire web archive needs to be periodically re-indexed. Unfortunately the indexing process is error-prone, and stuff that might have been accessible before the re-index might disappear afterwards (and appear again after the next one).

    • Re: (Score:2, Informative)

      by Alphax.au ( 913011 )

      Combining a bookmarking / caching service would be really handy.
      WebMynd [mozilla.org] claims to do that; I haven't tried it myself though.
    • I tried to find a Hogan's Heroes page that I did in 1996, so I could find whatever email address I used and make an attempt at getting back my original Slashdot UID. No luck for me.

      I blame Internet Archive for this and ask twitter and his sock puppet army to start ranting about this horrible horrible travesty as well. The loss of my 5 digit uid is as bad as Gitmo and waterboarding combined!
    • by Burz ( 138833 )

      Combining a bookmarking / caching service would be really handy.

      Install the Scrapbook add-on for Firefox, [mozilla.org] which does exactly that. You can save URLs, pages, sets of linked pages, and/or selected areas simply by right-clicking and selecting 'Capture'.

      The pages are saved in a folder of your choosing, can be organized in a hierarchy, and can be searched and viewed using the add-on; it even has a feature to 'refresh' a saved page from its URL, or just send you to the page's original URL.

      Finally, it has a quick element editor that lets you remove page elements, add 'sticky' notes, and so on.

    • I'd love it if they would archive political sites as seen at various addresses, and make them available to the public - the RNC homepage (at least once upon a time) was completely different when logging in from different areas of the country - graphics, photos, everything custom to the geography from which one logged in.
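
A minimal sketch of the "bookmark plus cache" idea raised in this thread, in Python. Everything here is illustrative (the cache directory, the naming scheme, the example URL); it just shows that the core of the idea is a fetch plus a timestamped local write:

    # bookmark_cache.py -- hypothetical "bookmark + cache" sketch.
    # Saves a timestamped local copy of each bookmarked page, so the
    # bookmark still works after the original goes AWOL.
    import os
    import time
    import urllib.request

    CACHE_DIR = os.path.expanduser("~/.bookmark_cache")  # illustrative path

    def bookmark(url):
        """Fetch the page now and store it under a timestamped filename."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        stamp = time.strftime("%Y%m%d%H%M%S")  # Wayback-style timestamp
        safe_name = url.replace("://", "_").replace("/", "_")
        path = os.path.join(CACHE_DIR, stamp + "_" + safe_name + ".html")
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        with open(path, "wb") as f:
            f.write(data)
        return path

    if __name__ == "__main__":
        print(bookmark("http://example.com/"))

A real service (like the Furl and Scrapbook tools mentioned above) would also keep a URL-to-file index, capture linked assets, and handle re-fetching; but the caching step itself is this small.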
  • is clearly because someone wanted the world's largest porn collection!

    From TFA:
    "there's a lot of porn on the internet, so there's a lot of porn that gets collected when you're archiving the whole internet"
  • Quick - all you naysayers, start jumping up and down!

    JT: You mentioned that you use a lot of Open Source and in-house developed software. I assume the underlying operating system is something Open Source(y)?
    GM: Yes, yes; we've moved over the years from a Redhat version to a brief use of something that was pure Debian to now using almost exclusively Ubuntu.
    Personally, I'd like to think Ubuntu is used because it's relatively easy to use, and Just Works(TM).
    • Re: (Score:3, Insightful)

      by TTK Ciar ( 698795 ) *

      The transition from Debian to Ubuntu was driven by developers' desire for more and newer features. We originally went with Debian-Stable because it was, well, stable, and did everything we needed the PetaBox to do at the time. But programmers whined and moaned that such-and-such package wasn't supported, or was too old, and claimed that this held back development of features which Brewster wanted to see made into reality.

      Brewster was never much for stability anyway, so the transition was made. It bit us.

  • Wayback (Score:5, Informative)

    by TheRealMindChild ( 743925 ) on Wednesday June 18, 2008 @10:18AM (#23839657) Homepage Journal
    While I love the wayback machine, a little "problem" crept in a couple of years ago that is still there... and it drives me nuts.

    At one point, I forgot to renew my domain name and a squatter snatched it up the second it was available. I have since lost the HTML/Java applets/images/etc. that I originally had there. I used to show people what it looked like via the wayback machine. But you can't do it anymore. Example: http://web.archive.org/web/*/http://www.mindchild.net [archive.org]

    Apparently, the current squatter put a robots.txt on that domain, and wayback refuses to show any ARCHIVED pages where the domain CURRENTLY has a robots.txt. I emailed them about it, and after a couple of months, I actually got a reply pretty much saying "That is just the way it is. We are underfunded and have no time to fix it. Sorry".

    So if for some reason you don't want to have your site viewable via the wayback machine, just put up a robots.txt. It doesn't even need to contain anything. (See the sketch at the end of this thread for how standard parsers treat these robots.txt cases.)
    • Re: (Score:2, Insightful)

      by ibwolf ( 126465 )
      This is an unfortunate side effect of their policies, but it is very understandable that they would like to err on the side of caution.

      Should the robots.txt ever go away or change then your old stuff will become accessible again.
    • Re:Wayback (Score:4, Insightful)

      by iangoldby ( 552781 ) on Wednesday June 18, 2008 @10:29AM (#23839843) Homepage

      wayback refuses to show any ARCHIVED pages where the domain CURRENTLY has a robots.txt.
      In true Raymond Chen style, think about what the world would be like if this weren't true: if it weren't true, then a site owner would have no way to remove his content from the Wayback Machine retrospectively. That raises far more problems than the ability of a new owner to remove a previous owner's content.
      • Re:Wayback (Score:5, Insightful)

        by SydShamino ( 547793 ) on Wednesday June 18, 2008 @10:44AM (#23840079)

        If it wasn't true, then a site owner would have no way to remove his content from the Wayback Machine retrospectively.
        I don't necessarily disagree with their policy, but this is the wrong argument for it.

        If you publish something, you lose the right to withdraw it from the public archives retrospectively. That's part of the "contract" (term used figuratively) with the public that establishes the foundation of copyright law.

        If you don't want it to appear on the Wayback Machine, you have a mechanism called robots.txt. That's already more than you have if you publish a book and want to keep it out of libraries. In neither case, though, do you have the right to demand or expect the content to be removed from the archive on your request.

        I see what the archive does as a courtesy service, not something that the site owners should expect.
        • Well, inclusion in the Wayback Machine is optional, albeit opt-out rather than opt-in. So presumably the argument is not over whether a site owner should have control over whether his content is archived in the Wayback Machine.

          But if you couldn't remove content retrospectively, then exclusion would be optional only for site owners who happen to know about the Wayback Machine and its robots.txt policy. That specifically is the thing that I would find unjustifiable.
          • by sp332 ( 781207 )
            If you put it on the internet, it is expected that you want people to see it. I usually prefer opt-in to opt-out, but this is a case where the content is ALREADY PUBLIC. In this case, any opt-out is being generous.
          • It's already well established that robots.txt is required to avoid having your data indexed and archived. Perhaps mom and pop websites don't know this, but I would argue that mom and pop don't understand large portions of the copyright law, and that this is just one small part. I think at this point that any "copyright law for dummies" book, if not bought and paid for by the xxAA, would include this information in the section on web publishing.
            • Forget legalistic arguments. The problem that I see with your position is that you are making ignorance the 'unforgivable sin'. ('Unforgivable' in the sense that once committed you can't ever correct it.)

              • Again, once you publish something, you forever lose the "right" to keep it from entering the public domain, free for all use with no restrictions, at some point in the future. This is no different than your "right" to prevent a library from owning and lending a copy of your book long after it's out of print. Neither exists, and neither should.

                That's a basic tenet of copyright. Without copyright, too many people would keep their works unpublished and hidden, preventing great art from ever becoming part of the public domain.

      • "If it wasn't true, then a site owner would have no way to remove his content from the Wayback Machine retrospectively."

        Well, what the heck is the point of a Wayback Machine that refuses to way back, then?
    • Re: (Score:3, Informative)

      by corsec67 ( 627446 )

      User-agent: ia_archiver
      Allow: /
      in the robots.txt that you mention (http://mindchild.net/robots.txt) is hardly "not containing anything."

      But, it is interesting how they take the current robots.txt to apply to old content that used to be at that location...
      • Re:Wayback (Score:4, Insightful)

        by RareButSeriousSideEf ( 968810 ) on Wednesday June 18, 2008 @10:52AM (#23840205) Homepage Journal
        Ideally they could obey the robots.txt at the time of archiving, and simultaneously grab a snapshot of the whois record. In the future, new robots.txt files would by default only take away previously archived content if the domain hadn't changed hands. This would keep squatters from killing the archive, and the original copyright owner could always actively request removal of content if s/he matched the old whois record (though this would take manpower at archive.org, which is a problem).

    • Re: (Score:3, Funny)

      by oodaloop ( 1229816 )
      I'm sorry you feel that way. I, for one, welcome our robots.txt overlords.
    • by gojomo ( 53369 ) on Wednesday June 18, 2008 @12:55PM (#23842231) Homepage
      Unfortunately, this "squatters-add-robots-restrictions" problem comes up a lot.

      We'd like to address it, and to do so there are two major issues to be tackled: (1) our current Wayback Machine software only excludes sites on a "for all time" basis; (2) short of mechanistically trusting the current domain owner, determining who has the right to exclude or restore material could be a very labor-intensive, error-prone, and liability-compounding process.

      The new open-source 'Wayback' software, which will go live for the Worldwide Wayback Machine later this year, enables time-range exclusions. (It's currently used only for the many smaller collections we do for partners.) That should give us the capability to address (1). Addressing (2) will require further discussion about the proper and efficient policies -- but it's on our agenda once the technical capability for time-range exclusions is in place.

      Specifically regarding the mindchild.net site you mention, it looks like the issue is that our current retroactive-exclude robots.txt-parser doesn't understand the 'Allow' directive. (The mindchild.net/robots.txt tries to enable ia_archiver/WaybackMachine access via an 'Allow'.) That too will be fixed in the new 'Wayback' deploy (if not sooner).

      - Gordon @ IA

    • by ljw1004 ( 764174 )
      Simple. Use the wayback-machine to see how the wayback-machine used to display your page before it instituted its robots.txt policy.
    • From the site's current robots.txt:

      User-agent: ia_archiver
      Allow: /

      Now that's irony. (Actually, is that irony? I'm always a bit worried I might get it wrong, since the whole Alanis Morissette thing.)
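
The robots.txt mechanics argued over in this thread are easy to test against standard parsing rules. A small sketch using Python's standard urllib.robotparser, with the three robots.txt bodies taken from the cases discussed above (an empty file, the explicit Allow found on mindchild.net, and a blanket Disallow):

    # Does a given robots.txt permit the Wayback crawler's user-agent?
    # urllib.robotparser understands both Disallow and Allow lines.
    import urllib.robotparser

    def allows_ia_archiver(robots_txt, url):
        rp = urllib.robotparser.RobotFileParser()
        rp.parse(robots_txt.splitlines())
        return rp.can_fetch("ia_archiver", url)

    # An empty robots.txt disallows nothing.
    print(allows_ia_archiver("", "http://example.com/page"))  # True

    # The explicit Allow from mindchild.net's robots.txt.
    explicit = "User-agent: ia_archiver\nAllow: /\n"
    print(allows_ia_archiver(explicit, "http://example.com/page"))  # True

    # A blanket Disallow.
    blocked = "User-agent: *\nDisallow: /\n"
    print(allows_ia_archiver(blocked, "http://example.com/page"))  # False

Note that by standard semantics the first two cases permit crawling; per Gordon's reply above, the Archive's then-current retroactive-exclusion parser didn't understand Allow, which is why mindchild.net stayed hidden anyway.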

  • I had a cheesy site back in college where I played around with HTML and learned the basics. I ended up making a few pages that poked fun at friends.

    I went to archive.org years later looking for them, 'cause I remembered that back in the day they'd nabbed 'em. Now the images and sounds I used are all gone.

    I wanted to recreate a page from that archive for nostalgia reasons with my old friends. Can't do it and I can't find the files anymore in my local archives.

    I was kinda disappointed, but I guess it's gone for good.
  • They use Alexa???

    Ew.

  • by bcrowell ( 177657 ) on Wednesday June 18, 2008 @10:43AM (#23840071) Homepage

    I was left with several questions that weren't addressed by the article.

    The slashdot summary says the article explains how pages are selected for archiving, but I couldn't find anything in the article that explicitly explained that. It does say that the actual crawler is run by Alexa, which hands off the data to them, but it didn't say what the criteria were. Alexa computes various stats about web sites, so presumably they could apply some kind of minimum cut. Or do they try to index every single lame personal page, unless the owner opts out? That seems like it would require an unreasonable amount of disk space. The web also has a lot of stuff like, e.g., the kind of spam sites that try to scam Google's search/ad system; I wonder if the archive records those.

    The article didn't say a darn thing about funding. They have to run thousands of machines, so the electric bills must be formidable. Where the heck do they get their money? Is there a significant chance that their funding will dry up at some point in the future, and the whole archive will disappear?

    The article states that they moved from plain Debian to Ubuntu. That surprised me, and I was curious why they'd do that. E.g., if you're shopping for webhosts, it's much more common for them to offer plain Debian than Ubuntu. I love Ubuntu as a desktop distro, but it surprises me that they'd see any big advantage in using Ubuntu for their application.

  • by dbarron ( 286 ) on Wednesday June 18, 2008 @10:45AM (#23840117)
    Check this out... it reads like a free software update blog :)
    http://web.archive.org/web/19980113191222/http://slashdot.org/ [archive.org]
  • AAAARGH!1!

    I can't stand "post [xyz] world", "pre [xyz] mindset" or any such similar phrases. Go away, GO AWAY!!!!

    Really, the archive is tasked with 'saving' the internet every so often. I'm sure they'll figure out how to save AJAX stuff. And if not, then that stuff isn't really meant to be saved, now is it? (I mean, we don't need a save of Gmail, since it's account based.)
  • Back in 2003, the Internet Archive guys set up a new project called "Recall" which theoretically would allow somebody to do a Google-style search through the collected material irrespective of the data. 3D searches through the data stacks.

    This was very exciting! Seriously; you might remember the content of a page you were looking at five years ago, but can you remember its specific web address? --Especially with the turnover and abandoning of domain names, it is entirely possible to simply lose contact with a page altogether.

    • Dumb typo. Perhaps not obviously, I meant in the first line of the above post, "irrespective of the date"

      Normally I let typos go; people are generally forgiving and will read around them knowing that they are just as susceptible to making errors, but in the case of those typos which don't just create a spelling mistake, but actually switch the meaning of an entire sentence, I will sometimes haul myself to the task of writing a short retraction. Just like this one.

      Cheers.

      -FL

      • You mean they recalled Recall? ...

        I too would like Google to store old versions of sites a little longer and keep sites that have disappeared.
        Ah well, guess that isn't their purpose.
    • by gojomo ( 53369 ) on Thursday June 19, 2008 @01:08AM (#23851181) Homepage

      'Recall' wasn't exactly Google-like search. IIRC, in some respects it was better, with an advanced idea of related concepts, and with data on frequency of terms over time. In other respects, it was not what people would expect: there was no exact phrase matching, and certain terms that didn't become tracked concepts weren't findable at all, even though you could see the words in other indexed results. (A toy sketch of the frequency-over-time idea appears at the end of this thread.)

      Unfortunately, IA couldn't maintain the deployment when the developer, Anna Patterson, moved to Google. So, Recall turned out to be a short-lived experiment, grand in scale of pages indexed and novel features but not in traffic served.

      Patterson did big things at Google and now has another search startup, Cuill, that's likely to do more good things for the web.

      At the Internet Archive, we've also been using the open-source projects Nutch and Hadoop to offer search on smaller web collections for our partners. (A pair of such searchable partner collections for the US National Archives and Records Administration lives at webharvest.gov [webharvest.gov].) Someday we may be able to scale these up to the full 11+ year archive.

      - Gordon @ IA
      • Thanks for the info! I hope I didn't seem too gripe-y; I appreciate that you guys are working at all on such a project as the Archive. Though I would indeed love to see it fully searchable one day! Good luck in your continued efforts.


        Cheers!


        -FL
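
Gordon's mention of Recall tracking the frequency of terms over time is worth a toy illustration. A sketch in Python, with an entirely made-up corpus standing in for archived page text grouped by capture year:

    # Toy version of Recall's "term frequency over time": count how often
    # a term appears in archived text, bucketed by capture year.
    from collections import Counter

    # Hypothetical corpus: capture year -> list of page texts.
    snapshots = {
        1998: ["quake fragfest news", "quake maps and mods"],
        2003: ["blog about search engines", "quake is fading"],
        2008: ["twitter twitter twitter", "web 2.0 and ajax everywhere"],
    }

    def frequency_over_time(term):
        """Return {year: occurrences of term across that year's captures}."""
        result = {}
        for year, pages in sorted(snapshots.items()):
            counts = Counter(w for page in pages for w in page.split())
            result[year] = counts[term.lower()]
        return result

    print(frequency_over_time("quake"))    # {1998: 2, 2003: 1, 2008: 0}
    print(frequency_over_time("twitter"))  # {1998: 0, 2003: 0, 2008: 3}

At archive scale the counting would be a distributed job (Hadoop, which Gordon mentions, is built for exactly this shape of computation), but the per-term output is the same: a time series of counts.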

  • I thought he worked for Intel and was the person behind Mohr's Law. Has he changed his interests recently?
  • by Anonymous Coward
    I think it is really weird that EVERY SINGLE news site on the Internet is mysteriously missing any captures from May 2001 to Sept 2001 (maybe one or two days in July are there).

    And then all of a sudden on Sept 11, ALL the news sites have multiple captures per day.

    I want to see what CNN, LA times, Washington Post, etc. had in the news on Sept 8th, 9th and 10th...
  • #1 Why aren't archived pages modified very slightly to insert a <BASE HREF="archive.org/82828282/etc"> tag, so that archived images, sub-pages, and the like will be fetched from the archive, rather than linking to non-existent locations on the current server? Surely the current server operators don't like the dozens of hits from everyone who visits the archive...

    But more than that, it's a PITA to visit an archived page and manually copy and paste every single link, one at a time. And I'm sure most people don't bother. (A sketch of the BASE-tag idea appears at the end of this thread.)
    • Umm, they do this with JavaScript appended to the page.
    • #1: It's done with javascript, as someone else already said.

      #2: The Deriver system's video encoding was the attempt of a few people (usually one, sometimes two, occasionally three, often zero) to get something that would work at all with as much of the content as possible. If a change improved output for some items, while causing new problems for others, it tended not to be adopted. The Archive has always been bad about getting back to volunteers. It takes manpower to reply and incorporate third parties' contributions.

      • #1 I hate Javascript...

        #2 I remain convinced the conversion system was simply set up by woefully unqualified individuals. I've done several such systems that are surely much more complicated. Traveling to the Bay Area in person would be prohibitive. Oh well.

        #3 Thanks for the explanation.
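
The <BASE HREF> idea from the top of this thread is mechanically simple either way. A sketch of the server-side variant the poster asked about, assuming plain string insertion into the archived HTML (as noted in the replies, the Wayback Machine actually rewrites links client-side with appended JavaScript):

    # Sketch: insert a <base href> just after <head> so that relative links
    # in an archived page resolve against the archive rather than the
    # original (possibly dead) server. The archive URL below is illustrative.
    import re

    def add_base_href(html, archive_prefix):
        base_tag = '<base href="' + archive_prefix + '">'
        # Insert after the opening <head> tag if present, else prepend.
        new_html, n = re.subn(r"(<head[^>]*>)", r"\1" + base_tag, html,
                              count=1, flags=re.IGNORECASE)
        return new_html if n else base_tag + html

    page = ("<html><head><title>t</title></head>"
            "<body><img src='pic.gif'></body></html>")
    print(add_base_href(page,
        "http://web.archive.org/web/20080618000000/http://example.com/"))

One caveat the JavaScript approach avoids: a static <base> tag can't catch links that are built by the page's own scripts, which is one plausible reason to rewrite on the client instead.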
