
Library of Congress Offers Update On Huge Twitter Archive Project 88

Posted by samzenpus
from the 140-little-problems dept.
Nerval's Lobster writes "Back in April 2010, the Library of Congress agreed to archive four years' worth of public Tweets. Even by the standards of the nation's most famous research library, the goal was an ambitious one. The librarians needed to build a sustainable system for receiving and preserving an enormous number of Tweets, then organize that dataset by date. At the time, Twitter also agreed to provide future public Tweets to the Library under the same terms, meaning any system would need the ability to scale up to epic size. The resulting archive is around 300 TB in size. But there's still a huge challenge: the Library needs to make that huge dataset accessible to researchers in a way they can actually use. Right now, even a single query of the 2006-2010 archive takes as many as 24 hours to execute, which limits researchers' ability to do work in a timely way."

Comments Filter:
  • Why? (Score:5, Insightful)

    by Anonymous Coward on Monday January 07, 2013 @06:40PM (#42511363)

    Why does the federal government need to archive the useless information Twitter calls tweets? Yet another huge waste of my money (being a taxpayer and all).

    • by Anonymous Coward

      Because we desperately need to know that little Susie just ate some pizza and finished taking a shit 5 minutes ago.

    • Re:Why? (Score:5, Insightful)

      by griffjon (14945) <GriffJon@nosPAM.gmail.com> on Monday January 07, 2013 @06:54PM (#42511593) Homepage Journal

      To paraphrase a quote by the Internet Archive chairman from some years back, "The average lifespan of a Web page today is 100 days. This is no way to run a culture."

      • Re:Why? (Score:5, Insightful)

        by Anonymous Coward on Monday January 07, 2013 @07:10PM (#42511759)

        To paraphrase a quote by the Internet Archive chairman from some years back, "The average lifespan of a Web page today is 100 days. This is no way to run a culture."

        The average life of an inane conversation used to be maybe 15 minutes. I'm not sure the world is a better place for having extended that.

        • by icebike (68054)

          mod parent up.

        • The average life of an inane conversation used to be maybe 15 minutes. I'm not sure the world is a better place for having extended that.

          In the old days of USENET, conversation threads used to run for weeks, sometimes months, actually.

          Not minutes.

          Of course, back then, we actually knew who everyone was, and could ping and finger them.

      • "This is no way to run a culture."

        Teen angst and celebrity gossip are considered culture, but popular movies and music are not. American society at its finest!

    • Re:Why? (Score:5, Interesting)

      by fsterman (519061) on Monday January 07, 2013 @07:28PM (#42511965) Homepage

      Because academia is starved for data. Companies hoarding information limits what we can do with it. The Library of Congress is acting as an aggregate buyer for thousands of individual researchers, which is a huge cost savings.

    • by skids (119237)

      My first reaction was "no, please, don't encourage the twits."

    • Re:Why? (Score:5, Insightful)

      by Hatta (162192) on Monday January 07, 2013 @07:49PM (#42512219) Journal

      Because Twitter is a great model for the spread of ideas. If you study the spread of ideas, you can begin to understand it and use that understanding to affect it. That has enormous value.

      • by nospam007 (722110) *

        "Because Twitter is a great model for the spread of ideas."

        Indeed. We'll have a treasure trove of racist/bigot/whatever messages from 20-30 years ago, when they were young and dumb, for every candidate we are going to vote for.

    • by Dahamma (304068)

      300TB of storage can be built for less than $100k these days. Far from a "huge" waste of money. Though given the value of most Twitter posts, it's still probably a waste of $99,500.

    Because tweets aren't useless - they're as much a part of society's communications as postcards, phone calls, etc... etc... There's a lot of information there about the day-to-day interests and communication patterns of a lot of ordinary people.

      For a historian or a sociologist, that archive is going to be a gold mine.

  • by Anonymous Coward

    Is the limitation hardware or software? Where is the bottleneck?

    Just give me a csv.

    • by ackthpt (218170)

      Is the limitation hardware or software? Where is the bottleneck?

      Just give me a csv.

      Probably a simple hashing routine would cut down on the size 1 = LOL, 10 = ROFL, 11 = ROFLMAO, ...
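Joking aside, dictionary coding of high-frequency tokens is a real (if tiny) compression trick. A toy sketch, purely illustrative:

```python
# Toy dictionary coder: swap frequent phrases for 1-byte codes.
CODES = {"ROFLMAO": "\x01", "ROFL": "\x02", "LOL": "\x03"}
DECODES = {code: phrase for phrase, code in CODES.items()}

def encode(text):
    # Replace longest phrases first so "ROFL" never clobbers "ROFLMAO".
    for phrase in sorted(CODES, key=len, reverse=True):
        text = text.replace(phrase, CODES[phrase])
    return text

def decode(text):
    for code, phrase in DECODES.items():
        text = text.replace(code, phrase)
    return text

msg = "ROFLMAO that was great LOL"
assert decode(encode(msg)) == msg
assert len(encode(msg)) < len(msg)
```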

    • Researchers are hampered by all the CPU cycles going to FBI and CIA searches. (Makes me think of Person of Interest.)
  • provide a limited version of the database with only some information from the tweets, so there's less data to search through? (of course, keep the full data in case a search depends on it)

    • by greenlead (841089)
      I agree. I think they should limit the initial database to certain time spans surrounding events of national interest and "tweets" that seem to be related. They can learn database structure and procedures from there and perhaps later add in the full archive. The most important part of anything like this is metadata. For example, a tweet that says "dudes! this concert rocks!!!" is useless unless you happen to know that the user is at a Trans-Siberian Orchestra concert. And then, if you are able to attach al
      • I meant providing all the tweets in a simpler form, as opposed to excluding some of the tweets entirely, but I suppose it would make sense to at least test on a small subset of tweets first.

          • I don't think they should pick and choose what to keep; the value is in the fact that they are everyday conversations and observations (much like Samuel Pepys' diary). However, I can't think of a reason why an academic would want every tweet for four years; they could get the same insights from a much smaller random sample.
    • by uncanny (954868)

      with only some information from the tweets

      That's a really good idea. Hell, that would probably make their whole project a couple of megabytes!

    • by Instine (963303)
      provide hourly chunks of raw data as torrents.

      Done!
  • by Anonymous Coward

    Just buy batches of 300 of those 1TB flash drives in the article below and pass them out to the researchers as needed?

  • You had better turn on indexing.
    • by Trepidity (597)

      Indexing on data sets of that size is itself a pretty big challenge. You don't want an index that takes years to build, and it doesn't do much good if it's so huge that it is itself super-slow to access.

      There is some research [pdf] [helsinki.fi] on making compressed full-text indexes, but much of it is still research-level.
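For a sense of what even a basic index buys you, here is a toy inverted index over a handful of tweets (illustrative only; the Library's actual schema and tooling aren't described in the article):

```python
from collections import defaultdict

def build_inverted_index(tweets):
    """Map each lowercased term to the set of tweet ids containing it."""
    index = defaultdict(set)
    for tweet_id, text in tweets.items():
        for term in text.lower().split():
            index[term].add(tweet_id)
    return index

tweets = {
    1: "snow in DC today",
    2: "more snow expected",
    3: "touchdown!",
}
index = build_inverted_index(tweets)
print(sorted(index["snow"]))  # → [1, 2]
```

A real full-text index at 300 TB scale would need sharding and compression, which is exactly the research-level problem the parent links to.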

    • by Anonymous Coward

      Deduplication ought to help too.
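A minimal sketch of what hash-based deduplication could look like, assuming exact-duplicate texts (spambot copies, identical retweets) are the target:

```python
import hashlib

def dedupe(texts):
    """Keep one copy of each distinct text; count how often each repeats."""
    seen = {}    # digest -> canonical text
    counts = {}  # digest -> occurrence count
    for text in texts:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen[digest] = text
        counts[digest] = counts.get(digest, 0) + 1
    return list(seen.values()), counts

unique, counts = dedupe(["Touchdown!", "Touchdown!", "It's snowing!"])
print(unique)  # → ['Touchdown!', "It's snowing!"]
```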

  • Archives trash.

    Really, why not record and archive random traffic sounds? Some day when everyone is flitting about in whisper quiet air cars they'll marvel at the cacophony of the present age. Gadzooks!

    • by Reilaos (1544173) on Monday January 07, 2013 @07:00PM (#42511673) Homepage

      Some of the most important historical knowledge comes from things that people at the time wouldn't consider important. Things like grocery lists can help determine the diets and agricultural abilities of a culture at the time.

      For an example I just made up: In the future, the presence or lack of traffic reports could, alongside legal/budget records, help a historian verify the spread/development of roadways.

      Twitter could be a huge source of topics and a wealth of information for historians in the future.

      They may conclude that we were all idiots. This too, counts as useful information.

      • Yep, the ancient rubbish pit is often the most informative part of an archeological dig, but this is more along the lines of Samuel Pepys' diary. Four years' worth of tweets is a bit over the top; IMHO a few random days and a few significant days would be all you really need. I have something similar at home: a large coffee-table book that has one page of newspaper clippings for every month of the 20th century.
  • Classical books, works of art, grand inventions that changed the world... and we chose to archive people pissing about on a Friday night. Good job, America. You've shown the world where your priorities lie.
    • It's illegal to make a copy of any of those other things though.

    • Thing is, YOU don't get to define what future generations think of you and your civilization. If you want to help them form an accurate view rather than just the image you want to portray, then you need to leave some juicy rubbish dumps undisturbed; this is one such dump. I'd question the justification for the size of this particular dump, but you make it sound like they are throwing out Mark Twain to make room for Twitter. You know they have the resources to do both things at the same time, and that this p
      • by Jetra (2622687)
        And what, exactly, is it going to show our future kin if we archive all of Twitter? I'll tell you one thing: swallowing it is going to be very, very hard.
  • by rts008 (812749) on Monday January 07, 2013 @07:08PM (#42511753) Journal

    So, just how many 'Libraries of Congress' are there in 300TB?
    Does this mean that as the archives swell, the metric does also?
    Where does this madness end? ;-)

  • 1 Library of Congress [wikipedia.org] ~ 10 TB of data

    Therefore, the database will be around 30 LoCs in size.

    But, if we consider this database as part of the Library of Congress, we get a fixed-point problem.

  • "The Library's mission is to support the Congress in fulfilling its constitutional duties and to further the progress of knowledge and creativity for the benefit of the American people." (from its website.) No, I don't see how archiving Twits and tweets furthers this mission *at all.*

    It's not much of a step from there to archiving all the phone conversations of all Americans ... oh, wait, sorry. That's in the FBI's mission statement.
    • No, I don't see how archiving Twits and tweets furthers this mission *at all.*

      And in what way does that matter? After all, Congress, the President and the Supreme Court don't follow the Constitution, why should we expect any other bureaucracy to do what they're supposed to?

  • seriously? (Score:2, Insightful)

    by Anonymous Coward

    300TB worth of tweets, which are basically very small text strings? A single tweet that uses all available characters should only be 140 bytes. I just refuse to believe that there are 2+ trillion tweets out there to make up 280+TB, considering 1 billion tweets would be 140GB. (Unless I'm failing massively at math here, which is quite possible.)
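The arithmetic above is easy to check; the piece it leaves out is per-tweet metadata (user, timestamp, client, etc.), which pushes the per-tweet footprint well past 140 bytes:

```python
# Sanity check: how many 140-byte tweets would fill 300 TB of text alone?
tweet_text_bytes = 140
archive_bytes = 300 * 10**12  # 300 TB

tweets_if_text_only = archive_bytes // tweet_text_bytes
print(f"{tweets_if_text_only:,}")  # → 2,142,857,142,857 (~2.1 trillion)

# And 1 billion tweets of pure text really is only ~140 GB:
print(10**9 * tweet_text_bytes / 10**9, "GB")  # → 140.0 GB
```

So the commenter's math is right for raw text; the metadata overhead is what closes the gap.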

  • Look, I don't know about you, but we process hundreds of terabytes of data when we process genomes, using this fancy stuff called "databases" and "hash indexing", and fancy software that may be hard for you to find, like Perl, C, and various scripting languages.

    It's fairly simple coding. Just build an index hash from keywords (which are all preceded by #), add another index by words (ignoring all the bit.ly and other web links), add a third index by @ reference (aka user names, which are really just a 20 character par
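The three-index scheme described above can be sketched in a few lines (the token rules and data layout here are illustrative assumptions, not the poster's actual pipeline):

```python
from collections import defaultdict

def index_tweets(tweets):
    """Build separate indexes for hashtags, @mentions, and plain words."""
    by_hashtag = defaultdict(list)
    by_mention = defaultdict(list)
    by_word = defaultdict(list)
    for tweet_id, text in tweets:
        for token in text.split():
            if token.startswith("#"):
                by_hashtag[token.lower()].append(tweet_id)
            elif token.startswith("@"):
                by_mention[token.lower()].append(tweet_id)
            elif not token.startswith(("http://", "https://")):
                by_word[token.lower()].append(tweet_id)  # skip web links
    return by_hashtag, by_mention, by_word

tags, mentions, words = index_tweets([
    (1, "#snow in DC @nws http://bit.ly/x"),
    (2, "more #snow coming"),
])
print(tags["#snow"])  # → [1, 2]
```

Whether a scheme this simple survives contact with 300 TB is, of course, the Library's actual problem.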

  • What confuses me:

    Percentage of Americans with Accounts:
    Twitter: 13%
    Facebook: 70%

    So there is FAR less diversity and extremely poor-quality data. Why did they not archive public Facebook posts instead?

    I see it as: Facebook hosts people who write articles, stories, poems, songs, music, pictures, etc. THAT is the point of the Library of Congress: documenting and preserving culture. Not trying to datamine the history behind "WAT R U DOIN FRI GRRL?",
  • All your meme are belong to us!
  • A substantial number of posts are literal duplicates by known spambots.

    You could store those separately as well as the Retweets (RTs).

    Then, think about what typically gets posted.

    Most might be something like 520,000 variations on "Touchdown!" or "That's gotta hurt!" during sporting events, or "It's snowing!"

    A lot of the rest are probably repeats of what someone just said on Comedy Network or during a TV program. They will all be at about the same time in a region and be substantially the same thing, with 5

  • 300TB is about right. Twitter says they have 400 million tweets per day. Figure about 500 bytes per message with text and metadata (source, destination, timestamp, flags). 400,000,000 msgs/day * 365*4 days * 500 bytes = 292,000,000,000,000 bytes.

    Twitter offers a feed of 1 in 10,000 public tweets, so you can see how banal it is. I had a program monitoring that for a while, extracting links and evaluating them for spam. It's about as bad as you'd expect.
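The estimate is straightforward to reproduce:

```python
# Reproduce the parent's estimate: 400M tweets/day * 4 years * 500 bytes each.
msgs_per_day = 400_000_000
days = 365 * 4
bytes_per_msg = 500  # text plus metadata (source, destination, timestamp, flags)

total = msgs_per_day * days * bytes_per_msg
print(total)                  # → 292000000000000
print(total / 10**12, "TB")   # → 292.0 TB
```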

  • by Inda (580031)
    Right now, even a single query of the 2006-2010 archive takes as many as 24 hours to execute.

    Why? Why does it take so long?

    They talk about the hardware and software not being up to scratch, but many other companies seem to be able to process huge amounts of data quickly. Google, for one, seems to do it.
