Topics: Google, Software, The Internet

Extracting Meaning From Millions of Pages

freakshowsam writes "Technology Review has an article on a software engine, developed by researchers at the University of Washington, that pulls together facts by combing through more than 500 million Web pages. TextRunner extracts information from billions of lines of text by analyzing basic relationships between words. 'The significance of TextRunner is that it is scalable because it is unsupervised,' says Peter Norvig, director of research at Google, which donated the database of Web pages that TextRunner analyzes. The prototype still has a fairly simple interface and is not meant for public search so much as to demonstrate the automated extraction of information from 500 million Web pages, says Oren Etzioni, a University of Washington computer scientist leading the project." Try the query "Who has Microsoft acquired?"
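For a rough sense of what "analyzing basic relationships between words" produces, here is a toy sketch in Python. It is not TextRunner's actual pipeline (the real system uses an unsupervised extractor over parsed text at Web scale); the verb list, corpus, and function names below are made up for illustration. It only shows the shape of the output: (argument, relation, argument) assertions, indexed so a question like "Who has Microsoft acquired?" becomes a simple lookup.

```python
import re
from collections import defaultdict

# Toy open-style extraction sketch, NOT TextRunner's real method.
# RELATION_VERBS, extract_assertions, and the corpus are hypothetical.
RELATION_VERBS = ("acquired", "founded", "develops", "leads")

SENTENCE = re.compile(
    r"^(?P<arg1>.+?)\s+(?P<rel>" + "|".join(RELATION_VERBS) + r")\s+(?P<arg2>.+?)\.?$"
)

def extract_assertions(sentences):
    """Turn simple 'X <verb> Y' sentences into (arg1, relation, arg2) tuples."""
    for sentence in sentences:
        match = SENTENCE.match(sentence.strip())
        if match:
            yield match.group("arg1"), match.group("rel"), match.group("arg2")

# Index assertions by (arg1, relation) so a question such as
# "Who has Microsoft acquired?" becomes a dictionary lookup.
corpus = [
    "Microsoft acquired Skype in 2011.",
    "Microsoft acquired Danger Inc.",
    "Oren Etzioni leads the TextRunner project.",
]
index = defaultdict(list)
for arg1, rel, arg2 in extract_assertions(corpus):
    index[(arg1.lower(), rel)].append(arg2)

print(index[("microsoft", "acquired")])
# ['Skype in 2011', 'Danger Inc'] -- raw assertions, not verified facts
```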

Comments:
  • by John Hasler ( 414242 ) on Friday June 12, 2009 @09:03AM (#28306859) Homepage

    The major problem is that it assumes the presence of meaning in Web pages in the first place.

  • by Anonymous Coward on Friday June 12, 2009 @09:13AM (#28306961)

    Are we moving towards a web in which Google centralises everything on its own pages? These new engines present content without the need to visit the pages it originates from. Is Google basically mooching off other people's websites with hardly anything - if anything at all - in return?

    It could be dangerous if the only visitor a web site can expect is the Google bot.

  • by jerep ( 794296 ) on Friday June 12, 2009 @09:43AM (#28307297)

    it just repeats what other people have said

    I don't see anything new here; most people have done this since the beginning of time.

  • by Colonel Korn ( 1258968 ) on Friday June 12, 2009 @09:50AM (#28307379)

    That is how Wikipedia was meant to be: a group of statements about subjects, all of which can be referenced to some original source, so that people can look up something quickly and then look at the sources for more definitive information....

    Seeing how many people cite Wikipedia directly, use it as the main source for their research, and how many newspapers have been reported to directly quote inaccurate facts from Wikipedia... I don't think it is working properly. It requires a lot of optimism to believe "people will use it as an initial source and then verify the information."

    That's not Wikipedia's failure. Without it, those same people would just be referencing nothing, or a web site with zero public review and commenting.

  • Correction.... (Score:5, Insightful)

    by wowbagger ( 69688 ) on Friday June 12, 2009 @10:32AM (#28307983) Homepage Journal

    "...that pulls together facts by combing through more than 500 million Web pages."

    Correction:

    "...that pulls together assertions by combing through more than 500 million Web pages."

    Whether those assertions are correct or even reasonable is a completely different issue.

    It might be interesting to then take those assertions and have some means to validate or invalidate them, but currently that's going to require meat, not metal.

    Now, if you could come up with some form of AI^Walgorithm to do that automatically, then you would have something.
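One crude way to approach the "validate or invalidate assertions" idea in the comment above is to treat cross-source agreement as a confidence signal: a tuple asserted by many independent sites is more plausible than one asserted by a single page. A minimal sketch, with entirely made-up tuples, domains, and threshold:

```python
# Rough sketch of assertion scoring by cross-source agreement.
# All data, names, and the MIN_SOURCES cutoff are invented for illustration;
# real validation would need far more than counting sources.

def score_assertions(assertions):
    """assertions: iterable of ((arg1, rel, arg2), source_domain) pairs.
    Returns each distinct assertion with the number of distinct sources backing it."""
    sources = {}
    for triple, domain in assertions:
        sources.setdefault(triple, set()).add(domain)
    return {triple: len(domains) for triple, domains in sources.items()}

observed = [
    (("Microsoft", "acquired", "Skype"), "news-a.example.com"),
    (("Microsoft", "acquired", "Skype"), "news-b.example.com"),
    (("Microsoft", "acquired", "the Moon"), "joke-blog.example.com"),
]

MIN_SOURCES = 2  # arbitrary cutoff for this sketch
for triple, n in score_assertions(observed).items():
    label = "plausible" if n >= MIN_SOURCES else "unverified"
    print(n, label, triple)
```

Counting sources obviously cannot catch an error that is widely repeated, which is the commenter's point about this still requiring meat, not metal.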

