
 



The Internet

Semantic Web Getting Real 135

BlueSalamander writes "Tim O'Reilly just did an interview with Devin Wenig, the CEO-designate of Reuters. With no great enthusiasm I started to read yet another interview on how the semantic web was going to make everything great for everybody. Wenig made some good points about the end of the latency wars in news and the beginning of the battle for automatically detecting linkages and connections in the news. Smart news, not just fast news. Great stuff — but just more words? Nope — a little searching revealed that Reuters just opened access to their corporate semantic technology crown jewels. For free. For anyone. Their Calais API lets you turn unstructured text into a formal RDF graph in about one second. I ran about 5,000 documents through it and played with a subset of them in RDF-Gravity. The results were impressive overall. Is this the start of the semantic web getting real? When big names and big money start to act, not just talk, it may be time to pay attention. Semantic applications anyone? The foundation appears to be here."
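The Calais-style flow the submitter describes (unstructured text in, RDF triples out) can be sketched in a few lines of Python. The entity catalogue, predicate URI, and N-Triples writer below are invented for illustration; this is not the real Calais API or vocabulary:

```python
# Toy sketch of a Calais-style pipeline: pull known entity names out of
# free text and emit subject-predicate-object triples in N-Triples syntax.
# The entity catalogue and predicate URI are made up for illustration.
KNOWN_ENTITIES = {
    "Reuters": "http://example.org/company/Reuters",
    "Devin Wenig": "http://example.org/person/DevinWenig",
}

def extract_triples(doc_uri, text):
    # Naive entity spotting: a real service does NLP, not substring matching.
    triples = []
    for name, uri in KNOWN_ENTITIES.items():
        if name in text:
            triples.append((doc_uri, "http://example.org/mentions", uri))
    return triples

def to_ntriples(triples):
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)

doc = "http://example.org/doc/1"
text = "Devin Wenig of Reuters discussed smart news."
triples = extract_triples(doc, text)
print(to_ntriples(triples))
```

The output of a real service would be a full RDF graph of entities, facts, and events rather than flat mention triples, but the shape of the result is the same: statements a machine can merge and query.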
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Semantic Spam (Score:2, Insightful)

    by Rog7 ( 182880 )
    Next up, semantic spam.

    Actually, I think it's beaten the rest of the content to the punch. =(
    • by Reverend528 ( 585549 ) * on Sunday February 10, 2008 @08:19PM (#22374892) Homepage
      Well, as long as the spammers stick to the spec and use the <RDF:spam> type for their content, then it should be pretty easy to filter.
      • Re:Semantic Spam (Score:5, Insightful)

        by fonik ( 776566 ) on Sunday February 10, 2008 @09:22PM (#22375298)
        And this seems to be a major problem of the whole semantic web buzz. Search engines like Google can cut down on abuse because they're a third party that is unrelated to the content. The whole semantic web thing offloads categorization to the content source, the very party that is most likely to try to abuse the system.

        It just doesn't seem like the best idea in the world to me.
        • by Necrobruiser ( 611198 ) on Sunday February 10, 2008 @10:18PM (#22375598)
          Of course you realize that this will just lead to a bunch of neo-netzis with their anti-semantic remarks....
        • Re: (Score:3, Informative)

          This is slashdot and all, I know. But you seem not to have read even the summary: this is about someone exposing an API which lets you turn text into an RDF graph independently of the text producer. If you like, this is something like someone giving you access to a tool like the one used by Google.
          • by fonik ( 776566 )
            Yeah, I did read that and I was speaking generally about the whole semantic web buzz and not specifically about the article. This is a case of a single third party categorizing a large amount of data. Since they are all categorized in the same way, the potential for abuse is low. But is that an improvement over current search algorithms?
            • Well, this is not a search algorithm, this is an (API giving access to an) algorithm which constructs an RDF graph from plain text data. While such a thing can be used for searching, your question is a bit like comparing apples and oranges.
        • Re:Semantic Spam (Score:5, Informative)

          by SolitaryMan ( 538416 ) on Monday February 11, 2008 @05:00AM (#22377354) Homepage Journal

          And this seems to be a major problem of the whole semantic web buzz. Search engines like Google can cut down on abuse because they're a third party that is unrelated to the content. The whole semantic web thing offloads categorization to the content source, the very party that is most likely to try to abuse the system. It just doesn't seem like the best idea in the world to me.

          I think you are missing the point of the Semantic Web: you can refer or link to an object, not just a document.

          The company declares its URI. Now, if you are writing an article about this company, you can uniquely identify it, and every web crawler knows *exactly* what company you are talking about. If the URI for the company is a hyperlink to its web site, then it can't be abused: the company itself declares what it is. The unique URI will in fact be a link to some file with information about the company (maybe an RDF file -- doesn't really matter for the concept).

          The system can (and will) be abused in the same ways as the old web: irrelevant links, words, concepts -- nothing new for the crawler, and it can be defeated with existing techniques.

          Again, Semantic Web = Links between concepts, not just documents, so please do not bury the good idea under the pile of misunderstanding.
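The parent's URI point can be illustrated with a toy graph. Every URI and predicate below is made up for illustration, not a real vocabulary:

```python
# Two articles mention "Apple"; URIs disambiguate the company from the fruit.
# All URIs below are invented for illustration.
MENTIONS = "http://example.org/vocab/mentions"
APPLE_INC = "http://example.org/company/AppleInc"
APPLE_FRUIT = "http://example.org/produce/Apple"

graph = [
    ("http://example.org/article/1", MENTIONS, APPLE_INC),
    ("http://example.org/article/2", MENTIONS, APPLE_FRUIT),
]

def articles_about(graph, entity_uri):
    # A crawler matches on the URI, not the ambiguous surface string "Apple".
    return [s for s, p, o in graph if p == MENTIONS and o == entity_uri]

print(articles_about(graph, APPLE_INC))  # only article/1
```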

          • What if the URI points to a link which redirects the user to the company's page but also adds spam in the page?
        • Re: (Score:3, Interesting)

          by nwbvt ( 768631 )
          It does seem like we are in a cycle. Way back in the days when dinosaurs like Lycos and Hotbot ruled the search engine world, information on the net was categorized by tagging. Those of you over the age of 17 remember it, back then if you did a search for "American Revolution" half your results would end up being porn sites that put meta tags containing the phrase "American Revolution" on their page (although I can say those were great days to be a teenager). Then Google came about with their new "Page R
        • In Soviet Russia, the system abuses you !

        • Re: (Score:2, Interesting)

          by soxos ( 614545 )

          The whole semantic web thing offloads categorization to the content source, the very party that is most likely to try to abuse the system.
          That's the same criticism given to Wikipedia or unmoderated Slashdot. Consider Semantic web for discovery combined with moderation and see that there could be something to this.
      • by altek ( 119814 )
        Of course, we could keep this at Layer 3, if programmers would just start properly implementing the Evil Bit!
    • Much of the output of the various news sources today is, arguably, spam.
      So the question I would have liked to pose is:
      Since we can't filter out bias, how can the technology help to make the news biases more transparent and quantifiable?
      For example, work like this about VP Cheney [newsbusters.org] deserves to be bagged, tagged, and ignored, for it is a blemish on the face of legitimate journalism.
    • Am I the only one who misread that?
      • by bane2571 ( 1024309 ) on Sunday February 10, 2008 @09:06PM (#22375200)
        I read it like this:
        Semantic web getting real [player]
        and immediately thought "it was bad enough when the original web got it"
      • Re: (Score:3, Funny)

        by gotzero ( 1177159 )
        "Please note this environment may not be completely safe, so we are going to prevent you from entering. We have also initiated so many system processes that it will simulate a virus on this system."

        The links in that article are neat. I am looking forward to watching the maturity of this!
      • Nope, I did too, and I was wondering... does this mean that Norton won't crash and slow down Windows computers more than most spyware/viruses?
  • What? (Score:1, Offtopic)

    by TubeSteak ( 669689 )
    Is the semantic web supposed to be one of those Web 3.0 things?
    • Re: (Score:3, Interesting)

      by owlnation ( 858981 )
      Yes -- essentially.

      And the only reason we moved from Web 1.0 to web 2.0, and the only reason we need to move from Web 2.0 to Web 3.0 is...

      We are still stuck on Search 1.0

      Well, ok, to be fair to Google -- Search 1.5

      Sorry, but we won't see much improvement in utility until someone rolls out Search 2.0. That is a product LONG overdue.
      • by AuMatar ( 183847 )
        Why? Quite frankly I've never had a search that didn't find what I wanted in the top 10 links of google, within 1-2 tries. 90%+ of the time it's within the top 5 links in 1 try.
        • by Cederic ( 9623 )

          Google fails to provide useful search results for a lot of searches. Some of them may well have no internet content available. For many others, the content is swamped by a plethora of other site aggregators, link farms, or even genuine vendors selling an item you're searching for, but not giving you the information you're after.

          Sure, I can refine my Google searches to cut out all these distractions. But I'm lazy; I want a two word search to give me the link I need straight away.

          • by AuMatar ( 183847 )
            Show me an example of one of those searches. I hear people claim this, but I've never, ever found one. The most I've found is too much content around my keywords and needing to be more specific. I've never seen a link farm or an aggregator in one of my searches.

            Then again, less than 1% of my searches have to do with buying something. Maybe that's the difference.
    • Re: (Score:3, Insightful)

      by STrinity ( 723872 )

      Is the semantic web supposed to be one of those Web 3.0 things?


      If by that you mean "a collection of buzz-words that everyone uses without having Clue 1 what the hell they're talking about," yes.
  • Content? (Score:4, Insightful)

    by Walzmyn ( 913748 ) on Sunday February 10, 2008 @08:26PM (#22374934)
    What good are fancy links if the content still sucks?
    • by EmbeddedJanitor ( 597831 ) on Sunday February 10, 2008 @08:47PM (#22375072)
      This looks like the command line vs GUI wars all over again. GUIs are fine for rapidly hitting easy-to-find targets, but sometimes typing is far easier and faster. Lumbering crap GUIs are really hard to drive (e.g. MS Visual Studio).

      Semantic webs might be OK for small document sets where you can visually search tags and click them. Want to look up something about monkeys? Look for the tag that says monkeys (or maybe find primates first, then monkeys) and click it.

      But for huge data sets this sucks. After a smallish number of documents & subjects it must be far easier to type monkeys into a search box and have Google etc. do the search.

      This might work for handling some queries, but will suck supremely for complex queries over large data sets (e.g. the whole www).

      • Re: (Score:3, Interesting)

        by smurgy ( 1126401 )
        I really think you're forgetting about the power of booleans over indexed content and the weakness of string searching. You could argue that a tag-dense web search, in which autoindexers crunch tags for every page, would return an overabundance of hits compared to string searching, but in fact what tag searching provides is a far more meaningful range of hits. There might or might not be more, but they're better.

        We need to couple the proposed "semantic web" with more than the single-box search page or rathe
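The booleans-over-an-index idea the parent alludes to can be sketched with a toy inverted index (naive whitespace tokenization, invented corpus):

```python
# Minimal inverted index: term -> set of doc ids, with boolean AND queries.
docs = {
    1: "monkeys eat bananas",
    2: "primates include monkeys and apes",
    3: "bananas are yellow",
}

index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def search_and(*terms):
    # Intersect posting sets; string search would scan every document instead.
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(search_and("monkeys", "bananas"))  # {1}
```

Tags would simply be extra terms in the index, but ones carrying curated meaning rather than raw surface strings.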
  • Where's the Money? (Score:3, Interesting)

    by Blakey Rat ( 99501 ) on Sunday February 10, 2008 @08:26PM (#22374936)
    I've never understood what the financial benefits for a site joining the semantic web are supposed to be. Reuters may be one thing, but how would you sell this technology to Amazon? Or NewEgg? If commercial sites can't/won't use it, how is it supposed to gain critical mass?
    • by QuantumG ( 50515 ) <qg@biodome.org> on Sunday February 10, 2008 @08:37PM (#22375002) Homepage Journal
      Yeah, it won't matter until Google starts getting in on the act. When you can search for "a website where I can get free kittens and other pets" and get exactly that, instead of just sites that have those keywords in it (like this message in a day or so), then it will be valuable for people to RDF their site and maybe even look at the mess that the translator makes and clean it up.
    • Re: (Score:3, Insightful)

      Feeding Proxies is one potentially lucrative use of semantic technology.

      Here is a basic scenario for ten years down the line:

      1. You build a profile probably through a combination of allowing your online activities to be profiled, filling out in-depth surveys, and rating certain types of web-content on a semi-regular basis.

      2. A proxy identity is imbued with a 'personality' based on both your preferences as represented in step one, and ongoing analysis of content that causes you to register a strong reaction
    • Re: (Score:3, Interesting)

      by pereric ( 528017 )
      If I have a business selling - for example - bicycle pedals, being well listed at www.bike-pedal-finder.com or by users of some yellow pages could certainly help my business. If the search engines could use information like the below, it would probably help:

      <dealer name="my company">
      <in_stock>
      <pedal model="M525" price="20E"/>
      <pedal model="M324" price="10E" status="pre-owned"/>
      </in_stock>
      <location> ... </location>
      <shi
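Assuming a cleaned-up, well-formed version of the parent's sketch (the schema here is hypothetical), a crawler could filter such a feed directly:

```python
import xml.etree.ElementTree as ET

# Hypothetical dealer feed, loosely based on the parent's sketch.
feed = """
<dealer name="my company">
  <stock>
    <pedal model="M525" price="20" currency="EUR"/>
    <pedal model="M324" price="10" currency="EUR" status="pre-owned"/>
  </stock>
</dealer>
"""

root = ET.fromstring(feed)
# Find pedals at or under a price ceiling, e.g. for a price-comparison site.
cheap = [p.get("model") for p in root.iter("pedal") if float(p.get("price")) <= 15]
print(cheap)  # ['M324']
```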

  • by Anonymous Coward
    And now for a host of Anti-Semantic comments in 3 ... 2 ... 1 ...

    Well, I am sure the authors will just call them Anti-Zio[a]ntic comments.
  • Yawn... (Score:5, Interesting)

    by icebike ( 68054 ) on Sunday February 10, 2008 @08:30PM (#22374966)
    So I need this WHY?

    Most websites have little to say, and take all day to say it.
    Having a detailed graphical analysis of the blather seems unlikely to improve the situation. GIGO.

    It would seem spending just a tad more time writing for HUMANS would be way more productive than writing for machines. Having a thousand computers watching your 100 monkeys seems unlikely to bring enlightenment or useful knowledge out of a pile of garbage and human blathering that passes for information on the web these days.

    People used to write web pages.
    Now they write software to write web pages.
    It's not surprising they now need to write software to understand the web pages.
    What's the point?
    • Re: (Score:2, Informative)

      You're a little unclear on the concept of an RDF graph. It's not a graph like the ones in your intro algebra class - it's an RDF (that's Resource Description Framework) representation of the semantics of a document. Check Wikipedia for Semantic Web or RDF.
    • Re:Yawn... (Score:5, Interesting)

      by QuantumG ( 50515 ) <qg@biodome.org> on Sunday February 10, 2008 @08:50PM (#22375092) Homepage Journal
      Writing AI that can read English (and all the other languages) and figure out the meaning is just, well, taking too long. But let's say it wasn't.. what would be the point? Would you say there was no point? Or would you say it was freakin' awesome and look forward to the day when you can actually ask a question and get a sensible answer from a machine?

      Well, if we are very forgiving we can get this kind of thing happening with current technology, we just have to supply all the "content" in a form that our primitive algorithms can handle. The Semantic Web is that. Maybe around the 3rd generation of these algorithms we might be ready to do the translation to machine form automatically.. maybe not.. but at least the Semantic Web people are again talking about translation.. was a time when they all said it was a fruitless path and the best way was to just supply applications for creating machine readable content easily.

      • Perfect! A concise reasonable explanation. Thanks.
      • I can already ask Yahoo or Google a question and get a sensible answer. I guess I'm missing how this "semantic web" thing equates with AI that understands the meaning of English.

        Besides that, if you rely on the "content providers" to provide the meta-data the system is less than useless. Legitimate sites won't use it or update it, and illegitimate sites will abuse the system.

        • Re:Yawn... (Score:4, Interesting)

          by QuantumG ( 50515 ) <qg@biodome.org> on Sunday February 10, 2008 @09:55PM (#22375494) Homepage Journal
          Uh huh.

          When is the next shuttle launch? [google.com]

          This is the first hit, not shuttle launch info. [nasa.gov]

          This is the second hit.. [nasa.gov] ah hah! The next launch is on Feb 7.. wait a minute, it's Feb 10! Was it delayed or something? Oh, I see, it says "Launched".. great, when's the next one.. March 11 +.. hmm.. wtf does + mean? Apparently I need to read this [nasa.gov] and hmm.. nothing there about what the + means.. I guess it means it might get delayed, they do that.

          See all that reasoning I had to do? See how long that took me? That's what the Semantic Web is for.
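If launch schedules were published as structured facts rather than prose, the reasoning above collapses to a one-line query. The records below are a hand-written stand-in for what a semantic crawler might have gathered:

```python
from datetime import date

# Hypothetical structured launch facts; a semantic crawler would assemble
# these from published data rather than from a hand-typed list.
launches = [
    {"mission": "STS-122", "date": date(2008, 2, 7), "status": "launched"},
    {"mission": "STS-123", "date": date(2008, 3, 11), "status": "scheduled"},
    {"mission": "STS-124", "date": date(2008, 5, 25), "status": "scheduled"},
]

def next_launch(launches, today):
    # The "+" footnote business becomes an explicit status field.
    upcoming = [l for l in launches
                if l["date"] > today and l["status"] == "scheduled"]
    return min(upcoming, key=lambda l: l["date"]) if upcoming else None

print(next_launch(launches, date(2008, 2, 10))["mission"])  # STS-123
```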

          • You are pretty knowledgeable about this stuff, so I'm going to ask you:

            How does this stuff handle abuse? I mean, what's to stop Senior Spamalot from marking up all his machine-readable stuff for shuttle launches, but actually dishing you to a Viagra page? I don't understand how the "Semantic Web" won't be terribly abused.
            • Re:Yawn... (Score:4, Insightful)

              by QuantumG ( 50515 ) <qg@biodome.org> on Sunday February 10, 2008 @10:27PM (#22375636) Homepage Journal
              How do *you* know when information is bullshit?

              How does Google's pagerank algorithm?
              • Re: (Score:3, Interesting)

                by MightyYar ( 622222 )
                It's a damn good point, but I'm better at it than a computer. Though to tell you the truth, Google's spam filter on gmail is darned close to perfect (once trained) - so I can see how they would be able to filter the information using something akin to their spam filter. And they'd still use something like pagerank to rank the results, so that might go a long way toward nailing the spammers.

                But I wonder whether that approach is going to be any simpler or more effective than just developing better or more int
                • by dkf ( 304284 )

                  The answer, as I see it, is computer-generated metadata... at which point, why not just build that functionality into your search engine?

                  Yahoo are already doing that. If you go to their search page [yahoo.com], enter some search term (e.g. "linux") and search. Now, on the results page there should be a little arrow down at the bottom of the top bar; click on that and it will open up a panel that includes concepts linked to the search terms (and also possible refinements of the search). I know (from talking to the people at Yahoo) that they're deriving the concepts automatically from their spidered data, and it works really well.

                  How resistant is it to s

                  • Thanks, neat link - I haven't used Yahoo search in a long time.

                    It's a cool trick, but it doesn't really do much useful right now. I tried the space shuttle example, and it didn't really add any value over and above what google does. On the other hand, it does a pretty good job when your search is not very specific - like just typing "Britney Spears".

                    They should make it more obvious that you need to push that little arrow! I never would have tried that!
          • Obviously the current searches are not semantic, so the key is searching for the right thing. At first glance, your query sounds simple enough. However, the problem is that there simply may not be any webpages dedicated to providing the exact information you asked for. In this case, are there webpages that are kept up-to-date with information specific to the next shuttle launch? What you really need to search for is not the "next" shuttle launch, whose definition is always changing, but "shuttle launch [google.com]
            • Re:Yawn... (Score:5, Insightful)

              by QuantumG ( 50515 ) <qg@biodome.org> on Sunday February 10, 2008 @11:25PM (#22375968) Homepage Journal
              Ok, you seem to be of the belief that I'm still talking about search.. in the classical "give me a web page about" sense. I'm not.. and the Semantic Web people are not. "next" has a meaning.. everyone knows what it is. "shuttle launch" has an almost unique meaning.. although some concept of our culture and common sense is needed to disambiguate it. Asking when the next shuttle launch is has a unique answer: a date and a statement of the confidence in that date. For example "March 12, depending on weather and other things that might scrub the launch." I don't expect this to be "webpages that are kept up-to-date with information specific to the next shuttle launch"... I expect the answer to my question to be synthesized in real time from a dynamic pool of knowledge which is obtained from reading the web. I want a brain in a jar that is at my beck and call to answer every little question like this that I have through-out the day.. on everything from spacecraft launches to what the soup of the day is at the five closest restaurants to my office. There doesn't need to be some web page that is updated daily by some guy who works near me and enjoys soup.. there just needs to be information on soup and location posted by restaurants in my area.

              So am I talking about search? Well, yes, but it's an algorithm that uses search to answer my questions.. instead of me having to do it.

              Think about that soup question.. how would you do it now? I'd go to Google maps.. enter the location of my office, search businesses for restaurants, click on one of the top 5 to see if they have a daily updated menu, note the soup of the day, go back to Google maps, click on the next one, etc, until I had the answer I wanted. That's a pretty simple algorithm.. it's something a machine learning system could come up with.
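That soup algorithm can indeed be written down directly, assuming the restaurant data were machine-readable. Everything below (locations, menus) is invented toy data:

```python
import math

# Toy data standing in for restaurant listings with machine-readable menus.
OFFICE = (0.0, 0.0)
restaurants = [
    {"name": "Al's Diner",   "loc": (0.2, 0.1), "soup": "tomato"},
    {"name": "Soup Shack",   "loc": (0.1, 0.3), "soup": "minestrone"},
    {"name": "Far Bistro",   "loc": (5.0, 5.0), "soup": "pho"},
    {"name": "Corner Cafe",  "loc": (0.4, 0.0), "soup": "lentil"},
    {"name": "Noodle House", "loc": (0.3, 0.3), "soup": "laksa"},
    {"name": "Downtown Bar", "loc": (9.0, 1.0), "soup": None},  # no menu posted
]

def soups_near(office, restaurants, n=5):
    # Rank restaurants that publish a soup of the day by distance, keep n.
    dist = lambda r: math.dist(office, r["loc"])
    nearby = sorted((r for r in restaurants if r["soup"]), key=dist)[:n]
    return {r["name"]: r["soup"] for r in nearby}

print(soups_near(OFFICE, restaurants))
```

The hard part is not this loop; it is getting restaurants to post location and menu as data instead of as a web page.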
      • I must say I don't think it's quite as "freakin' awesome" as you seem to. I believe that natural language is not only hard to handle correctly, but also hard to use correctly. There is a reason why we have formal specifications and legal language -- "natural" language is just too vague. Now in some niche areas where you don't have your hands available I can see the allure of voice recognition, but I honestly think that speaking to computers to have them do stuff in anything resembling natural language wi
        • by QuantumG ( 50515 )
          So ask in a formal language.. point is, we can't even ask questions now.

          We can't even ask questions about systems which are designed to be machine readable. Look at software debuggers.

          • We can't ask questions? I suppose asking for the value of a variable in gdb doesn't count. If you are referring to 'why is this program not working?' in the debugger reference, I assume you know that getting answers for these questions effectively requires strong AI, or at least a better specification of what it is that we are trying to do with the program in the first place.

            Back to search and the semantic web, I think that we are using formal languages to ask questions in search every day. I would lo
      • Re: (Score:3, Insightful)

        You think that if we feed weak AI algorithms a lot of cleaned up, pre-tagged data, that's going to help overcome the weakness of the algorithms and produce something worthwhile?

        Sorry, there's a flaw in your reasoning: Who gets to pre-tag the data? Everybody. But you can't trust everybody on the net. So you'll get a lot of data that's specifically designed to confuse and subvert the weak algorithms, and by definition such algorithms aren't strong enough to rise to the challenge.

        The Semantic Web people wi

        • by QuantumG ( 50515 )
          Blah, vetting the quality of your inputs is necessary, but it's a completely different algorithm from answering queries. This is already true of search engines... and we have good ways of handling it. But hey, you're the kind of person who gives up looking for a job because you're sure no-one will hire you.

          • 1) "vetting the quality of your inputs" is not AI. It's just putting in what you want to see coming out, assuming you understand sufficiently the way the particular algorithm you're tweaking works.

            2) "we have good ways of handling it" is a euphemism for human beings. Yes, just throw people at the problem and let them censor the bits of data that they don't like. Again, you're just letting in what you want to see coming out. Search engines have teams who get paid to scrub their data. It's not AI. We still

            • by QuantumG ( 50515 )
              You must be living in some other world from me. Google search results are not vetted by humans. It's this little algorithm called pagerank.. you might have heard of it.

              • Perhaps you should read up on it? PageRank proper is only a small factor in Google's index sorting method. Other factors are ad hoc things like weights for whether words appear in headings or paragraphs, whether the page is full of hidden keywords, whether the word "homepage" appears prominently etc.

                PageRank itself is merely about counting links, which is entirely independent of content, and not as useful on its own as you might think. For example, there's no guarantee that an index page will appear befor
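The link-counting core of PageRank is small enough to sketch. This is a toy power iteration over an invented three-page graph, not Google's production ranking with its many extra signals:

```python
# Toy PageRank: power iteration on a tiny link graph.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
damping, pages = 0.85, list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    # Each page splits its rank evenly among its out-links.
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outs in links.items():
        for target in outs:
            new[target] += damping * rank[page] / len(outs)
    rank = new

print({p: round(r, 3) for p, r in sorted(rank.items())})
```

Note the computation never looks at page content at all, which is exactly the parent's point: on its own, link counting cannot tell an index page from a keyword-stuffed one.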

    • by tm2b ( 42473 )
      The point is that sophisticated enough tools can help you find the websites that do have something useful to say.

      The amount of garbage out there only makes these tools more necessary.
    • Re:Yawn... (Score:5, Interesting)

      by daigu ( 111684 ) on Sunday February 10, 2008 @11:21PM (#22375954) Journal
      I'll tell you why you need it. It provides another layer of abstraction. Let's try a few illustrative examples.

      1. Let's say you work for a Fortune 500 company and you get over 10,000 emails a day from customers complaining. Do you think it is better to read each one or have a tool that abstracts it to graphically display key concepts that they are complaining about so management can do something about it today?

      2. You are a clinical cancer researcher with a terabyte of unstructured patient data. Can you imagine how text descriptions of pathology reports might be displayed graphically against outcomes to suggest some interesting insights?

      There's a lot of useful information that isn't on blogs - although it would be useful for them too. You need to exercise a bit more imagination.
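Example 1 can be sketched crudely: even simple term counting over a pile of complaints surfaces the dominant concept. The data and stopword list below are toy stand-ins for real text analytics:

```python
from collections import Counter

# Toy stand-in for extracting key concepts from a pile of complaint emails.
complaints = [
    "the battery dies after an hour",
    "screen cracked on day one",
    "battery life is terrible",
    "shipping took three weeks",
    "battery swelled up",
]
STOPWORDS = {"the", "an", "on", "is", "up", "after", "a"}

terms = Counter(
    word for text in complaints for word in text.split() if word not in STOPWORDS
)
print(terms.most_common(2))  # battery dominates
```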

         
    • You will need it because it will take far more than porn downloads to fill up the hard drives of tomorrow. Indexed links between every word in every file to every other word in every file will take care of that nagging empty space.
    • People used to write web pages.
      Now they write software to write web pages.


      We also have software to write software (see [[Compiler]]). Now that is just lazy and decadent.
    • Re: (Score:3, Informative)

      by Lally Singh ( 3427 )
      It's the difference between having all of your customer data in a set of text files vs a database. The database is structured, which lets the computer do more analysis on it. It can also index that data more effectively.

      Here's one example, say I want to do a little semi-political research. I ask semantic google (which, for the sake of argument, has a more advanced query language) for the relationship between the price of RAM and the price of oil.

      Right now, google could at best look for an article on that
  • by ScrewMaster ( 602015 ) on Sunday February 10, 2008 @08:51PM (#22375096)
    Semantic Web Getting Real

    Just what we need. Yet another version of RealPlayer.
  • pfft... (Score:4, Funny)

    by djupedal ( 584558 ) on Sunday February 10, 2008 @09:03PM (#22375178)
    "Wenig made some good points about the end of the latency wars..."

    Mr. Wenig must not be all that familiar with /.'s 'editorial' habits :\
  • The first time I read the title, I thought it said 'Symantec Getting Real'. Well, I was planning to leave a smart comment about how Symantec and Real don't belong in the same sentence.
  • by WK2 ( 1072560 ) on Sunday February 10, 2008 @09:19PM (#22375278) Homepage
    If you are like me, and have absolutely positively no dang fucking clue what the summary is talking about: http://en.wikipedia.org/wiki/Semantic_Web [wikipedia.org]

    According to the Wikipedia history, this concept has been around since at least 2001.
    • Ummh, I think that's the point. The concept - first advocated by Tim Berners-Lee - has been around for a long time. The technology to make it real has not. This is a big step in that direction. It's not the whole answer - but services like this will help overcome one of the key constraining factors: ubiquitous metadata tagging of content.
    • by globaljustin ( 574257 ) on Sunday February 10, 2008 @10:58PM (#22375818) Journal
      the wiki article you linked to says:

      For example, a computer might be instructed to list the prices of flat screen HDTVs larger than 40 inches (1,000 mm) with 1080p resolution at shops in the nearest town that are open until 8pm on Tuesday evenings. Today, this task requires search engines that are individually tailored to every website being searched. The semantic web provides a common standard (RDF) for websites to publish the relevant information in a more readily machine-processable and integratable form

      On first read, I like what they are trying to do, but I see so many problems with what they are thinking, and I am not a web designer in any sense.

      First, I don't have a problem finding things to buy on the internet. The problem is the signal-to-noise ratio. There are TOO MANY Google results for something like 'plasma tv.' No matter what kind of RDF is used, it will be abused by people who want their URL to show up in your search for whatever reason. I think someone touched on this a little earlier in this thread, but it deserves repeating.

      Second, can you imagine a scenario where, say, Best Buy or Fry's uses some 'semantic web' application to do real-time web-searchable updates of their inventory? That's what would have to happen for this to work and do something that isn't already possible.

      Right now, I can search for 'plasma tv' in google or ebay. Then I can call my local retailers to see if they carry that item, and have it in stock. In order for this system to make any kind of tangible change in the example given, retail chains would have to update their inventories online, whenever a purchase is made, or new items delivered to the store.

      It's an interesting idea. I wonder if the retailers would go for it? All it means for them is fewer people coming into their stores... sounds like that would hurt sales.

      I also hate internet hype. It really fouls things up, more than some want to acknowledge. I try to keep my 64 year old dad educated enough to buy coffee beans on ebay, check email, look at news, etc. Every time he sees 'symantic web' or 'web 2.0' in the media, it just confuses him, and, I imagine, people like him who just use the net for basics like online bill pay, ebay, etc. He doesn't need a new buzzword to be motivated to shop online or whatever.

      He has the motivation already... silly contrived 'new media' buzzwords just waste time and confuse people.
      • I try to keep my 64 year old dad educated enough to buy coffee beans on ebay, check email, look at news, etc. Every time he sees 'symantic web' or 'web 2.0' in the media, it just confuses him, and I imagine, people like him who just use the net for basics like online bill pay, ebay, etc.

        I'm afraid whenever I see this argument I immediately tend to discredit all the rest that I've read in that post. Designing technology for those who are least able to uptake it is a losing proposition at best; at worst a

        • Designing technology for those who are least able to uptake it

          I never mentioned design! You didn't read my post very well, did you? I said that the HYPE of buzzwords like 'semantic web' or 'web 2.0' is lame, unnecessarily confusing, and annoying. The word hype was the first word in the subject of my post!

          Here, I've copied the paragraph from my post that you read incorrectly, emphasis mine:

          I also hate internet hype. It really fouls things up, more than some want to acknowledge. I try to keep my 64 ye

          • I think that I spoke to the point you made, if not to the point you think you made. This 'discussion' - which I will leave vaguely defined, as it is - be it the design, the hype, or whatever, is taking place between people who are actively seeking this technology. It really has little to do with those people who, due to some circumstance (of which age may be one), couldn't care less until it's matured.

            It is therefore a fallacy to use your father as an example of why things are 'too confusing'. There IS a l

            • you are definitely a troll...this 'discussion' is over
              • I don't know what provoked your vitriol. I'm not a troll - but the moderators are welcome to disagree. Since they haven't yet, I'm currently disposed to thinking you're overreacting to my disagreement with your viewpoint. I'm as happy as you seem to be to let it lay, however.

  • by timeOday ( 582209 ) on Sunday February 10, 2008 @09:21PM (#22375290)
    IMHO this is not the semantic web. The primary representation is still (just) natural language. Anything in addition to that is really just search engine technology under a different banner. Is that a bad thing? No! I've always said the semantic web was bound to fail because people don't want to spend a lot of extra effort tagging their information so others can slice and dice it; instead, the evolution of natural language processing in search (rather than manual tagging) will solve the problem. Maybe the Reuters idea of exposing the "inferred" metadata will be useful (as opposed to normal search engines like Google, which simply keep this metadata in their own indices), though as yet I don't see why.
    • timeOday >>> "evolution of natural language processing in search (rather than manual tagging) will solve the problem"

      But then if you're creating an addon for Joomla (or any template elements, really) to display event listings, why not add a semantic tag so that a search engine could limit the domain by "tag:events"? The extra effort involved is pretty minimal, especially when, if you code well, each event is probably in a "<div class="event eventtype"> ..." anyway.

      Once people realise that searc
  • by presidenteloco ( 659168 ) on Sunday February 10, 2008 @09:22PM (#22375292)
    When you start aggregating as much text as google does, the semantics just starts popping out, in the form of word relationship statistics.
    The massive corpus size, when measured carefully, acts to filter semantic signal from expressive difference "noise".

    Combine that kind of latent semantic analysis of global human text with conceptual knowledge representation and inference
    technologies (which would use a combination of higher-order logic, bayesian probability, etc) and it should be possible to
    create a software program that could start to get a basic semantic understanding of documents and document relationships
    in the ordinary "dumb" web.

    Could the proponents of the semantic web please tell me what it will add to this?

    My basic proposition is this: if an averagely intelligent human can infer the semantic essence (the gist, shall we say) of
    individual documents, and the relationships between documents on the web, why can't we build AI software that does
    the same thing, and then reports its results to people who ask?
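The claim that semantics "pops out" of word-relationship statistics can be sketched in a few lines of plain Python. This is a toy illustration only (a made-up four-sentence corpus standing in for a web-scale one), not anything Google actually runs:

```python
# Toy sketch: words that occur in similar contexts end up with similar
# co-occurrence vectors, so a crude notion of "meaning" emerges from
# counting alone, with no manual tagging.
from collections import Counter
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the mouse",
    "the dog ate the bone",
]

def cooccurrence_vector(word):
    """Count the words appearing in the same sentence as `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "cat" and "dog" share contexts (chased, ate), so they score as more
# similar to each other than "cat" does to "bone".
print(cosine(cooccurrence_vector("cat"), cooccurrence_vector("dog")))
print(cosine(cooccurrence_vector("cat"), cooccurrence_vector("bone")))
```

Real latent semantic analysis adds term weighting and a low-rank factorisation on top of this, but the principle is the same: corpus scale substitutes for hand-authored tags.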

    • by Anonymous Coward

      [...] if an averagely intelligent human can [do X], why can't we build AI software that does the same thing [...]
      Because wetware is still ahead of machines in a few domains. Be thankful for that because when we can build AI software for everything, we won't be needed anymore.
      • Re: (Score:3, Insightful)

        Why should I be thankful about spending my adult life working because machines aren't up to the task? I'll be thankful when machines take the work and leave us free to do what we want.
    • Could the proponents of the semantic web please tell me what it will add to this?

      Actually, the story is about a tool which does (a part of) what you are describing.

    • There's no more "intelligence" in AI than in a can of Campbell's soup. It's basically statistics, linear algebra and (sometimes) hand-coded rules for reasoning. It doesn't evolve. It doesn't build upon what it "knows". It has no self-awareness or consciousness, and its reasoning capabilities, if present, are extremely weak compared to even a child's.

      We're so early in the development of this field that no one can even define what "self awareness" or "consciousness" really is, let alone how to create it or scale i
  • OpenCalais (Score:3, Funny)

    by lenzg ( 1236952 ) on Sunday February 10, 2008 @10:01PM (#22375516)
    Finally, Reuters released OpenCalais as free open-source software. OpenDover will appear any time soon. (someone may then connect both using a Channel, SSH perhaps)
    • by Zoxed ( 676559 )
      > (someone may then connect both using a Channel, SSH perhaps)

      Trains-on-rails, tunneled, would be the most secure: less chance of someone seeing your bytes ferried across, and a man-in-the-middle attack would be much more difficult !!

  • The best indicator of vaporware seems to be continual postings on Slashdot that something is real.

    Given that the Semantic Web is neither Semantic nor Web, I think we've got another data point for that theory.
  • The semantic web refers to a specific attempt/vision put forth by w3c.

    http://www.w3.org/2001/sw/ [w3.org]

    This article is about a news organization using semantic tools to help extract and manipulate certain data. Sure, they are related a little maybe, but if related meant equal, then every computer would break.

    Just because the word "semantic" matches, they've confused the two domains, and if humans can't even do it, I wonder what our automated semantic web would look like with robots trying to make connections. I ca
  • by janbjurstrom ( 652025 ) <inoneear@noSPAm.gmail.com> on Monday February 11, 2008 @04:41AM (#22377284)

    Reuters just opened access to their corporate semantic technology crown jewels. For free. For anyone. Their Calais API lets you turn unstructured text into a formal RDF graph in about one second. ...
    It's "free" for "anyone" for loose definitions of the terms. Glancing at their terms of use [opencalais.com] (emphasis added):

    You understand that Reuters will retain a copy of the metadata submitted by you or that generated by the Calais service. By submitting or generating metadata through the Calais service, you grant Reuters a non-exclusive perpetual, sublicensable, royalty-free license to that metadata. From a privacy standpoint, Reuters use of this metadata is governed by the terms of the Reuters and Calais Privacy Statements.
    So you pay with your metadata. One can say you're doing that with Google too. Nevertheless, that's not entirely free.

    Also, it's not yet for "anyone." According to the Calais roadmap [opencalais.com], only English documents are accepted: "Calais R3 [July 2008] begins ... to incorporate a number of additional languages... Japanese, Spanish and French with additional languages coming in the future."
  • by Gregory Arenius ( 1105327 ) on Monday February 11, 2008 @04:42AM (#22377296)

    I understand being jaded about internet hype and buzzwords but I'm still surprised that after nearly eighty comments there doesn't seem to be anyone who has anything to say other than "vaporware" and "it won't work because of the spammers." Yes, maybe it has been overhyped and yes it is taking a while for the envisioned ideas to come to fruition but that doesn't mean that those ideas aren't worthwhile.

    I'll use the following example because I recently had to do this with non-semantic tools. Let's say you wanted to see how good or bad a job a transit agency is doing in its city in comparison to other similar cities. A couple of metrics you might use to find similar cities would be population size, population density and land area. Google doesn't do a good job with something like that. You end up needing to search for cities individually and then finding their data points. Or you can find a list of cities ranked by population or population density. If you search on Google for something like that you end up at one of the Wikipedia lists. These lists are helpful but....still lacking. They don't contain all the cities you need, or they don't provide a way to look at multiple data sets at the same time. The lists are also compiled by hand and aren't automatically updated when the information on the city page is changed. The data is in Wikipedia, though. Every city page lists that information in a little box near the start of the article. But how do I take this data from the form it's in into a form I can use to find what I need to know? Enter the semantic web.

    Let's say that Wikipedia, or at least the parts dealing with geography, were semantic. Now, there are tens of thousands of pages describing countries, regions, states, counties, parishes, cities, towns and villages. Then those pages are translated into many other languages. Some of the data that these pages contain is of the same type. They all contain the name of the locality, latitude, longitude, size, population size and elevation. For data such as this it would be pretty easy to have a form to enter the data into, as opposed to using the usual markup, and the form could put the data into the proper markup for the page and the proper RDF. Once the data is in proper RDF form it would be easy to automate the process of updating translations of that page with the new data, as well as updating any pertinent lists. It would also make it easier for people who want to analyze or use the data, because they would be able to access it much more easily.
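A rough sketch of that scenario, with plain Python tuples standing in for RDF triples. All figures and the query helpers here are made up for illustration, not real census data or a real triple-store API:

```python
# City infobox facts kept as subject-predicate-object triples, so lists
# like "cities over 500k" are derived queries rather than hand-curated
# pages that go stale. The numbers are invented for the example.
triples = {
    ("Amsterdam", "population", 821_752),
    ("Amsterdam", "area_km2", 219.3),
    ("Rotterdam", "population", 623_652),
    ("Rotterdam", "area_km2", 324.1),
    ("Utrecht", "population", 345_080),
    ("Utrecht", "area_km2", 99.2),
}

def value(city, prop):
    """Look up a single property of a city from the triple set."""
    return next(o for s, p, o in triples if s == city and p == prop)

def density(city):
    """Population density, computed on demand instead of stored in a list."""
    return value(city, "population") / value(city, "area_km2")

def cities_with_population_over(threshold):
    """The kind of query a hand-maintained wiki list duplicates today."""
    return sorted(s for s, p, o in triples
                  if p == "population" and o > threshold)

print(cities_with_population_over(500_000))
print(round(density("Utrecht")))
```

Update one triple and every derived list, translation, and mashup reading the data sees the change, which is the point of the example above.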

    But nobody really wants machine-readable access to this information, you might say, except for the random geek and researcher. I would disagree. Let's say you're using a program like Marble, which is similar to Google Earth in some ways but is completely open source. If they wanted to display the population of a city when you hover over it, they would currently have to create and maintain their own dataset or write a parser to extract it from Wikipedia. Neither of those options is particularly easy at the moment, but if the information were in semantic form on Wikipedia it would be a piece of cake.

    The strength of the semantic web isn't, in my opinion, going to be AI-like personal agents or anything like that. It'll be things that in many ways are already here. Like Yelp putting geotags on the restaurants they review, and apps like Google Earth taking that data, available in machine-readable (semantic!) form, and overlaying it on a map so that you can see what's nearby. It'll be applications doing the same with the geotags from Flickr. It's really useful mashups like http://www.housingmaps.com/ [housingmaps.com]. It's the transit agency putting real-time bus data up in semantic form so you can see on your iPhone's Google map how far away the bus is. So yeah, maybe the semantic web is overhyped, but that doesn't mean there isn't a lot of substance there, too.

    Cheers,
    Greg

  • Vapourware my arse (Score:4, Insightful)

    by theno23 ( 27900 ) on Monday February 11, 2008 @05:13AM (#22377424) Homepage

    The company I work for, Garlik [garlik.com] has two products that are run off semantic web technology. DataPatrol [garlik.com] (for pay) and QDOS [qdos.com] (free, in beta).

    We use RDF stores instead of databases in some places as they are very good at representing graph structures, which are a real pain to deal with in SQL. You often hear the "what can RDF do that SQL can't" type arguments, which are all just nonsense. What can SQL do that a field database, or a bunch of flat files, can't? It's all about what you can do easily enough that you will be bothered to do it.

    A fully normalised SQL database has many of the attributes of an RDF store, but
    a) when was the last time you saw one in production use?
    b) how much of a pain was it to write big queries with outer joins?

    RDF + SPARQL [w3.org] makes that kind of thing trivial, and has other fringe benefits (better standardisation, data portability) that you don't get with SQL.
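Not Garlik's actual schema or code; just a toy sketch of why graph-shaped queries favour a triple store. Reachability over any number of edges is a short loop over triples (and, in SPARQL with property paths, roughly a one-line pattern like `?a ex:knows+ ?b`), whereas plain SQL needs recursive queries or a pile of self-joins and outer joins:

```python
# Toy triple graph: who knows whom, possibly indirectly.
triples = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("carol", "knows", "dave"),
    ("eve", "knows", "mallory"),
}

def reachable(start, predicate):
    """Everyone connected to `start` through one or more `predicate` edges."""
    seen, frontier = set(), {start}
    while frontier:
        # Follow every matching edge out of the current frontier,
        # skipping nodes we've already visited (handles cycles too).
        nxt = {o for s, p, o in triples
               if p == predicate and s in frontier and o not in seen}
        seen |= nxt
        frontier = nxt
    return seen

print(sorted(reachable("alice", "knows")))
```

The per-row cost is the same either way; the difference is that here the variable-depth traversal is the natural operation, which is the "easily enough that you will be bothered" point from the comment above.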

    I guess it shouldn't be a surprise to see the comments consisting of the usual round of more-or-less irrelevant jokes and snide commentary - this is Slashdot after all - but I can't help responding.

    • We use RDF stores instead of databases in some places as they are very good at representing graph structures, which are a real pain to deal with in SQL. You often hear the "what can RDF do that SQL can't" type arguments, which are all just nonsense. What can SQL do that a field database, or a bunch of flat files, can't? It's all about what you can do easily enough that you will be bothered to do it.

      Without knowing the details of your circumstances, it sounds like, maybe, the real point is that what you wa

      • by Wastl ( 809 )

        For the record: I am a researcher working in the Semantic Web area, and I am the primary developer of the system IkeWiki [salzburgresearch.at] and the reasoning language Xcerpt. Since this discussion seems to pop up again and again on Slashdot, I didn't want to add comments on the same issues (trust, search) again. But your comment might add something new to the discussion:

        Without knowing the details of your circumstances, it sounds like, maybe, the real point is that what you want is an object oriented database rather than rela

    • I work for a research lab in the Netherlands; we've also finished quite a few projects using Semantic Web technology. Our use case is large heterogeneous data sets in agrotech, like representing all knowledge on growing tomatoes and tomato quality in the Dutch agro sector.

      Finally a comment that compares Semantic Web technology to RDBMS technology. It's very unfortunate that it has "Web" in the name. Makes the clueless think it's supposed to be a try for WWW 3.0, or something...
