Welkin: A General-Purpose RDF Browser

Stefano Mazzocchi writes "Many consider the Semantic Web to be vaporware and others believe it's the next big thing. No matter where you stand, a question always pops up: Where is the RDF browser? The SIMILE Project, a joint project between the W3C, MIT, and HP to implement semantic interoperability of metadata in digital libraries, today released the first beta of a general-purpose graphical and interactive RDF browser named Welkin (see a screenshot), targeted at those who need to get a mental model of any RDF dataset, from a single RSS 1.0 news feed to a collection of digital data."
  • by otisg ( 92803 ) on Tuesday November 09, 2004 @09:58PM (#10772738) Homepage Journal
    Considering the big 1.0 for Firefox is out, I would think people who wanted their Semantic Web browsing software to be wide-spread would implement it as a Firefox plugin, no?
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Tuesday November 09, 2004 @10:00PM (#10772747) Journal
    The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text. No matter how you cut it, the problem lies in having too much information immediately available.

    Imagine you are reading a book, but each word is connected by string to a dictionary reference, and each dictionary definition is tied to the definitions of the words in the definition. You'd end up with a huge, eventually circular mess of string, and you couldn't realistically get any enjoyment out of the book. The fact of the matter is that if you want more information about something, it is easy to go to an outside source and look it up. It does not need to be easier, because by making it easier than it needs to be, you end up cluttering the very thing you want to illuminate.

    There is an old saw, "Make things as simple as possible, but no simpler." The Semantic Web, while an interesting idea, tries to make things too easy, beyond the point of usefulness. The lack of content on the Semantic Web is a testament to the uselessness of such an over-engineered web space.
    • by Anonymous Coward on Tuesday November 09, 2004 @10:04PM (#10772780)
      "There is an old saw, "Make things as simple as possible, but no simpler." The Semantic Web, while an interesting idea, tries to make things too easy, beyond the point of usefulness. The lack of content on the Semantic Web is a testament to the uselessness of such an over-engineered web space."

      Or a testament to people's inability to understand new paradigms.
    • by Moby Cock ( 771358 ) on Tuesday November 09, 2004 @10:04PM (#10772785) Homepage
      Hear, hear. Wikipedia is about the most hyperlinking that I could stomach. It is useful in that specific application, but the notion of the Semantic Web is silly. For crying out loud, people on /. don't RTFA, let alone verify all definitions of the words in the summary.
    • by gl4ss ( 559668 ) on Tuesday November 09, 2004 @10:06PM (#10772794) Homepage Journal
      make it so that they don't look like different links.. until you press some button or triple click on the word or whatever..

      so.. invisible strings that you can see if you wish.
    • by 0x461FAB0BD7D2 ( 812236 ) on Tuesday November 09, 2004 @10:10PM (#10772831) Journal
      Welkin is simply a PoC, IMO. It just attempts to prove that you can link information together in a fairly suitable way. This is always the first step in any new technology. Other products could, and probably should, use it for different purposes.

      Your main objection lies in that it does not filter information, but adds to the mass information overload humans experience daily. However, this can be changed simply. Welkin seems to dump all data at once. The code could be changed so you could traverse ideas. I can already see the usefulness of such a thing for educational purposes.

      The lack of content on the Semantic Web is a testament to its current lack of usefulness. If there were more content on it, it would be inherently more useful.
    • by mat catastrophe ( 105256 ) on Tuesday November 09, 2004 @10:12PM (#10772842) Homepage
      But how quickly can Microsoft turn it into a security hole for your friends and family?
    • by sonsonete ( 473442 ) on Tuesday November 09, 2004 @10:12PM (#10772844) Homepage
      The point of the Semantic Web lies not in making information readily available to people browsing the internet but in providing semantic context with which computers can work. A person reading a document in a browser is not expected to follow links attached to every word. Rather, a computer program is expected to be able to use this information to learn the meaning behind the string of characters.
      • by Jagasian ( 129329 ) on Wednesday November 10, 2004 @12:04AM (#10773600)
        And how is the meaning behind a string of characters given? For example, let's say you want to give the meaning behind a string of characters that describes to a human the proof of Skolem's Paradox.
        • by Earlybird ( 56426 ) <slashdotNO@SPAMpurefiction.net> on Wednesday November 10, 2004 @01:03AM (#10773952) Homepage
          • And how is the meaning behind a string of characters given? For example, let's say you want to give the meaning behind a string of characters that describes to a human the proof of Skolem's Paradox.
          It's given by marking it up. The computer doesn't need to know anything about the proof of anything, just like Google doesn't know anything about porn, and yet when you search for "big boobs", it knows what to return. *wink*

          The point isn't that a computer program will ever "know" what Skolem's paradox is, in the same way a human would "know" what it is. The semantic web isn't about building artificial intelligence into computers, but rather adding knowledge statements to information.

          If you tell a computer that Einstein is a scientist, that Einstein is a German, that Einstein won the Nobel prize in physics in 1921 and that this is an image of Einstein [uni-paderborn.de], then a computer will be able to infer that this picture is of a German scientist [slashdot.org].

          Based on this information, I could ask the computer for pictures of all the other German scientists who were awarded the Nobel prize in 1921, or some other time. Clearly the computer doesn't need to know about nationalities, or dates, or to understand pictures.
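
          Sketched in N3, just to show the shape (every name here is hypothetical, not a real published vocabulary):

          :Einstein a :Scientist .
          :Einstein :nationality :Germany .
          :Einstein :wonPrize [ :prize :NobelPhysicsPrize ; :year 1921 ] .
          <http://example.org/einstein.jpg> :depicts :Einstein .

          { ?pic :depicts ?x . ?x a :Scientist . ?x :nationality :Germany . }
              => { ?pic a :PictureOfAGermanScientist . } .

          A generic rule engine can reach the "picture of a German scientist" conclusion from those statements without knowing anything about physics, Germany, or pictures.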

          There are simpler use cases, too. Say there's a product called Paradox (well, there used to be one). People searching for just the word "paradox" might get matches for pages about "Skolem's paradox". But if the pages were appropriately marked up, Google (or whatever) could ask you whether you meant a specific paradox, just the way Google currently asks if you perhaps meant something else [google.com].

          • by Jagasian ( 129329 ) on Wednesday November 10, 2004 @01:10AM (#10773988)
            Yes, but all this assumes that people agree on exact, precise ways of representing everything. Just as in flat Unicode text you can describe the same thing in multiple ways, in RDF you can also describe the same thing in multiple ways that still differ in RDF's semantics. For example, there are multiple ways to encode a ternary predicate in terms of binary predicates. Each one of these representations will differ not just in syntax, but also in RDF's semantics. Nothing is gained!
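
            For instance, the ternary fact "John gives a book to Mary" can be reduced to binary triples in at least two ways that yield different RDF graphs (vocabulary hypothetical):

            # Encoding 1: reify the giving event as its own node
            :gift1 :giver :John ; :object :Book ; :recipient :Mary .

            # Encoding 2: curry through an intermediate blank node
            :John :gives [ :object :Book ; :to :Mary ] .

            Both are legal RDF and mean the same thing to a human, but no generic RDF processor will treat them as equivalent.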

            RDF and the semantic web assume an ideal situation in which all information is complete and formatted in a uniform way.
      • by geg81 ( 816215 ) on Wednesday November 10, 2004 @05:28AM (#10774859)
        Well, maybe computers should just get smart enough to figure it out for themselves, instead of turning a billion web authors into markup slaves.
      • by jandersen ( 462034 ) on Wednesday November 10, 2004 @07:03AM (#10775065)
        'Rather, a computer program is expected to be able to use this information to learn the meaning behind the string of characters.'

        Exactly - and while it seems like a worthwhile mental exercise, I would like to point out that this kind of construction has an immense potential for controlling information and shaping people's opinions. Just imagine advertisers getting their foot in somewhere in this; suddenly the machine's 'understanding' is coloured by commercial or political interests.

        No, give me the raw information any time, and I'll do the understanding myself, if you don't mind.
    • by greg_barton ( 5551 ) * <greg_barton@yah o o .com> on Tuesday November 09, 2004 @10:15PM (#10772873) Homepage Journal
      The fact of the matter is that if you want to get more information about something, it is easy to go to an outside source to look it up.

      The semantic web isn't about human usability. It's about building machine intelligence and knowledge.
      • by Fnkmaster ( 89084 ) * on Tuesday November 09, 2004 @10:37PM (#10773030)
        The semantic web isn't about human usability. It's about building machine intelligence and knowledge.

        Right, but the problem is if it's unusable for humans to _create_ that content, or to map it from human knowledge-space into machine-parseable format, then it doesn't matter if it's well-engineered from the machine's perspective. That's why adoption of the semantic web has been so poor (outside of applications that could just as well be filled with any ole' XML dialect, like RSS or RDF descriptors used to package Firefox extensions, and so on).

        Nobody wants to hire a team of ontological engineers to map information they already have in human accessible form into some highly structured, machine parseable format, and pay them to keep that information up-to-date. Mind you, companies only started paying people to put stuff up on the web when it became clear there was demand, and the early adopters of the web were individuals and academics, but the web was accessible from day one - I put up my first personal web page when I was 15 years old or so, and it took me about an hour to figure out how to do it.

        Also remember that big companies spend tens of millions of dollars hacking together some HTML for their website. Imagine how much they would have to pay to get people smart enough to construct ontologies and RDF data versions of all of their content. Yowsers!
      • by Jagasian ( 129329 ) on Tuesday November 09, 2004 @11:39PM (#10773427)
        I am sure I will be modded as a troll, but somebody needs to say it, somebody needs to stop these guys.

        The "Semantic web" is the latest snake oil being pawned by the AI community.

        Nothing is worse than an AI-type. They make big claims and never deliver. They over-anthropomorphize all aspects of computation, fooling themselves into a false understanding of everything related to computer science. For example, Emacs is "intelligent" because it includes a broken implementation of the lambda-calculus, one that doesn't even properly implement beta-reduction? Since when have breadth-first search, depth-first search, and other search algorithms had anything to do with intelligence? They are just procedures for solving problems. The list goes on and on...

        These people regularly try to disprove things such as the undecidability of various problems, the incompleteness of various logics, etc... as these undeniable mathematical proofs point to the fact that computers cannot be intelligent. Sure it is fun for sci-fi movies, but it surely isn't real science.

        The semantic web is nothing more than AI-types recycling their same old crap. RDF and OWL, the two most popular scripting languages for the semantic web, are just "semantic nets", an AI concept from the 1970s, rehashed. Yup, when you can't sell any snake oil, just rename it to something else and profit... and profit they do, but does society ever see the AI-types' promises come to fruition?

        Give me a break! Give the world a break. Computer science is just that: a science, and so the pseudoscience that is AI must be addressed by more people in the field of computer science.
        • I know! The gall of these people, thinking that a physical system could produce a sentient being! How could they push this snake oil?
          • by Jagasian ( 129329 ) on Wednesday November 10, 2004 @12:34AM (#10773798)
            They have yet to produce a sentient computer. They haven't come anywhere close to doing so. There is no science to their field. There is no systematic process for attaining such a goal.

            Furthermore, computers are a very restricted form of physical system, and therefore they are limited. There are problems that are not computable, yet humans solve them on a regular basis. Even though many people like to anthropomorphize aspects of computation, there is very little in common between a human and a computer.
            • There is no science to their field.

              I think it's a bit too early yet to criticise the field of AI for not being scientific. Remember, we humans messed around with electricity for about 70 years before we even found a use for it, let alone understood what caused it. We need to explore, daydream, and play around for a bit before we can get down to some serious science. And, by "a bit," I mean around 50 more years.

              And we don't even have powerful enough computers yet to play with. It's kind of like criticising blacksmiths for not having a systematic way to create a 747.

              ...there is very little in common between a human and a computer.

              Uh, yeah. That's why the problem is so hard.
              • by Jagasian ( 129329 ) on Wednesday November 10, 2004 @02:00AM (#10774186)
                The claim that computers are not yet powerful enough is a generic excuse from AI proponents for why their goals have yet to be realized. In fact, in the early 1990s the Turing Award went to an AI guy from India. His speech was on all that is AI. Read it and you will see exactly what I am ranting about. At the end of his speech he says that the AI community's goals will be realized once we have a "giga-computer", i.e. a computer that costs the same as a PC yet can execute at a gigahertz.

                Yup, the Turing Award-winning AI man says it himself. In his speech he basically displays everything there is to this type of person and their pseudoscience.
            • by d34thm0nk3y ( 653414 ) on Wednesday November 10, 2004 @03:23PM (#10779543)
              They have yet to produce a sentient computer. They haven't come anywhere close to doing so. There is no science to their field. There is no systematic process for attaining such a goal.

              The assertion that the point of AI research is to build a sentient computer just shows your lack of knowledge of the field.
            • by DonGar ( 204570 ) on Wednesday November 10, 2004 @06:46PM (#10781881) Homepage
              Can you name a single problem which is undecidable, but which humans can solve? And I mean actually solve, not just come up with an answer which is "good enough".

              I'm not trying to argue, I'm really curious.
        • by INT 21h ( 7143 ) on Wednesday November 10, 2004 @05:25AM (#10774847) Journal
          Funny that... those AI-types come up with a new algorithm, heuristic or idea (compilers, parsers, A*, lisp...) and ten years later it is mainstream and no longer considered AI. Face it, AI is the 'basic research'-wing of CS.
          • by Jagasian ( 129329 ) on Wednesday November 10, 2004 @12:36PM (#10777635)
            Yeah, Lisp: a broken (they didn't even understand proper beta-reduction) reinvention of the lambda-calculus 20 to 30 years after mathematicians created it. Pretty damn innovative. What will they come up with next, arithmetic? A* is nothing more than a constrained search, and it is a hack which only works well in toy games... not real-world problems. I wouldn't go bragging about A* as a cornerstone of AI. Parsers were already created in the fields of computational linguistics and automata theory before they were reinvented by the AI community. The same goes for compilers.

            Anyway, just because a painter claims to be researching AI, doesn't make his paintings a product of the field of AI, the same goes for wannabe mathematicians, programming language designers, etc.

            If there is a pure research wing of CS, it would be mathematics. See metamathematics, proof theory, and number theory for good examples. At its best, the field of AI has simply reinvented mathematical concepts from formal logic (semantic nets... oops, they are a fragment of FOL) to lambda-calculi (Lisp... oops, we just reinvented the wheel known as the lambda-calculus).
    • by Anonymous Coward on Tuesday November 09, 2004 @10:18PM (#10772893)
      I [reference.com] do [reference.com] not [reference.com] know [reference.com] what [reference.com] you [reference.com] are [reference.com] talking [reference.com] about [reference.com].
    • by base_chakra ( 230686 ) * on Tuesday November 09, 2004 @10:28PM (#10772955)
      The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text ....
      Imagine you are reading a book, but each word is connected by string to a dictionary reference, and each dictionary reference definition is tied to the definitions of the words in the definition. You'd end up with a huge, eventually circular mess


      Although your concerns about user interface are well-taken, you seem to be thinking about this strictly in terms of hyperlinks as presented by web browsers, which is a rather limited view. Behaviors could be user-defined, hidden, and abstracted in virtually endless ways.

      For example, even now double-clicking any word in the Opera web browser activates a context-sensitive menu with such options as Dictionary, Encyclopedia, Search, Translate, etc.
    • by medelliadegray ( 705137 ) on Tuesday November 09, 2004 @10:28PM (#10772963)
      "The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text."

      While your description of this is quite unflattering, I think it would be useful to have the ability to, say, highlight a word or phrase and, by right-clicking on it, get the option of: encyclopedia lookup, dictionary, thesaurus, etc. I believe in my context that could be extremely useful.
    • by mikael ( 484 ) on Tuesday November 09, 2004 @10:38PM (#10773036)
      The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text. No matter how you cut it, the problem lies in having too much information immediately available.

      You could always have your own private dictionary of words that you wanted hyperlinked whenever possible, and also an ignore list for the most popular words. But things would get tricky for movie titles which are wordplay on some other concept.
    • Imagine you are reading a book, but each word is connected by string to a dictionary reference, and each dictionary reference definition is tied to the definitions of the words in the definition. You'd end up with a huge, eventually circular mess of string and you couldn't realistically get any enjoyment out of the book.

      Ever read Infinite Jest (or anything else) by David Foster Wallace? QED


    • by Spoing ( 152917 ) on Tuesday November 09, 2004 @11:27PM (#10773337) Homepage
      1. The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text.

      Like that, no. Though we do need metadata and a browser or search engine that supports that metadata.

      Not for the current popular stuff that's out there -- the web browsers and search engines work well enough for that.

      Where metadata and ways to search it get interesting is in media files such as audio, video, and images. This isn't like a book search in a card catalog, though. Transient stuff such as audio blogs covered by RSS with enclosures (podcasting) would need to be indexed. Currently, a standard search engine would be a week or two behind this media -- making it much less valuable.

    • This is pretty much my reaction. Look at the screenshot [mit.edu] mentioned, and ask yourself how that mesh of lines will help you find the best BBQ sauce recipe or download porn faster. Maybe I'm not seeing the forest for the trees. Did I say porn? I meant Elizabethan poetry.
    • by RAMMS+EIN ( 578166 ) on Wednesday November 10, 2004 @05:06AM (#10774801) Homepage Journal
      Now that is about the worst argument I've heard against the semantic web. Just because the links are there doesn't mean you will be distracted by them. There is nothing that says that links are always blue and underlined. In fact, in your scenario, they would more likely look like regular text, but you could select a word or group of words and use a context menu to get a definition. Remember, the semantic web only provides semantics, not appearance. If and how to display the information it provides is a separate issue.
    • by CrazyWingman ( 683127 ) on Wednesday November 10, 2004 @09:26AM (#10775616) Journal
      Um - you have been sucked into the "standard browser display" zone. There is absolutely no reason that all of your links must be underlined, colored, or otherwise highlighted. There is also no reason not to have the information readily available. The idea that should be extracted from your post is that developers should resist the temptation to clutter the display with alerts that more information is available.
    • There is an old saw, "Make things as simple as possible, but no simpler."

      I believe that the "old saw" actually comes from Albert Einstein who said Everything should be made as simple as possible, but not simpler.

      The Semantic Web, while an interesting idea, tries to make things too easy, beyond the point of usefulness.

      I don't believe that is what Albert Einstein intended. I believe what he was saying is: when modeling the world, make the model as simple as you can without introducing inaccuracy through oversimplification.

    • by Bob Uhl ( 30977 ) on Wednesday November 10, 2004 @02:02PM (#10778647)
      But folks like Wikipaedia.

      And as others have noted, your argument is simply against intrusive display of links.

      Regardless, that's not what the Semantic Web is about. Its concern is presenting data in such a form that machines can reason about it and perform useful services for users. This is not really AI (which is good, since AI is impossible): it's just intelligent description of data. Done right, an RDF engine could perform such tasks as 'find the cheapest trip to Memphis from Denver next week, with three to four nights and a luxury car' without needing to know anything about trips, Memphis, Denver, hotels, cars, taxes or aught else--that information would be made available in the Semantic Web, and the engine would simply perform its reasoning.

      It's uncertain whether this would work or not, but it's a valiant effort.

  • Solution space? (Score:2, Interesting)

    by Anonymous Coward on Tuesday November 09, 2004 @10:01PM (#10772755)
    Isn't RDF much like the laser used to be? A solution looking for a problem?
  • RDF a load of crap (Score:4, Insightful)

    by Anonymous Coward on Tuesday November 09, 2004 @10:02PM (#10772767)
    Enough people have said it, but it's worthwhile saying again. RDF is totally flawed and will never meet the vision of the W3C. The whole idea that an RDF resource is true and authoritative is just silly. Look at what happened to the HTML meta tag: it got abused instantly and search engines stopped using them. RDF's rules are monotonic, which is just totally silly; that basically means any rules written in RDF will time out if the data isn't already on that particular server. The W3C should just give up on RDF already and move on.
    • by Anonymous Coward on Tuesday November 09, 2004 @10:19PM (#10772904)
      "enough people have said it [All related to the OP], but it's worth while saying again. RDF is totally flawed and will never meet the vision of W3C [And that is?]. The whole idea that an RDF resource is true and authorative is just silly [Just like the present web]. Look at what happened to HTML metadata tag. I got abused instantly and search engines stopped using them [They're used, just not alone]. RDF rules is monotonic, which is just totally silly. that basically means any rules written in RDF will timeout if the data isn't already on that particular server [Can you say local, and intranet?]. W3C should just give up already on RDF and move on. [Just like the advice we give those KDE guys]"

      Read this.

      http://www.oreilly.com/catalog/pracrdf/index.html/ [oreilly.com]

      Maybe you'll learn something.
    • by Yosi ( 139306 ) on Tuesday November 09, 2004 @10:28PM (#10772967) Journal
      There are those who worry about these things.
      Much work on the semantic web has been done with N3 [w3.org].
      N3 is a superset of RDF, allowing for quoting of groups of triples (known as formulae). In N3, you can say things about groups of N3 triples, including about their trustworthiness.

      For instance, you can say:
      [is log:semantics of <documentURI> ] a :untrustworthyInformation .
      essentially saying that the formula which is the semantics of the given document is of a class :untrustworthyInformation, which your N3 parser may attach special meaning to.

      There are many who are very wary of N3 for precisely these reasons.

      Note that I will always plug n3, given that I'm heavily involved with cwm [w3.org].
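
      (If you want to try it: assuming cwm is installed and the statements above are saved in trust.n3, an invocation along the lines of

      cwm trust.n3 --think --n3

      should apply any rules in the file and print the resulting graph back out as N3. Flags quoted from memory; check cwm's --help.)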
  • Gee thanks... (Score:5, Insightful)

    by Fnkmaster ( 89084 ) * on Tuesday November 09, 2004 @10:15PM (#10772869)
    After looking at that screenshot, it's sooo clear to me the value that the semantic web brings to us (mirrored here [hardgrok.org] as their server appears to be flaking out a bit). If anything, this makes it crystal clear why the semantic web hasn't really taken off, other than in the much more limited form of RSS feeds.

    A network of random connections of semantic concepts embodied as URIs is just not a friendly form of data for humans to manipulate directly, and I don't think it ever will be. That's right, I don't believe this is really an issue that's solvable with slightly better tools. I think ultimately the management and connection of ontologies is something that computers will have to learn to do themselves.

    It's just too hard to expect normal human beings to describe knowledge in any way other than the way we are used to. The web is only as popular as it is because HTML is a simple, appearance-based way to mark up documents (yes, I realize strictly speaking HTML isn't supposed to describe many aspects of appearance per se, but there's no denying that it comes from that root). We understand bold and italics (and even strong and em), but ask somebody to generate two concepts by constructing URIs for them and relating them in subject-predicate form and they are going to look at you and drool.

    Even programmers aren't used to the idea of describing knowledge - it's one thing to tell a computer what to do, it's another thing to tell a computer how to know about something that you know.

    Alright, I know I'm opening myself up to the flames here, so flame away. Anyway, I think the "semantic web" will need to wait for tools like Cyc et al. to come along far enough to construct and relate their own ontologies out of English text, and until then all we will see is stuff like RSS or RDF files in Firefox extensions to describe deployment conditions (i.e. stuff that can be done with any arbitrary XML dialect and that doesn't really qualify as the "semantic web" to me).
    • by Anonymous Coward on Tuesday November 09, 2004 @10:28PM (#10772964)
      "Even programmers aren't used to the idea of describing knowledge - it's one thing to tell a computer what to do, it's another thing to tell a computer how to know about something that you know."

      Yea. Just try getting a programmer to explain the latest thing they're working on. "Well you see it does this, and if you click on that, something happens. It's all too complicated to explain, sorry."
      • by Fnkmaster ( 89084 ) * on Tuesday November 09, 2004 @10:44PM (#10773070)
        Sorry, I don't think you understood my meaning, probably because you have zero familiarity with the semantic web outside of reading headlines on Slashdot.

        Of course programmers are very good at being precise in describing algorithms, but describing knowledge in subject-verb-object format is not so easy. You can't just describe your algorithm, you have to relate each of the base concepts used in the algorithm to existing ontologies, or create your own ontologies for them. Describing an algorithm in pseudocode, or in a structured language with straightforward syntactic rules is relatively easy. Doing the same using RDF is HARD because you aren't just implementing the algorithm, you are describing to a computer what the algorithm does and how to use it.

        I encourage you to try working with some of the semantic web technologies a little bit and form your own opinions, as I have done (admittedly 1.5-2 years back, so things may have come along a bit since then, but I doubt the fundamentals have changed).
        • by Jagasian ( 129329 ) on Tuesday November 09, 2004 @11:58PM (#10773559)
          Are you saying that RDF is a Turing-complete programming language? How the hell would I write, for example, a C++ interpreter in RDF? Also, in what sense can you describe what an algorithm does using RDF? For example, let's say I devised a new sophisticated compression algorithm. How do I describe that in RDF?
          • by Fnkmaster ( 89084 ) * on Wednesday November 10, 2004 @12:49AM (#10773897)
            No, not exactly, RDF isn't a programming language, it's a knowledge description language. You wouldn't write a C++ interpreter in RDF any more than you would write one in English, since that's not what they do. But you can describe a C++ interpreter in English and you can describe a C++ interpreter in RDF.

            Everything breaks down to subject-verb-object tuples. RDF is supposed to be general enough to describe, well, any and all knowledge.

            So you could imagine a description of RSA in RDF pseudo-code:

            p-isA-prime
            q-isA-prime
            n-hasfactor-p
            n-hasfactor-q
            n-numfactors-2
            phi-hasfactor-(p-1)
            phi-hasfactor-(q-1)
            phi-numfactors-2
            e-greaterthan-1
            e-lessthan-phi
            e-isRelativelyPrime-phi

            ...and so on.

            Obviously, all my RDF verbs then need to be defined - in other words, I need a basic ontology for some functions. But, yes, it's possible to describe an algorithm in RDF in a way that would theoretically allow an RDF parser to make inferences about it, or to execute the algorithm, given a sufficient understanding of the ontology.
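
            Spelled out in N3 it might read something like this (the math: ontology is made up; somebody would still have to define and publish it):

            @prefix math: <http://example.org/math#> .

            :p a math:Prime .
            :q a math:Prime .
            :n math:hasFactor :p, :q .
            :e math:relativelyPrimeTo :phi .

            Defining that ontology, and getting everyone else to agree on it, is exactly the expensive part.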

            It's just a somewhat awkward way to go about describing an algorithm, and quite time consuming.
            • by Jagasian ( 129329 ) on Wednesday November 10, 2004 @01:06AM (#10773965)
              But flat Unicode text is enough to describe all knowledge. I fail to see what is gained.
              • by Fnkmaster ( 89084 ) * on Wednesday November 10, 2004 @01:26AM (#10774051)
                Only that it's hard to structurally relate concepts in a flat text file dump. A flat text file dump + a little bit of structure is XML. From that structure you gain easier, more consistent machine parseability. Can you exchange data without XML? Of course. Now, take the structure of XML and add relationships - in other words, we have a "person" element in document one and a "name" element in document two - if both documents have a relationship back to a common ontology, then the elements can be programmatically associated with or mapped to each other.
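
                 A minimal sketch of such a mapping in N3 (the doc1/doc2/shared vocabularies are hypothetical; rdfs and owl are the real W3C namespaces):

                 @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
                 @prefix owl: <http://www.w3.org/2002/07/owl#> .
                 @prefix doc1: <http://example.org/doc-one#> .
                 @prefix doc2: <http://example.org/doc-two#> .
                 @prefix shared: <http://example.org/people#> .

                 doc1:person rdfs:subClassOf shared:Person .
                 doc2:name owl:equivalentProperty shared:personalName .

                 Once both documents point at the same shared terms, a generic processor can join records from the two sources without custom glue code for every pair of formats.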

                And theoretically (very theoretically), our algorithm in a generic RDF description could be used by a sufficiently sophisticated RDF parser to autogenerate source code in a real programming language that it "understood", like C++.

                Whether the benefits you get from being able to do this stuff in theory are worth the massive up front effort involved in RDF-izing enough knowledge to be useful is the real question.
                • by Jagasian ( 129329 ) on Wednesday November 10, 2004 @02:07AM (#10774210)
                  And theoretically (very theoretically), our algorithm in a generic RDF description could be used by a sufficiently sophisticated RDF parser to autogenerate source code in a real programming language that it "understood", like C++.


                  You do realize that there are computability issues that would prevent such automatic programming?

                  Whether the benefits you get from being able to do this stuff in theory are worth the massive up front effort involved in RDF-izing enough knowledge to be useful is the real question.


                  This is a common claim used by proponents of AI.

                  1940s: "It is just around the corner."

                  1970s: "We just need more links in our semantic nets."

                  1980s: "We just need more facts input into our expert system."

                  early-1990s Turing Award winner for AI: "We just need more processing power. All we need is a giga-PC."

                  2000s: "We just need more data in RDF."
                  • by Fnkmaster ( 89084 ) * on Wednesday November 10, 2004 @02:39AM (#10774373)
                    You do realize that there are computability issues that would prevent such automatic programming?

                    No, I'm not aware. Please elucidate. You can write invalid programs in any language, or metalanguage, and it's impossible to decide in general whether the program your code generator generates will halt, obviously.

                    I am not saying it is a practical application; all I'm saying is you can in theory go from an XML-ish algorithmic description to a functional piece of software, so you can clearly do the same with RDF, just more awkwardly (see XUL for an example of an XML system that has a straightforward mapping to GUI code in several programming languages).

                    You'd have to be crazy to actually want to program like this using RDF, as I said, and I'm not defending the semantic web, just explaining it since I interpreted your original questions to be genuine. If you're looking for a semantic web fanboy to bash, you're looking in the wrong place.

                    This is a common claim used by proponents of AI.

                    No, not at all. I wasn't suggesting that the system becomes self-aware or does anything magical at all. I said there are some theoretical benefits, which seem modest, but there is a massive upfront cost, and it doesn't seem like the benefits are worth the effort (which is presumably why nobody has really adopted the semantic web stuff outside of academia).

                    As for your skepticism about AI in general, I think it's a bit misplaced. Much of what the previous generation considered AI becomes part of the standard repertoire of computer algorithms. I've used several algorithms dredged out of old AI papers for real-world problems (such as Charles Forgy's Rete nets). The thing is it's laughable to think that these algorithms have anything to do with real "artificial intelligence".

                    AI is merely a discipline within computer science devoted to solving problems with computers that are usually solved by humans, and that aren't naturally suited to the computational regime of computers. To say that somebody is a "proponent" of AI presumably implies something about their belief in the possibility of strong AI in the near-term future, and I never said anything of the sort.
    • by neltana ( 795825 ) on Tuesday November 09, 2004 @10:37PM (#10773025)
      So, in the future, we will all be able to use the Semantic Web to make pretty pictures out of our data. The Spirograph and the Etch-A-Sketch will be things of the past. My kids will now be able to make pretty abstract art from my checkbook. Technology is wonderful. I'm one of those people who is waiting for a killer app before I get on the Semantic Web bandwagon. In the meantime, I'm just going to regard it as the source of more computer science acronyms. What I suspect will happen is that the next great thing AFTER the Semantic Web will, perhaps, borrow a concept or two from it. That will be the greatest thing since sliced bread.
    • by Craigory ( 553911 ) <<ude.cnu.sc> <ta> <sllafc>> on Tuesday November 09, 2004 @10:46PM (#10773082)
      There are a lot of misconceptions about the Semantic Web going on here. 1) It's not a way to insert a lot of links into HTML the way Wikipedia does. 2) It's not some sort of replacement for HTML or for describing things in good old-fashioned text. The idea is that you describe knowledge in a way that is repurposable (i.e., a computer can prove theorems about it, etc.). It's more like an SQL database than an HTML web page. Your secretary is *not* supposed to hand-edit RDF files. There may be RDF files in the *back-end* of the application he uses, however. The Semantic Web will make it easier for two corporations to merge their databases, for example. The URIs associated with RDF entities and relationships need not be associated with any viewable web page. RTFM.
      • Re:Gee thanks... (Score:3, Informative)

        by Fnkmaster ( 89084 ) * on Tuesday November 09, 2004 @11:12PM (#10773260)
        Your comments have no relevance to mine - I have done a fair amount of work with semantic web technologies before (well, compared to most people out there, anyway); my comments were based on my personal experience, not on some random misconceptions formed by reading Slashdot articles.

        1) I never said anything of the sort. RDF/Semantic Web technologies have nothing to do with inserting links into HTML.

        2) I never said it was a replacement for HTML. I just said it wasn't likely to be adopted because of the difficulty of creating content in a properly structured, ontologically connected RDF format.

        Of course your secretary isn't supposed to hand-edit RDF files, but somebody has to not only write code that dumps stuff from a database into RDF (easy - not really any different from dumping into any ole' XML format) but map all the stuff into relevant ontologies (not easy), where "easy" is defined in terms of being comprehensible enough to permit adoption outside of academia.

        3) It's only easier for two corporations to merge databases if all the entities therein are connected by direct or indirect ontological relationships. People have to build these relationships. That was the whole point of my post.

        4) I said nothing about URIs being associated with viewable web pages. Stop inserting random straw man attacks.

        Apparently you are the one who needs to RTFM instead of getting up on your high horse there, buddy. Not everybody on Slashdot is as ignorant as you presume.
  • by aixou ( 756713 ) on Tuesday November 09, 2004 @10:16PM (#10772880)
    Niice. I've always wanted to know what's going on in Steve Jobs' head.
  • by digitalgimpus ( 468277 ) on Tuesday November 09, 2004 @10:22PM (#10772924) Homepage
    It's nice seeing the first screenshot on a Mac.

    Just something friendly about that.
  • by xmas2003 ( 739875 ) on Tuesday November 09, 2004 @10:30PM (#10772980) Homepage
    The Incredible Hulk [komar.org] had fun [reference.com] with his halloween decorations [komar.org] but that's a warmup [demon.co.uk] for his christmas lights [komar.org] where he plays [plays.com] RoShamBo [komar.org] when not helping [helpout.com] out Google Compute. [powder2glass.com]
  • by 3770 ( 560838 ) on Tuesday November 09, 2004 @10:45PM (#10773077) Homepage

    Make it the goal of next year's International Obfuscated C Code Contest.

    I'm sure we'll get a really cryptic one liner that actually is a fully functional RDF browser.
  • by Nooface ( 526234 ) on Tuesday November 09, 2004 @10:48PM (#10773098) Homepage
    Browsing metadata is the next frontier in the evolution of the web. Some of the other RDF browsers popping up include Gnowsis [gnowsis.org], MIT Haystack [mit.edu], and Fenfire [nongnu.org].

    With the growth of the Internet, the value of data itself is dropping, while the value of metadata (i.e. "data about data") increases, introducing a need for tools that can manipulate metadata. That is what RDF is all about - standardizing a way to represent metadata. It is not a standard for the metadata itself...those standards will be determined the same way everything else is on the Internet: with the best solutions rising to the top.
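
    To make that concrete, here is the sort of thing RDF standardizes, written in N3 with the (real) Dublin Core element set; the document URI is made up:

    @prefix dc: <http://purl.org/dc/elements/1.1/> .

    <http://example.org/report.pdf>
        dc:title "Q3 Sales Report" ;
        dc:creator "J. Smith" ;
        dc:date "2004-11-09" .

    RDF fixes the shape (subject, predicate, object); dc: is just one vocabulary plugged into that shape, and competing vocabularies can describe the same document side by side.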

    The most common objections to this scenario?
    a) "Nobody will bother entering metadata". Wrong...it's already happening. Users are voluntarily generating metadata all the time. Just check out sites like flickr [flickr.com] (photo blogging) and del.icio.us [del.icio.us] (collaborative bookmarks), not to mention Amazon reviews and Ebay ratings.
    b) "RDF tags will just be abused with spam, trolls, and other useless info". A variety of techniques are emerging that are designed to protect the integrity of user-contributed data, including trust metrics [moloko.itc.it] like Slashdot's own distributed moderation [umich.edu] (PDF) or Advogato [advogato.org].
    • by Ars-Fartsica ( 166957 ) on Wednesday November 10, 2004 @01:49AM (#10774138)
      I say the future is in statistical analysis of language, you say it is in metadata.

      DJIA/NASDAQ traded firms specializing at least partly in statistical recognition of text as a business model:

      GOOG, YHOO, MSFT, etc.

      DJIA/NASDAQ firms using metadata as a business model:

      When is the future supposed to arrive again?

  • by kidlinux ( 2550 ) <duke@space b o x . n et> on Tuesday November 09, 2004 @10:48PM (#10773102) Homepage
    Check out Semaview Inc. [semaview.com], which is making a business of RDF. They've already got one good product [eventsherpa.com] out. They're somewhat OSS-friendly, too.

    Personally, I think eventSherpa is pretty neat.

    (Disclaimer: I know the CEO.)
  • by dynamic_cast ( 250615 ) on Tuesday November 09, 2004 @11:00PM (#10773178) Homepage
    Why would the Robotech Defence Force need their own browser?
  • by Chuck Bucket ( 142633 ) on Tuesday November 09, 2004 @11:00PM (#10773180) Homepage Journal
    load Slashdot faster, or block popups?

    j/k

    CB
  • innovative? (Score:1, Funny)

    by Anonymous Coward on Tuesday November 09, 2004 @11:05PM (#10773216)
    This screenshot (http://simile.mit.edu/welkin/images/screenshot.gif) reminds me of my big college sophomore-year project: connecting lots of pretty lines together in hopes of impressing people by calling it a neural network. I have to give props though for getting the lines to be anti-aliased.
  • by dpm ( 156773 ) on Tuesday November 09, 2004 @11:24PM (#10773325)
    I promoted RDF a fair bit back in the late 1990s and even wrote one of the first libraries for it. I think that the idea of machine-readable data on the web is a very good one (and probably more scalable than the whole Web Services thing), but six or so years later, I don't think that RDF is it.

    The trouble is that RDF (and OWL) try to do too much, getting all tangled up in the arcana of knowledge representation, and the Semantic Web thing has only muddied the waters further -- the screenshot is a stunning graphic representation of the mess that RDF has gotten itself into (I'll assume that it's serious, since it's a long time until 1 April).

    All we really need for a data web is a bunch of XML files online that make references to each other for machines to follow, the same way that web pages make links -- in other words, a data web would be a distributed database, the same way that the document web is a distributed hypertext system. RDF reminds me more of the complex pre-HTML hypertext systems of the late 1980s than of the successful, simple formats and protocols that drive the Web.
    • by Ars-Fartsica ( 166957 ) on Wednesday November 10, 2004 @01:58AM (#10774170)
      Statistical text analysis and link analysis are superior techniques because they presume the author could be BSing. The entire document must contribute to the corresponding query value, not just keyphrases which may or may not be true. This is why Google is a $50 billion company and no RDF firm ever will be.
    • by Fnkmaster ( 89084 ) * on Wednesday November 10, 2004 @02:19AM (#10774298)
      Well, you would certainly know: ease of use is critical to winning developer mindshare and promoting adoption of technologies - and I would point to SAX as a great example of this for promoting use of XML in the Java community.

      It does seem to me that the key thing is to promote ad-hoc use of a relatively standardized mechanism for relating XML document structures to other XML document structures. Forget about waiting for somebody else to build relevant ontologies, reconstructing the entirety of human knowledge from the ground up, or any of that stuff. What people could reasonably do today is relate XML schema one to XML schema two because they need to connect widget A with widget B. Make the adoption of this technology as low cost as possible.

      Just like adding a few anchor tags to a basic HTML document is an easy way to relate some human readable information to other human readable information, relating XML document types to other XML document types should be "easy".

      Then the only big problem is to find a few applications that would actually demonstrate the benefits of doing this clearly. Yes, it is effectively a distributed XML database of sorts, but what is it good for? RSS has real applications for end users, so it has caught on. Without some software to demonstrate the benefits of linking up your XML data structures, people just won't bother with it. It seems specific, realistic use cases are what's needed here (and what seems terribly lacking from all the W3C RDF documentation as well). Once you cut out the truly grandiose notions behind RDF and the full-fledged semantic web, how does the resulting distributed, semi-structured database provide value to me beyond what I have now, with lots of disparate XML documents out there?

      I'm too tired to come up with convincing arguments right now, so hopefully somebody else will fill in the blanks here.
  • Narcissism (Score:5, Insightful)

    by pico303 ( 187769 ) on Tuesday November 09, 2004 @11:25PM (#10773329)
    Almost everybody here seems to be missing the point: RDF isn't for you--it's for your computer. The point of RDF and the Semantic Web is to structure knowledge so that programs can interact with one another to perform better, even in some cases simulating intelligent decisions. Unless you're working in developing Semantic Web technology, you should never have to look at an RDF document.

    It's not a wiki. It's not a new way to see metadata. It's your software's version of the WWW.

    It's not always about you humans.
  • by Anonymous Coward on Tuesday November 09, 2004 @11:28PM (#10773339)
    1) people like to masturbate.

    2) some people like to look at pictures of naked girls while masturbating.

    3) some people like to think about graph theory while masturbating.

    The semantic web is the unfortunate result of #3.

    Now, while I have no problem with any of these behaviors, I do ask that people in group #2 keep their sticky dirty magazines under their beds, not on their coffee tables; and people in group #3 likewise keep their inventions locked in the closet, and not release them to standards bodies or working groups.

    So when you see someone in a clear frenzy of sexual excitement talking to you about "ontologies" and "reification", simply smile politely, and call the police.

    Remember, these people are the exception, not the norm, in an otherwise healthy society.
  • by SavvyPlayer ( 774432 ) on Tuesday November 09, 2004 @11:49PM (#10773497)

    One trouble with many semantic visualization techniques for large datasets: the more visually appealing the rendering, the less useful it often becomes. Many projects undertaken over the past six years (including Welkin) have focused on 2- and 3-dimensional renderings of a dataspace, using lines, proximity, node shape, fly-over metadata display, etc. to classify and relate nodes, only to find there is no room left for persistent display of the textual metadata that ultimately drives a user toward the content he/she is looking for.

    Marcos Weskamp's Newsmap [marumushi.com] (slashdot [slashdot.org]) on the other hand demonstrates an excellent balance of form and function, emphasizing textual metadata over symbolic graphic representation. How might this approach be applied specifically to RDF? One possibility: 5 axes rendered in a 2d visual space: color (category), saturation (relevance), size (interest), x/y position (age) and text (metadata). Just a thought anyway.

  • by CosmicDreams ( 23020 ) on Wednesday November 10, 2004 @12:40AM (#10773842) Journal
    I have been looking for a tool (one that's better than the Document Inspector) to troubleshoot with while I'm trying to code in RDF. I was hoping for a debugger so I wouldn't have to test so many cases through multiple steps, but being able to see the structure may help some.

    Too bad it doesn't take the XUL rules into consideration when rendering maps like the one shown in the screenshot. Do you know if they are going to open up development anytime soon?
  • by Post ( 113251 ) on Wednesday November 10, 2004 @10:23AM (#10776117)

    People who look at these browser screenshots and decide that the semantic web is/will be a mess stop thinking too early.

    This graph-like presentation is just one way to show semantics, and it only works for certain things, like topic maps.

    I sometimes use tools like outliners and the Brain [thebrain.com] (insert pun here) to present ideas and their relationships. This is not the way you would want to, e.g., read or present a complex manual.

    Other, more complex forms of presentation are required - and possible. Ted Nelson had a lot of ideas regarding hypertext and the presentation of relationships that have never turned into products. I'm working on my own little Xanadu-ish project that aims to make navigation in structured text easier. The benefit is not presentation "A" or "B", but the fact that you will be able to tweak the presentation according to what you need to know. This requires semantics, which in turn requires new tools for the author, not (only) for the reader.

    One day, we will look back and wonder how we could live with an Internet where a search engine had to guess if we are looking for Lotus The Car or Lotus The Flower or Lotus The Software Company, or where separating articles by an author from those about him was nearly impossible. No-one in their right mind can claim this is good enough for the future.

  • Ten years ago, we used to say, "we are angry at this or that issue." Now, we say, "we are frustrated." The entire idea of trying to reduce our language to assertions of logic is silly, because as soon as we do so, we immediately try to change the definition of one of the component elements in order to get out of it.

    Here's the biggest problem with the web. Most people who have web sites have them to sell stuff, and they DON'T WANT their stuff to be easily searchable and diced and sliced. All this interoperability and data-exchange stuff (XML, Web Services, Internet tools, even old COM objects and CORBA objects, and even older RPC) has failed and will always fail, because people don't want you to make it easy to compare them against someone else. It's not a question of cost.

    It's just stupid to do it. What, I'm going to pay someone to make it easier for customers to choose someone else for any product? That's the most retarded thing in the world.

    If you wanted to make the next big web, make one where people CAN'T compare content from your site to someone else's, and have zones of it be franchised off for exclusivity. Like, I'd make a browser where every page sent is just a giant bitmap; that way, it couldn't be scraped at all.
