Welkin: A General-Purpose RDF Browser

Posted by timothy
from the what-am-I-thinking-about dept.
Stefano Mazzocchi writes "Many consider the Semantic Web to be vaporware and others believe it's the next big thing. No matter where you stand, a question always pops up: Where is the RDF browser? The SIMILE Project, a joint project between W3C, MIT and HP to implement semantic interoperability of metadata in digital libraries, today released the first beta of a general-purpose graphical and interactive RDF browser named Welkin (see a screenshot), targeted at those who need to get a mental model of any RDF dataset, from a single RSS 1.0 news feed to a collection of digital data."
  • by otisg (92803) on Tuesday November 09, 2004 @08:58PM (#10772738) Homepage Journal
    Considering the big 1.0 for Firefox is out, I would think people who wanted their Semantic Web browsing software to be widespread would implement it as a Firefox plugin, no?
  • by Dancin_Santa (265275) <DancinSanta@gmail.com> on Tuesday November 09, 2004 @09:00PM (#10772747) Journal
    The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text. No matter how you cut it, the problem lies in having too much information immediately available.

    Imagine you are reading a book, but each word is connected by string to a dictionary reference, and each dictionary reference definition is tied to the definitions of the words in the definition. You'd end up with a huge, eventually circular mess of string and you couldn't realistically get any enjoyment out of the book. The fact of the matter is that if you want to get more information about something, it is easy to go to an outside source to look it up. It does not need to be easier, because making it easier than it needs to be necessarily ends up cluttering the thing you want to illuminate.

    There is an old saw, "Make things as simple as possible, but no simpler." The Semantic Web, while an interesting idea, tries to make things too easy, beyond the point of usefulness. The lack of content on the Semantic Web is a testament to the uselessness of such an over-engineered web space.
    • by Anonymous Coward
      "There is an old saw, "Make things as simple as possible, but no simpler." The Semantic Web, while an interesting idea, tries to make things too easy, beyond the point of usefulness. The lack of content on the Semantic Web is a testament to the uselessness of such an over-engineered web space."

      Or a testament to people's inability to understand new paradigms.
    • Hear, hear. Wikipedia is about the most hyperlinking that I could stomach. It is useful in that specific application, but the notion of the Semantic Web is silly. For crying out loud, people on /. don't RTFA, let alone verify all definitions of the words in the summary.
    • make it so that they don't look like different links.. until you press some button or triple click on the word or whatever..

      so.. invisible strings that you can see if you wish.
    • Welkin is simply a PoC, IMO. It just attempts to prove that you can link information together in a fairly suitable way. This is always the first step in any new technology. Other products could, and probably should, use it for different purposes.

      Your main objection lies in that it does not filter information, but adds to the mass information overload humans experience daily. However, this can be changed simply. Welkin seems to dump all data at once. The code could be changed so you could traverse ideas. I
    • by mat catastrophe (105256) on Tuesday November 09, 2004 @09:12PM (#10772842) Homepage
      but about how quickly can Microsoft turn it into a security hole for your friends and family?
    • by sonsonete (473442) on Tuesday November 09, 2004 @09:12PM (#10772844) Homepage
      The point of the Semantic Web lies not in making information readily available to people browsing the internet but in providing semantic context with which computers can work. A person reading a document in a browser is not expected to follow links attached to every word. Rather, a computer program is expected to be able to use this information to learn the meaning behind the string of characters.
      • And how is the meaning behind a string of characters given? For example, let's say you want to give the meaning behind a string of characters that describes to a human the proof of Skolem's Paradox.
          • And how is the meaning behind a string of characters given? For example, let's say you want to give the meaning behind a string of characters that describes to a human the proof of Skolem's Paradox.

          It's given by marking it up. The computer doesn't need to know anything about the proof of anything, just like Google doesn't know anything about porn, and yet when you search for "big boobs", it knows what to return. *wink*

          The point isn't that a computer program will ever "know" what Skolem's paradox is, in

          • Yes, but all this assumes that people agree on exact, precise ways of representing everything. Just as in flat Unicode text, you can describe the same thing in multiple ways, in RDF you can also describe the same thing in multiple ways that still differ in RDF's semantics. For example, there are multiple ways to encode a ternary predicate in terms of binary predicates. Each one of these representations will not just differ in syntax, but also in RDF's semantics. Nothing is gained!

            RDF and the semantic web
            • It's hardly true that nothing is gained. RDF statements express constraints on the class of models with which they are consistent. You seem to want everything in a normal form. The closest you get to that is standardized ontologies.
      • Well, maybe computers should just get smart enough to figure it out for themselves, instead of turning a billion web authors into markup slaves.
      • 'Rather, a computer program is expected to be able to use this information to learn the meaning behind the sting of characters. '

        Exactly - and while it seems like a worthwhile mental exercise, I would like to point out that this kind of construction has an immense potential for controlling information and shaping people's opinions. Just imagine advertisers getting their foot in somewhere in this; suddenly the machine's 'understanding' is coloured by commercial or political interests.

        No, give me the raw in
    • The fact of the matter is that if you want to get more information about something, it is easy to go to an outside source to look it up.

      The semantic web isn't about human usability. It's about building machine intelligence and knowledge.
      • The semantic web isn't about human usability. It's about building machine intelligence and knowledge.

        Right, but the problem is that if it's unusable for humans to _create_ that content, or to map it from human knowledge-space into machine-parseable format, then it doesn't matter if it's well-engineered from the machine's perspective. That's why adoption of the semantic web has been so poor (outside of applications that could just as well be filled with any ol' XML dialect, like RSS or RDF descriptors used to
      • I am sure I will be modded as a troll, but somebody needs to say it, somebody needs to stop these guys.

        The "Semantic web" is the latest snake oil being peddled by the AI community.

        Nothing is worse than an AI-type. They make big claims and never deliver. They overly anthropomorphize all aspects of computation, fooling themselves into a false understanding of all that is related to computer science. For example, Emacs is "intelligent" because it includes a broken implementation of the lambda-calculus, an
        • I know! The gall of these people, thinking that a physical system could produce a sentient being! How could they push this snake oil?
          • They have yet to produce a sentient computer. They haven't come anywhere close to doing so. There is no science to their field. There is no systematic process for obtaining such a goal.

            Furthermore, computers are a very restricted form of physical system, and therefore they are limited. There are problems that are not computable, yet humans solve them on a regular basis. Even though many people like to anthropomorphize aspects of computation, there is very little in common between a human and a computer.
            • There is no science to their field.

              I think it's a bit too early yet to criticise the field of AI for not being scientific. Remember, we humans messed around with electricity for about 70 years before we even found a use for it, let alone understood what caused it. We need to explore, daydream, and play around for a bit before we can get down to some serious science. And, by "a bit," I mean around 50 more years.

              And we don't even have powerful enough computers yet to play with. It's kind of like critic
              • The claim that computers are not yet powerful enough is a generic claim from AI proponents, as to why their goals have yet to be realized. In fact, in the early 1990s, an Indian Turing Award winner was an AI guy. His speech was on all that is AI. Read it and you will see exactly what I am ranting about. At the end of his speech he says that the AI community's goals will be realized once we have a "giga-computer", i.e. a computer that is the price of a PC, yet can execute at a gigahertz.

                Yup, Turing Awar
                • d00d, you're getting into a religious fervor here. I refer you back to my blacksmith analogy.

                  You're saying to the blacksmith, "HAHA! You haven't made a 747 yet!"

                  Blacksmith says, "I don't have the proper tools. Maybe if I had a better hammer..."

                  You: "You blacksmiths always say that. HAHA! You'll never make it!"

                  Eventually a 747 was made. I'll bet, if you'd been around then, you would have criticised the airplane makers every step of the way...
                  • But here's the thing: if I was saying this to a blacksmith in 500AD (a point where blacksmithing had been around for so long as to seem forever) and asked him to create a 747, he would still be whining about bad tools for well over another millennium. This is the difference between AI bullshit and the rest of the world ... yes, you might be able to do it in another 1000 years, but so what? It might as well be physically impossible as far as planning for it now.

            • They have yet to produce a sentient computer. They haven't come anywhere close to doing so. There is no science to their field. There is no systematic process for obtaining such a goal.

              The assertion that the point of AI research is to build a sentient computer just shows your lack of knowledge of the field.
            • Can you name a single problem which is undecidable, but which humans can solve? And I mean actually solve, not just come up with an answer which is "good enough".

              I'm not trying to argue, I'm really curious.
        • Funny that... those AI-types come up with a new algorithm, heuristic or idea (compilers, parsers, A*, lisp...) and ten years later it is mainstream and no longer considered AI. Face it, AI is the 'basic research'-wing of CS.
          • Yeah, Lisp, a broken (they didn't even understand proper beta-reduction) reinvention of the lambda-calculus 20 to 30 years after mathematicians created it. Pretty damn innovative. What will they come up with next, arithmetic? A* is nothing more than a constrained search, and it is a hack which only works well in toy games... not real world problems. I wouldn't go bragging about A* as a cornerstone of AI. Parsers were already created in the fields of computational linguistics and automata theory before t
    • by Anonymous Coward on Tuesday November 09, 2004 @09:18PM (#10772893)
      I [reference.com] do [reference.com] not [reference.com] know [reference.com] what [reference.com] you [reference.com] are [reference.com] talking [reference.com] about [reference.com].
    • The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text ....
      Imagine you are a reading a book, but each word is connected by string to a dictionary reference, and each dictionary reference definition is tied to the definitions of the words in the definition. You'd end up with a huge, eventually circular mess


      Although your concerns about user interface are well-taken, you seem to be thinking abo
    • "The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text."

      While your description of this is quite unflattering, I think it would be useful to have the ability to, say, highlight a word or phrase and--by right-clicking on it--get the option of: encyclopedia lookup, dictionary, thesaurus, etc. I believe in my context, that could be extremely useful.
    • The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text. No matter how you cut it, the problem lies in having too much information immediately available.

      You could always have your own private dictionary of words that you wanted hyperlinked whenever possible, and also an ignore list for the most popular words. But things would get tricky for movie titles which are wordplay on some other concep
    • Imagine you are a reading a book, but each word is connected by string to a dictionary reference, and each dictionary reference definition is tied to the definitions of the words in the definition. You'd end up with a huge, eventually circular mess of string and you couldn't realistically get any enjoyment out of the book.

      Ever read Infinite Jest (or anything else) by David Foster Wallace? QED


      The question is about whether we really need a World Wide Web that looks like Wikipedia with links to every word and generally just a jumbled mess of blue and purple text.

      Like that, no. Though we do need meta data and a browser or search engine to support that meta data.

      Not for the current popular stuff that's out there -- the web browsers and search engines work well enough for that.

      Where meta data and ways to search it is interesting is in media files such as audio, video, and images. This isn't

    • This is pretty much my reaction. Look at the screenshot [mit.edu] mentioned, and ask yourself how that mesh of lines will help you find the best BBQ sauce recipe or download porn faster. Maybe I'm not seeing the forest for the trees. Did I say porn? I meant Elizabethan poetry.
    • Now that is about the worst argument I've heard against the semantic web. Just because the links are there doesn't mean you will be distracted by them. There is nothing that says that links are always blue and underlined. In fact, in your scenario, they would more likely look like regular text, but you could select a word or group of words and use a context menu to get a definition. Remember, the semantic web only provides semantics, not appearance. If and how to display the information it provides is a sep
    • Um - you have been sucked into the "standard browser display" zone. There is absolutely no reason that all of your links must be underlined, colored, or otherwise highlighted. There is also no reason not to have the information readily available. The idea that should be extracted from your post is that developers should resist the temptation to clutter the display with alerts that more information is available.
    • There is an old saw, "Make things as simple as possible, but no simpler."

      I believe that the "old saw" actually comes from Albert Einstein, who said, "Everything should be made as simple as possible, but not simpler."

      The Semantic Web, while an interesting idea, tries to make things too easy, beyond the point of usefulness.

      I don't believe that is what Albert Einstein intended. I believe that what he was saying was, when modeling the world, make the model as simple as you can without introducing inacurac

    • But folks like Wikipedia.

      And as others have noted, your argument is simply against intrusive display of links.

      Regardless, that's not what the Semantic Web is about. Its concern is presenting data in such a form that machines can reason about it and perform useful services for users. This is not really AI (which is good, since AI is impossible): it's just intelligent description of data. Done right, an RDF engine could perform such tasks as 'find the cheapest trip to Memphis from Denver next week, wi

  • Solution space? (Score:2, Interesting)

    by Anonymous Coward
    Isn't RDF much like the laser used to be? A solution looking for a problem?
  • RDF a load of crap (Score:4, Insightful)

    by Anonymous Coward on Tuesday November 09, 2004 @09:02PM (#10772767)
    Enough people have said it, but it's worthwhile saying again. RDF is totally flawed and will never meet the vision of the W3C. The whole idea that an RDF resource is true and authoritative is just silly. Look at what happened to the HTML metadata tag: it got abused instantly and search engines stopped using it. RDF's rules are monotonic, which is just totally silly; that basically means any rules written in RDF will time out if the data isn't already on that particular server. The W3C should just give up on RDF already and move on.
    • by Anonymous Coward
      "Enough people have said it [All related to the OP], but it's worthwhile saying again. RDF is totally flawed and will never meet the vision of the W3C [And that is?]. The whole idea that an RDF resource is true and authoritative is just silly [Just like the present web]. Look at what happened to the HTML metadata tag: it got abused instantly and search engines stopped using it [They're used, just not alone]. RDF's rules are monotonic, which is just totally silly; that basically means any rules written in RDF will time
    • by Yosi (139306)
      There are those who worry about these things.
      Much work on the semantic web has been with n3 [w3.org]
      N3 is a superset of RDF, allowing for quoting of groups of triples (known as formulae). In N3, you can say things about groups of N3 triples, including about their trustworthiness.

      For instance, you can say:

      [is log:semantics of <documentURI> ] a :untrustworthyInformation .

      essentially saying that the formula which is the semantics of the given document is of a class :untrustworthyInformation, which your n3 pars
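
      The quoting idea above can be sketched in plain Python, without a real N3 parser: treat a formula as a frozen set of triples, which can itself be the subject of a meta-statement. This is only a hedged illustration of the concept; the terms (ex:..., :untrustworthyInformation) are invented and no actual N3 semantics are implemented.

```python
# Hedged sketch: a "formula" (a group of triples) used as the subject of
# a statement about its own trustworthiness. All vocabulary is invented.
doc_triples = frozenset({
    ("ex:Elvis", "ex:livesOn", "ex:TheMoon"),
    ("ex:Moon", "ex:madeOf", "ex:Cheese"),
})

# A statement about the formula itself, in the spirit of
# [is log:semantics of <documentURI> ] a :untrustworthyInformation .
meta = (doc_triples, "rdf:type", ":untrustworthyInformation")

def is_trusted(triple, meta_statements):
    """Reject any triple that belongs to a formula tagged untrustworthy."""
    for subject, pred, obj in meta_statements:
        if ((pred, obj) == ("rdf:type", ":untrustworthyInformation")
                and isinstance(subject, frozenset)
                and triple in subject):
            return False
    return True

print(is_trusted(("ex:Elvis", "ex:livesOn", "ex:TheMoon"), [meta]))  # False
print(is_trusted(("ex:Sky", "ex:hasColor", "ex:Blue"), [meta]))      # True
```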

      • The point of metadata criticisms is that "untrustworthy" would have to be assigned to practically every tag. So, what is the point? In any case it's just another opinion. Is dmoz.org trustworthy? Is it if I say so in my n3? Maybe, maybe not.
  • Gee thanks... (Score:5, Insightful)

    by Fnkmaster (89084) * on Tuesday November 09, 2004 @09:15PM (#10772869)
    After looking at that screenshot, it's sooo clear to me the value that the semantic web brings to us (mirrored here [hardgrok.org] as their server appears to be flaking out a bit). If anything, this makes it crystal clear why the semantic web hasn't really taken off, other than in the much more limited form of RSS feeds.

    A network of random connections of semantic concepts embodied as URIs is just not a friendly form of data for humans to manipulate directly, and I don't think it ever will be. That's right, I don't believe this is really an issue that's solvable with slightly better tools. I think ultimately the management and connection of ontologies is something that computers will have to learn to do themselves.

    It's just too hard to expect normal human beings to describe knowledge in any way other than the way we are used to. The web is only as popular as it is because HTML is a simple, appearance-based way to mark up documents (yes, I realize strictly speaking HTML isn't supposed to describe many aspects of appearance per se, but there's no denying that it comes from that root). We understand bold and italics (and even strong and em), but ask somebody to generate two concepts by constructing URIs for them and relating them in subject-predicate form and they are going to look at you and drool.

    Even programmers aren't used to the idea of describing knowledge - it's one thing to tell a computer what to do, it's another thing to tell a computer how to know about something that you know.

    Alright, I know I'm opening myself up to the flames here, so flame away. Anyway, I think the "semantic web" will need to wait for tools like Cyc et al. to come along far enough to construct and relate their own ontologies out of English text, and until then all we will see is stuff like RSS or RDF files in Firefox extensions to describe deployment conditions (i.e. stuff that can be done with any arbitrary XML dialect that doesn't really qualify as the "semantic web" to me).
    • by Anonymous Coward
      "Even programmers aren't used to the idea of describing knowledge - it's one thing to tell a computer what to do, it's another thing to tell a computer how to know about something that you know."

      Yea. Just try getting a programmer to explain the latest thing they're working on. "Well you see it does this, and if you click on that, something happens. It's all too complicated to explain, sorry."
      • Sorry, I don't think you understood my meaning, probably because you have zero familiarity with the semantic web outside of reading headlines on Slashdot.

        Of course programmers are very good at being precise in describing algorithms, but describing knowledge in subject-verb-object format is not so easy. You can't just describe your algorithm, you have to relate each of the base concepts used in the algorithm to existing ontologies, or create your own ontologies for them. Describing an algorithm in pseudoc
        • Are you saying that RDF is a Turing complete programming language? How the hell would I write, for example, a C++ interpreter in RDF? Also in what sense can you describe what an algorithm does using RDF? For example, lets say I devised a new sophisticated compression algorithm. How do I describe that in RDF?
          • No, not exactly, RDF isn't a programming language, it's a knowledge description language. You wouldn't write a C++ interpreter in RDF any more than you would write one in English, since that's not what they do. But you can describe a C++ interpreter in English and you can describe a C++ interpreter in RDF.

            Everything breaks down to subject-verb-object tuples. RDF is supposed to be general enough to describe, well, any and all knowledge.

            So you could imagine a description of RSA in RDF pseudo-code:

            p-isA-
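
            The "everything breaks down to subject-verb-object tuples" idea can be made concrete with a toy triple store in plain Python. This is only a hedged sketch, not real RDF tooling: the terms (ex:RSA, ex:hasParameter, and so on) are invented for illustration.

```python
# Hedged sketch: knowledge as subject-predicate-object triples, plus a
# pattern query. Terms like ex:RSA are illustrative, not a real ontology.
triples = {
    ("ex:RSA", "rdf:type", "ex:PublicKeyCryptosystem"),
    ("ex:RSA", "ex:hasParameter", "ex:p"),
    ("ex:RSA", "ex:hasParameter", "ex:q"),
    ("ex:p", "rdf:type", "ex:PrimeNumber"),
    ("ex:q", "rdf:type", "ex:PrimeNumber"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Which parameters does ex:RSA have?
print(sorted(o for _, _, o in match("ex:RSA", "ex:hasParameter")))
```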
            • But flat Unicode text is enough to describe all knowledge. I fail to see what is gained.
              • Only that it's hard to structurally relate concepts in a flat text file dump. A flat text file dump + a little bit of structure is XML. From that structure you gain easier, more consistent machine parseability. Can you exchange data without XML? Of course. Now, take the structure of XML and add relationships - in other words, we have a "person" element in document one and a "name" element in document two - if both documents have a relationship back to a common ontology, then the elements can be program
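
                The merging idea can be sketched in plain Python: two documents use different local element names, but both map them to a shared ontology term, so a program can combine their contents. Only the foaf:name URI is a real vocabulary term; the documents, mappings, and merge helper are invented for illustration.

```python
# Hedged sketch: elements from two documents unified via a shared
# ontology URI. foaf:name is a real FOAF term; the rest is invented.
FOAF_NAME = "http://xmlns.com/foaf/0.1/name"

doc1, doc1_mapping = {"person": "Ada Lovelace"}, {"person": FOAF_NAME}
doc2, doc2_mapping = {"name": "Alan Turing"}, {"name": FOAF_NAME}

def merge(docs_with_mappings):
    """Group values by the ontology term each local element maps to."""
    merged = {}
    for doc, mapping in docs_with_mappings:
        for element, value in doc.items():
            merged.setdefault(mapping[element], []).append(value)
    return merged

print(merge([(doc1, doc1_mapping), (doc2, doc2_mapping)]))
```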
                • And theoretically (very theoretically), our algorithm in a generic RDF description could be used by a sufficiently sophisticated RDF parser to autogenerate source code in a real programming language that it "understood", like C++.

                  You do realize that there are computability issues that would prevent such automatic programming?

                  Whether the benefits you get from being able to do this stuff in theory are worth the massive up front effort involved in RDF-izing enough knowledge to be useful is the real ques

                  • You do realize that there are computability issues that would prevent such automatic programming?

                    No, I'm not aware. Please elucidate. You can write invalid programs in any language, or metalanguage, and it's impossible to validate whether the program that your code generator generates will halt, obviously.

                    I am not saying it is a practical application, all I'm saying is you can in theory go from an XML-ish algorithmic description to a functional piece of software, so you can clearly do the same with
    • So, in the future, we will all be able to use the Semantic Web to make pretty pictures out of our data. Spiro-Graph and the Etch-A-Sketch will be things of the past. My kids will now be able to make pretty abstract art from my checkbook. Technology is wonderful. I'm one of those people who is waiting for a killer app before I get on the Semantic Web bandwagon. In the meantime, I'm just going to regard it as the source of more computer science acronyms. What I suspect will happen is the next great thin
    • There are a lot of misconceptions about the Semantic Web going on here. 1) It's not a way to insert a lot of links into HTML the way Wikipedia does. 2) It's not some sort of replacement for HTML or describing things in good old fashioned text. The idea is that you describe knowledge in a way that is repurposable (i.e., a computer can prove theorems about it, etc.). It's more like an SQL database than an HTML web page. Your secretary is *not* supposed to hand-edit RDF files. There may be RDF files in the *
      • Re:Gee thanks... (Score:3, Informative)

        by Fnkmaster (89084) *
        Your comments have no relevance to mine - I have done a fair amount of work with semantic web technologies before (well, compared to most people out there anyway), my comments were a response to my personal experiences, not some random misconceptions formed by reading Slashdot articles.

        1) I never said anything of the sort. RDF/Semantic Web technologies have nothing to do with inserting links into HTML.

        2) I never said it was a replacement for HTML. I just said it wasn't likely to be adopted because of t
  • by aixou (756713) on Tuesday November 09, 2004 @09:16PM (#10772880)
    Niice. I've always wanted to know what's going on in Steve Jobs's head.
  • it's nice seeing the first screenshot on a Mac.

    Just something friendly about that.
  • by xmas2003 (739875) on Tuesday November 09, 2004 @09:30PM (#10772980) Homepage
    The Incredible Hulk [komar.org] had fun [reference.com] with his halloween decorations [komar.org] but that's a warmup [demon.co.uk] for his christmas lights [komar.org] where he plays [plays.com] RoShamBo [komar.org] when not helping [helpout.com] out Google Compute. [powder2glass.com]
  • by 3770 (560838) on Tuesday November 09, 2004 @09:45PM (#10773077) Homepage

    Make it the goal of next year's International Obfuscated C Code Contest.

    I'm sure we'll get a really cryptic one liner that actually is a fully functional RDF browser.
  • by Nooface (526234) on Tuesday November 09, 2004 @09:48PM (#10773098) Homepage
    Browsing metadata is the next frontier in the evolution of the web. Some of the other RDF browsers popping up include Gnowsis [gnowsis.org], MIT Haystack [mit.edu], and Fenfire [nongnu.org].

    With the growth of the Internet, the value of data itself is dropping, while the value of metadata (i.e. "data about data") increases, introducing a need for tools that can manipulate metadata. That is what RDF is all about - standardizing a way to represent metadata. It is not a standard for the metadata itself...those standards will be determined the same way everything else is on the Internet: with the best solutions rising to the top.

    The most common objections to this scenario?
    a) "Nobody will bother entering metadata". Wrong...it's already happening. Users are voluntarily generating metadata all the time. Just check out sites like flickr [flickr.com] (photo blogging) and del.icio.us [del.icio.us] (collaborative bookmarks), not to mention Amazon reviews and Ebay ratings.
    b) "RDF tags will just be abused with spam, trolls, and other useless info". A variety of techniques are emerging that are designed to protect the integrity of user-contributed data, including trust metrics [moloko.itc.it] like Slashdot's own distributed moderation [umich.edu] (PDF) or Advogato [advogato.org].
    • I say the future is in statistical analysis of language, you say it is in metadata.

      DJIA/NASDAQ traded firms specializing at least partly in statistical recognition of text as a business model:

      GOOG,YHOO,MSFT etc etc

      DJIA/NASDAQ firms using metadata as a business model:

      When is the future supposed to arrive again?

  • by kidlinux (2550) <duke@spacebo[ ]et ['x.n' in gap]> on Tuesday November 09, 2004 @09:48PM (#10773102) Homepage
    Check out Semaview Inc. [semaview.com] who's making a business of RDF. They've already got one good product [eventsherpa.com] out. They're somewhat OSS friendly, too.

    Personally, I think eventSherpa is pretty neat.

    (Disclaimer: I know the CEO.)
  • Why would the Robotech Defence Force need their own browser?
  • load Slashdot faster, or block popups?

    j/k

    CB
  • innovative? (Score:1, Funny)

    by Anonymous Coward
    This screenshot (http://simile.mit.edu/welkin/images/screenshot.gif) reminds me of my big college sophomore year project. Connecting lots of pretty lines together in hopes of impressing people by calling it a neural network. I have to give props though for getting the lines to be anti-aliased.
  • by dpm (156773) on Tuesday November 09, 2004 @10:24PM (#10773325)
    I promoted RDF a fair bit back in the late 1990s and even wrote one of the first libraries for it. I think that the idea of machine-readable data on the web is a very good one (and probably more scalable than the whole Web Services thing), but six or so years later, I don't think that RDF is it.

    The trouble is that RDF (and OWL) try to do too much, getting all tangled up in the arcana of knowledge representation, and the Semantic Web thing has only muddied the waters further -- the screenshot is a stunning graphic representation of the mess that RDF has gotten itself into (I'll assume that it's serious, since it's a long time until 1 April).

    All we really need for a data web is a bunch of XML files online that make references to each other for machines to follow, the same way that web pages make links -- in other words, a data web would be a distributed database, the same way that the document web is a distributed hypertext system. RDF reminds me more of the complex pre-HTML hypertext systems of the late 1980s than of the successful, simple formats and protocols that drive the Web.
    • by Ars-Fartsica (166957) on Wednesday November 10, 2004 @12:58AM (#10774170)
      Statistical text analysis and link analysis are superior techniques because they presume the author could be BSing. The entire document must contribute to the corresponding query value, not just keyphrases which may or may not be true. This is why Google is a $50 billion company and no RDF firm ever will be.
      • Which is also why abuses with link farms (Google bombing) are rampant. Google results are not as accurate / clean as they once were.

        Google can be considered passive statistical text analysis. What we now need is an active way for machines to determine the value of documents and their context.
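The "passive statistical" link analysis being debated here can be illustrated with a bare-bones PageRank power iteration. This is a toy graph with invented page names, not Google's actual algorithm — just the core idea that a page's value comes from the link structure rather than from what its author asserts about it.

```python
# Minimal PageRank power iteration over a toy link graph.
# Page names and link structure are invented for illustration.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for src, outs in links.items():
        for dst in outs:
            # Each page splits its rank evenly among its outgoing links.
            new[dst] += damping * rank[src] / len(outs)
    rank = new

# Ranks sum to ~1; the page with the most incoming link weight
# (here "c", linked by both "a" and "b") scores highest.
print({p: round(rank[p], 3) for p in pages})
```

Note that the author of page "c" never declared anything — its score is entirely an emergent property of other people's links, which is the anti-BS property the parent is pointing at.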
    • Well, you would certainly know: ease of use is critical to winning developer mindshare and promoting adoption of technologies - and I would point to SAX as a great example of this for promoting use of XML in the Java community.

      It does seem to me that the key thing is to promote ad-hoc use of a relatively standardized mechanism for relating XML document structures to other XML document structures. Forget about waiting for somebody else to build relevant ontologies, reconstructing the entirety of human know
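The SAX pattern the parent praises really is minimal: you register a handler and the parser streams events at you, holding nothing else in memory. A sketch using Python's stdlib `xml.sax` (the `<title>` elements and feed structure are invented for illustration):

```python
import xml.sax

# SAX is event-driven: the parser calls handler methods as it streams
# through the document, so only the current event is held in memory.
class TitleCollector(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def startElement(self, name, attrs):
        self.in_title = (name == "title")

    def characters(self, content):
        if self.in_title:
            self.titles.append(content)

    def endElement(self, name):
        self.in_title = False

handler = TitleCollector()
xml.sax.parseString(b"<feed><title>One</title><title>Two</title></feed>", handler)
print(handler.titles)  # collected title text
```

Three small callbacks and you're parsing — a far lower barrier than mastering an ontology language, which is exactly the adoption argument.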
  • Narcissism (Score:5, Insightful)

    by pico303 (187769) on Tuesday November 09, 2004 @10:25PM (#10773329)
    Almost everybody here seems to be missing the point: RDF isn't for you--it's for your computer. The point of RDF and the Semantic Web is to structure knowledge so that programs can interact with one another to perform better, even in some cases simulating intelligent decisions. Unless you're working in developing Semantic Web technology, you should never have to look at an RDF document.

    It's not a wiki. It's not a new way to see metadata. It's your software's version of the WWW.

    It's not always about you humans.
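A toy of what "RDF is for your computer" means in practice: given a handful of typed statements, a program can derive facts no human spelled out. The triples and class names below are invented for illustration, and real systems use full RDFS/OWL reasoners; this sketch applies just one rule, the `rdfs:subClassOf` type propagation.

```python
# Toy triple store with one inference rule: if X has rdf:type C and
# C is rdfs:subClassOf D, then X also has rdf:type D.
# All identifiers are invented for illustration.
triples = {
    ("alice", "rdf:type", "Person"),
    ("Person", "rdfs:subClassOf", "Agent"),
    ("Agent", "rdfs:subClassOf", "Thing"),
}

def infer(triples):
    """Apply the subclass rule until no new triples appear (a fixpoint)."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (s, "rdf:type", d)
            for (s, p, c) in facts if p == "rdf:type"
            for (c2, p2, d) in facts if p2 == "rdfs:subClassOf" and c2 == c
        }
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = infer(triples)
print(("alice", "rdf:type", "Thing") in facts)  # -> True
```

Nobody ever stated that alice is a Thing; the machine concluded it — which is the kind of "simulated intelligent decision" the parent means, and why no human needs to read the raw RDF.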
  • by Anonymous Coward on Tuesday November 09, 2004 @10:28PM (#10773339)
    1) people like to masturbate.

    2) some people like to look at pictures of naked girls while masturbating.

    3) some people like to think about graph theory while masturbating.

    The semantic web is the unfortunate result of #3.

    Now, while I have no problem with any of these behaviors, I do ask that people in group #2 keep their sticky dirty magazines under their bed, not on their coffee tables; and that people in group #3 likewise keep their inventions locked in the closet, and not release them to standards bodies or working groups.

    So when you see someone in a clear frenzy of sexual excitement talking to you about "ontologies" and "reification", simply smile politely, and call the police.

    Remember, these people are the exception, not the norm, in an otherwise healthy society.
  • One trouble with many semantic visualization techniques for large datasets is that the more visually appealing a graph is rendered, the less useful it often becomes. Many projects undertaken over the past 6 years (including Welkin) have focused on 2- and 3-dimensional renderings of a dataspace, using lines, proximity, node shape, fly-over metadata display, etc. to classify and relate nodes, only to find there is no room left for persistent display of the textual metadata that ultimately drives a user

  • I have been looking for a tool (better than Document inspector) to troubleshoot while I'm trying to code in RDF. I was hoping for a debugger so I wouldn't have to test so many cases through multiple steps, but being able to see the structure may help some.

    Too bad it doesn't take the XUL rules into consideration when rendering maps like the one shown in the screenshot. Do you know if they are going to open up development anytime soon?
  • People who look at these browser screenshots and decide that the semantic web is/will be a mess stop thinking too early.

    This graph-like presentation is just one way to show semantics, and it only works for certain things, like topic maps.

    I sometimes use tools like outliners and the Brain [thebrain.com] (insert pun here) to present ideas and their relationships. This is not the way you would want to, e.g., read or present a complex manual.

    Other, more complex forms of presentation are required - and possible. Ted Nelso

  • Ten years ago, we used to say, "we are angry about this or that issue." Now, we say, "we are frustrated." The entire idea of trying to reduce our language to assertions of logic is silly, because as soon as we do so, we immediately try to change the definition of one of the constituent terms in order to get out of it.

    Here's the biggest problem with the web. Most people that have web sites have them to sell stuff, and they DON'T WANT their stuff to be easily searchable and diced and sliced. All this interope
