Web Redesigned With Hindsight

Randy Sparks writes "Tim Berners-Lee has been speaking about his vision for the Web. He proposed the Semantic Web six years ago, and it's taken that long for the W3C to ratify his plans for the Resource Description Framework (RDF) and the OWL Web Ontology Language (OWL). Effectively, the Semantic Web is the Web as we know it, put into database form with added metadata. You can read more about it over on MacWorld and see a Semantic Web proof-of-concept at the Web Archive."
  • by nizo ( 81281 ) on Thursday May 20, 2004 @02:45PM (#9207099) Homepage Journal
    In hindsight, it would have been better to design the web with major help from the porn industry, since that is mostly what it is used for anyway.
    • Really? Perhaps you also think that it would have been better to design P2P file-sharing apps with major help from the music industry, since that is mostly what those are used for anyway. ;)
    • by ron_ivi ( 607351 )
      This sounds a lot like an earlier Tim Berners-Lee effort. It was an awesome language that really did a nice job of combining rich OO programming with a markup language; too bad the company [slashdot.org] that took it over made the licensing of the language so painful it never caught on. Anyway, that project did make some really cool demos [curl.com] of what the technology is capable of.
  • by 3Suns ( 250606 ) on Thursday May 20, 2004 @02:45PM (#9207105) Homepage
    You can't just "redesign the web" !!

    Just who the hell does this "Tim Berners-Lee" guy think he is, anyway!?
    • by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Thursday May 20, 2004 @02:54PM (#9207240) Journal
      You forgot to say "Sir!" :)
    • I don't know, but I bet his "meta-chlorian" reading is off the charts! ;)
    • You can't just "redesign the web" !!

      Just who the hell does this "Tim Berners-Lee" guy think he is, anyway!?


      perhaps, Al Gore?
    • by Pxtl ( 151020 ) on Thursday May 20, 2004 @03:57PM (#9208061) Homepage
      Excuse me, but can they stop overdesigning HTML? It's a freaking pseudo-layout language. The whole beauty of it is that complete newbs can learn to text-edit it. Now, with all the crufty front matter, it's impossible to hand-write HTML that will pass a validator. Many of the more useful layout features that don't have anything to do with style classes are being put into CSS instead of HTML proper. HTML is a dead simple concept, and as such should be a newbie tool. Instead, it's just getting increasingly baroque. It really doesn't need more crap.

      Now, the HTTP system itself - that could do with some upgrades. More support for "push" content is what it needs - like Slashdot telling _me_ when there is new news so my browser can refresh, and sending me a diff instead of the full new page. Or support for distributed file hosting. Or some way to receive HTTP requests from behind a NAT (even if it requires an external name server to help you along) without forwarding ports to yourself (if that's at all possible). My knowledge of network topology is limited at best, but if I can get ICQ messages while behind a NAT, why can't I serve HTML? It's still just receiving unrequested data - messages in one case, requests for content in the other.
      • Excuse me, but can they stop overdesigning HTML? It's a freaking pseudo-layout language.

        I agree with the pseudo part.
      • by websensei ( 84861 )
        "..useful layout features that don't have anything to do with style classes..." ??


        you're joking, right?

        "beauty of... complete newbs... text-edit"

        gack. If you think what you see when you view source on your average web page is beautiful, you, sir, are beyond help.

        html *should* be simple -- but in practice it's bloated, convoluted, and full of things that have only to do with presentation. the markup should simply describe the content. css should describe how it looks. it's cleaner, more readable, *eas
        • by Pxtl ( 151020 ) on Thursday May 20, 2004 @04:47PM (#9208742) Homepage
          Yes, it is beautiful. Why? 'Cause it was written by a twelve-year-old who read a three-page handout her teacher gave her on "how to make a webpage", and she's been learning by tinkering since then.

          People are not coders. People are users. Users want to just use things - not muck around with research, not have to learn whole new lexicons for each task, just get stuff done. HTML is practically the only pure-text system they still do that in - everything else is covered in complex GUIs. To many people, HTML is the bridge to programming. With that bridge lost, they might never want to use anything that's not pure WYSIWYG, and there aren't many programming languages like that.

          Like it or not, HTML has become the learning ground for many budding computer users.

          My CSS complaints came out wrong - what I was complaining about with CSS was that originally, everything that could be done in CSS could be done in HTML as well. You could write proper, stripped HTML and use robust CSS, or you could just do the whole damn thing in ugly, ugly HTML, and still have access to the whole featureset. Now there are features that exist only in CSS, beyond simply defining classes of things that already occur in HTML. So, newb HTML-only users end up with an incomplete feature set. If CSS were more intuitive this wouldn't be a problem, but currently it is far too cryptic to push onto an uninformed user. As a result, learning users stick to pure HTML, and thus are stuck with half a feature set.
          • If CSS were more intuitive this wouldn't be a problem, but currently it is far too cryptic to push onto an uninformed user.

            Please explain to me how this:

            <BODY BGCOLOR="#000000" TEXT="#000000" LINK="#006666" VLINK="#000000" TOPMARGIN="0" LEFTMARGIN="0" MARGINWIDTH="0" MARGINHEIGHT="0">

            is better, or more readable, than:

            body {
              background-color: black;
              color: black;
              margin: 0px;
            }

            a:link {
              color: #006666;
            }

            a:visited {
              color: black;
            }

      • Um, sorry to burst your bubble, but I hand-write HTML all the time, and it always validates; usually the first time. My co-workers who use tools almost always have to tinker with the HTML for hours just to get it to display correctly in every browser we have to support, forget validating.

        Also, you don't seem to understand what a NAT does at all.

  • by Skevin ( 16048 )
    Shouldn't that be WOL, and not OWL?

    I thought OWL (Ordinary Wizarding Levels) belonged at Hogwarts.

    Solomon
  • ...are available on SemWebCentral [semwebcentral.org]. There's even an OWL mode for Emacs [semwebcentral.org]!

    There are also some tutorials and such-like [semwebcentral.org].
  • by 14erCleaner ( 745600 ) <FourteenerCleaner@yahoo.com> on Thursday May 20, 2004 @02:47PM (#9207118) Homepage Journal
    The web is popular because it's easy to create web pages. The semantic web stuff strikes me as something that only someone with a PhD in semantics could love. IMO it violates the KISS principle.
    • nah, those of us working on the guts of it with the BS'es and MS'es in CS like it because it keeps us employed.

      But, then again, I'm reading slashdot... hmm...
    • by The Ultimate Fartkno ( 756456 ) on Thursday May 20, 2004 @02:56PM (#9207270)


      What are you, anti-Semantic?

      Racist.

    • You don't know how correct you are. This issue came up during DAML development (predecessor to OWL), but I don't think it was ever addressed.
      This stuff is very unintuitive if you don't have a graduate degree from Stanford.
    • by truthsearch ( 249536 ) on Thursday May 20, 2004 @03:12PM (#9207474) Homepage Journal
      The semantic web does keep it simple. It's supplemental to current web pages and is optional. It simply adds more data for computers to read. It's something very basic that leaves the opportunity for much more complex things later. Anyone who can't understand a triple - a subject, verb, and object - probably failed second-grade English.
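      To make the triple idea concrete, here is a minimal sketch using Python's rdflib library (the URIs and names are invented for illustration, not part of any real vocabulary):

      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/")  # hypothetical vocabulary
      g = Graph()

      # One triple: subject ("this post"), verb/predicate ("creator"),
      # object (the literal string "truthsearch").
      g.add((EX.thisPost, EX.creator, Literal("truthsearch")))

      print(g.serialize(format="turtle"))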
    • Nice attempt at FUD, but once all of the Semantic Web technologies are in place, applications (analogous to an HTML editor) will be made that hide nearly all of the complexities of serializations and ontologies. Most of the Semantic Web's benefits will be realized under the hood for desktop users.

      So the interface will be simple; the foundation is more sophisticated because it attempts to solve a complex problem.

      Besides, if you like Googling around to find reviews of products and then determining the credibi
      • I think you are missing the point of the parent; that is not FUD by any count. While having all those "applications" is nice, the fact remains that all I need to put together a site (of pretty much *any* complexity) is a text editor. Having to do the maze of RDF/OWL/whatever by hand is hardly easy.

        That said, as mentioned elsewhere here, as long as this new stuff is just an optional "refinement", it may come in quite handy.
    • The web is popular because it's easy to create web pages. The semantic web stuff strikes me as something that only someone with a PhD in semantics could love. IMO it violates the KISS principle.

      Never fear, Microsoft will soon update Frontpage to generate code in the new semantic language, making it so no one has to think about the actual code they're writing, and bringing web development back to the masses once again. Nevermind that it will have FrontPage Semantic Extensions and be integrated into the
    • > The web is popular because it's easy to create web pages.

      Not so sure about that. I don't design pages, so I don't care how easy it is. I'm not sure that the sites I actually use daily (bbc, amazon, ebay, slashdot,google) are easy to create. Sure, any muppet can knock up a bit of html, but all the rest of it is probably a bit of work. What put me off the web for a while was all the cheesy `here's my cat and dumpy, toothy girlfriend - oh, and here are some links which don't work and an `under construct
  • My company [networkinference.com] sells a Semantic Web product that is W3C compliant, based mainly on the work of our chief scientist, Ian Horrocks [man.ac.uk].
  • by Mz6 ( 741941 ) * on Thursday May 20, 2004 @02:48PM (#9207128) Journal
    and the article tells me absolutely nothing about what the technology actually does. About the only thing I saw was:
    "The aim of the Semantic Web is to add metadata to information placed online, to allow it to be readable by machines. That context would enable automation of a variety of interactions. An online catalog could, for instance, connect to a user's order history and preferences to a calendar, to automatically pick out available delivery times.".

    Wow... just simply amazing.. *sigh*

    Anyone care to shed some light (or links) onto what RDF and OWL actually do?

    • by telbij ( 465356 ) on Thursday May 20, 2004 @03:07PM (#9207394)
      Anyone care to shed some light (or links) onto what RDF and OWL actually do?


      Anything you want! It's inspired by zombo.com [zombo.com]
    • by telbij ( 465356 ) on Thursday May 20, 2004 @03:12PM (#9207476)
      Seriously though, if you really want to know, read this [infomesh.net] instead of asking the unwashed Slashdot masses.
    • Anyone care to shed some light (or links) onto what RDF and OWL actually do

      The purpose of these projects is to generate funding for researchers who missed out on the dotcom boom.

      Seriously, tho'. TBL himself took his HTTP and HTML to a serious hypertext conference once (don't recall which one offhand) and they basically laughed at him. His technique was laughably primitive, they thought. So, he went back to CERN, sat down at his NeXT workstation, and just implemented the damn thing and let it loose on the
    • by Otto ( 17870 ) on Thursday May 20, 2004 @03:27PM (#9207680) Homepage Journal
      RDF specifies how you can assign properties to things. Like the "manufacturer" of that computer you're looking at is "Dell" or the "creator" of this post was "Otto" or what have you. It lets you describe facts about things.

      RDF Schema lets you describe general classes of things. Like that "Otto" is a "person" which makes him a member of "livingPeople" which is a subset of "allPeopleWhoEverLived" and so on. It lets you group things into vocabularies.

      OWL lets you define relationships between those vocabularies and draw inferences using those relationships. Since "Otto" and "Mz6" are each a "person", they're the same type of thing. Since this thing is a "computer" that was "manufactured" by "Dell", which is a "company", it is not a "person", because "companies" are not in the schema of "people".

      That sort of thing, broadly put. Anyway, it lets you define stuff in such a way that a computer can understand it and draw meaningful conclusions about the relationships there. The examples are pretty vague, I grant you, but it has potential. Needs a lot of advance work defining everything to get anything particularly useful out of it though.
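      Concretely, that layering might look something like this - a rough sketch in Python with rdflib, every URI invented for illustration (a real OWL reasoner, not shown here, would draw such inferences automatically):

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/")  # hypothetical vocabulary
      g = Graph()

      # RDF: plain facts about things.
      g.add((EX.thisComputer, EX.manufacturer, EX.Dell))

      # RDF Schema: classes and how they nest.
      g.add((EX.Otto, RDF.type, EX.LivingPeople))
      g.add((EX.LivingPeople, RDFS.subClassOf, EX.AllPeopleWhoEverLived))

      # A program can now walk the graph and conclude, for example,
      # that Otto belongs to AllPeopleWhoEverLived via the subclass link.
      for s, p, o in g:
          print(s, p, o)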
    • by LionKimbro ( 200000 ) on Thursday May 20, 2004 @04:21PM (#9208376) Homepage
      I feel for you.

      RDF is a way to make webs of information. Think "web" as in "world wide web"- one thing points to another thing points to another thing, and it can all point back to the original thing. (In Computer Science, this is a "graph." [wikipedia.org])

      OWL is a way to help computers reason over these graphs. You can give hints like, "If you hear people talking about POBOXes over in this one system, that's the same thing as people talking about PO-BOXes over in this other system." Note that OWL isn't AI technology; it's just an assistant to programmers working on making smarter programs.
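      That POBOX hint could be written as an owl:sameAs assertion - a sketch in Python with rdflib, with both system URIs invented for illustration:

      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL

      SYS1 = Namespace("http://system-one.example/")  # hypothetical
      SYS2 = Namespace("http://system-two.example/")  # hypothetical

      g = Graph()
      # "POBOX over in this one system is the same thing as
      #  PO-BOX over in this other system."
      g.add((SYS1.POBOX, OWL.sameAs, SYS2["PO-BOX"]))

      print(g.serialize(format="turtle"))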

      As for all the jargon coming out of the W3C: yes, it is a problem. [w3.org] I don't know if they are working on it or not, but I hope they are!
  • by oskillator ( 670034 ) on Thursday May 20, 2004 @02:48PM (#9207129)
    I wonder if he's going to spell REFERRER correctly this time.
  • In related news, Verisign was quoted as saying, "Move along now. Nothing to see here..."
  • ...Sir Tim Berners-Lee? I thought he got knighted.
  • Well... (Score:2, Insightful)

    by Auckerman ( 223266 )
    Good thoughts; it's a shame that Microsoft's bundling of IE with Windows makes anything the WWW Consortium does largely irrelevant, even when the specs come from MS themselves (CSS).

    That being said, relying on publisher-embedded metadata to be relevant on the WWW is probably wrong. Someone, somewhere, is going to lie in that metadata as a way of making money.
    • It's not IE that holds back W3C standards. It's that the standards are positively byzantine, and so complicated to implement that it's simply not worth the effort.
  • We all know (Score:3, Funny)

    by Prince Vegeta SSJ4 ( 718736 ) on Thursday May 20, 2004 @02:49PM (#9207150)
    that slashdotters would prefer that each and every website be redesigned. Further, they would like to espouse their desire to have the entire Web be redesigned (starting with Slashdot [slashdot.org]) with what many /.'ers feel is the ultimate Web Developer Tool [macromedia.com]
  • admittedly (Score:4, Insightful)

    by WormholeFiend ( 674934 ) on Thursday May 20, 2004 @02:50PM (#9207165)
    The MacWorld article isn't very informative to someone who's never heard of this "next generation" web, but it seems like they want to add it on top of the existing WWW.

    Why can't someone just invent a new, similar, improved web that is separate from the current WWW, with its own specific browser, and implement the various ins, outs, and whathaveyous to keep the riffraff from exploiting it in very annoying ways?
    • I am one of those who hasn't heard too much in the way of a next-generation web. However, I think creating two different WWWs, if you will, would only make things even more confusing than they already are. That means businesses would have to maintain two different websites, perhaps two different teams to monitor both systems, etc...
      • Not really. I mean, we already have FTP servers, Usenet, the WWW, instant messengers, the email system...

        speaking of which, a separate web could also have a new form of electronic mail which could be spam-proof, and this could be a real incentive for the masses to start using this new interface.
        • Re:admittedly (Score:3, Informative)

          by kalidasa ( 577403 ) *
          Email has nothing to do with the web. They are two different systems using the same Internet protocols. And we used to have two different webs: the WWW and its older cousin, gopher. Know anyone using gopher anymore?
          • Email has nothing to do with the web

            erm, what? you can send email through a web interface.

            besides, I'm not proposing that a new system be identical to the old one.

            I'm just thinking that future iterations of Internet-based networks and content-delivery interfaces should co-exist at first with the current ones, compete with them, and eventually take over due to various improvements...

            A major improvement to the Internet I'd like to see is the elimination of spam and other shameless, annoying exploiters.

            E
            • Re:admittedly (Score:4, Insightful)

              by spectral ( 158121 ) on Thursday May 20, 2004 @03:41PM (#9207865)
              Just because there's a web interface doesn't mean that they're inextricably linked. email works over its own protocols. Just because there's a bridge between the web and those protocols doesn't mean if you redesign the web, you redesign email too. That's like saying if you redesign the web, you have to redesign UPS, since they have a web interface to their shipping controls.
        • However, taking this down to the average consumer level: they would barely, if ever, use FTP and Usenet. A new email system alone might be enough to move the masses to a new system; however, if that's the only incentive, couldn't a standard be developed to fix the current one that we are using?
          • couldn't a standard be developed to fix the current one that we are using?

            how do you implement the new fix though? if the system still basically works for the lowest common denominator user, there's no incentive (or knowledge or motivation) for that user to upgrade.

            if there is a competing network that looks and feels better to use, then people HAVE to use new tools to access it.

            the contrast I'd like to see is something like when people had to migrate from BBSes to ISPs.
    • by rthille ( 8526 )
      Why can't someone just invent a new, similar, improved web that is separate from the current WWW, with its own specific browser, and implement the various ins, outs, and whathaveyous to keep the riffraff from exploiting it in very annoying ways?

      We did. Oh, you haven't heard of it? Sorry, um, never mind, I've misspoken.
  • by 4of12 ( 97621 ) on Thursday May 20, 2004 @02:53PM (#9207233) Homepage Journal

    This kind of thing goes to show how much difference can be made by getting the initial trajectory right.

    A few small changes at the start can lead to BIG consequences later as the inertia of the whole mess gets going.

    Anyone else out there with a really great idea? Do us all a favor and think as far ahead as you can before you release it on the world. Even then, it will still eventually not be going in the optimal direction.

    • Absolutely, and I would have been happier if the title of this article was: Web Redesigned With Foresight.

      I hate it when w3c or whoever designs a standard without the foresight to even allow appropriate growth and backwards compatibility in the future without ugly hacks.

      --jeff++

    • a) I doubt Tim Berners-Lee was really thinking that the Web would become anything like what it is today--more just another service alongside gopher etc. Among other things, who could have foreseen the massive increase in public internet usage that the Web arguably precipitated?

      b) If he had made it more complex to begin with, it would have been harder to sell the idea, harder to implement, and therefore it's possible it wouldn't have taken off as quickly and easily as it did. Part of the reason why it's be
    • This big idea is supplemental to the basic web. Wouldn't it have been much worse if web pages required these extra tags of information? I don't see anything wrong with adding optional features to expand on a current system. This isn't a rewrite here. Berners-Lee didn't get the trajectory wrong. His original basic idea has taken off on its own. He and others later came up with additional ideas that may or may not add value. There's nothing wrong with that.
  • by pHDNgell ( 410691 ) on Thursday May 20, 2004 @02:56PM (#9207265)
    pages full of mySQL errors. *sigh* I need to find something else to do.
  • If he didn't, someone else would have.
  • by LionKimbro ( 200000 ) on Thursday May 20, 2004 @03:02PM (#9207345) Homepage
    For those wondering what the Semantic Web is behind all the computer babble:

    The Semantic Web Cereal Box analogy [w3.org]

    Plain Talk.
  • "We envisioin [w3photo.org] a royalty-free archive of conference pictures..."

    = 9J =

  • by arvindn ( 542080 ) on Thursday May 20, 2004 @03:06PM (#9207382) Homepage Journal
    Tim Berners-Lee had been saying right from the beginning that viewing a web page should be integrated with creating it. In the early 90s, of course, the infrastructure was just not there, but when the technology did catch up, look how wikis have succeeded! Of course, it is the social aspect as much as the technical that makes wikis like the good old 'pedia [wikipedia.org] what they are, and I doubt if Berners-Lee anticipated that, but nevertheless I'd say that the success of wikis proves him to be a true visionary.
    • Tim Berners-Lee had been saying right from the beginning that viewing a web page should be integrated with creating it.

      Well, the very first web browser (WorldWideWeb.app) was also a WYSIWYG page editor. It helped that it didn't even support IMG tags, but I'm not sure that what you're quoting is more than that.
  • by gwernol ( 167574 ) on Thursday May 20, 2004 @03:07PM (#9207390)
    The Semantic Web is a great idea. Having consistent, widespread meta-tagging of information on web pages would enable a slew of very, very cool technologies. For example:
    • Intelligent search engines that produce much better results than Google etc. because they can index the meaning of documents, not the words they contain.
    • Agent technology that can retrieve information for you, price compare items you are shopping for and automate a number of interesting processes.
    • Automatic clustering of websites around subjects of interest to create much richer knowledge-oriented navigation.
    But the Semantic Web project can't succeed as it is currently specified. It is working towards standards for storing and managing the meta-content required for this Brave New World, but doesn't tackle the much harder problem of how to create meta-content that is consistent and pervasive. At present this is left to individual web page authors with no mechanism to ensure consistency. Without consistency, the Semantic Web is doomed. If I tag a web page as being about "software engineering" and another person uses the tag "computer programming", the Semantic Web can't tell they are about the same thing.

    In a world where an estimated 70% of web pages don't even have a title, isn't it rather unrealistic to expect most web page authors to learn a complex new representation like RDF and consistently tag their pages with it?

    Clay Shirky has a very good [shirky.com] article on this. I recommend reading it before you get too excited about the Semantic Web.
    • This is why groups come up with schemas [schemaweb.info] and ontologies [daml.org] to share.
      • by gwernol ( 167574 ) on Thursday May 20, 2004 @04:19PM (#9208351)
        This is why groups come up with schemas and ontologies to share.

        But that doesn't solve the problem, it just moves it to a different place. In this case we're just moving the "software engineer" vs. "computer programmer" problem up to the ontology level. How do I map between ontologies? Unless there is a single unified ontology that everyone agrees to use, you have to explain how to map between disparate ontologies declared by different groups. The ontologies will overlap, try to define the same underlying concept in different ways in different contexts and so on.
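        Writing the mapping itself is mechanical enough - a hypothetical sketch in Python with rdflib, URIs invented - but someone still has to author, maintain, and agree on every such assertion, which is exactly the problem being relocated:

        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL

        ONT1 = Namespace("http://group-one.example/ont#")  # hypothetical
        ONT2 = Namespace("http://group-two.example/ont#")  # hypothetical

        g = Graph()
        # Assert that the two groups' terms denote the same class.
        g.add((ONT1.SoftwareEngineer, OWL.equivalentClass, ONT2.ComputerProgrammer))

        print(g.serialize(format="turtle"))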

        Let's assume we have one universal ontology that everyone agreed to use (by the way the Cyc Project [cyc.com] has been working on this problem for 25 years and isn't close to creating the complete ontology you'd need). Then all we have to do is assume that every web developer was skilled and disciplined enough to accurately tag their content with the right meta-content from the ontology. It also requires the ontology to be unambiguous and obviously applicable. I'll not be holding my breath.

        This all rests on the assumption that the world can be unambiguously described and that meta-tagging is a context-independent operation. This is an obviously unreliable assumption. A much better approach would be to make context-dependence and ambiguity core assumptions and try to deal with those issues at the most fundamental level. Until the Semantic Web addresses these issues head-on, it's going to remain an interesting academic project with no real-world applicability or adoption.
      • This is why groups come up with schemas and ontologies to share.

        The wonderful thing about standards is that there are so many of them to choose from.
  • by Gargamell ( 716347 ) * on Thursday May 20, 2004 @03:12PM (#9207470) Homepage Journal
    Hi there,
    Kind of a late reply here, but I had to take care of some emails.
    Anyway, I used RDF at a company building proprietary OWL-like software, for the purpose of organizing content repositories in a formal language that would span the company's domain.
    14erCleaner noted that the web is popular because it is so easy to create web pages. I would have to agree, and would also add that the reason the RDF and OWL specs are important is along the lines of what nizo posted about the web being all about porn! There is SO much content, and yet deriving any kind of automated meaning from all of it is a task almost beyond the scope of ever realistically completing. There is no standard for the structure of documents, nor for how one document may relate to another.
    The RDF and OWL specs provide a framework that does exactly that. Berners-Lee and the RDF working group essentially lay down what is in fact (sorry 14erCleaner, but a 20-year-old intern got it pretty easily) a simple (yet ambiguous) way of describing something. It is like this: Something-RelatesTo-Something. Read the spec and keep that in mind; that is the basis of what they have described. OWL I am not as familiar with (too busy building a proprietary one!).
    Anyway, enough ranting: I would encourage everyone to read what he has to say, and most of all, if you are a web author, use the RDF spec! Imagine if, instead of using Google to do a text search for whatever was on your mind, you could write an SQL statement that actually represented the structure of resource web pages on the Internet and brought you to a list of documents relating EXACTLY to the Something-RelatesTo-Something sentence you had entered as your query! That is the true possibility of this "redesign"!
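    For the curious, such a query might look roughly like this in Python with rdflib (the data, URIs, and vocabulary are all invented for illustration):

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")  # hypothetical vocabulary
    g = Graph()
    g.add((EX.page1, EX.relatesTo, EX.PrematureInfantCare))
    g.add((EX.page2, EX.relatesTo, EX.Gardening))

    # "Find every document that RelatesTo premature infant care."
    results = g.query("""
        SELECT ?doc WHERE {
            ?doc <http://example.org/relatesTo>
                 <http://example.org/PrematureInfantCare> .
        }
    """)
    for row in results:
        print(row.doc)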
    ~Not there any longer, but a good plug for this technology: they are making ontologies for health care purposes, basically all the info surrounding the care of a premature baby. Can't get a more noble cause than that!
    http://www.cstlink.com/
    • >Imagine if, instead of using Google to do a text search for whatever was on your mind, you could
      >write an SQL statement that actually represented the structure of resource web pages on the Internet

      Gee, I bet that would catch on just as well as end-users doing ad hoc queries in SQL.

      Serious question: Who would service this query request? Would that be a new form of search that a search company like Google might provide?

      >and brought you to a list of documents relating EXACTLY to the Something-Relate
  • Weaving the Web (Score:5, Informative)

    by Milo Fungus ( 232863 ) on Thursday May 20, 2004 @03:12PM (#9207475)

    The semantic web was discussed at some length in Weaving the Web - The Original Design and Ultimate Destiny of the World Wide Web [w3.org] by Tim Berners-Lee. I picked up that book for something like $5 on the discount rack at my university's bookstore. That's one of the more interesting books I've read about computer history, and it got me thinking a lot about web standards. I have since learned CSS and XHTML and have vowed never to go back to proprietary "HTML" hacks. The new way is better, anyway.

    The semantic web doesn't make a lot of sense to people who were introduced to the web through commercial means in the mid-to-late '90s (which is most people). But it makes perfect sense in light of what Berners-Lee was originally trying to do with the web, which has since gone a long way toward degenerating into Just Another Way to Market Stuff to Millions of People®.

    Two points were most interesting to me in Weaving the Web:

    • The original web server and browser written by Berners-Lee was a read/write interface. The browser was an HTML editor, and you could edit pages that you viewed from the server. This makes absolutely no sense to us now, because we've been trained to think of the web as a publishing medium instead of a collaborative medium. The early popular browsers, most notably Mosaic, didn't support editing. This bothered Berners-Lee and he continually requested that they add this feature. He was still thinking of a collaborative web, moving in the direction of the semantic web. The Mosaic (and later Netscape) developers were thinking more about commercialization.
    • Tim Berners-Lee at one time was suggesting to CERN (which owned the intellectual property rights to his browser and server, as well as the HTTP protocol) that they release it all under the GPL. His main goal was to "get it out there" so that more people could work on it, use it, and improve it. It was explained to him that businesses would be reluctant to develop web technologies because of the viral nature of the GPL, so it was released under a BSD-style license that CERN approved.
    • How is the GPL vs. BSD-style license distinction important? First off, a BSD-style license does not prohibit use in a GPL'd product, unless there are additional restrictions.

      Second, I think CERN was quite right. Practically every common protocol, service, etc. has had reference implementations released under a friendlier license than the GPL. If TCP/IP had been GPL'd, we might be using IPX on the Internet, because Microsoft wouldn't have been able to port TCP/IP.
    • Much as I respect Mr Berners-Lee's achievements, I don't think that *every* idea he has about the web is in some way automatically wonderful. Like most great inventors, he had a few very good ideas. Also like most inventors, he probably has his fair share of ideas that suck.

      I would say his vision of the "writable web" is one of those. If the web had become one huge wiki at an early stage, the ensuing chaos would have ensured the medium would never have been adopted by the general public in the way that it
  • Snake oil (Score:4, Interesting)

    by Alomex ( 148003 ) on Thursday May 20, 2004 @03:35PM (#9207784) Homepage
    The semantic web is the return of the snake-oil salesmen of the '70s and early '80s who hijacked AI research with undeliverable promises of intelligent machines "just around the corner".

    To this date, serious AI researchers are still paying the price of this scientific fraud, which makes cold fusion look like a prank.

    Tim Berners-Lee is a good person and not a computer scientist, so he has neither the knowledge nor enough malice to understand the pack of thieves he has surrounded himself with.

    I'm not the only one saying this:

    Semantic web is doomed to failure precisely because it is being pushed by a group with a reputation for talking rather than doing.

    http://slashdot.org/comments.pl?sid=108295&threshold=-1&commentsort=1&tid=95&mode=nested&cid=9207128
  • The real problem is that people are creating so-called semi-structured data in the first place. This is a band-aid approach to try and make sense of the large amounts of junk on the web. Unfortunately, it won't work (or won't work without significant headache/difficulty).

    The real solution is a system of distributed RDBMSes. You create your content in the DBMS, and the DBMS then serves it to clients, which also have a DBMS embedded in them.

    If the client is a 'web' browser they then display the format
    • Yes, my thoughts exactly.

      I came across this a while ago.... didn't look too much into it yet but it seems to take into account one very important (and very unfortunate) aspect of the net, data ownership and *price*.

      Dumbass me, forgot the link:

      http://mariposa.cs.berkeley.edu/
  • I already saw the box for it [uni-passau.de]
  • I don't need any more "hindsight"... I've got goatse for that!
  • Metacrap (Score:5, Insightful)

    by fawcett ( 58045 ) on Thursday May 20, 2004 @04:01PM (#9208113)
    Readers might enjoy Cory Doctorow's essay, Metacrap: Putting the torch to seven straw-men of the meta-utopia [well.com], on why the Semantic Web will never succeed. His key points:
    • People lie
    • People are lazy
    • People are stupid
    • Mission: Impossible -- know thyself ("People are lousy observers of their own behaviors. Entire religions are formed with the goal of helping people understand themselves better; therapists rake in billions working for this very end.")
    • Schemas aren't neutral
    • Metrics influence results
    • There's more than one way to describe something
    • Sounds like a business plan to me (any good business model would count on at least three items from your list, especially the first three). Don't forget to add:

      ????

      Profit

  • There have been meta tags for a long time, and they were rendered meaningless just about the time someone thought of filling them with whatever terms would attract the visitors he wanted, rather than describing the actual page content. That's why you can't win great search engine placement with a few meta tags - as was briefly possible once upon a time. I have one client who still refuses to understand that - who rejects my suggestions to actually write up their pages to simply contain the terms they would like, when entered into search eng
  • Doesn't anyone around here remember the short but exciting life of the <meta> tag?

    The idea was that this tag would be very useful for searching purposes and for tagging the page with keywords. This idea went down in flames really quickly -- guess why? Because people cheated and put "attractive" keywords into their meta tags regardless of what the page was about.

    I still haven't seen anyone explain why the Semantic Web wouldn't be completely full of, umm... syllogisms along the lines of "Buy Viagra here"
    • I still use meta tags, and it makes a difference. I like them. It's a bitch to see unrelated stuff get listed before your pages, but hey, that'll happen in any system of classification.
  • rule base features? (Score:2, Interesting)

    by Antilles ( 49894 )
    Given the nature of what they are trying to do with the semantic web stuff, using some sort of tagging/XML schema to define relationships, does this set the stage for a rule-base-like set of interactions that automatically execute when relationships are created that meet pre-defined rules? This would allow interactions between servers to happen naturally, and allow self-organizational qualities to 'emerge' from the web.

    Or I'm just a dreamer.

    Just a thought though...
  • sadly... (Score:5, Insightful)

    by merdark ( 550117 ) on Thursday May 20, 2004 @04:19PM (#9208340)
    Having access to tons of annotated data is a wonderful dream. I could see academic institutions going for this, but not corporations, for the most part.

    You see, corporations don't WANT you to be able to access data easily. One of the major driving factors of the current web is advertising. Basically, this is something none of us want to see, but with web pages it's easy to try and force us to see it. Properly annotated data would kill advertising as we know it, something the corporations will not let happen.

    Also, corporations do not want us to be able to easily compare data either. Take prices, for instance. Many stores have promises like "we'll match any price". This works on the basis that it's hard and tedious to go check other prices, and people will think, "well, hey, if they are making this promise, surely they already have the lowest price, otherwise everyone would be calling them on it". Well, no, most people will not go check for lower prices, and if they do and end up finding lower prices elsewhere, they will often buy elsewhere. Easy price comparisons are not something online stores want to allow.

    Ultimately, most sites want to force you to look at the data they want you to look at (ads). I doubt we'll ever see all web data in a nice annotated form allowing us to view only what we are interested in.
    • You see, corporations don't WANT you to be able to access data easily. One of the major driving factors of the current web is advertising. Basically, this is something none of us want to see, but with web pages it's easy to try and force us to see it. Properly annotated data would kill advertising as we know it, something the corporations will not let happen.

      And corporations are going to stop people annotating data... how?

      They may use FUD attacks, denounce you as a terrorist or what have you, but it's way

  • http://validator.w3.org/check?uri=http%3A%2F%2Fw3photo.org%2F&verbose=0

    Running the site on the w3 validator brings up 53 errors.

    If you're pushing a standard, why not follow the standards already existing?
  • All that work (Score:3, Insightful)

    by zpok ( 604055 ) on Thursday May 20, 2004 @04:39PM (#9208633) Homepage
    I admit, I use GoLive for my websites, because it does most of the work for me - and together with some scripted exporting and stuff, I hardly have to touch the code, and it's nicely compliant and lean. I *can* code. I just don't really enjoy it, and it's not worth it for the amount of work I do nowadays.

    I'd love to jump on the next thing, and I see the use of all this meta stuff. I try to treat meta tags with respect btw, and only use them on relevant pages.

    But for this to take off, you'd need tools that organize the meta data FOR you. So that you only have to edit it lightly, to take out the silliness. Akin to using automated translation.

    Which begs the question: why not make search engines and agents smarter instead?

    I mean, I can't be the only lazy person here, can I? And I have sort of an interest in the stuff, so I'd probably do what's required, but most people wouldn't I'm sure.

    If I were a betting man, I'd put my money on agents - even after all the bullshit and the failed expectations from the late '90s. I'd love to have some clever agents do my searches for me, and on the mac, there are already some pretty clever programs available for free (http://www.devon-technologies.com/)

    (yeah, I'm too lazy to put this post in HTML too, so sue me ;-)
  • Technologies like this are typically doomed to failure, because they violate one key precept: computers should work for us, not the other way around.

    Few people will bother with the effort of semantically marking up their documents, and fewer still will do so in a way that is consistent enough to be useful.

    Computers and programmers will need to become better at analyzing human communication; anything else hardly seems worth the effort.

    Nice idea though.
