The Internet | Technology

Tim Berners-Lee Discusses the Future of the Web (112 comments)

maximus1 writes "In an interview with IT World, Tim Berners-Lee explains his vision of the Semantic Web. He says: 'The Semantic Web is going to take off particularly when we see people using it for data processing, when we see people using it in more and more things, adding personal data, adding files to government data.' His position on net neutrality: 'We've seen cable companies trying to prevent using the Internet for Internet phones. I am concerned about this, and am working, with many other committed people, to keep it from happening. I think it's very important to keep an open Internet for whoever you are. This is called Net neutrality. It's very important to preserve Net neutrality for the future.' And a fun tidbit — He mentions his 1989 memo to his boss at CERN that described his vision for the Web."
This discussion has been archived. No new comments can be posted.

Tim Berners-Lee Discusses the Future of the Web

Comments Filter:
  • Another year... (Score:2, Insightful)

    by Anonymous Coward
    ...another "Tim Berners-Lee discusses the semantic web" article.
    • Everyone who cares about the net should read the text of the original proposal. It is found at http://www.w3.org/History/1989/proposal.html [w3.org]. This shows us what the original intent of the net was, a storehouse of information. No wonder he always speaks of a semantic web, that's really the original vision. I guess we always knew that, but seeing it in the original text is quite interesting.
    • Well, I don't understand all and sundry writing off folks who have created history and are impacting our lives to quite an extent.

      No, Tim understands what he *wanted* the web to be. He's a very intelligent man, but he is by no means the definitive word on what the web means. The people who use the web are. The web is a place defined by the people who view and put content on it. Those people have found uses for the web that Berners-Lee never imagined.

      If you DID READ the actual article, he simply st
  • by Skyshadow ( 508 ) * on Tuesday July 10, 2007 @12:03PM (#19815107) Homepage
    I predict that, in the future, the web will be used for vast amounts of pornography, insane conspiracy theories, niche interest "news" sites that protect their users from anything that might challenge their worldview, and to allow regular people to flourish in the utter jackassery that results from anonymity.

    It will also have an interesting side effect where long-time users sit down to write a post intended to be humorous and end up making themselves a little depressed.
    • by iknownuttin ( 1099999 ) on Tuesday July 10, 2007 @12:05PM (#19815145)
      I predict that, in the future, the web will be used for vast amounts of pornography, insane conspiracy theories, niche interest "news" sites that protect their users from anything that might challenge their worldview, and to allow regular people to flourish in the utter jackassery that results from anonymity.

      Dude, you are so anti-semantic!

    • Re: (Score:2, Insightful)

      by Seumas ( 6865 )
      The future of the internet is less individual freedom, more commercialization. It's all about the multi-billion dollar broad-stroke websites. If you're not eBay, AOL, Digg, youtube or myspace, you're just some whacked-out schmuck wasting time broadcasting your dumbass show on the public access channel at two in the morning that nobody will ever watch.

      The internet was about the individual in the 90s. The 21st century is all about corporations and commercialism, while convincing individuals that it's really
      • by choongiri ( 840652 ) on Tuesday July 10, 2007 @12:39PM (#19815605) Homepage Journal

        The 21st century is all about corporations and commercialism, while convincing individuals that it's really "their" society, political systems, freedom, etc.

        There, fixed that for you.

        • Society, political systems, etc... are not really about individuals to begin with; they're more about the collective. If corporations exert undue influence, it is because enough people don't care enough to stop it. I read something once where someone claimed middle-class suburbia is the pinnacle of human civilization; members of this set, as a group, really don't have much to get too worked up about. Whining about DRM is kind of a luxury if you don't have any water or somebody keeps setting off truck bombs
      • Re: (Score:3, Insightful)

        by Arthur B. ( 806360 )
        Commercialization *is* the expression of the individual freedom of the shareholders of eBay, AOL, Digg, youtube or myspace, and the individual freedom of their customers. Individual freedom is about freedom, not about [insert random subculture].
        • Re: (Score:2, Insightful)

          by Seumas ( 6865 )
          Hard to have individual freedom when one or two organizations control two thirds of the internet.
          • How exactly is it "hard"? The only way for freedom to be hard to have is through external coercion. Did those big companies actually do anything to you?
          • OK, explain to me what your definition of "control" is. I've created several websites - several brand new ones just in the past few months. Nobody stopped me. I've placed pretty much whatever I wanted on those websites. Nobody stopped me. Thousands of people are visiting those websites. Nobody is stopping them. Nobody is 'controlling' their 'clicks'. Nobody has tried to shut my sites down. Nobody has tried to coerce or prevent others from accessing my sites. ANYONE CAN DO THIS. I'm just not getting the "con
            • OK, explain to me what your definition of "control" is. I've created several websites - several brand new ones just in the past few months. Nobody stopped me. I've placed pretty much whatever I wanted on those websites. Nobody stopped me. Thousands of people are visiting those websites. Nobody is stopping them. ... Nobody has tried to shut my sites down. Nobody has tried to coerce or prevent others from accessing my sites.

              And you can thank the current fragile state of Net Neutrality for that.

      • Huh? Nobody has removed any of the technologies or infrastructure for creating an 'old-style' Internet, with personal webpages etc. ... everyone is now free to choose between the old and new, how is it "less" freedom to have more options than ever? I guess most people just genuinely seem to prefer the new 'socially networked' Internet, and that's why they're popular. Nobody is forcing them down anyone's throats.
    • My own predictions (Score:5, Insightful)

      by morgan_greywolf ( 835522 ) on Tuesday July 10, 2007 @12:33PM (#19815527) Homepage Journal
      - No one will ever figure out what Tim Berners-Lee is rambling on about with the semantic Web thingie.
      - The Net will continue to feature more video, become even more interactive, and the difference between local apps and the Internet will continue to be blurred little-by-little.
      - Blogs will continue in various fashions, from vlogs (video blogs) to audlogs (audio logs) to iBlogs (blogs with highly-interactive content, including even 3D simulated environments). Apple will sue the first person that uses the term 'iBlog'.
      - Devices will continue to converge. Specialized devices will exist, and regular desktop and laptop computers will continue to exist, but the differences between them will blur as it becomes apparent that the only difference from a practical standpoint will be form factor and user interface.
      - The telcos will become less relevant as Net connectivity becomes all that matters.
      - The mafiaa becomes irrelevant as people become increasingly connected to artists.
      - Spam will become ever more annoying, as advertising starts popping up even on your roll of toilet 'paper'.
      • by tryfan ( 235825 ) on Tuesday July 10, 2007 @12:45PM (#19815691)
        > Apple will sue the first person that uses the term 'iBlog'
        A quick Google search shows that you're safe, anyway.
      • Re: (Score:2, Interesting)

        by jack455 ( 748443 )
        Later in this thread I posted about the semantic desktop being part of a new Linux release. Unfortunately I was incoherent and seemed offtopic. However I replied to myself somewhat more intelligently in an attempt to clarify.

        I'm basically theorizing that with KDE and Mozilla, among many others, combining to support the Semantic Desktop and web; with Apple having implemented KDE code in Dashboard and Safari and working with them, the Semantic Web has a chance to at least be tried. One day Opera and IE will s
        • Later in this thread I posted about the semantic desktop being part of a new Linux release. Unfortunately I was incoherent and seemed offtopic. However I replied to myself somewhat more intelligently in an attempt to clarify.
          Have you invented time travel or something?
      • How about,

        Heaps of people remain on slow connections to the Internet across the world meaning that they are cut off from more and more of this new "good" Internet.

        Considering that it is impossible to get cheap broadband (and the only way to get broadband is satellite) in so many places in Australia (a "first world" "developed" country) and the situation is apparently similar in the USA, I think we should focus on actually getting people connected before we start going on about video and interactive web prog
    • the web will be used to for vast amounts of pornography

      ... Surfed to via (illegal, because they use encryption) darknets, because porn was long since forbidden to be in the open as it was finally set in stone that age confirmation dialogs don't verify a visitor's age well enough anyway, and many such sites don't even use a verification. Thus, skin analysis filters were made to automatically sue people under Internet subnets belonging to countries under that jurisdiction, that keeps influencing the rest of

    • by l0b0 ( 803611 )
      Well, you know what they say [google.com]
    • Comment removed based on user account deletion
  • by User 956 ( 568564 ) on Tuesday July 10, 2007 @12:04PM (#19815119) Homepage
    And a fun tidbit -- He mentions his 1989 memo to his boss at CERN that described his vision for the Web.

    That vision is nonsense. I don't see any Web 2.0 buzzwords in that paper anywhere.
  • He acts like he owns the place or something, not content to just be a part of a global phenomenon.. I kid, I kid, he's far too humble a genius and should be installed as the global overlord, pronto!
  • rejected (Score:2, Insightful)

    by r00t ( 33219 )
    At best, nobody gives a damn.

    Businesses actively work to prevent other sites from scraping content. They certainly aren't going to spend extra effort to support it!

    Users care about presentation. Looks are everything. Web developers know this, or at least the marketing people in charge of web design know it.
    • Re:rejected (Score:5, Insightful)

      by That's Unpossible! ( 722232 ) on Tuesday July 10, 2007 @12:55PM (#19815821)
      Businesses actively work to prevent other sites from scraping content. They certainly aren't going to spend extra effort to support it!

      Give me a break ... Have you ever heard of RSS feeds? Cutting edge companies ARE already supporting this, including giants like Google, Yahoo and Microsoft.

      In fact, Google is a model here. They are making it ridiculously easy to get access to data in all kinds of formats. I can create a Google spreadsheet and actually share individual cells and ranges of cells with anyone else on the internet, and it retains the dynamic calculations from the main spreadsheet even when you aren't displaying the rest of the cells. It's actually ridiculously cool if you think about it.

      The smart companies absolutely will make it easier and easier to access their data in all kinds of formats.
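
      For what it's worth, pulling structured data out of one of those feeds takes only a few lines. A minimal sketch, assuming the third-party feedparser library and a placeholder feed URL:

        import feedparser  # third-party: pip install feedparser

        # Any public RSS/Atom feed would do; this URL is just an example.
        feed = feedparser.parse("http://rss.slashdot.org/Slashdot/slashdot")

        print(feed.feed.title)           # channel title
        for entry in feed.entries[:5]:   # first five items
            print(entry.title, entry.link)

      The point isn't the library; it's that the data is already published in a machine-readable container rather than scraped out of HTML.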
      • Hasn't Excel had that for several years now through web-published sheets and sharing controls?

        Seriously, I'm not much of an Office user nor do I use Google Apps. Is there something new or different about the way google does it, beyond the just 1 calorie, not evil enough difference?
        • Where are you publishing from Excel? Presumably you need to set up your own host, get it working with publishing, etc. With Google Docs it's just right there, ready to go; 5 seconds later you're sending a link to your friend or posting it on your site.

          I am talking about a live view of your document from Google Docs. Set aside the awesome ability to share it with collaborators where you're all editing it at the same time. This thing also lets you publish cells and cell ranges, for anyone to view or for
            • Ahh, not really an equivalent good when you put it that way. Companies DO have to set up their own host for published documents. The changes made ARE live as far as I know, can be referenced from multiple sheets/books, and there's a fairly simple and so-far-hasn't-kicked-anyone-I-know-in-the-throat type of version control.

            Seems the major difference is Cost of Operating the host (including MS License fun) versus Desire for exclusive control over the documents themselves (as in physically). Guess it depends on people
      • In fact, Google is a model here. They are making it ridiculously easy to get access to data in all kinds of formats.
        ok, i'm sort of offtopic here, but how come i can't have imap access to my gmail account? having only pop access is kind of annoying (for example, since i don't know of a way to check what got caught by the spam filter without having to go to the website...)
    • Re:rejected (Score:5, Insightful)

      by kebes ( 861706 ) on Tuesday July 10, 2007 @01:03PM (#19815963) Journal

      Businesses actively work to prevent other sites from scraping content. They certainly aren't going to spend extra effort to support it!
      True enough. But one of the main points of "Web 2.0" is user-generated content and participatory media. Although businesses make contributions to the usefulness of the web, user-generated content is becoming more and more useful and powerful. Just look at the impact of Wikipedia, web-forums, free software, creative commons, etc.

      These user-driven efforts are where the tagging and semantic web will probably start. If Wikipedia contributors care to take the time to write good articles, then surely they will also be willing to semantically tag articles. (In fact, Wikipedia already has a lot of semantic tagging.) Similarly, Creative Commons artists are actively tagging their works with machine-readable Creative Commons tags. Social sites like Flickr are also doing a lot of useful tagging.
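
      To make "machine-readable" concrete: a license or tag boils down to a handful of RDF statements about a resource. A rough sketch with the rdflib library (the photo URL is a placeholder, not any real site's markup):

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DC   # Dublin Core terms bundled with rdflib

        CC = Namespace("http://creativecommons.org/ns#")
        work = URIRef("http://example.org/photos/1234")   # placeholder resource

        g = Graph()
        g.add((work, DC.title, Literal("Sunset over the harbour")))
        g.add((work, DC.creator, Literal("some photographer")))
        g.add((work, CC.license,
               URIRef("http://creativecommons.org/licenses/by-sa/3.0/")))

        print(g.serialize(format="turtle"))   # the machine-readable version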

      So businesses may resist it... but as long as users care about it (and are given easy tools to make it happen -- like wikis), then this semantic web can be created. Once it expands, businesses will have to play along or risk being left behind and ignored by the web users who come to depend on the power of the Semantic Web. So, whether they like it or not, businesses will have to connect to the semantic web and add to its descriptive power, or else they will lose all their customers.

      And, yes, I'm keenly aware of the flip-side, which is that businesses will then try to commoditize and monetize these technologies, sometimes in bad ways, like Spam. It will be interesting to see how it plays out. But I don't think businesses will be able to stop it.

      Users care about presentation. Looks are everything.
      I disagree. Or rather, I think that describes only some users. There are plenty of users who do care about content. (Wikipedia and free software are examples of the resultant projects.) So even if many (or most) users don't care about the semantic web, as long as some dedicated group does care, then it will expand and everyone (including users who don't care about the underlying implementation details) will benefit.
      • by xant ( 99438 )


        Users care about presentation. Looks are everything.

        I disagree. Or rather, I think that describes only some users. There are plenty of users who do care about content. (Wikipedia and free software are examples of the resultant projects.) So even if many (or most) users don't care about the semantic web, as long as some dedicated group does care, then it will expand and everyone (including users who don't care about the underlying implementation details) will benefit.

        There's a

  • From TFA:

    So, for example, if you are looking at a Web page, you find a talk that you want to take, an event that you want to go to. The event has a place and has a time and it has some people associated with it. But you have to read the Web page and separately open your calendar to put the information on it. And if you want to find the page on the Web you have to type the address again until the page turns back. If you want the corporate details about people, you have to cut and paste the information from a
    • Really, they've had that sorta stuff down for over a decade, since they were competing with Lotus Domino etc. It's not Web 3.0, it's standard enterprise tools. It seems TBL is trying to encourage some sort of intelligent copy/paste function that dumps into some sort of all-purpose aggregator or something; it's hard to tell. Still, he's the first person I've heard using the 2.0 and 3.0 metaphors in any sensible way, so that's something.
    • Well, I don't know... Picture a website where I post my address in a certain XML format. If google provides a hook for it and I have a valid google acct. token, it can add that address to my contact list (after a validation?). And this is obviously possible with Outlook if MS provided the right hooks. I used the google example because then, 'your personal Web' is online, private (Google does no evil) and it travels with you at no cost to you. Google Calendar can do the same.

      I wouldn't call it impossible. It
    • Achieving it for 'stuff' in general, which seems to be the aim of the Semantic Web, is probably flat-out impossible.

      Why should it be impossible?

      I always thought that this kind of thing is what standards are for. So let's create a 5-tuple with (date, place, event, persons, data), push this data through some XML into something called OEDF (Open Event Data Format) and voila, tag it onto every mention of said event. You just have to click OK 5 times (remember the first time Fry visited the Net in the year 3000) if your app detects such an OEDF object anywhere, and voila, with the magic of Ajax, Web 2.0 and some scripting the
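
      Here's a back-of-the-envelope sketch of that idea in Python, using only the standard library. OEDF is the hypothetical format proposed above, so the element names are made up too:

        import xml.etree.ElementTree as ET

        # The (date, place, event, persons, data) 5-tuple described above.
        event = {
            "date": "3000-01-01T11:00",
            "place": "Meeting Hall",
            "event": "Welcome to the World of Tomorrow",
            "persons": ["Fry", "Leela"],
            "data": "Bring your own career chip.",
        }

        root = ET.Element("oedf")                     # hypothetical OEDF wrapper
        for key in ("date", "place", "event", "data"):
            ET.SubElement(root, key).text = event[key]
        persons = ET.SubElement(root, "persons")
        for name in event["persons"]:
            ET.SubElement(persons, "person").text = name

        print(ET.tostring(root, encoding="unicode"))  # what a page would embed

      A browser or calendar app that recognised the embedded blob could then offer the "click OK" step.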

      • Re: (Score:1, Interesting)

        by Anonymous Coward

        So let's create a 5-tuple with (date, place, event, persons, data)

        Well an event already has a place and a time/date associated with it. So we have (#event -> #time -> "11am") and (#event -> #place -> "Meeting Hall"). So all you are left with for saying that a person is attending/did attend is another relationship, (#person -> #attend -> #event). So you've expressed the information in a series of relationships between two entities - which is exactly what RDF does. Suddenly you don't need
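
        Spelled out with the rdflib library (the namespace and identifiers are invented for the example), those relationships really are just three triples:

          from rdflib import Graph, Literal, Namespace

          EX = Namespace("http://example.org/ns#")   # placeholder vocabulary
          g = Graph()

          g.add((EX.event,  EX.time,   Literal("11am")))
          g.add((EX.event,  EX.place,  Literal("Meeting Hall")))
          g.add((EX.person, EX.attend, EX.event))

          print(g.serialize(format="turtle"))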

        • The trouble with the Semantic Web is that TBL is always talking about the end goal. The end goal seems unobtainable to many people.

          Is this really a problem with the Semantic Web?

          Seems to me that it's a problem with the people who are reacting emotionally to TBL's descriptions of the ultimate goal without paying attention to the progress in that direction, and with the people who think that the Semantic Web is somehow all or nothing, such that if the vision is less than entirely achieved, the effort toward it

      • You just solved it for date-based events. The GP talked about "stuff in general". Now come up with a solution that encompasses all possible future data requirements (not XML, since that is not specific to any single application).

        That said, I think impossible might be too strong a word, but it's certainly a moving target.
    • by Bombula ( 670389 )
      Achieving it for 'stuff' in general, which seems to be the aim of the Semantic Web, is probably flat-out impossible.

      I doubt it's impossible. You just need some intelligent filtering algorithms. Think of Google's fault-tolerant searches: if I accidentally spell 'Mississippi' as 'Misisipi' Google will ask, 'did you mean Mississippi?'. It's not exactly rocket science. And it's not much of a leap from that to software that can look at a web page displaying, say, times and dates and addresses and, even tho

      • Oh, I don't know, I think it's pretty close to impossible. People are working on the easy stuff - names, addresses, events, locations. I think there will be real progress there, and we'll get some useful software. But semantic modelling is extremely hard, because people do not work on semantics. Most people don't, in their heads, categorise things in ways that a computer can make head or tail of. Semantic models that work for computers require deep hierarchies, relatively few relation types, and fixed deg
    • Of course, it took MS quite a while to achieve this in the reasonably constrained environment of office automation, and even then it was a huge achievement that many companies failed hideously at. Achieving it for 'stuff' in general, which seems to be the aim of the Semantic Web, is probably flat-out impossible.

      I dunno about "impossible": web browsers, MIME types, and helper applications have done quite a bit of it for a lot wider variety of disparate types of linked information than MS Office has. I'm not

    • by Monchanger ( 637670 ) on Tuesday July 10, 2007 @01:32PM (#19816309) Journal
      Not really.

      You're talking about OLE, where Microsoft only allowed the combination and transfer of data objects (and otherwise reusing application code) from one application to another. You could take an Excel worksheet and paste it into a Word document. That's pretty cool, and useful once in a long while, but it's hardly smart enough to be compared to the Semantic Web. The web equivalent is simply embedding images and Flash games -- i.e., Web 1.0.

      At work I get many emails about upcoming internal conferences, tech talks, vendor presentations and such. They all come in the form of an Outlook email, but contain data including event title, date/time, location, and other recognizable bits of information. But when I drag the email onto a calendar folder to create a "Meeting" object, none of the data is put in the appropriate fields. That's the kind of thing the Semantic Web is supposed to do.
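
      To illustrate the missing step (and only that step; recognising the fields automatically is the hard part), here is a sketch that maps already-extracted event fields into a standard iCalendar entry any calendar client can import. The values are invented:

        # Fields a semantics-aware mail client might have pulled out of the email.
        event = {
            "title": "Vendor presentation: storage arrays",
            "start": "20070717T140000",
            "end":   "20070717T150000",
            "where": "Conference room B",
        }

        ics = "\r\n".join([
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "BEGIN:VEVENT",
            f"SUMMARY:{event['title']}",
            f"DTSTART:{event['start']}",
            f"DTEND:{event['end']}",
            f"LOCATION:{event['where']}",
            "END:VEVENT",
            "END:VCALENDAR",
        ])

        with open("meeting.ics", "w") as fh:   # drop this onto a calendar instead
            fh.write(ics)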

      The stuff Microsoft had was useful, but it's obsolete today. It only provided the ability to share data between one application and another application. Today we need to share data between any of millions of applications (web sites), and we can't afford to write dedicated code for each one of those. We need the Semantic Web.

      > Achieving it for 'stuff' in general, which seems to be the aim of the Semantic Web, is probably flat-out impossible.
      "Ingenuity and resourcefulness" my foot. You don't even make an argument against it, not to mention any attempt at proof. Since don't even understand what the Semantic Web is about, how could you possibly dismiss it so casually?

      But I must stop and thank you. Pessimists like you make us real technologists so much cooler. It's great to hear people say "it can't be done," because it makes solving those problems so much sweeter. My prediction: expect some serious in-your-face fist-pumping.
  • "Semantic Web" is right up there with old buzzwords like "Push technology" and "Voice over IP".

    Overhyped before they had a decent implementation; and now that we use them everywhere, we find we still don't have flying cars.
  • Whew (Score:5, Funny)

    by kensai ( 139597 ) on Tuesday July 10, 2007 @12:39PM (#19815601) Homepage
    For a second there I thought he said Symantec Web and said to myself "We're all doomed."
    • Sure, but then you were probably thinking of Symantec Web 2007....
      The new Symantec Web 2008 is much better, and can do so much more in one package!
  • by i am kman ( 972584 ) on Tuesday July 10, 2007 @12:44PM (#19815669)
    Sure - life would be so much easier if everyone spoke the same language and all businesses worked together for a common good. And everyone used Linux and open standards and shared data. But, then again, any structured approach would work well in this environment or in other closed communities where everyone agrees on XML and API standards already.

    But give me something to work with the vast amounts of unstructured information out there - not just the generic header information surrounding the really interesting stuff. I'm just hoping that Web 3.0 focuses more on this area to support a real information revolution rather than just over-formatting the already semi-structured pieces of data that we already know about.
    • by tjstork ( 137384 )
      But give me something to work with the vast amounts of unstructured information out there

      Google is spending a ton of money working on exactly that.
      • Google is spending a ton of money working on exactly that.

        Yeah - but I was thinking of something beyond keyword or proximity search. Something, er, semantic. But actually semantic, not like the semantic web. Something that could spot correlations across complex documents or organize the information beyond a top 10 list of hits or actually answer questions. While useful, keyword search hardly provides the rich semantic environment needed to organize the world's information.

        I'm sure Google is working on t
  • by MarkWatson ( 189759 ) on Tuesday July 10, 2007 @12:53PM (#19815795) Homepage
    In spirit, I see commonality between Larry Lessig's desire to build a commons of information that can be shared and built on, and Tim Berners-Lee's desire to build a platform for data integration that people can build new applications on. For all of my enthusiasm for the semantic web (I have had RDF meta data on my web site for many years), there are some tough problems, including:
    1. trust: how do we keep people from publishing purposefully wrong meta data?
    2. how do we reason with a web's worth of data? Even with recent advances in description logic reasoners, reasoning with web-scale data is not even close to being possible. Even the RDF extracted from Wikipedia is way too large to reason over.
    3. tension between formal standards and "grass roots" bottom up approaches that work, but may not scale. I expect that some "grass roots" efforts will become very popular and perhaps replace RDF and OWL as the semantic web data model. Speaking of which, one of my favorite ideas that I have seen widely discussed: extending HTML/XHTML so that meta data is encoded in standardized attribute names representing agreement/disagreement, trust level, type of linked information, time stamp, etc. Combine this with RDF, but have a better way to embed RDF into HTML and XHTML.
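
    As a small illustration of point 3, here is roughly what such page metadata looks like when expressed as RDF with the rdflib library; the property names and URLs are placeholders, not a proposed standard:

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DC

      page = URIRef("http://example.org/articles/semantic-web.html")  # placeholder page
      EX = Namespace("http://example.org/meta#")                      # placeholder terms

      g = Graph()
      g.add((page, DC.title, Literal("Notes on the Semantic Web")))
      g.add((page, DC.date, Literal("2007-07-10")))
      g.add((page, EX.trustLevel, Literal(0.8)))      # the "trust level" idea above
      g.add((page, EX.agreesWith,
             URIRef("http://www.w3.org/DesignIssues/Semantic.html")))

      # RDF/XML is one serialization that could be linked from (or embedded in) XHTML.
      print(g.serialize(format="xml"))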
    • For all of my enthusiasm for the semantic web (I have had RDF meta data on my web site for many years), there are some tough problems, including:
      1. trust: how do we keep people from publishing purposefully wrong meta data?

      We don't. RDF triples are claims about specific resources, including other websites, data sources, or even specific other RDF triples. No reason you can't use signed RDF to make accountable claims about the trustworthiness of resources (metadata sources at any level of specificity down to

  • Lessee what we got here on the abstract...
    • Hyperlinking?... Check!
    • Linked Servers?... Check!
    • News feeds?... Check!
    • Hierarchical data systems?... Check!
    • Document management systems?... Check!
    • Interoperability (see the little bubbles for computer conferencing, vax, etc.)?... Check!

    Wouldn't it be cool to read the rest of the document for other net-related prior art?
    • I'm glad that someone else thinks this. He didn't 'invent' in any sense that would get a patent - it was an obvious combination of existing ideas. Gopher/WAIS, for example, were independent, contemporaneous developments but were clearly going to develop into something like the WWW. Oh and SGML was created in the 70's too.
  • The future of the Web...hmmm.. that's a toughie...

    1) Porn - check
    2) Email - check
    3) Spam - check
    4) Viruses and Trojans - check
    5) 99.8% of all blogs being dull, pointless and full of misplaced ego - check

    Semantics - nope: people will still mix up 'effect' and 'affect', and use 'loose' when they mean 'lose'

    Next!
  • by Anonymous Coward
    Net Neutrality should be an amendment to the Constitution.

    The reason why? Because as an amendment it would be the only way to protect the internet against a political party taking over and changing everything, and then other parties making the freedom of the internet a political football. One year the internet could be free, then the next it could be not free, then the next... would be a guess depending on how much money the cable and telephone companies can spend to keep their "keep the internet not free" ca
  • The best way to get this to take off is to get some of these ideas implemented on sites like wikipedia.org and youtube. The true power of the semantic web will show itself in large-scale applications such as these.
  • The definition of "taking off" is that people are using it. So he basically said that we will know that people are using it when we see that people are using it.
  • by jilles ( 20976 ) on Tuesday July 10, 2007 @02:46PM (#19817167) Homepage
    The semantic web is being invented now. Only not by Tim Berners-Lee et al. The W3C has been sidetracked for quite some time by this semantic web thing. Time has been wasted on pointless things such as XHTML, RDF, OWL, etc. Outside the labs, in the real world, a lot more progress is being made. There are millions of geotagged photos, places, wikipedia articles, etc. You can search for hCalendar events on Yahoo, hResumes on LinkedIn, people on Facebook and pictures of cats on Flickr. Social networks are all about meta information. These applications are now starting to link to and integrate with each other. That, effectively, is the birth of the semantic web. It will be a heterogeneous patchwork of information applications and services.

    If you want a glimpse of what the semantic web will look like, fire up Google Earth. Sure it is proprietary, but it is also massively distributed meta information from all over the internet aggregated into one coherent view overlaid on top of the world. Imagine that based on open standards, and you get an idea of where we could be going.

    Emerging standards such as microformats, Atom and OpenID may lack the glamour of all-encompassing ontologies and the mighty AI of reasoning engines and whatnot. But the bottom line is that they are a hell of a lot more practical and pragmatic, solve real problems, and you can use them right now. These emerging standards are not perfect or even complete, but people are definitely using them to enrich information on the internet by cross-referencing, by tagging, by labeling, etc. De facto standardization outside the W3C by killer applications is driving this lower-case semantic web. The best thing the W3C could do, and currently does not, is to endorse, facilitate and promote this work.
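
    To make the "use them right now" point concrete, here is a toy extractor for hCalendar-style markup using only Python's standard library. The class names follow the published hCalendar conventions, but the page snippet itself is invented:

      from html.parser import HTMLParser

      SAMPLE = """
      <div class="vevent">
        <span class="summary">Slashdot meetup</span>
        <abbr class="dtstart" title="2007-07-20T19:00">July 20, 7pm</abbr>
        <span class="location">Some pub</span>
      </div>
      """

      class HCalParser(HTMLParser):
          """Collect the text (or title attribute) of hCalendar-classed elements."""
          def __init__(self):
              super().__init__()
              self.current = None
              self.fields = {}

          def handle_starttag(self, tag, attrs):
              attrs = dict(attrs)
              cls = attrs.get("class", "")
              if cls in ("summary", "dtstart", "location"):
                  if "title" in attrs:          # abbr pattern: value lives in title=
                      self.fields[cls] = attrs["title"]
                  else:
                      self.current = cls        # value is the element's text

          def handle_data(self, data):
              if self.current and data.strip():
                  self.fields[self.current] = data.strip()
                  self.current = None

      p = HCalParser()
      p.feed(SAMPLE)
      print(p.fields)   # {'summary': 'Slashdot meetup', 'dtstart': '2007-07-20T19:00', 'location': 'Some pub'}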

    Tim Berners-Lee of course contributed his bit by inventing the web browser plus a very naive markup language (aka HTML 1.0) in 1989. I give him credit for his vision then, but this article reads like a very confused mix of ideals and vague concepts and does not seem visionary at all. The man tries to explain things in terms of databases, files and links, and somehow the wizards at MIT are going to provide the magic pixie dust that turns it into something beautiful. That's nice, but the how part remains ever elusive.

    • You do realize that RSS originally stood for "RDF Site Summary", I hope.

      Generic containers like Atom and microformats are useful, but we really lack an interoperable medium for conveying managed data - i.e. stuff that's been normalized for manipulation and integrity. Not that it should be the only form for all data, but that most data should be able to be gleaned into something like RDF.

      The world of course will progress without RDF or SPARQL, but they certainly look to remove a fair amount of effort in inter
        Generic containers like Atom and microformats are useful, but we really lack an interoperable medium for conveying managed data - i.e. stuff that's been normalized for manipulation and integrity. Not that it should be the only form for all data, but that most data should be able to be gleaned into something like RDF.

        Right. RDF is for relationships a lot like what XML is for structure of data, a common way of expressing things so that tools that don't need to know or care about the ultimate use of data can pr

      • by jilles ( 20976 )
        I know where it came from. You have to admit though, RSS 2.0 has very little to do with RDF anymore, despite the name. I think RSS showed that RDF just wasn't the solution to the problem it tried to address.
  • (Looking back from the future)

    I remember the Web. That was when there were still ISPs and telecoms, right? Back when the big corporations tried to figure out how to triple, and quadruple charge for everything. When governments started taxing every packet. Back before the Mesh. Yeah, that sucked.

  • That would be SIR Tim Berners-Lee, thank you very much.
  • adding files to government data

    uh, no thanks. I think you'll be wrong on that one, Tim.

  • Tim Berners-Lee is smart and a fun guy to listen to. But he doesn't have any better idea than I do of what the web or computers or cereal boxes will look like even two decades from now. Every time I hear anyone talking about what the future will be like, I always remind myself that the jet-packs never arrived either....
