Berners-Lee Pushes Linked Data In MIT Course

ErMKutz writes "WWW inventor Tim Berners-Lee is championing linked data, the idea of assigning web addresses to individual pieces of data to enable more intelligent information searches, much as he once championed now-ubiquitous web standards such as HTML and HTTP. But the idea hasn't quite taken off yet, so he and a group of Boston tech and entrepreneurial all-stars are launching an MIT class to teach students linked-data mechanics and fast-track the technology to market. They're combining engineering and entrepreneurial education in the hope of launching viable linked-data businesses or open source code at the conclusion of the course." I hope this shows up on OpenCourseWare.
  • by Anonymous Coward on Wednesday June 16, 2010 @05:26PM (#32595464)

    Please bring back the BLINK tag.

    • Re: (Score:2, Funny)

      Also, make it a W3C standard to have at least 3 marquees on each page.

      • by SpzToid ( 869795 )

        This is standards-compliant CSS:

        .blink-text {
                text-decoration: blink;
        }


    • by PDX ( 412820 )

      What about copying the Dewey decimal index, so that a link like this would access encyclopedias? You could copy the existing system used by librarians all over the continent.

      • Re: (Score:3, Insightful)

        by pyite ( 140350 )

        You could copy the existing system used by librarians all over the continent.

        Except that not all libraries use it. A lot (like the university I graduated from) use the Library of Congress Classification [].

        • Re: (Score:3, Informative)

          You mean a lot of libraries in the States use it. The rest of the world is quite happy with the Dewey decimal system.

        • Yup. Even here in Europe, the Dewey classification is regarded by librarians managing large libraries as outmoded. But still, you've got a point. Both the LOC and Europe's largest libraries support querying them through OPAC interfaces. Replacing the Dewey indexes in your idea with OPAC URLs would be pretty cool.
    • Dear AC, you already have an equivalent via scripting...
      setInterval(function () {
          var t = document.getElementById("mytag");
          if (t) {
     = ( === "visible") ? "hidden" : "visible";
          }
      }, 1500);

      -- Brendan Eich
    • Please bring back the BLINK tag.

      Why use the BLINK tag when we've got Flash instead?

    • by arielCo ( 995647 )
      Hey, that's <BLINK>NOT</BLINK> a great idea!
    • Please bring back the BLINK tag.

      Perhaps this time around it'll be an abbreviation for a "Berners link". (That ought to cause anyone's eyes to close and then open again.)

    • <span id="blink">I’m blinking! Yeah!</span>
      <script type="text/javascript">
      document.blink = function (state) {
          document.getElementById("blink").style.visibility = state;
          setTimeout(function () { document.blink(state == "hidden" ? "visible" : "hidden"); }, 500);
      };
      document.blink("hidden");
      </script>

      Or as a single data URL link:


  • Linked Data #1 (Score:3, Interesting)

    by OzPeter ( 195038 ) on Wednesday June 16, 2010 @05:30PM (#32595536)
    So how do they connect
    • by jedidiah ( 1196 )

      What happens when the URL is no longer valid?

      • by decipher_saint ( 72686 ) on Wednesday June 16, 2010 @05:37PM (#32595648)

        "Mad Libs", but with data!

        1. Create way to link data
        2. Link as much data together as possible
        3. ???
        4. Profit!

        All joking aside...

        I think this is a [HTTP404] idea, with tons of [HTTP404]! And makes me think of [POETIC IMAGE NUMBER 37 NOT FOUND]...

      • That won't happen; there's HTTP 3xx. Of course, if you move or discontinue something, you'll use those.

        (Now seriously, if something disappears and you can't fix it, then there's nothing else you can do, other than removing the link. On one hand this is sad, but on the other hand it's this interdependence that makes the web great.)
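A sketch of what honoring those 3xx codes might look like from a link-maintaining client's point of view; `resolveLink` is a hypothetical helper, not any real client's API:

```javascript
// Decide what to do with a stored link given an HTTP response.
// Redirect semantics per the HTTP spec; everything else is illustrative.
function resolveLink(storedUrl, status, location) {
  if (status === 301 || status === 308) {
    // Permanent redirect: rewrite the stored link to the new address.
    return { url: location, update: true };
  }
  if (status === 302 || status === 303 || status === 307) {
    // Temporary redirect: follow it this time, keep the stored link as-is.
    return { url: location, update: false };
  }
  if (status === 404 || status === 410) {
    // Not found / gone: nothing to follow; the link is dead.
    return { url: null, update: true };
  }
  return { url: storedUrl, update: false };
}

console.log(resolveLink("", 301,
                        ""));
// → { url: '', update: true }
```

A real client would also cap the number of redirects it chases, but the decision table above is the core of it.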

      • by linumax ( 910946 )

        Partly depends on how "well connected" the data is.

        For instance, in the case of Friend of a Friend [], if I am your one and only friend, and basically your gateway to the outside, then me going offline means you are not live anymore either. This, of course, is a rare case. Also, as with many other problems related to the Semantic Web and Linked Data, this is more of an engineering challenge than a fundamental flaw. There are different (proposed) strategies (some borrowed from the current web) that deal with all the potentia

    • Data Weaving - the MIT equivalent of Basket Weaving for credit.

    • Are you asking a question?
    • The problem is that we more or less already have that. It's a type of URI plus some sort of permalink. While the engineering aspect is presumably what the hubbub is about, the reality is that this isn't really any different from creating a link to a short snippet of XML for some bit of info. Similar to, I don't know, an HTML webpage for a single paragraph or less.
  • Linked data #2 (Score:3, Interesting)

    by OzPeter ( 195038 ) on Wednesday June 16, 2010 @05:31PM (#32595554)
    Chunks of data that are
  • Linked data #3 (Score:3, Interesting)

    by OzPeter ( 195038 ) on Wednesday June 16, 2010 @05:32PM (#32595572)
    apparently related?
  • Are HTML named anchors an example of data-naming? At least some browsers will render a resource around an anchor, if its name is given in the URL.

    Applied to the web (and with a way to join two pieces of data) this can lead to a HTML-supported bottom-up approach, with no need for "a special way to #include files". People could then create welcome.html-piece, toc.html-piece, blogpost.html-piece and say index.html is *.html-piece.

  • This sounds pretty much like deep linking, which per Wikipedia is:

    Deep linking, on the World Wide Web, is making a hyperlink that points to a specific page or image on a website, instead of that website's main or home page. Such links are called deep links.

    I remember hearing about a couple of lawsuits which were raised because of deep linking, and I don't see how this is any different.
    I can faintly hear the lawyers sharpening their tools right now......

    • Re: (Score:3, Interesting)

      by bsDaemon ( 87307 )

      It sounds like it's more related to this [] TED talk, rather than skipping over a "content provider's" "branding" to "steal" their "content". The model would likely require a more active sense of purpose towards participation and making the data available, rather than having stuff online and some random person linking to it without "permission".

        • Aahh... now that's much better. I couldn't find this inside the article, which was all about "Linked data is the idea of assigning Web addresses to individual chunks of information, rather than just to documents, so that these chunks can interlink and lend meaning to one another," which reads more like deep linking than open data.

        What is covered in the TED talk is far more agreeable, though it is more about generating collaborative data (like OpenStreetMap) than about *linking* data per se.
        What Tim seems to

      • by WNight ( 23683 )

        I know you aren't justifying that attitude or anything, but for the benefit of those who don't understand what's actually going on...

        You contact a webserver and ask it for data; often that data is a file, which it sends to you. That file is often (roughly) HTML and includes links to other files, often on the same webserver.

        Skipping or stripping out an element of a webpage is really just not downloading some of the extra files.

        "Rewriting" someone's webpage to strip the ads is equivalent to opening the b

        • I agree. And when someone started using a big image from my cheaply hosted forum as their signature for another forum, I just enabled image hot-linking protection (basically, a .htaccess rule saying "if referer isn't*, don't provide the content").

          There's no reason not to disable deep linking if you don't want people using it - criminalizing it is completely absurd.
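The referer check described above might look something like this in .htaccess; `` stands in for whatever domain the original rule actually used, and the extension list is illustrative:

```apache
RewriteEngine On
# Let empty referers through (direct visits, some privacy tools)
RewriteCond %{HTTP_REFERER} !^$
# Let requests from our own pages through ( is a placeholder)
RewriteCond %{HTTP_REFERER} !^https?://(www\.)? [NC]
# Refuse image requests from everywhere else with a 403
RewriteRule \.(gif|jpe?g|png)$ - [F,NC]
```

Note that the Referer header is trivially spoofed, so this deters casual hot-linking rather than determined scrapers.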

  • by Thud457 ( 234763 ) on Wednesday June 16, 2010 @05:45PM (#32595758) Homepage Journal
    What the hell?! Is this something I'd have to read TFA to understand?!!
  • One problem -- from a business perspective -- of linking data in a machine-understandable way is that it makes it much easier for third parties to use that data. At first that may seem like a good thing, but for many companies the data are the entire business. If a third party can quickly aggregate related data from many sites in a way that is more useful than the individual sites, those sources suffer. We're seeing this tension already with Google vs. publishers, where the data in question are news stor
    • Let's take hardware component manufacturers ... the good ones which don't hide all their real datasheets and models behind annoying representatives. Take Linear Technology: if LTspice could simply pull up-to-date models from their website, that would be more convenient than having to install them separately. A lot of databases would benefit from using DNS+Web instead of proprietary solutions.

    • by grcumb ( 781340 )

      One problem -- from a business perspective -- of linking data in a machine-understandable way is that it makes it much easier for third parties to use that data. At first that may seem like a good thing, but for many companies the data are the entire business.

      It's true that data linking is detrimental to some business models. That is a weakness in the business models, not in data linking. They're victim to the classic fallacy that data is worth more as a secret than when it's shared.

      The simple fact is that

  • Some of us who are old enough see so many "new" things that are repackaged "old" things that have been either forgotten about or simply overlooked. Methinks this is another example. The implementation details may be different, but this idea was first promulgated in *1960*! [] refers...
  • by Anonymous Coward

    There are inherent dangers in this level of linkage. One week,
    a person clicks link X: "We are at war with Eurasia";
    the next week, clicks the same link: "We were never at war with Eurasia".

    No one else sees this? Archiving all information on the internet is one thing, but singularly cataloging and tagging every piece of information so that it can be accessed so easily is... well, dangerous.
    While the irony of posting this on the internet is not lost on me, where all the collective information of mankind is at my finge

    • That one day is 1969-10-29, right?

      I mean, the danger was always there, it's more a feature than a design error.

      You can always trust an archive, or at least write a "fetch timestamp", when writing serious stuff, like Wikipedia articles. (Anyway, a URL bibliography item should always say when it was fetched.)

      I don't know if, on the other hand, by linking smaller portions of data, we aren't making it easier to find and track those kinds of changes.

      It is hard to read a hundred-paragraph document to track 3 or 4

    • by grcumb ( 781340 )

      There are inherent dangers in this level of linkage. One week,
      a person clicks link X: "We are at war with Eurasia";
      the next week, clicks the same link: "We were never at war with Eurasia".

      No one else sees this?

      Uh... yeah, but that's not inherent to data linking, that's inherent to digital information. Electronic data is mutable and therefore evanescent in nature. Period.

      The entire history of digital information storage is a dialectic between data's innate mutability and the need for enduring records. Data linking is a (single) step toward the latter end of the continuum.

  • by slasho81 ( 455509 ) on Wednesday June 16, 2010 @06:26PM (#32596274)
    The concept of linking huge amounts of publicly accessible data is obviously worthwhile. The problem with the Linked Data movement is the current implementation. It is a total mess. The insistent attempts to pre-standardize open data have created a horrible bureaucratic monster: RDF, RDFS, RDFa, N3, RIF, SWRL, OWL, SPARQL, FOAF, SIOC, and a few others I forgot, on top of XML. Every time you encounter a field with so many acronyms, you know something horribly wrong is being developed. The consultants and enterprise "experts" will have a field day with this.
    • by blair1q ( 305137 )

      I disagree. Assigning a URI to every piece of data is a duplication of data and a waste of bandwidth, both of the network and of the people designing the linking system.

      As Google has proved, the web is content-addressable, and URIs are relegated to being routing tokens. They're exposed to the user, but they really don't need to be.

      But consider what happens if we cause the routing also to be performed on a content-addressable basis... No more URIs. Your request for the information is routed based on request

    • Re: (Score:2, Funny)

      His engineering approach is built upon REST. The principal issue is that of indexing all this linked data so that 1) the schema is stable, 2) it is locatable, 3) it is useful, and 4) ACLs are managed. I am sure there are a host of potential issues.
      • by slasho81 ( 455509 ) on Wednesday June 16, 2010 @08:48PM (#32597420)
        Ha! REST and ACL. I knew I forgot some important acronyms there. ;-)

        Joking aside, having a stable schema is important, but not letting the "ecosystem" decide what's best on its own is damaging because it leads to over-engineering (or "enterprising") by a minority which most often does not know what's best for the majority. It also leads to numerous unusable bureaucratic documents which take the wind out of anyone who's trying to do anything useful. I'm talking from personal experience here.

        Know the saying "premature optimization is the root of all evil"? Well, premature standardization is a special case of that.
    • Every time you encounter a field with so many acronyms you know something horribly wrong is being developed.

      I suggest you never get involved with the US military. More TLAs than you can shake a stick at.

  • I wonder if they'll make the links rel="nofollow" :)
  • should thank the gods for his good fortune and not hog the stage.

  • Changes have already been made within OpenSim and Second Life to fetch assets not by the file protocol and UDP, but by HTTP. OpenSim (the free Second Life equivalent) can even load entire region data via an HTTP URL (implemented within the last month).

    For those that don't know, 1/3 of Linden Lab has been made redundant, and some bloggers suspect this is in response to the sudden realisation that it's all about HTTP and not about proprietary viewers any more.

    In the day I used to load TGAs via C. It's interesting noting the so
  • Grumble. (Score:5, Interesting)

    by goodmanj ( 234846 ) on Wednesday June 16, 2010 @10:04PM (#32597916)

    As a college professor, I believe that the primary goal of a class should not be to advance your personal agenda. Feel free to share your opinions with your students, but your primary purpose is to inform and inspire, not to brainwash.

    I'm clearly in the minority on this one.

    • And you are also a minority among Slashdot posters.

    • Well, if that's an optional course, I don't see how that's wrong.

    • "Advancing your personal agenda" is something that all teachers should aspire to. Why should I teach a subject if I don't believe the world will benefit? Because of a paycheck? Fuck no, I ain't no sell-out. The problem is, most teachers are. And they brainwash just as much, if not more (because it's more subconscious and systemic).

      And I'm sure Tim Berners Lee will field complaints and encourage formal discussions with any students questioning the subject matter, something that sell-out brainwashin

  • There are a lot of companies and organizations trying to champion linked data, but linked data is nothing if those same companies and organizations don't adopt standards and push them ubiquitously. That was the motivation behind []. It's a semantic data set of interrelated semantic concepts from various sources, but with a pretty impressive line of companies backing it up and implementing it.
