
UCLA, Cisco & More Launch Consortium To Replace TCP/IP

alphadogg writes: Big-name academic and vendor organizations have unveiled a consortium that's pushing Named Data Networking (NDN), an emerging Internet architecture designed to better accommodate data and application access in an increasingly mobile world. The Named Data Networking Consortium's members, which include universities such as UCLA and China's Tsinghua University as well as vendors such as Cisco and VeriSign, are meeting this week at a two-day workshop at UCLA to discuss NDN's promise for scientific research. Big data, eHealth and climate research are among the application areas on the table. The NDN effort has been backed in large part by the National Science Foundation, which has put more than $13.5 million into it since 2010.
  • Huh (Score:1, Interesting)

    by koan ( 80826 ) on Thursday September 04, 2014 @07:17PM (#47830845)

    No mention of the NSA or GCHQ; one wonders what their contribution will be to a system that tracks you worldwide.

  • by Anonymous Coward on Thursday September 04, 2014 @07:36PM (#47830975)

    There is a talk on YouTube from 2006 by Van Jacobson that describes this idea before it was called Named Data Networking. It is really neat, and I am surprised it has taken so long for somebody to actually try to implement it.

    http://www.youtube.com/watch?v=oCZMoY3q2uM

  • I get what you're saying, but I don't get how NDN is supposed to replace TCP/IP. Sure, it replaces many things done with UDP, and it can even do some things better than TCP, but it's not going to replace IPvX any time soon, just as TCP, UDP, ICMP, etc. can happily coexist.

    What I find interesting is that there's been an implementation of NDN/IP for YEARS -- it's called Freenet [freenetproject.org]. Something tells me the sponsoring groups wouldn't want this particular implementation to be the first thing to try out their new network layer, however....

  • by uCallHimDrJ0NES ( 2546640 ) on Thursday September 04, 2014 @07:46PM (#47831031)

    I don't think we're going to stop the progression you are describing. The method by which it is achieved may not be the one being discussed by UCLA and Cisco, but it's clear now that what Slashdotters call "the Internet" is doomed, and has been since all of those rebellions in northern Africa and the Mideast a couple of years ago. What most end users call "the Internet" is just getting started, but the application of it is certainly as a control and monitoring system against dissent rather than a catalyst promoting freedom of information. The point where we have some hope of rallying the population to activism is the point where content providers and governments try to do things like completely disallow offline storage media. But not before then, because the population just plain doesn't understand what they have or what is at stake.

  • by Anonymous Coward on Thursday September 04, 2014 @08:26PM (#47831267)

    The Internet is NOT all "give me data named thus." For example, this "NDN" doesn't seem to support logging in to a particular computer, you know, so that you can administer it. It doesn't seem to support sending a file to a particular printer. Maybe it might make an interesting overlay on IP, replacing existing content-distribution techniques like Akamai, but I'm not seeing it replace IP.
          -- david newall

  • by EmagGeek ( 574360 ) on Thursday September 04, 2014 @09:01PM (#47831475) Journal

    In a nutshell, this is applying DRM to all of your connection attempts. You will only be able to make connections that are "authorized" by TPTB.

    No more free and open networking.

  • Magnet Links (Score:4, Interesting)

    by Anonymous Coward on Thursday September 04, 2014 @09:11PM (#47831539)

    Since every single goddamned one of you has used magnet links, you should be comfortable with the idea of requesting objects rather than holding conversations with particular hosts. Taking this idea and running with it is NDN. It's an excellent network research subject.

    It facilitates caching and multipathing... and with some more work, perhaps network coding to get close to the min-cut bound. BitTorrent is super successful because it's all about the content. Let's give a similar protocol a chance at changing the net.
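
    To make the magnet-link analogy concrete, here is a minimal Python sketch (illustrative only; the hash-as-name scheme and the peers list below are stand-ins, not NDN's actual wire format): an object is requested by the hash of its bytes, so any cache or peer can serve it and the requester can still verify what it got.

        import hashlib

        def content_id(data: bytes) -> str:
            # Name content by its SHA-256 digest, like the btih in a magnet link.
            return hashlib.sha256(data).hexdigest()

        def fetch_by_name(name: str, peers: list) -> bytes | None:
            # Ask each reachable cache/peer for the named object and verify it,
            # so it does not matter which host actually served the bytes.
            for store in peers:
                blob = store.get(name)
                if blob is not None and content_id(blob) == name:
                    return blob
            return None

        payload = b"some piece of content"
        name = content_id(payload)
        peers = [{}, {name: payload}]  # only the second peer holds a copy
        assert fetch_by_name(name, peers) == payload

    The point of the verification step is that trust moves from the host to the object itself, which is what makes transparent caching and multipathing safe.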

  • This looks terrible. (Score:5, Interesting)

    by SuricouRaven ( 1897204 ) on Friday September 05, 2014 @03:09AM (#47832613)

    It looks like they started out with Content Addressable Networking, which is a great idea. Massive bandwidth savings, improved resilience, faster performance, power savings, everything you could want. But then, rather than try to implement CAN properly alongside conventional networking, they went for some ridiculous micro-caching thing, over-complicated intermediate nodes that enforce usage rules, some form of insane public-key versioning system validated by intermediate nodes, and generally ended up with a monstrosity.

    CAN is a great idea. NDN is a terrible implementation of CAN. The main selling points include having DRM capability built into the network itself, so if you try to download something not authorised for your country, the ISP router can detect and block it. A simple distributed cache would achieve the same benefits with a much simpler design.

    There's the core of a great idea in there, buried deep in a heap of over-engineered complexity that appears designed not to bring benefits to performance, but rather to allow ISPs to readily decide exactly what content they wish to allow to be distributed, and by whom. This thing is designed to allow the network devices to transcode video in real time to a lower bitrate - putting that kind of intelligence in the network is insane!

  • Re: Not a chance (Score:4, Interesting)

    by bluefoxlucid ( 723572 ) on Friday September 05, 2014 @10:35AM (#47834363) Homepage Journal

    NDN looks like a scheme to tag data and change networks from "addressing a particular node" to "addressing data". This is like changing the Post Office such that a person addresses a particular letter sent to them, rather than having a house number where letters get delivered.

    Computer addresses with DNS on top make sense: it's easy to subdivide and route, and name translation allows humans to interact with it. NDN looks like it's trying to make the names the addresses, and make the URIs the names, and make the routers act as caches, and hope it all works; but then how do I address a *computer*? How do I ask for anything other than HTTP?
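
    A rough sketch of what "making the names the addresses" means at the forwarding layer (the names and face labels below are invented for illustration; this is not the NDN codebase): the routing table is keyed by hierarchical name prefixes and answered by longest-prefix match, which is exactly why there's no obvious way to address a particular *computer*.

        # Hypothetical name-based forwarding table: prefixes of hierarchical
        # names map to outgoing "faces", instead of IP prefixes to interfaces.
        FIB = {
            ("edu", "ucla"): "face0",
            ("edu", "ucla", "cs"): "face1",
            ("com", "microsoft"): "face2",
        }

        def lookup(name: str):
            # Longest-prefix match on /a/b/c-style name components.
            parts = tuple(name.strip("/").split("/"))
            for i in range(len(parts), 0, -1):
                if parts[:i] in FIB:
                    return FIB[parts[:i]]
            return None  # no route: nobody announced this name prefix

        print(lookup("/edu/ucla/cs/video/lecture1"))  # face1 (most specific wins)
        print(lookup("/edu/ucla/ee/roster"))          # face0
        print(lookup("/org/example/page"))            # None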

    NDN looks like p2pwww stuff I designed back in 2004, except it tries to implement this as a network protocol on the routers rather than as an application protocol on the nodes. Even then, I specified digital signatures, encryption, and network namespace isolation: you could have an ICANNWeb which signed certificates for each name (e.g. Microsoft) and, on ICANNWeb, you would put out a message (P2P) for Microsoft://www/windowsxp/support.aspx and get back responses of (have|know|home)--node has a copy recent as per [date], node knows who has a copy recent as per [date], node knows the home is [address]--and select from there. Each resource would be digitally signed with a generation date stamp and an expiration date stamp, and a new generation date stamp overrides an earlier expiration date stamp.

    In short: you'd get on a Gnutella-like network, perform a search, and be told where the resource is. The data was such that you could identify newer, identical, and expired resources. Your node could say "0-3 hops", then "4-6 hops", incrementally crawling the network; or "3 hops past first response, limit 10". Usually, if a node knows another node has a copy, that other node also knows several (it got its copy somehow--by its own request). If a node locates nodes with multiple versions, it provides the outdated nodes with provable evidence that they're outdated, so they can drop their caches and learn that some other node has a more up-to-date copy. Likewise, when those nodes are queried, they will re-query the nodes they know have copies, and update them; an update by itself doesn't trigger this cycle--too much traffic.

    That's application-level. A locatable, self-caching network which encapsulates all resources in digital signatures and allows for namespaces. It sounds like that's what they're trying to accomplish, but in the transport layer.
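
    That freshness rule ("a new generation date stamp overrides an earlier expiration date stamp") is easy to pin down in a few lines. This is a sketch of the rule as described above, not anything from the NDN spec; Resource and fresher are invented names, and real records would carry actual signatures.

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class Resource:
            name: str
            generated: datetime  # when the publisher produced this version
            expires: datetime    # when this copy should be considered stale
            body: bytes          # would be digitally signed in the real design

        def fresher(a: Resource, b: Resource) -> Resource:
            # Newer generation stamp wins, even over a copy that expires later.
            return a if a.generated >= b.generated else b

        old = Resource("Microsoft://www/windowsxp/support.aspx",
                       datetime(2004, 1, 1), datetime(2005, 1, 1), b"v1")
        new = Resource("Microsoft://www/windowsxp/support.aspx",
                       datetime(2004, 6, 1), datetime(2004, 9, 1), b"v2")
        assert fresher(old, new).body == b"v2"  # v2 wins despite expiring sooner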
