
UCLA, Cisco & More Launch Consortium To Replace TCP/IP

alphadogg writes Big name academic and vendor organizations have unveiled a consortium this week that's pushing Named Data Networking (NDN), an emerging Internet architecture designed to better accommodate data and application access in an increasingly mobile world. The Named Data Networking Consortium members, which include universities such as UCLA and China's Tsinghua University as well as vendors such as Cisco and VeriSign, are meeting this week at a two-day workshop at UCLA to discuss NDN's promise for scientific research. Big data, eHealth and climate research are among the application areas on the table. The NDN effort has been backed in large part by the National Science Foundation, which has put more than $13.5 million into it since 2010.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday September 04, 2014 @07:09PM (#47830785)

    Just don't expect anyone to early-adopt except the usual hypebots and yahoos. We can't even get rid of IPv4 and you want to replace TCP entirely.

    • Re: (Score:3, Insightful)

      Yeah. And replace UNIX, too. You know? Like Plan 9 and Windows NT.

      I ain't holdin' my breath.

    • by Enry ( 630 ) <enry@@@wayga...net> on Thursday September 04, 2014 @07:53PM (#47831079) Journal

      This. There's likely trillions of dollars invested in IPv4 that is going to be around for decades. Consider the Internet like highways and train track widths - we're stuck with it for a very long time.

      • by Bengie ( 1121981 )
        Most IPv4 hardware can't handle modern Internet speeds, which are increasing 50% every year. Some newer tech is improving closer to 3x per year. You'll get left in the dust sticking with IPv4-only infrastructure hardware for big networks.
        • Umm, the "Internet of things" doesn't NEED "modern Internet speeds". Does your fridge or your sprinkler system or whatever need high speed? No, it just "needs" (for people who want that functionality), some kind of comparatively dirt slow communication path.

          That's not an argument FOR IPv4 directly, just that your "modern Internet speeds" argument doesn't necessarily justify throwing away decades' worth of hardware that is providing people functionality.

          • by santax ( 1541065 )
            These 'things' add up. I have no need for an espresso machine that is internet-connected, but I'm sure some marketing boy can sell it to my significant other. And I'm sure it will use most of its packets to send data back to the marketing boy.
            • by TWX ( 665546 )
              Sounds to me like you need to revise your access lists and either block it outright, or if it needs that connection to run, QoS it down to where it's not a problem.
          • It may not need a lot of bandwidth, but I wonder what kind of data traffic one might expect of it. For measurements and data collection, for example, you may not want to transfer more than a few bytes from a single node every few seconds, but it means sending a packet every few seconds. Suddenly your data is like 10% of all the stuff you're actually transferring. And all the packets have to be routed and processed, even if they are small.
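The back-of-envelope point about tiny payloads is easy to check against real header sizes. A rough sketch (the 8-byte payload is an invented example; the header sizes are the UDP and minimum IPv4 header lengths from the protocol specs, and link-layer framing would add more on top):

```python
# Rough overhead estimate for a sensor sending a tiny reading over UDP/IPv4.

PAYLOAD = 8          # bytes of actual measurement data (invented example)
UDP_HEADER = 8       # bytes, fixed UDP header size
IPV4_HEADER = 20     # bytes, minimum IPv4 header (no options)

total = PAYLOAD + UDP_HEADER + IPV4_HEADER
useful_fraction = PAYLOAD / total

print(f"{total} bytes on the wire, {useful_fraction:.0%} of it payload")
# -> 36 bytes on the wire, 22% of it payload
```

With Ethernet framing on top, the useful fraction drops further, which is roughly where the "your data is like 10%" figure comes from.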
          • Does your fridge or your sprinkler system or whatever need high speed?

            Neither my fridge nor my sprinkler system - especially my sprinkler system - needs any kind of connectivity whatsoever except to spy on me and bombard me with ads wherever I go, both of which do require high speed.

            • That's why I specifically said "for people who want that functionality".

              I can see wanting your sprinkler system online -- to change it from your couch.. or heck, even from somewhere else (not everyone has automatic rain sensors).

              The common "fridge keeps track of what you have in it" idea would be great if it ALSO coordinated with the local grocery store ads that week..

              • by TWX ( 665546 )
                For someone that wants this kind of interoperability, they're going to be a lot better off having all of the various devices report to a centralized system, then letting that centralized system send notifications to the various clients like a cell phone or a computer. Also, given that the vast majority of the time the systems would be either idle or within expected parameters, there wouldn't be much of a need for excessive monitoring other than to verify keepalive. Only if the user wants explicit logging
        • by TheGratefulNet ( 143330 ) on Thursday September 04, 2014 @10:52PM (#47831933)

          citation needed.

          I disagree strongly that 'ipv4 hardware' (huh? what IS that, btw? does this imply that ipv6 is not in 'hardware'? how strange to describe things) is not up to modern network speeds. if anything, they can outrun any intermediate link in the chain from you to some random website. wan is still the slow part and always will be; but unless you truly get 1gig speeds to your door, your hardware will be more than enough for anything wan-based.

          I truly have no idea where you got this info from, but you are as wrong as could be.

          • 'ipv4 hardware' (huh? what IS that, btw? does this imply that ipv6 is not in 'hardware'? how strange to describe things)

            Not sure what he was on about but, yeah, IPv4 is always in ASIC on big gear and part of the slow IPv6 adoption curve is that there is a lot of big expensive gear deployed with IPv4 in ASIC and IPv6 is only done on the anemic CPU.

            We're probably 2 of 5 years into the required replacement cycle, but it is significant. One of the wrinkles with the recent Cisco "Internet is too big" bug was th

            • by mark-t ( 151149 )
              The reason for the slow ipv6 adoption is that the ISPs don't want to support it because everything that anyone needs to access can be accessed by ipv4, and the endpoints don't want to switch to it because they would lose out on all of the ipv4-only connections, so either side sees ipv6 as a superfluous expense that offers zero gain for the foreseeable future until such time as we are *literally* out of ip addresses, and the problem has scaled to such an extent that even NAT will not solve it. Then they'll
      • by Jeremi ( 14640 )

        This. There's likely trillions of dollars invested in IPv4 that is going to be around for decades. Consider the Internet like highways and train track widths - we're stuck with it for a very long time.

        I'm probably missing the point, but isn't NDN just a way to do content-addressable lookup of data? And if so, why would we need to throw out IPv4 in order to use it? We already have lots of examples of that running over IPv4 (e.g. BitTorrent, or Akamai, or even Google-searches if you squint).

        • by TWX ( 665546 )
          I expect that the point of an entirely new transmission protocol would be to get rid of all of the vulnerabilities in the current one, rather than having to try to work around them and possibly miss something.

          It's not like TCP/IP is the only protocol to have existed, there have been several that people have heard of and quite a few that most people don't know about. Even the OSI model itself was originally intended to be an implementation, rather than an abstraction, but ARPANET was so successful and r
        • When America introduced the Susan B Anthony dollar, it didn't fail because it was bad. It failed because the mint didn't remove the paper dollar from circulation combined with the fact that people in general don't like change. Canada introduced a dollar coin and removed the paper dollar from circulation, denying people the choice. The dollar coin has been successfully in circulation for at least 25 years. If you want to get people to adopt a new standard, don't give them the option to use the old one.
      • Comment removed (Score:4, Insightful)

        by account_deleted ( 4530225 ) on Thursday September 04, 2014 @09:56PM (#47831729)
        Comment removed based on user account deletion
        • by TWX ( 665546 )
          Is it wrong that I don't want my home devices to be reachable from the outside unsolicited?
          • by mark-t ( 151149 ) <markt.nerdflat@com> on Friday September 05, 2014 @02:13AM (#47832467) Journal

            You can do that with ipv6 anyways.. and without even bothering with NAT. home devices can be assigned addresses in a local range, and will not be accessible from outside any more than if they were NATted, since IP's in such ranges are explicitly designed by the protocol spec to not be routable. As long as your cable modem adheres to the spec, there is no danger of accessing it from the outside any more than if it were behind a NAT.

            Of course, in practice, I expect some kind of NAT solution will be in fairly wide use even in IPv6 anyways. There will be no lack of use cases where you do not want your device to have a globally visible IP, but you still want it to make requests of services in the outside world, using a local proxy to route the responses to those requests back to your local IP, much like NAT currently operates. This can also be solved by using a global IP and configuring a firewall to block inbound traffic to that IP unless it is in response to a specific request by that device, but this is generally less convenient to configure properly than a NAT-like arrangement.

            Notwithstanding, at least with IPv6, the number of IPs is large enough that every device that anyone might ever want to have its own IP actually can... instead of only satisfying about 70 or 80% of users, like ipv4 does.

          • by heypete ( 60671 )

            Is it wrong that I don't want my home devices to be reachable from the outside unsolicited?

            Use a stateful firewall? NAT is not a firewall.

            Just because something has a globally unique IP address doesn't mean that it's globally reachable.
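The distinction heypete draws (connection tracking, not address rewriting, is what blocks unsolicited inbound traffic) can be shown with a toy model. This is a deliberately simplified sketch; real stateful firewalls track ports, protocols, and timeouts, and all addresses below are invented documentation addresses:

```python
# Toy stateful firewall: inbound packets are allowed only if they belong to
# a connection an inside host initiated first. This is the property people
# usually credit to NAT, provided here without any address rewriting.

class StatefulFirewall:
    def __init__(self):
        self.connections = set()  # (inside_addr, outside_addr) pairs

    def outbound(self, inside, outside):
        """An inside host opens a connection; remember it."""
        self.connections.add((inside, outside))

    def allow_inbound(self, outside, inside):
        """Permit inbound traffic only for tracked connections."""
        return (inside, outside) in self.connections

fw = StatefulFirewall()
fw.outbound("2001:db8::10", "203.0.113.5")

print(fw.allow_inbound("203.0.113.5", "2001:db8::10"))   # True: reply traffic
print(fw.allow_inbound("198.51.100.9", "2001:db8::10"))  # False: unsolicited
```

The globally unique address never leaves the picture; reachability is decided purely by the tracked state.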

            • NAT is much simpler to use than setting up a firewall. And why would I want my personal network to use public IP addresses anyway?

              For SOHO environments NAT is the perfect tool.

              • NAT is NOT a firewall. Meaning that you haven't hidden anything and you are not secure. Also, NAT is a huge reason why IPsec doesn't work. It breaks the internet.
      • Pretty much. As with Linux on the Desktop, incumbency is vastly underestimated, and the horse has already bolted with TCP/IP. Driving on the Right/Left might not be optimal, but it's incumbent and too entrenched to change.
    • by binarylarry ( 1338699 ) on Thursday September 04, 2014 @08:19PM (#47831239)

      You know some kind of ill-conceived "content protection" is going to be built into this protocol.

    • Not being able to get rid of IPv4 might be a very good reason to replace TCP/IP entirely. How much traction do you *really* think IPV6 is going to get? My answer to that is something along the lines of "just enough until a better solution comes around."

    • Yeah, good luck with replacing TCP/IP. This is just a caching system.
  • Not a chance (Score:2, Insightful)

    by gweihir ( 88907 )

    Despite a few decades of research, TCP/IP is still the best thing we know for the task at hand. Yes, it is admittedly not really good at it, but all known alternatives are worse. This is more likely some kind of publicity stunt or serves some entirely different purpose.

    • Re:Not a chance (Score:4, Insightful)

      by thegarbz ( 1787294 ) on Thursday September 04, 2014 @07:35PM (#47830969)

      Despite decades of research the horse and cart are still the best thing we know for the task at hand. Yes, it's admittedly not really good, but all the known alternatives are worse. This is more likely some kind of publicity stunt or serves some entirely different purpose.

      Your statement as shown can be applied to the internal combustion engine, or any other technology. Rejecting any change out of hand without consideration is incredibly sad, if not dangerous to our species' future prospects. Yes, it's important to take everything with a grain of salt, but everything should be at least considered. It only takes one successful change to have a dramatic impact and improve the lives of many.

      This goes for all technology, not just this specific problem.

      • Just like the steam powered car. Those were so totally an awesome idea.
        • I never said you had to accept ideas, just consider them.

          • by gweihir ( 88907 )

            All these ideas have been considered and continue to be considered. What do you think scientific publishing is? A joke? There is NOTHING THERE at this time. No candidate. New protocols are considered good if they are not too much worse than TCP/IP in general applications. Truth be told, most serious researchers have left that field though, as there is nothing to be gained and everything obvious (after a few years of research) has been discounted.

            Really, stop talking trash. You have no clue about the st

            • All these ideas have been considered

              Ahh I see now. There's no such thing as a new idea? Even if the old system has problems? Everything that can ever be invented has been invented.

              I have to be honest, I didn't read past that first sentence. I can only imagine the rest of your post follows this completely retarded proposition.

              • by gweihir ( 88907 )

                You probably also believe that they will eventually discover the philosopher's stone, as they may just not have considered the right idea so far.

                Really, this is science. There are boundary conditions for what is possible and there are no fundamental breakthroughs out of the blue. But there is another good word for people like you: "sucker".

                • Oh my god this made me laugh.

                  A sucker is someone who believes something without evidence. What I am not is someone who poo-poos an idea because I believe we've already figured out the best way of doing something. Trust me, we haven't, and we never will. If we had time machines I would suggest going and talking to people with kerosene lamps and telling them that one day they will be able to light their houses through this magical (they will think it is) thing called electricity.

                  Will we find the philosopher's stone? N

      • by gweihir ( 88907 )

        So you would call my following the research in that area for 25 years now "without consideration"? That is pretty dumb. For the SPECIFIC PROBLEM at hand, there is currently no better solution, despite constant research effort for a few decades. That is why it will not be replaced anytime soon.

        I really hate mindless "progress fanatics" like you. No clue at all, insulting attitude and zero to contribute. Moron.

        • That depends, did you actually say you follow the research in the area for 25 years? Did you also look at the proposal in detail and make an assessment? Nope? Didn't think so!

          Dammit Jim I'm a progress fanatic not a mind reader.

          By the way, the definition of progress is "development towards an improved or more advanced condition."
          Based on this I personally think that everyone should be a progress fanatic, and it will be sad when all the researchers turn into middle managers and naysayers and the world will sto

      • Your statement as shown can be applied to the internal combustion engine, or any other technology. Rejecting any change out of hand without consideration is incredibly sad

        There are only so many hours in a day... ignoring/rejecting silliness out of ignorance is often a practical necessity.

        Yes it's important to take everything with a grain of salt, but everything should be at least considered.

        "Everything" ...sort of...includes magic unicorns and assorted demon things observed while trip-pin' on mushr00ms...

        See also trusted Internets, motor/generator free energy machines and application of ternary logic to prevent IPv4 exhaustion.

        It only takes one successful change to have a dramatic impact and improve the lives of many.

        Well paying out that $25k to play is sure to improve the life of someone.

        • That's the wonderful thing about our world. Not everyone needs to be an expert in everything. But if you proclaim to be then ignoring/rejecting silliness out of ignorance....

          Hang on this doesn't compute. If you're ignorant how do you know it's silly again?

          I'm not saying everyone needs to check everything about everything. Just that the experts consider the solution.

          On the other hand the parent is rejecting new ideas out of hand because it would be changing TCP/IP. That's not examining if a solution is silly

      • The main difference that I can see for this technology is that the routing takes place based on URL, instead of based on IP address, like it is today.

        It's hard for me to see this as a significant improvement. It might make caching somewhat easier, I guess, by pushing the caching mechanism down to the routing layer.

        How else is this an improvement? It seems like every problem they are trying to solve has been solved, and more elegantly, as long as you can see the beauty in the multi-layer stack. If you
        • Oh I agree it's probably not much of an improvement with technical merits. I was merely calling out the parent's attitude which appears to be that we should abandon all efforts to improve TCP/IP because we haven't had any luck in the past decade.

          That's not how science works.

          As for technical merits I don't think this standard has much that would warrant the incredible expense of implementing it.

  • by Eravnrekaree ( 467752 ) on Thursday September 04, 2014 @07:14PM (#47830823)

    This is basically designed to bring the old big media, broadcast ways to the internet. Hence, to basically destroy the Internet, allowing for mass reproduction of centrally created Corporate content, where independent voices are locked out. The protocol is designed for that: mass distribution of corporate-created, centrally distributed content to ignorant, consumption-only masses who are treated with disdain and as objects of manipulation by the elite. This is to bring back big media and the stranglehold they had for so many years on the information the public has access to.

    With the IPv6 transition needed, it's time to focus on that rather than on this plan to destroy the internet and turn it into the digital equivalent of 100 channels of centrally produced, elite-controlled, one-way cable television programming designed to psychologically manipulate and control a feeble and dim-witted public.

    No thanks, and get your #%#% hands off my internet.

    • by Taco Cowboy ( 5327 ) on Thursday September 04, 2014 @07:22PM (#47830885) Journal

      I was puzzled by the involvement of Tsinghua University of China in this thing

      After reading your comment it starts to make sense

      The Chinese Communist Party needs to regain control of the Internet (at least inside China), which explains why they endorse this new scheme so much

    • by Melkman ( 82959 ) on Thursday September 04, 2014 @07:26PM (#47830909)
      Luckily I don't see this attempt to turn the internet into TV taking off. They really seem to see it as an alternative to IP instead of a service running on top of it like the web. IPv6 is a really small change compared to it, and look at the snail's pace with which that is being rolled out.
    • I get what you're saying, but I don't get how NDN is supposed to replace TCP/IP. Sure, it replaces many things done with UDP, and it even can do some things better than TCP, but it's not going to be replacing IPvX any time soon, just as TCP and UDP and ICMP etc. can happily co-exist.

      What I find interesting is that there's been an implementation of NDN/IP for YEARS -- it's called Freenet [freenetproject.org]. Something tells me that the sponsoring groups wouldn't like to see this particular implementation be the first thing to try out their new network layer however....

      • and it even can do some things better than TCP

        Like what? I've been trying to figure that out, I can't see anything.

    • by uCallHimDrJ0NES ( 2546640 ) on Thursday September 04, 2014 @07:46PM (#47831031)

      I don't think we're going to stop the progression you are describing. The method by which it is achieved may not be the one being discussed by UCLA and Cisco, but it's clear now that what slashdotters call "the Internet" is doomed and has been since all of those rebellions in northern africa/mideast a couple years ago. What most end-users call "the Internet" is just getting started, but certainly the application of it is as a control and monitoring system against dissent rather than a catalyst promoting freedom of information. The point where we have some hope of rallying the population to activism is the point where content providers and governments try to do things like completely disallow offline storage media. But not before then, because the population just plain doesn't understand what they have or what is at stake.

  • Different layers (Score:5, Insightful)

    by Anonymous Coward on Thursday September 04, 2014 @07:18PM (#47830855)

    They are also funding a study to replace roads with run-flat tires. Oh, right, different layers.

  • by Penguinshit ( 591885 ) on Thursday September 04, 2014 @07:29PM (#47830929) Homepage Journal
    Unfortunately, as we learned from the debacle of cellular communications, corporate inertia will either squash this or slow its gestation until it's stillborn. There is a substantial investment in the current technology of TCP/IP and it still works "just good enough". This change in network would require installation of a twin network alongside the current one, with slow adoption on the consumer side. That would be very expensive to build and maintain over numerous financial quarters, and thus no MBA-centric company would ever do it in current corporate culture. This takes long-term thinking in a quarter-to-quarter environment. Thus it won't happen for a very long time.
  • by Anonymous Coward on Thursday September 04, 2014 @07:36PM (#47830975)

    There is a talk on youtube from 2006 by Van Jacobson that describes this idea before it was called named data networking. It is really neat, and I am surprised that it has taken so long for somebody to actually try to implement it.

    http://www.youtube.com/watch?v=oCZMoY3q2uM

    • Can we please make sure that this talk is well mirrored and universally known? We don't want any patents to be put on this technology to make a few people filthy rich and the rest pay through the nose if this ever succeeds.
  • A bunch of broke folks saddled with student loans are looking to replace UCLA and Cisco; but they didn't bother to announce it.

  • From the architecture page [named-data.net]:

    Note that neither Interest nor Data packets carry any host or interface addresses (such as IP addresses); Interest packets are routed towards data producers based on the names carried in the Interest packets, and Data packets are returned based on the state information set up by the Interests at each router hop

    Great, NAT-like state in every router...
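The hop-by-hop state the quote describes is NDN's Pending Interest Table: a router remembers which face each Interest arrived on, and a Data packet retraces that trail instead of carrying a destination address. A toy sketch of the idea (a simplification for illustration, not the real NDN forwarding code; the content name and face numbers are invented):

```python
# Minimal sketch of NDN-style stateful forwarding with a Pending
# Interest Table (PIT). Interests leave state behind; Data consumes it.

class NdnRouter:
    def __init__(self):
        self.pit = {}  # content name -> set of faces waiting for that Data

    def on_interest(self, name, in_face):
        already_pending = name in self.pit
        self.pit.setdefault(name, set()).add(in_face)
        return already_pending  # True: aggregated, no need to forward again

    def on_data(self, name):
        # Return the faces to send the Data out of; the entry is consumed.
        return self.pit.pop(name, set())

router = NdnRouter()
router.on_interest("/ucla/videos/demo.mp4/seg0", in_face=1)
router.on_interest("/ucla/videos/demo.mp4/seg0", in_face=2)  # aggregated

print(sorted(router.on_data("/ucla/videos/demo.mp4/seg0")))  # [1, 2]
print(router.pit)  # {} -- unlike a NAT table, the state is one-shot
```

Note the difference from NAT the comment alludes to: the state is per-outstanding-request and is consumed as soon as the Data flows back, but it does sit in every router on the path.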

  • by PPH ( 736903 ) on Thursday September 04, 2014 @08:07PM (#47831163)

    First, IPv6. If you can handle simple things like that, then we'll let you play with the important stuff.

    Oh yeah. Flying cars too.

  • by Anonymous Coward

    All the internet is NOT "give me data named thus." For example, this "NDN" doesn't seem to support logging in to a particular computer, you know, so that you can administer it. It doesn't seem to support sending a file to a particular printer. Maybe it might make an interesting overlay on IP, replacing existing content distribution techniques, like Akamai, but I'm not seeing it replace IP.
          -- david newall

    • For example, this "NDN" doesn't seem to support logging in to a particular computer, you know, so that you can administer it. It doesn't seem to support sending a file to a particular printer.

      How about, giving your printer a particular name, and giving your computer a particular name? I'm pretty sure they've thought about that particular problem.

  • by DarkDaimon ( 966409 ) on Thursday September 04, 2014 @08:27PM (#47831275)
    I'm glad they are starting this now so hopefully by the time we run out of IPv6 addresses, we'll be ready!
  • We can't even get TCP/IP v6 off the ground, and they want to try this?

  • by Anonymous Coward on Thursday September 04, 2014 @08:52PM (#47831417)

    How is this going to harm the everyday Internet user? I imagine at the very least it will make it more difficult for two random internet users to connect to each other, because all connections will probably have to be approved by Verisign or some other shit like that.

    Remember folks, the age of innovation is over. We are now in the age of control and oppression. Everything "new" is invented for one purpose and only one purpose - to control you more effectively.

  • by sirwired ( 27582 ) on Thursday September 04, 2014 @09:00PM (#47831461)

    I could totally see the two networks running simultaneously. It's completely accurate that TCP/IP sucks for mass content delivery; it's a gigantic waste of bandwidth. And for point-to-point interaction this protocol would be massively inefficient.

    But why can the two protocols not run on top of the same Layer 2 infrastructure?

    • But why can the two protocols not run on top of the same Layer 2 infrastructure?

      Because once they do get it rolled out, only "terrorists" (properly pronounced 'tarrists') will be using IPv4 or IPv6.

    • I could totally see the two networks running simultaneously. It's completely accurate that TCP/IP sucks for mass content delivery; it's a gigantic waste of bandwidth. And for point-to-point interaction this protocol would be massively inefficient.

      But why can the two protocols not run on top of the same Layer 2 infrastructure?

      Or use, you know, like multicast or something...?

      • Multicast is fine when every receiver wants the same thing at the same time. Good for broadcasting live events. Not very good for things like youtube, where millions of people will want to watch a video but very few of them simultaneously, and those that do may want to pause it at any moment and resume playback hours later.

  • by EmagGeek ( 574360 ) on Thursday September 04, 2014 @09:01PM (#47831475) Journal

    In a nutshell, this is applying DRM to all of your connection attempts. You will only be able to make connections that are "authorized" by TPTB.

    No more free and open networking.

  • As I read the descriptions of NDN, I can't quite see what the difference between NDN and ip multicast is.

    If the problem is inefficient use of resources due to over replication, didn't multicast solve that? Add caching boxes, and hey! You just invented IPTV!

  • As long as you're replacing the "DNA" of the Internet, wouldn't replacing SMTP be a better thing to start with? (To prevent spam, or at least untraceable spam?)

    • by dbIII ( 701233 )
      I think we need that form explaining why a suggestion to stop spam is not new and is not going to be a silver bullet.

      The major flaw is any new bandwagon is going to have the spammers climbing aboard as early adopters. Any barriers to entry are going to be more difficult for the general public to negotiate than the spammers, since the spammers have the means to bot, buy or mule their way around them.
      With so much distributed malware around, as well as various other means, the spammers can send from trusted addresse
      • I think we need that form explaining why a suggestion to stop spam is not new and is not going to be a silver bullet.

        Please, no. That form has rejected far too many good solutions. Its issue is that it insists that we remove spam without changing how email works and what we use it for, as if we can expect something to change even though we refuse to change it. I recall one suggestion got that form as a reply with nothing but the "it won't work for mailing lists" box checked. Is it really too much to tell people running mailing lists to find some other means to do what they do, if it will eliminate spam for everyone e

        • Personally, I think XMPP has the problem solved well enough. Their general architecture is superior to email in terms of verifying that you really know where a message came from, so if you receive spam from user@example.com,

          XMPP is embarrassingly similar to email; it only seems less spammy because nobody uses it.

          ...and because each server knows the contact list of its users, it has a good clue about whether that message is spam even before doing any content analysis

          Reputation analysis by more voodoo algorithms which assume the server is big enough to develop any meaningful clue and not misinterpret results. I'm sick of algorithms... email at the very least used to be reliable... now it is anyone's guess whether a message will be silently dropped for no humanly understandable reason.

          because there's no culture of "spam is an unavoidable problem" in XMPP, nor is there even a culture of "bulk messaging must be allowed" and so no one can even claim ignorance about what their users are doing.

          More like a culture of denial. XMPP does NOT meaningfully address spam in any way that matters.

          but for now it seems the spammers don't even care about XMPP, probably because email isn't just low-hanging fruit, it's fruit that has fallen from the tree and has been rotting on the ground for years.

          Keep on d

        • by dbIII ( 701233 )

          Is it really too much to tell people running mailing lists to find some other means to do what they do,

          Yes, but also mainly due to my next point

          if it will eliminate spam for everyone else on the planet?

          Obviously your strawman example would not do such a thing if it was really that good, because it would have been adopted and forced upon those with mailing lists. Let's please keep this an honest discussion without hysterical bullshit that insults the intelligence of the reader.

          As for your suggestion, it appe

  • Magnet Links (Score:4, Interesting)

    by Anonymous Coward on Thursday September 04, 2014 @09:11PM (#47831539)

    Since every single goddamned one of you has used magnet links, you should be comfortable with the idea of requesting objects rather than discussions with particular hosts. Taking this idea and running with it is NDN. It's an excellent network research subject.

    It facilitates caching, multipathing... with some more work perhaps network coding to get close to the min-cut bound. Bittorrent is super successful because it's all about the content. Let's give a similar protocol a chance at changing the net.
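The magnet-link analogy can be made concrete in a few lines: a hypothetical content-addressed store where the "name" of an object is just the hash of its bytes, so any node holding a matching copy can answer the request. The store here is a plain dict standing in for a network of caches:

```python
import hashlib

store = {}  # hash -> bytes; stands in for a network of caching nodes

def publish(data: bytes) -> str:
    """Store content under its own hash; return the magnet-style name."""
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

def fetch(key: str) -> bytes:
    """Retrieve by name. The name is self-verifying: re-hashing the
    returned bytes proves the copy is genuine, whoever served it."""
    data = store[key]
    assert hashlib.sha256(data).hexdigest() == key
    return data

key = publish(b"some popular file")
print(fetch(key) == b"some popular file")  # True
```

This is the core property the parent is pointing at: the request names *what* you want, not *whom* to ask, which is what makes caching and multipath delivery natural.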

  • If Slashdot editors can't even get the technology headlines correct, how is it better than Reddit, Fark, or any other news aggregator site?

    Damn you guys have fallen far.

  • After reading the spec, it seems to me that this is a collapse of the HTTP (web) protocol down to the network/transport level. In effect, the internet would become one large hierarchical namespace where clients ("consumers") query the hierarchy of data by URI through Interest packets, and then some server somewhere sends back a Data packet matching the specified interest. A lot like 20th Century TV, sounds like.

    Also there is a provision for packet signature using public-key RSA which makes me think that i
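The Interest/Data exchange described here can be modelled roughly as follows. The name hierarchy and packet fields are invented for the example, not the actual NDN packet format:

```python
# Toy model of the Interest/Data exchange: a consumer names the data it
# wants, and any node holding a match (origin server or cache) answers.
content_store = {
    "/edu/ucla/climate/2014/summary": b"temperature anomaly data",
}

def express_interest(name: str):
    """Send an Interest for a name; return the matching Data packet,
    or None if no node on the path holds that content."""
    data = content_store.get(name)
    if data is None:
        return None
    return {"name": name, "content": data}  # the Data packet

pkt = express_interest("/edu/ucla/climate/2014/summary")
```

Note the inversion relative to TCP/IP: the request carries no destination address at all, only a name in the hierarchy.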

    • That's the main selling point. It gives routers a lot more information about what they are routing, allowing them to enforce usage rules. Things like 'only redistribute content signed by those who paid to use our new content distribution system' or 'Do not distribute media from Netflix tagged as licensed for distribution in the US only.'

      There's the core of a good idea. CAN is a great idea - power savings, bandwidth savings, faster internet, more reliable, hosting costs slashed. But this starts off with CAN
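The usage-rule enforcement described above could look something like the sketch below. The prefixes and regions are entirely made up; the point is only that a name-aware router can apply policy that an IP router cannot:

```python
# Illustrative only: a router that sees content names can enforce
# distribution rules in-network. These names and rules are invented.
BLOCKED_PREFIXES = ("/com/netflix/us-only/",)

def forward(name: str, consumer_region: str) -> bool:
    """Return True if this router would forward a Data packet with the
    given name toward a consumer in the given region."""
    if consumer_region != "US" and any(
        name.startswith(p) for p in BLOCKED_PREFIXES
    ):
        return False  # the usage rule is enforced inside the network
    return True

assert forward("/com/netflix/us-only/show1", "US")
assert not forward("/com/netflix/us-only/show1", "DE")
```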

  • by sigmabody ( 1099541 ) on Thursday September 04, 2014 @11:38PM (#47832053)

    For those who don't see why this is bad, consider this:

    In order to route/cache by data, the data must be visible to the routing nodes; in essence, you would no longer be able to use end-to-end encryption. You could still have point-to-point encryption (e.g. for wireless links), but everything would be visible to routing nodes, by necessity. This means no more hiding communications from the government (which taps all the backbone routers), no Tor routing, no protection from MITM attacks, by design. You get the promise of more efficiency at the cost of your privacy/freedom... and guess what, you'll end up with neither.

    • Slight correction: It does include protection from MITM attacks: there's a hash for the content that the endpoint verifies, so it does prevent spoofing content, so long as the endpoint has the correct address. It doesn't stop your ISP from monitoring exactly what you are getting, though - it makes that a whole lot easier, as there's no way the requests could be encrypted.
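The endpoint-side check being described can be sketched as follows (a toy, assuming content is named by its SHA-256 digest): an on-path attacker can read everything but cannot substitute different bytes without the verification failing.

```python
import hashlib

def verify(requested_digest: str, received: bytes) -> bool:
    """The endpoint checks received bytes against the digest it asked for;
    a man-in-the-middle can observe the transfer but not swap the content."""
    return hashlib.sha256(received).hexdigest() == requested_digest

original = b"licensed video segment"
digest = hashlib.sha256(original).hexdigest()

assert verify(digest, original)         # genuine content passes
assert not verify(digest, b"tampered")  # substituted content fails
```

This is integrity without confidentiality: the ISP sees the name of everything you fetch, which is the monitoring concern raised in the parent comment.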

  • I need to read more about this. At first glance, it's kind of like BitTorrent, but at a lower level in the protocol stack. Or like Uniform Resource Identifiers (remember those?) at a higher level. The general idea seems to be to make caching easier at the expense of making everything else more complex.

  • It looks like this would be more likely to be an overlay to TCP/IP than to replace it, with the idea of 'protected' content distribution being a driver.

    Of course, as with any other content distribution mechanism, there will no doubt be ways to copy it once it reaches your living room (or wherever).

  • This looks terrible. (Score:5, Interesting)

    by SuricouRaven ( 1897204 ) on Friday September 05, 2014 @03:09AM (#47832613)

    It looks like they started out with Content Addressable Networking, which is a great idea. Massive bandwidth savings, improved resilience, faster performance, power savings, everything you could want. But then, rather than try to implement CAN properly alongside conventional networking, they went for some ridiculous micro-caching thing, over-complicated intermediate nodes that enforce usage rules, some form of insane public-key versioning system validated by intermediate nodes, and generally ended up with a monstrosity.

    CAN is a great idea. NDN is a terrible implementation of CAN. The main selling points include having DRM capability built into the network itself, so if you try to download something not authorised for your country the ISP router can detect and block it. A simple distributed cache would achieve the same benefits with a much simpler design.

    There's the core of a great idea in there, buried deep in a heap of over-engineered complexity that appears designed not to bring benefits to performance but rather to allow ISPs to readily decide exactly what content they wish to allow to be distributed, and by whom. This thing is designed to allow the network devices to transcode video in real time to a lower bitrate - putting that kind of intelligence in the network is insane!
