The Internet's Broken. Who's Going To Invent a New One?

aarondubrow writes: "The Internet has evolved to support an incredibly diverse set of needs, but we may be reaching a point where new solutions and new infrastructure are needed, in particular to improve security, connect the Internet of Things, and support an increasingly mobile computing landscape. Yesterday, NSF announced $15 million in awards to develop, deploy and test future Internet architectures in challenging real-world environments. These clean-slate designs explore novel network architectures and networking concepts, and also consider the larger societal, economic and legal issues that arise from the interplay between the Internet and society.

Each project will partner with cities, non-profit organizations, academic institutions and industrial partners across the nation to test its Internet architecture. Test environments include a vehicular network deployment in Pittsburgh, a context-aware weather emergency notification system for Dallas/Fort Worth, and a partnership with Open mHealth, a patient-centric health ecosystem based in San Francisco."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday May 13, 2014 @07:51PM (#46994817)

    And add to that TLS/SSL. Layering encryption on top of an insecure protocol like TCP yields a transport layer that provides authentication and privacy but fails to provide robustness. A secure transport layer should provide all three: robustness, authentication and privacy. And I would add a fourth: authorisation.
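
    To make the layering concrete, here is a minimal sketch using the Python standard library (example.com and port 443 are just placeholders): the ssl module supplies authentication and privacy over a plain TCP socket, while robustness still rests entirely on TCP itself.

        import socket, ssl

        # TLS layered over TCP: the ssl module authenticates the server and
        # encrypts traffic, but the carrying TCP connection is unprotected;
        # a single spoofed RST segment still tears the whole session down.
        ctx = ssl.create_default_context()          # certificate validation on
        with socket.create_connection(("example.com", 443)) as tcp_sock:
            with ctx.wrap_socket(tcp_sock, server_hostname="example.com") as tls:
                print(tls.version())                # e.g. 'TLSv1.3'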

    Of course, all of this could be fixed at the protocol level by registering another value for the protocol field in IPv4 (the Next Header field in IPv6).
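
    As a sketch of what "another protocol value" means in practice (this needs root; 253 is one of the values RFC 3692 reserves for experimentation, and 192.0.2.1 is a documentation-only address):

        import socket

        EXPERIMENTAL_PROTO = 253    # reserved for experiments (RFC 3692)

        # A new transport would claim its own value in the IPv4 protocol /
        # IPv6 Next Header field (TCP is 6, UDP is 17) and exchange its own
        # segments through a raw socket, bypassing TCP and UDP entirely.
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, EXPERIMENTAL_PROTO)
        s.sendto(b"hello from a hypothetical new transport", ("192.0.2.1", 0))
        s.close()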

    There is a problem in IPv4, partially but not sufficiently mitigated in IPv6, and that is the lack of topological addressing. An IPv4 address under CIDR in no way reflects the topological location of a subnet, and even in IPv6, large national routing tables have to be maintained to find the optimal route to a single-homed network. This is a small enough problem today that every PE router at the edge of the multihomed Internet can simply carry a large lookup table, but it scales out of control quickly once you start talking about mesh networks, where every node is potentially, and likely, multihomed. The addresses are also centrally allocated by a very expensive bureaucracy; I won't say corrupt, but the cost per IPv6 address is certainly high considering their abundance and the seemingly low effort required to store an allocation in a database.
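
    The table-lookup burden looks like this in miniature (a toy sketch with made-up prefixes and next hops, stdlib ipaddress module only): every destination is resolved by longest-prefix match against a flat table, and nothing in the address itself helps the router.

        import ipaddress

        routing_table = {                            # prefix -> next hop (toy data)
            ipaddress.ip_network("10.0.0.0/8"):   "core-a",
            ipaddress.ip_network("10.1.0.0/16"):  "edge-b",
            ipaddress.ip_network("0.0.0.0/0"):    "default-gw",
        }

        def lookup(dst: str) -> str:
            # Longest-prefix match: collect every containing prefix, then
            # pick the most specific one. Real routers use tries, not scans.
            addr = ipaddress.ip_address(dst)
            matches = [net for net in routing_table if addr in net]
            return routing_table[max(matches, key=lambda net: net.prefixlen)]

        print(lookup("10.1.2.3"))                    # 'edge-b', most specific wins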

    With topological addressing, node addresses would be allocated dynamically based on a node's position in the topology, plus supplementary information such as GPS coordinates or a public key (as in onion networks).
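
    A hedged sketch of the public-key variant, the same trick Tor onion addresses and IPv6 cryptographically generated addresses use (the function name and truncation length here are illustrative, not any standard):

        import hashlib, base64

        def key_derived_address(pubkey: bytes) -> str:
            # Hash the public key and truncate to 128 bits: the node proves
            # ownership of its address with its key, so no registry is needed.
            digest = hashlib.sha256(pubkey).digest()[:16]
            return base64.b32encode(digest).decode().lower().rstrip("=")

        print(key_derived_address(b"\x01" * 32))     # stable, collision-resistant ID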

    The DNS system is an enormous problem. The architecture is very flaky and thoroughly insecure, especially with the addition of DNSSEC; it relies on a very expensive, and in this case I will say corrupt, bureaucracy for the allocation of names; and it is a generally ill-thought-out and ineffective way of locating network objects.
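
    To make the flakiness concrete, here is an entire classic DNS lookup hand-rolled from the stdlib (9.9.9.9 is just one public resolver, used as a placeholder): a single cleartext UDP datagram, matched on nothing but a 16-bit ID, which is why off-path spoofing is a race anyone can enter.

        import socket, struct

        def dns_query(name: str, server: str = "9.9.9.9") -> bytes:
            # 12-byte header: ID, flags (recursion desired), 1 question
            header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
            # QNAME is length-prefixed labels, terminated by a zero byte
            qname = b"".join(bytes([len(l)]) + l.encode()
                             for l in name.split(".")) + b"\x00"
            question = qname + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(2)
                s.sendto(header + question, (server, 53))
                return s.recv(512)    # whoever answers with the right ID wins

        print(len(dns_query("example.com")), "bytes, asked and answered in cleartext")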

    HTTP is a massive failure for end-to-endness, breaks the Internet's peer-to-peer expectations, adds massive protocol inefficiency that buys next to nothing in added function, and is generally ill-specified. A good protocol both reserves room for future extensions in an efficient manner and tightly constrains, down to the bit, how the protocol must be spoken. By contrast, HTTP allows vast latitude in the spelling of protocol messages, resulting in a high probability of implementation failure and of two implementations failing to interoperate, and yet its extensions are inefficient and unreliable because no efficient reservations were designed into the original protocol.
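
    The latitude is easy to demonstrate with a toy parser (the two byte strings are contrived but both legal HTTP/1.1): the same request can be spelled many ways, so every implementation must normalise, and two implementations that normalise differently silently stop interoperating.

        raw_variants = [
            b"GET / HTTP/1.1\r\nHost: example.com\r\nContent-Length: 5\r\n\r\n",
            b"GET / HTTP/1.1\r\nhOsT:example.com\r\ncontent-length:  5\r\n\r\n",
        ]

        def parse_headers(raw: bytes) -> dict:
            # Header names are case-insensitive and optional whitespace is
            # allowed around values, so a parser must normalise both.
            headers = {}
            for line in raw.split(b"\r\n")[1:-2]:
                if line:
                    name, _, value = line.partition(b":")
                    headers[name.strip().lower()] = value.strip()
            return headers

        for raw in raw_variants:
            print(parse_headers(raw))   # identical meaning, very different bytes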

    Every protocol built on top of TCP fails robustness tests, since it necessarily inherits TCP's lack of robustness. Yet every protocol built on UDP, where one could implement robustness, fails because of the epic clusterfuck that is NAT. And still there are utter morons out there considering NAT for IPv6 (there are published RFCs).
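
    Rolling your own robustness over UDP is straightforward in the small, which is what makes the NAT situation so galling. A stop-and-wait sketch (addresses, port and timeouts are illustrative) survives packet loss fine, but only for as long as the middlebox keeps its port mapping alive:

        import socket

        def send_reliably(payload: bytes, dest=("192.0.2.1", 9000), retries=5):
            # Stop-and-wait: retransmit until an acknowledgement arrives.
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(1.0)
                for _ in range(retries):
                    s.sendto(payload, dest)      # NAT creates a mapping here...
                    try:
                        ack, _addr = s.recvfrom(64)  # ...and may drop it first
                        return ack
                    except socket.timeout:
                        continue                 # robust against loss, not NAT
            raise ConnectionError("no ACK after %d retries" % retries)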

  • by gewalker ( 57809 ) <Gary.Walker@Astr a D i g i tal.com> on Tuesday May 13, 2014 @08:26PM (#46995039)

    Surprisingly, the University of Phoenix is accredited, although it has been placed on notice, i.e., it is subject to losing its accreditation, as documented on their website [phoenix.edu].

    Of course, this indicates that accreditation is not exactly a true Gold Standard.
