The Internet | Communications

David Clark: Rebuild the Internet 323

boarder8925 writes "David Clark, who led the development of the internet in the 1970s, is working with the National Science Foundation on a plan for a whole new infrastructure to replace today's global network. The NSF aims to put out a request for proposals in the fall for plans and designs that could lead to what Clark called a 'clean slate' internet architecture. Those designs, Clark said, could be tested on the National LambdaRail, the nationwide optical network that researchers are using to experiment with new networking technologies and applications."
This discussion has been archived. No new comments can be posted.

David Clark: Rebuild the Internet

  • Won't happen (Score:5, Interesting)

    by Bruj0 ( 114447 ) on Friday July 01, 2005 @12:36AM (#12957727) Homepage
    "A whole new infrastructure," you say?
    We can't even start using the new IPv6 protocol. I don't think we are there yet. Try in 10 or so years.
    • Re:Won't happen (Score:3, Insightful)

      by RLiegh ( 247921 ) *
      I think it's more like "ok, no one's buying our ipv6 idea; let's see what else we can come up with".
    • Re:Won't happen (Score:5, Interesting)

      by drmerope ( 771119 ) on Friday July 01, 2005 @12:52AM (#12957814)
      Might be because we realized that the IPv6 protocol was unnecessary.

      Once people were forced to NAT, it suddenly dawned on the great mass of people that workstations shouldn't be getting public IPs for security and management reasons.

      Nor, for that matter, should these up-and-coming embedded devices be placed on the public internet. It just isn't appropriate.

      Remember: The Internet was supposed to be a network of networks NOT _THE NETWORK_.

      Most of the remaining IP allocation problems result from certain lingering gross misallocations such as the Class A block assigned to MIT.
      • Re:Won't happen (Score:2, Informative)

        by Alien Being ( 18488 )
        "Remember: The Internet was supposed to be a network of networks NOT _THE NETWORK_."

        You're misusing terms here. "Network of networks" means "routable ip networks". From an IP point of view, boxes behind a NAT are irrelevant. Nobody ever claimed that every machine should be connected to the Internet, but hosts on the Internet *were* intended to be routable.

        The management and security benefits you alluded to are separate issues and can be achieved with less drastic measures than NAT.
        • Re:Won't happen (Score:3, Insightful)

          by drmerope ( 771119 )
          I suggest you re-examine the history of electronic mail and then re-evaluate your understanding of what it means to be a network of networks...

          It does not in fact merely mean routable IP networks. The internet was meant to bridge many networks that did not use IP by means of gateway hosts that did speak IP.

          I agree that no one specifically was thinking of NAT as we know it when network of networks was coined, but it is a simple extension of the principle.
      • by jfengel ( 409917 ) on Friday July 01, 2005 @01:35AM (#12958036) Homepage Journal
        NAT doesn't seem to completely solve the addressing problem. According to this report by Cisco to Congress [doc.gov] (warning: pdf), we're going to run out of addresses for real somewhere between 2015 and 2025.

        Yeah, I know they're a vendor, but this is a really reasonable report. They counter a lot of the hype, but they say we're going to need IPv6 eventually, so let's start now, before the Japanese and Koreans have built all the infrastructure and Americans are left to buy from them.
        • Agreed. NAT isn't a permanent solution. I disagree that sooner is better though. As with anything, the most cost effective transition will begin on its own when the time is right.

          I don't know what you mean by buying infrastructure. We're not losing out on any technology or experience, really. If any important services become IPv6-only... well, then we'd have a little catch-up--but that is precisely what will deliver the consumer demand.

          Cisco is right in their problem prediction, but they want to accelera
          • >As with anything, the most cost effective transition will begin on its own when the time is right.

            I disagree. I work for Canada's largest IT consulting company, and in my experience the transition will begin when people are forced to transition, cost effective or not.
          • The concern is that if the Koreans and Japanese have converted their infrastructure to IPv6, then they'll be buying their routers from Korean and Japanese companies. When it becomes a crisis in the US, we'll end up buying our infrastructure from them, because it will have been built, installed, and tested.

            Right now the US has dominance in these markets. If we let the Koreans and Japanese get there first, we'll be letting competitors get there first.

            At least, those are the concerns I've heard. I'm not su
      • by ashpool7 ( 18172 ) on Friday July 01, 2005 @01:36AM (#12958041) Homepage Journal
        Thanks for making "secure by default" less important.

        Thanks for retarding IPv6 development.

        Thanks for necessitating the invention of UPnP.

        Thanks for screwing up peer to peer connections for legitimate things like videoconferencing and file transfers.

        Thanks for continuing to allow ISPs to treat IP addresses like some sort of rare element.

        Thanks for mangling things like FTP.
        • by Anonymous Coward
          Oh... you said it.

          A couple more thanks from me too...

          Thanks for making business-to-business integration so difficult.
          Thanks for making any server installation so difficult when it's designed to give access to authenticated users.

        • Thanks for mangling things like FTP.

          FTP is a fucked up protocol to start with. If NAT causes its demise, I know I personally will be nothing but smiles.

          • FTP is a fucked up protocol to start with. If NAT causes its demise, I know I personally will be nothing but smiles.


            Fascinating.

            Does that scare you? Since FTP is nearly dead as it is, are you partially smiles now? Does it work like that, or do you turn into smiles all at once? Does it hurt/tickle?

        • Thanks for screwing up peer to peer connections for legitimate things like videoconferencing and file transfers.

          This is often called Quality of Service.
          IPv4 has no clue about packet priority.
          But let's get cynical; whenever we need to keep the economy going, someone will sneak into law the requirement to make new US government projects implement IPv6, which will drive hardware/software sales, which will create an installed base, which will take us to a tipping point.
          And that, amigos, is how the sausag

      • Re:Won't happen (Score:5, Insightful)

        by Anonymous Coward on Friday July 01, 2005 @01:53AM (#12958128)
        NAT is the greatest evil to befall the Internet.
        Want to run a webserver behind NAT? Forward the port through NAT. Want to run *two* webservers behind NAT? Say goodbye to half of your visitors behind stupid proxies that only relay requests to port 80.

        NAT is bad because it is a complex layer of translation software, NOT a firewall. Its job is to try to fit packets through places where they shouldn't be going, not the other way around. A stateful firewall is a much better solution. Even Windows XP SP2 gets it right in that regard.

        Unless you *like* translation gateways everywhere, the idea of a network of networks is silly. MITM attacks and the general waste of resources are the two biggest problems with that concept.

        Embedded devices like, say, a PDA shouldn't be on the Internet to receive phone calls or send email? What do you have against the Internet that a stateful firewall and a well written network stack wouldn't fix?
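        A minimal sketch of the port-forwarding limitation described above, using a toy translation table; the addresses and ports here are placeholders, not a real NAT implementation:

          # Toy model of destination NAT (port forwarding) on a box with ONE public IP.
          # A given public port can map to exactly one internal host, so a second web
          # server behind the same NAT cannot also answer on public port 80.
          PUBLIC_IP = "203.0.113.1"                 # placeholder public address

          forward_table = {                         # public port -> (internal host, port)
              80:   ("192.168.0.10", 80),           # web server #1
              8080: ("192.168.0.11", 80),           # web server #2, exiled to 8080
          }

          def translate(dst_ip: str, dst_port: int):
              """Rewrite an inbound packet's destination, or drop it if unmapped."""
              if dst_ip != PUBLIC_IP or dst_port not in forward_table:
                  return None                       # no mapping: the packet goes nowhere
              return forward_table[dst_port]

          print(translate(PUBLIC_IP, 80))           # ('192.168.0.10', 80)
          print(translate(PUBLIC_IP, 8080))         # reachable, but not for visitors whose
                                                    # proxies only relay requests to port 80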
        • The only problem is crappy NAT boxes that cost $20... If I want to hide my intranet, NAT is the only right way to do it.
        • Re:Won't happen (Score:3, Insightful)

          by amper ( 33785 ) *
          How in the hell did this get modded up to "5, Insightful"? The parent poster clearly has "-5, No Fucking Clue About Network Design".

          What the AC is describing is not, in fact, Network Address Translation, but Port Address Translation, which is only a subset of NAT. I have absolutely no problems running multiple hosts behind NAT using the one-to-one address translation, which generally reduces the need for publicly-valid IP addresses to the number of hosts that need to be publicly-available, plus one for a P
      • Re:Won't happen (Score:5, Interesting)

        by J. Random Luser ( 824671 ) on Friday July 01, 2005 @03:50AM (#12958576)
        ... certain lingering gross misallocations ...


        6.0.0.0/8 DoD Network Information Center
        7.0.0.0/8 Defense Information Systems Agency
        8.0.0.0/8 Level 3 Communications, Inc
        9.0.0.0/8 IBM Corporation
        11.0.0.0/8 DoD Intel Information Systems
        12.0.0.0/8 AT&T WorldNet Services
        13.0.0.0/8 Xerox Palo Alto Research Center
        15, 16.0.0.0/8 Hewlett-Packard Company
        17.0.0.0/8 Apple Computer, Inc.
        18.0.0.0/8 Massachusetts Institute of Technology
        19.0.0.0/8 Ford Motor Company
        20.0.0.0/8 Computer Sciences Corporation
        21, 22.0.0.0/8 DoD Network Information Center
        25.0.0.0/8 Royal Signals and Radar Establishment
        26, 28, 29, 30.0.0.0/8 DoD Network Information Center
        32.0.0.0/8 AT&T Global Network Services
        33.0.0.0/8 DoD Network Information Center
        34.0.0.0/8 Halliburton Company
        35.0.0.0/8 Merit Network Inc.
        38.0.0.0/8 Performance Systems International Inc.
        40.0.0.0/8 Eli Lilly and Company
        41.0.0.0/8 African Network Information Center
        44.0.0.0/8 Amateur Radio Digital Communications
        45.0.0.0/8 Interop Show Network
        47.0.0.0/8 Bell-Northern Research
        48.0.0.0/8 Prudential Securities Inc.
        51.0.0.0/8 Department of Social Security of UK
        52.0.0.0/8 E.I. du Pont de Nemours and Co., Inc.
        53.0.0.0/8 cap debis ccs (c/o Mercedes Benz AG)
        54.0.0.0/8 Merck and Co., Inc.
        55.0.0.0/8 DoD Network Information Center
        56.0.0.0/8 U.S. Postal Service
        57.0.0.0/8 SITA-Societe Internationale de Telecommunications Aeronautiques
        1, 2, 3, 4, 5, 14, 23, 27, 31, 36, 37, 39, 42, 46, 49, 50 are reserved by IANA

        It would be tempting to say "Nothing to see here, people... please move along," but amongst all the squatters is one new allocation: a single class A net allocated this year for the entire African continent. It works, too; I've already had two 419s from it ;-)
        • Re:Won't happen (Score:3, Insightful)

          by abb3w ( 696381 )
          44.0.0.0/8 Amateur Radio Digital Communications

          Of all of the ones you point out, this is the only one where I would argue that the allocation might be deserved. Ham Radio is bloody useful under emergency conditions, and its operators should be encouraged even outside emergencies.

      • Re:Won't happen (Score:5, Insightful)

        by Anonymous Coward on Friday July 01, 2005 @04:01AM (#12958613)
        NAT is a horrible solution. When I see someone actively _advocating_ more NAT I know that either they're selling a NAT product ("Cutting your face off is a great idea, and with new faceCutOff DX we guarantee only a few weeks of agony!") or they haven't looked very hard at the problem.

        The Internet is a Peer-to-Peer network. Yesterday's big application, the "web app," didn't need this feature, but tomorrow's potential big applications almost all do. If you disable them by using NAT, you're back where businesses were in 1996 when they started to realise that they should be on the web but had no clue how. Oops.

        Seen all those annoying worms that choose random IPv4 Internet addresses and attack them? If a hundred of those worms hit one address per second, they'll hit most machines in a year. With a thousand infected machines they'll take a month. But with IPv6 they don't stand a chance. A million worms, trying 10 IPv6 addresses per second, won't find more than a tiny fraction of vulnerable machines in a year. Even inside your much smaller corporate network, "guessing" IPv6 addresses isn't feasible. (The arithmetic is sketched just after this comment.)

        Elsewhere in this thread someone has observed that ordinary customers don't switch at the point of least pain. They wait, and wait, until they can't tolerate any more pain and then switch. Then they say "Oh, that was better than I expected" and maybe write an article for their trade magazine, "Why switching was actually a pretty good idea".

        The point of least pain came when more than one network hardware vendor had IPv6 native. That was several years ago. Anyone buying new kit after that point should have been negotiating for IPv6 and either getting it, or getting a discount to "do without" it for a few more years. Otherwise you're a sucker.
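        A rough back-of-the-envelope check of the scanning claim above; the scan rates are the poster's assumptions, and a single /64 subnet stands in for "an IPv6 network", so this is only an order-of-magnitude sketch:

          # Random-scanning worm coverage, IPv4 vs. a single IPv6 /64.
          SECONDS_PER_YEAR = 365 * 24 * 3600

          ipv4_space = 2 ** 32                  # every possible IPv4 address
          ipv6_space = 2 ** 64                  # addresses in just one /64 subnet

          # 100 worms, 1 address per second each, for a year:
          print(100 * 1 * SECONDS_PER_YEAR / ipv4_space)         # ~0.73 of IPv4 space

          # 1,000,000 worms, 10 addresses per second each, for a year:
          print(1_000_000 * 10 * SECONDS_PER_YEAR / ipv6_space)  # ~1.7e-5 of one /64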
      • Re:Won't happen (Score:3, Interesting)

        Might be because we realized that the IPv6 protocol was unnecessary. Once people were forced to NAT, it suddenly dawned on the great mass of people that workstations shouldn't be getting public IPs for security and management reasons.

        You're confusing addressability with reachability. It's right that workstations should not in general be directly reachable from random other points on the internet, but that doesn't mean that this should be done only via NAT. Normal firewalling is the right way to

    • IPv6 (Score:5, Insightful)

      by scoove ( 71173 ) on Friday July 01, 2005 @01:02AM (#12957864)
      We can't even start using the new IPv6 protocol. I don't think we are there yet.

      I've been to IPv6 summits. I've also served as the senior technology officer for several telecom companies (one of which was among the very first ISPs connected to the CIX-West router, and a frustration to Paul Vixie thanks to our rather unique connection to the early Santa Clara peering point).

      Through my experience, I've advocated IPv6, yet I've found significant resistance from nearly all sectors of business (except from South Korean and South American investors - go figure). Some of the problems IPv6 plans (and this "new infrastructure" pipe dream) face include:
      • zero customer demand: dot-com was great for us geeks pushing ideas before their time. Fortunately or not, its demise meant a return to financial foundations. If customers don't demand it, there's no reason to work on it today. If it's the next great thing, then get customers understanding it! (Thought: How do we do this for IPv6? I can think of a thousand technical explanations for why this is. My customers would tell me they expect me to do these things already at no additional cost to them. Absent additional capital, it ain't happening in today's telecom market). Lacking a killer app that only works in IPv6 land, the finance people won't back any infrastructure upgrade. Here's the rule: either make money or save money. IPv6... well, it adds features without really making or saving money. Guess what the CFO will decide? New features don't quite present well in any capital budget analysis (and rightfully so).

      • State of the consumer market: Let's be honest for a second. While we dream of IPv6 efficiencies, the world out there is clinging onto Windows 98, first edition. They're stuck in the IP dark ages (hell, I had a discussion today with a Fortune 500 senior manager who thought dialup optimization was the same thing as broadband. *sigh* It's the Dilbert PHB "Etch A Sketch" laptop all over again!). These are people that can't understand that their kids' P2P and the five trojans pushing out spam are why their broadband is slow. These are the people that refuse to use antivirus, personal firewalls and spyware detection. Do you expect them to understand the nuances of better IP networks? QoS? Mobile IP? Dream on...

      • We've forgotten our dirty bastard heritage: Don't forget, TCP/IP was the dark horse protocol. OSI was the committee's pick, yet nasty old ad hoc IP ended up winning out. NSFNET and the Baby Bell NAP plan connected by ANS was Al Gore's dream for a monopoly-powered Internet, which also flopped. A brutish commercial ISP network launched by the early CIX won out. Rarely does the committee solution prevail. Technology is one of the few areas where natural selection tends to ignore the best intentions of the wealthy and powerful elites.


      Don't think I'm not wild about IPv6. I geek out and run it over AX.25 amateur networks for fun (what better way to learn a protocol). Yet the days of getting capital markets worked up in a frenzy, ready to throw hundreds of millions at network replacement are gone. Unless this latest dream is based on new tax revenues from all of us (which only creates messes like the original unaccountable NSFNET regionals), it won't go anywhere.

      *scoove*

      • For those of you that follow Clayton Christensen's disruptive technology models, I have a question for you (those of you that don't know it, but want to run tech companies, get your ass to Amazon and buy this book yesterday [amazon.com], or else learn the hard way as I did through several companies before Clayton figured out some rather important rules). As a career disruptor, I was shocked to read my comment as follows:

        My customers would tell me they expect me to do these things already at no additional cost to them. Ab
        • A wise man once said to me: "By the time you finally TRULY understand the rules of the game, you'll be too tired to play."

          So I might extrapolate from your post that the ideal situation for grey-haired veterans is to corral the young'ns, wait for the leader to emerge, and have a lemming-thwacker ready before they get out of hand?

          I could go for that.
        • I'm with you on Christensen's book but I disagree on DEC. DEC had innovative technologies. Even today the number one thing people want in their servers is:

          1) high reliability / built in disaster recovery
          2) much better security
          3) ease of administration
          4) better middleware between programs

          Which is to say, an updated version of VMS. There is no reason DEC should have lost the server wars to the AS/400s. There is no reason that DEC (which had the best microprocessor) couldn't have won the workstation
    • I'm afraid the root motivation for coming up with a new internet from scratch is not that we're running out of IPv4 numbers, because then all you'd need is to extend it to IPv6. Instead, how about college kids getting caught downloading movies on Internet2 and punished? Internet2 won't be as free as the current one, but it will be the new, hip thing, marketed to death. The old internet must go because it's too free; the corporations can't milk enough profit out of it. When the new Internet2 shows up, it will have e
  • by ShatteredDream ( 636520 ) on Friday July 01, 2005 @12:36AM (#12957728) Homepage
    What will the powers that be put in there to make it easy to track and control everything we do with it?
    • You noticed that too?

      There was a strong message in there that the problem with the current design is lack of identification of who is who. At least that's what I read into the business about phishing and spam.

      The business about zombies seems like a potential code for the need to block "normal" users from connecting with each other.
    • I have a strange suspicion that free speech would still be permitted but only in a designated area that the public never sees. Kind of like the official protest areas they always set up miles away from the Democratic and Republican national conventions. Of course, this will be a doubleplusgood thing since free speech is just crimethink anyway.
  • by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Friday July 01, 2005 @12:36AM (#12957731) Homepage Journal
    ...is this project going to actually provide revolutionary designs to ease or eliminate the problems we face today, or is this just a matter of reinventing the wheel?

    I realize that it's quite tempting for computer developers to want to clean up a system after it's done, but such work only ever works if you have a clear understanding of the problems faced under the current codebase as well as an absolute need to fix the issues with the current system. Simply saying, "it'll be better/cooler/faster" just doesn't cut it. Those things can be obtained from evolutionary development. Revolutionary means that you are uprooting all the existing users. The payoff MUST be tremendous or they ignore it!
    • This is the NSF which controls quite a lot of the university grant money. Combine that with the federal government getting on board and you already have a good chunk of critical mass. The usual order for things is:

      University -> military -> porn -> mainstream corporate america -> home users

      The NSF can get the first 2 steps.
  • Summary (Score:5, Insightful)

    by mikeophile ( 647318 ) on Friday July 01, 2005 @12:38AM (#12957741)
    Clark said he would like to see two things addressed in any replacement for the current internet. The first is a coherent security architecture. The second is a healthy economic infrastructure for network service providers, who will need a bigger piece of the pie in the new internet than the one they are getting now if they are going to help pay for building it.

    I read this as users having no anonymity and paying through the nose for it.

    Can I just keep the old internet?
    • Clark said he would like to see two things addressed in any replacement for the current internet...

      A television in every home and two cars in every garage. Oops, wait, wrong guy...
    • Re:Summary (Score:3, Insightful)

      by femto ( 459605 )
      Or to put it another way: the corporatised Internet.

      No independence, as you're then a tame pawn for a corrupt Halliburton lookalike.

    • I read this as users having no anonymity and paying through the nose for it.

      Paying through the nose is close, if you assume current high bandwidth usage behavior. Clark understands the economics of a time-share network that has become less time-share and more reserved capacity in its model.

      Consider for a second the direction your broadband service is going. Factor in MPLS or whatever quality of service protocol you like. Imagine a broadband connection that gives you reserved bandwidth to anywhere for voi
      • economic infrastructure = having billing built into the network protocols, as for the Public Switched Telephone Network?

        That is a crap idea. The Internet is out-competing telcos precisely because billing (and the consequent control) is not built in, allowing innovation. Insert "economic infrastructure" (a.k.a. billing) and the culture of the Internet (as geeks know it) dies.

    • I read this as users having no anonymity and paying through the nose for it.

      What about an anonymous, free one [fshell.org]?

  • by pg110404 ( 836120 ) on Friday July 01, 2005 @12:39AM (#12957752)
    The internet might have its problems, but it's here now and everybody is on it. Unless they add a backward compatibility layer (doubtful if they are designing a 'clean slate' architecture), it becomes a chicken-and-egg phenomenon, no matter how much better the technology might be. Nobody will want to use this architecture until enough people adopt it, and enough people will need to adopt it before Joe Average uses it. All the while, the existing internet is there.
    • First of all, I can see universities and corporate researchers using it if it's better/faster/etc...

      Second of all, if IP can be carried over carrier pigeons, then I think IP can be carried over whatever new network they design.

      In addition, IPX and IP can run on the same LAN; who's to say that this new infrastructure can't be used in parallel with IP, or in the worst case computers that need to be on both networks can have 2 NICs.
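      As a present-day analogue of two protocols sharing one host, a minimal dual-stack listener sketch (it assumes a system where IPV6_V6ONLY can be cleared, e.g. Linux, and the port number is arbitrary):

        # One socket serving both stacks: IPv6 natively, IPv4 via v4-mapped addresses.
        import socket

        srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # accept IPv4 too
        srv.bind(("::", 8080))        # "::" = all interfaces, both address families
        srv.listen(5)
        print("listening on port 8080 over IPv4 and IPv6")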
      • But at what point does it stop coexisting and at what point does it simply take over (a plan for a whole new infrastructure to REPLACE today's global network)?

        Windows 9x loaded IPX/NetBEUI drivers by default and TCP/IP was not loaded automatically; now we have Windows XP with TCP/IP loaded by default.

        When it sank into the minds of the gnomes at Redmond that the internet-based IP protocol was so widespread, they made the effort to make that the default method of communication.

        I can compile and install the IP
  • by Man in Spandex ( 775950 ) <prsn DOT kev AT gmail DOT com> on Friday July 01, 2005 @12:41AM (#12957759)
    PHP and MySQL [slashdot.org] which can do anything!
  • It seems every measure to stop phishing, spam and the like just results in a means to circumvent it. I'm not against renewing efforts to re-engineer, but I'm not sure it's fruitful to go after it for those reasons. IPv6 is a moderate step in that direction and is worth giving a chance.
  • How long before the RIAA tries to get on this rebuilt internet, eh? ;)
  • by fmwap ( 686598 ) on Friday July 01, 2005 @12:49AM (#12957802) Journal
    "Fuck it! I'll rewrite it from scratch."

    That approach is always more fun
    • from the original article:

      "Anything you can do all at once, you could do with incremental changes," said Robert Kahn, who helped design the architecture for Arpanet, the precursor to the internet.

      Kahn agrees with you; you're both against a clean-sheet redesign, right?

      The thing is, although incremental improvements are easier to stomach, the question is always this: just where do we want to be? A clean-sheet redesign gives us a target for successive incremental improvements, and allows a very direct

      • My question, which I am attempting to form into a coherent argument still, is generally this:

        Will the new network actually be a distributed network, or will it be a massive, bottlenecked POS like we have now?

        I refer, of course, to the 12 major DNS servers which control our access: Internet health report [keynote.com]. One of these goes down and those of us still up see a super-slow internet. Two go down and pages fail to load as often as not. I've yet to see three go down completely, but it is bound to happen.

        The abov
        • Will the new network actually be a distributed network, ...[or will it have] ...DNS servers which control our access...

          Good question. The answers will be known after they have a testable prototype.

  • Not gonna happen (Score:4, Interesting)

    by btgreat ( 895041 ) on Friday July 01, 2005 @12:54AM (#12957829)
    "A super-high-speed internet could even allow people a world apart to collaborate inside elaborate 3-D virtual arenas, a process called tele-immersion."

    I believe the technical term for this is MMORPG. It appears to work pretty well with our current internet.

    All joking aside, I don't think anything will change any time in the near future. IPv6 is probably the most radical change the internet will see for possibly decades to come, and even that can't catch on. People are simply not going to pay to have the internet re-architected when it is working well enough as it is; why reinvent the wheel while it's still rolling? Things along these lines have been proposed before, and I'm sure will be proposed again, and I'm sure that one day the internet will eventually be rewired. However, this is still far ahead of its time.

    Cars still ride on wheels, power still goes out with storms, and cell phones still lose service underground. What makes anyone think the internet is going to be any different?
  • Not a bad idea... (Score:5, Interesting)

    by evilviper ( 135110 ) on Friday July 01, 2005 @12:56AM (#12957835) Journal
    I'll agree with him that Internet2 hasn't lived up to what it should have been, and trying something completely different would be a very good idea.

    However, I don't agree that the current internet is in-need of replacement. Creating TCP/IP packets requires significant processing power, and a simpler protocol would mean more devices being online, but by the time anything new becomes accepted, a $1 chip will be able to do it all.

    If you want to improve the internet, put explicit congestion notification back into all TCP stacks, as it was before the BSD stack left it out... Goodbye massive packet loss due to minor congestion. Require all vendors to support jumbo frames... And many more small changes (to the existing internet).
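    For readers who haven't met ECN, a toy illustration of the "mark instead of drop" idea; the codepoint values match the two ECN bits in the IP header, but the router logic here is a deliberately simplified sketch, not any real implementation:

      # Congested ECN-aware router: mark ECN-capable packets CE instead of dropping them.
      NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11   # the two ECN bits in the IP header

      def forward(ecn_bits: int, queue_is_full: bool):
          if not queue_is_full:
              return ecn_bits, "forwarded"
          if ecn_bits in (ECT_0, ECT_1):
              return CE, "forwarded with CE mark; sender backs off without any loss"
          return None, "dropped; non-ECN traffic only learns of congestion through loss"

      print(forward(ECT_0, queue_is_full=True))    # marked, not dropped
      print(forward(NOT_ECT, queue_is_full=True))  # dropped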
    • And many more small changes

      Hello!! Multicasting!?!?!
    • Re:Not a bad idea... (Score:3, Informative)

      by jd ( 1658 )
      ECN would be an excellent idea, probably a derivative of RED/GREEN/BLUE/BLACK (yes, all of those really do exist) as well, and edge-level ISPs should really use some additional QoS to prevent any given user (as opposed to any given stream) from overloading the network. It would also allow throttling of ISP connections, when an ISP in general is too noisy.

      As one of the other replies noted, DEFINITELY DEFINITELY have multicast. Anycasting (multicast from user, unicast from server) would be good, too, for informa

    • I2 (Score:3, Informative)

      by Nasarius ( 593729 )
      I'll agree with him that Internet2 hasn't lived-up to what it should have been

      What the...? Are you confused by the name? I2 is just another semi-private backbone. That's all. It's occasionally a testbed, but mostly it's just a bunch of fast routers, nothing magical. It serves much the same purpose as the early Internet: connecting universities and a few large organizations.

  • "It's a trap!"
  • I don't remember whose idea it was, but if all future internet devices used encryption (like IPsec and IPv6), and a portion of the IP address were a crypto hash of the device's public key, then it would make spoofing harder. Of course, part of the IP address would still have to be reserved for routing purposes, for efficiency.
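    A rough sketch of that idea (in the spirit of cryptographically generated addresses, though not the exact CGA algorithm): derive the low 64 bits of an IPv6 address from a hash of the public key and keep the high 64 bits for routing. The prefix and key bytes below are placeholders.

      import hashlib
      import ipaddress

      def address_from_pubkey(prefix: str, pubkey: bytes) -> ipaddress.IPv6Address:
          # Interface identifier = first 64 bits of a hash of the public key;
          # the routing prefix stays untouched, so routers need not care.
          iid = int.from_bytes(hashlib.sha256(pubkey).digest()[:8], "big")
          net = ipaddress.IPv6Network(prefix)
          return ipaddress.IPv6Address(int(net.network_address) | iid)

      # A host would prove ownership by signing with the matching private key.
      print(address_from_pubkey("2001:db8::/64", b"...placeholder public key bytes..."))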
  • by BigZaphod ( 12942 ) on Friday July 01, 2005 @01:14AM (#12957928) Homepage
    Don't fix what ain't broken.

    Sure, there are almost always better ways to do things that are only illuminated by hindsight, but that doesn't mean that the old way should just be tossed out and replaced.

    Besides, the Internet is one of those amazing flukes of history. It's a very open, public, and free world unlike anything before it. Does anyone really think that something designed now in the age of terrorism, by committee, using government money (NSF) would be carefully designed to protect those initial design elements that make the Internet what it is today?
    • by Midnight Thunder ( 17205 ) * on Friday July 01, 2005 @01:22AM (#12957970) Homepage Journal
      At the moment these guys aren't trying to fix anything. What they are trying to do is see if something alternative could work better. Think of this like a prototype of a car: in order to be able to test new technologies properly, you need to build it as if there were no restrictions. While this new technology might not replace anything, aspects of it might be incorporated if it proves there is a better way of getting things done.

  • by DrJimbo ( 594231 ) on Friday July 01, 2005 @01:15AM (#12957932)
    ... but while composing that post, it occurred to me that this is actually a very good idea and should be explored.

    The premise of the existing Internet was benign cooperation. The previous /. story on the 12 minute Windows heist clearly demonstrates that that model is no longer valid.

    I think it is a good time to take a look at all of the layers and see if something better is possible. I am not suggesting that Clark et al. be given carte blanche to build a new Internet. The naysayers may well be right that any significant change would be practically impossible. But I do think it is a very good idea to investigate what changes are possible and what benefits those changes could provide. I'd hope that practical concerns of getting from here to there would also be explored.

    • The premise of the existing Internet was benign cooperation. The previous /. story on the 12 minute Windows heist clearly demonstrates that that model is no longer valid.

      Actually, it is. What is the average time that a non-Windows computer can last hooked up to the internet before it is compromised?

      The problem isn't the Internet per se, it's Windows (and naive new computer users). Frankly, if more of those people got fed up with AOL, or whatever, and just gave up on it, things would probably be oh so slig
  • But I am as confident that it will be safe from terrorists as I am that the Sun will rise tomorrow. After all, we have the children to think about.

    If one is able to find any privacy or anonymity in this new Internet, it will be because of some undiscovered security hole, which will be quickly repaired, rather than any kind of conscious design decision. Probably one reason they are accepting proposals before rolling it out is to avoid the sort of accidental security holes that enable pr0n, peer-to-peer filesharing and left-wing political activism.

    Microsoft, a leading contributor both to this nation's technology base and to the campaign coffers of its leaders, will embrace this new technology and extend it in such a way that the development and dissemination of Open Source software will be, if not mathematically and physically impossible, at least as difficult as factoring a 2048-bit public key.

    Imagine, if you will, Trusted Computing implemented at the router level, in such a way that any packets that go farther than one hop are certified not only to support protocols whose patent licenses are fully paid-up and on file with the legal department in Redmond, but whose content is compliant with the Windows standard. The faintest wisp of a Public License, GNU or otherwise, will result not only in the dropping of the individual packet, not only in the cancellation of the entire file transmission, but, within microseconds, in the pinpointing of the physical location of the offending server. The identities of its rogue administrators will be fetched instantly from the database maintained by the Homeland Security Department. (You will have to submit fingerprints and DNA samples to obtain a Windows server license, as after all, Internet servers can be used to disseminate explosives recipes or the formulas for nerve gases.) The supercomputers that constantly monitor the cameras mounted on every lamppost in the United States of (God Bless It!) America will be ordered to recognize the criminals' faces, and when they are spotted trying to flee to the Amazon jungle, orbiting lasers will vaporize their bodies, leaving nary but a wisp of smoke.

    When a close family friend tries to comfort one of the grieving mothers for the loss of her son, she will desperately proclaim "No, I have no children! You must have mistaken me for someone else. Please leave me alone!" before she scurries rapidly away.

    National firewalls such as those employed by The People's Republic of China are expensive and difficult to maintain. They are notoriously leaky, and easy to circumvent by anyone determined enough to find out how. But worse, they impede the economic potential of emerging economies such as China, which necessarily bottleneck technical data and eCommerce in order to have a single chokepoint for the Four Horsemen of the Infocalypse (Taiwan, Tibet, Hong Kong and Pornography).

    Imagine, if you will, the potential of our New Internet: not only by technical design, but by international treaty (enforced by the threat of military intervention on the part of the UN Security Council), each nation will have a national firewall which is as transparent as air to fully-licensed Windows Media Video files of Barney the Dinosaur and paid-up Wal-Mart orders, yet absolutely impenetrable to content not sanctioned by Homeland Security, the Republican Party, the 700 Club and the Boy Scouts.

    I, for one, am weary of our present Internet, cesspool that it is of moral depravity and copyright infringement. I long for the days of yore, when men were men, women wore hoopskirts, and racial minorities were separate but equal. And so, I raise my right hand and shout with an enthusiastic "Heil!":

    I welcome my new Internet overlords!

    Copyright © 2005 Michael David Crawford.

    This work is licensed under the Creative Commons Attribution-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.5/ [creativecommons.org] or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

  • Who? Me? (Score:5, Funny)

    by dcclark ( 846336 ) on Friday July 01, 2005 @01:33AM (#12958023) Homepage
    Holy crap, I go offline for 12 hours and you guys are giving me this kind of job?? I quit! Nothing like signing on to /. and seeing your name in the top headline. -- David Clark
  • by grcumb ( 781340 ) on Friday July 01, 2005 @01:34AM (#12958028) Homepage Journal

    When you're done with the old Internet, can we have it?

    Hugs,

    The Developing World.

  • by jd ( 1658 ) <imipak@ y a hoo.com> on Friday July 01, 2005 @01:39AM (#12958067) Homepage Journal
    Either use IPv6 or one of the predecessor protocols. (One early suggestion for "IP-ng" was a protocol with adjustable-length addressing. Thus, the backbone would have very short addresses, and machines close to the edge would have longer ones. This was originally rejected as routers simply weren't advanced enough to cope with a routing system like that -and- handle IPv4, but this is a couple of decades later, and a "clean-slate" would mean you don't need to worry so much about compatibility issues. A toy encoding of this variable-length idea is sketched just after this comment.)

    Second, absolutely mandate IPSec. Don't just "mandate" it and then ignore it, as happened with IPv6, but make it a pre-requisite for all users. That gives e-commerce a lot more assurance on secure transactions and authentication, which seems to meet one of their requirements.

    Third, mandate QoS. QoS not only guarantees network quality, which would interest a LOT of corporate users, but also provides a mechanism for increasing profit. Simply offer different levels of guaranteed quality at different prices. This meets another requirement.

    Fourth, the biggest new market is in mobile devices and wireless networking. So support them! What is the point of the IETF churning out megabytes of specs on mobile IP and mobile networks, or of software developers supporting all these new protocols, if none of the ISPs or network engineers give a damn? It would also provide an additional service, therefore an additional revenue stream, therefore also meeting the profit requirement.

    (Mobile networks are where all the wireless users are going to stay using the same router, but the router itself is moving through the network. If you were to have WAPs on aircraft or trains, where you are static relative to the vehicle, but the vehicle is moving between ground stations, this is probably the way you'd want to implement it.)

    Fifth, it is possible to balance anonymity with accountability. Accountability merely requires that machines are who they claim they are and (where user identification is relevant) users are who they claim they are. It does NOT require that anyone actually possesses enough information to identify those machines or users, only that when a claim is made, it is verifiable in some way.

    We already have Kerberos for authentication, so it would seem a fairly trivial extension to use that as your authentication mechanism. The token does not reveal your identity, but it can be verified with a Kerberos server in the hierarchy used for authentication by that user, to prove that the user did identify themselves correctly.

    If that isn't good enough, use X.509 certificates at both host and user levels. Lots more money to be made there. It doesn't kill anonymity, as you can perfectly well have a certificate that doesn't say anything useful or self-incriminating. It would still be useful for accountability, though, as no two entities, no two machines and no two users should have identical certificates. At the very least, the key used to examine the certificate would be different, even if the content itself was identical.

    This would be more than good enough to ensure that Joe Bank Manager's personal checking account could not be logged into by Sammy Script-Kiddy - there's your accountability - but would not require people in politically dangerous countries (such as the US) to reveal anything that would compromise their safety, meeting a lot of the anonymity requirement.

    As for the "upgrades" cost - that's just because most providers (backbone or ISP) are too cheap to do it right the first time. Optic Fibre has been around a LONG time, and to upgrade an optic link just requires upgrading the transceivers at each end - so long as the fibre is of good enough quality. At present speeds, a single fibre can carry about 4-5 terabits per second, and typical bundles have about 20 or so fibres, giving you 100 terabits per second.

    Let's say that, when the US Government was still runnin
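    Returning to the adjustable-length addressing idea at the top of this comment, here is a toy wire format invented purely for illustration: each address carries a one-byte length, so core nodes can use short addresses and edge hosts longer ones.

      # Toy variable-length addresses: one length byte followed by that many address bytes.
      def encode(addr: bytes) -> bytes:
          if not 1 <= len(addr) <= 255:
              raise ValueError("address must be 1-255 bytes")
          return bytes([len(addr)]) + addr

      def decode(buf: bytes) -> tuple:
          """Return (address, rest of buffer)."""
          length = buf[0]
          return buf[1:1 + length], buf[1 + length:]

      backbone = encode(b"\x2a")                           # 1 byte for a core router
      edge = encode(b"\x2a\x00\x17\x42\x99\x01\x00\x05")   # 8 bytes out at the edge
      addr, rest = decode(backbone + edge)
      print(addr.hex(), rest.hex())                        # 2a 082a0017429901000005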

    • Point of interest: X.509 still requires key exchange/IKE. There is a secure exchange mechanism built on Kerberos. The RFC was implemented in KAME's racoon for IPsec.

      One day soon I hope to fsck with it.
      -Myren
      • One day soon I hope to fsck with it.

        I could say something vaguely amusing here, but I'll resist the temptation.

        However, you're absolutely right on your other point that you need a key exchange system (not necessarily IKE, but something that'll do the same job) with X.509. I'm not certain, but I think Sun's SKIP protocol supported X.509, and that definitely didn't use IKE, as Sun regarded the whole IPSec protocol as inefficient on unreliable networks.

        You are also right that Kerberos handles key exch

  • "Anything you can do all at once, you could do with incremental changes," said Robert Kahn

    /me slaps Robert Kahn upside the head with his quantum mechanics textbook

  • by QuickFox ( 311231 ) on Friday July 01, 2005 @02:24AM (#12958239)
    Define, as part of the standards, that when certain standards have been upgraded in important ways, within five years all essential infrastructure software must be upgraded so that it understands the new version.

    This should apply to essential infrastructure like routers, DNS servers, SMTP servers, and so on. If a server does not understand a protocol that has been around for five years, that's reason enough to refuse connection.

    If this becomes part of the standards, we won't have to support ancient legacy forever. When countries with languages other than English want readable domain names, we won't have to live forever with kludges like Punycode; such kludges will stay for just five years, and after that real solutions can be used instead. If/when solutions to serious problems like spam and DDoS are found and standardised, we can count on the infrastructure to support the solutions within five years. Stuff like IPv6 could spread quickly and smoothly.

    Of course, having to upgrade introduces some inconvenience and expenses. But having to support ancient legacy is also inconvenient and expensive. In spite of the upgrade inconvenience, in the long run this kind of limit should save lots of money for everyone.
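    A minimal sketch of how such a sunset rule could look at connection time; the version registry, dates, and five-year window below are all hypothetical.

      # Refuse peers whose newest protocol version was superseded more than five years ago.
      from datetime import date, timedelta

      SUNSET = timedelta(days=5 * 365)

      superseded_on = {          # hypothetical registry: version -> date its successor shipped
          "v1": date(1998, 3, 1),
          "v2": date(2003, 6, 1),
          "v3": None,            # still current
      }

      def accept(peer_versions: set, today: date) -> bool:
          """Accept only if the peer speaks some version not yet past its sunset."""
          for version, replaced in superseded_on.items():
              if version in peer_versions and (replaced is None or today - replaced < SUNSET):
                  return True
          return False

      print(accept({"v1"}, date(2005, 7, 1)))        # False: v1's five years are long up
      print(accept({"v1", "v2"}, date(2005, 7, 1)))  # True: v2 was superseded only in 2003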
  • Now, with billing! (Score:5, Insightful)

    by Animats ( 122034 ) on Friday July 01, 2005 @03:00AM (#12958377) Homepage
    Clark said he would like to see two things addressed in any replacement for the current internet. The first is a coherent security architecture. The second is a healthy economic infrastructure for network service providers, who will need a bigger piece of the pie in the new internet than the one they are getting now if they are going to help pay for building it.

    This guy must be getting support from a telco.

    Telecommunications providers hate the Internet. Not only is the Internet too cheap, it's not set up for detailed billing. The US Internet backbone cost about $1bn to build, and costs about $100 million per year to run. For something that handles over 100 million users, that's nothing. All the intelligence is in the end nodes, so telcos don't get to add "value added services" for which they can overcharge.

    What telcos want is an environment they control, like cell phones. With charges for everything from ring tones to SMS messages. That's what Clark is talking about here.

    The telcos tried this idea back in the 1980s, and it was called TP4, or "ISO 8073 COTP Connection-Oriented Transport Protocol - X.224" [univ-angers.fr]. X.224 is very much like TCP, but without the adaptive retransmit machinery to work well over unreliable links. You're supposed to run X.224 over a reasonably reliable virtual circuit provided by a telco. For which you pay by the packet, like X.25 or ISDN. Bad idea. Windows NT4 actually had support for X.224, and some older Cisco routers understand it, but it's dead.

    This is not a place we, as users, want to go.

    • The telcos tried this idea back in the 1980s, and it was called TP4,

      And what about the CompuServe, GEnie, Prodigy and (ugh) AOL days? These companies weren't owned by the telcos, but you DID have to pay for everything, including hourly connect charges (USD $6.30/hr), charges for using email (you had a set monthly limit that was free), etc. Then everything suddenly got much cheaper with the internet, albeit MUCH less efficient... High fees are something we have moved away from, however. I doubt very much that
  • by floki ( 48060 )
    I only hope they didn't forget to hire Al Gore or else this won't work.
  • ... how about a new Windows architecture (something that maintains the same 0wn35h1p).

    ... how about a new brain architecture for the masses (something that won't give out banking and PayPal passwords to every phishing email).

    We have many, many fundamental problems in our society. Most of the problems of the internet are not really caused by the internet itself, but are instead reflections of ourselves, our society, and the morons that surround us.

    But I wouldn't mind having an internet the way it was bac

  • by mcrbids ( 148650 ) on Friday July 01, 2005 @03:39AM (#12958526) Journal
    Guys, guys GUYS!

    I see many posts here about how we need to "mandate" this and "require" that and blah blah blah...

    But the Internet, by design, is laissez-faire! There is no "mandating" ANYTHING! Anybody can hook up to their neighbor, who hooks up to some guy across town, who is hooked up to a couple other folks...

    The Internet is DECENTRALIZED and OPEN. The closest it gets to mandating anything is the much-disputed RBLs. I, for example, block all email from most Asian countries - nothing personal, but it sure drops the SPAM load with virtually no complaints. But, I can't mandate what the Chinese or Koreans do with their network - I can only mandate what they do with respect to MY networks.

    The Internet is merely a commonly agreed upon set of standards for communications across disparate networks, and it's performing the task of connecting networks the world over with grace and flair.

    Don't tell me that just because Windows systems get infected in 12 minutes, the Internet is broken. Sorry. The Internet is working fantastically. It's Windows that's broken. It's not up to the task of functioning on a globally accessible network.

    So far, every significant "problem" I've heard with the Internet hasn't been with the Internet, but with the systems at its fringes. Spam. Zombies. Worms. Viruses. Exploits. All are simply side effects of a "zero friction network" as espoused by the all-knowing Bill Gates in his '90s book, "The Road Ahead", combined with systems not able to cope with the ramifications.

    Bill Gates, Larry Ellison, Scott McNealy, Linus Torvalds, and all the others are learning now what that truly means, and over the next decade or so, we'll see major advances in developing the kind of security needed to handle this frictionless network.

    In short: the Internet is doing just fine, people! It's the systems hooked up to it that have problems!
  • This would be a fun one that probably no techie and no engineer could do. It would be very, very nice if the ubiquitous they, should they begin building this thing, were to get a nice, nasty team of copyright and patent lawyers together and tame them. Next, have them attempt to build into this thing, either through patents or licensing agreements or whatever, some protection against the flood/slurry/deluge of crappy and bogus patents we've all seen over the past three years. No more "patent on pointin
  • The RIAA and MPAA get representatives in the Internet 3.0 rebuilding committee, eliminate the pesky peer-to-peer architecture in favour of regulated servers and restricted clients, and build pervasive DRM into it at the protocol level.
  • ... better, stronger, faster.
