The Internet Networking Technology

Researchers Scheming to Rebuild Internet From Scratch 254

BobB writes "Stanford University researchers have launched an initiative called the Clean Slate Design for the Internet. The project aims to make the network more secure, have higher throughput, and support better applications, all by essentially rebuilding the Internet from scratch. From the article: 'Among McKeown's cohorts on the effort is electrical engineering Professor Bernd Girod, a pioneer of Internet multimedia delivery. Vendors such as Cisco, Deutsche Telekom and NEC are also involved. The researchers already have projects underway to support their effort: Flow-level models for the future Internet; clean slate approach to wireless spectrum usage; fast dynamic optical light paths for the Internet core; and a clean slate approach to enterprise network security (Ethane).'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by wuie ( 884711 ) on Thursday March 15, 2007 @03:04PM (#18366043)
    "There's never time to do it right, but always time to do it over."
  • by kaizenfury7 ( 322351 ) on Thursday March 15, 2007 @03:09PM (#18366117)
    ....and with DRM baked in.
  • by Colin Smith ( 2679 ) on Thursday March 15, 2007 @04:04PM (#18366875)
    Which doesn't talk to anything.

    If it's going to be useful, it has to talk to everything, that's the whole point of the network effect.

  • Re:Interesting (Score:3, Interesting)

    by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Thursday March 15, 2007 @04:17PM (#18367107)
    For example, I am interested in the question: how would Unix work differently if extended attributes had been available in all Unix filesystems from the beginning? Tradition often holds back innovation, I feel.

    Fully agreed. For instance, NTFS supports alternate data streams, which are essentially really huge extended attributes. (They're a generalized version of HFS's resource and data forks. A number of other filesystems now support similar things too, such as HFS+, ZFS, and Reiser4, in a slightly different manner.)

    But the problem is that no one uses them, because nothing was built to work with them. If you upload a file with alternate streams, you lose the streams. If you copy a file to a floppy (yeah, I know) or a USB drive, you lose the streams. If you dual-boot and copy the file to ext3, you lose the streams. If you say 'cat file1 > file2', which in the Unix model is the same as copying a file, you lose the streams. The same applies to extended attributes, though maybe slightly less. (For instance, I don't know whether copying a file between two ext3 filesystems loses them or not.)

    It's very frustrating, because there are a lot of really neat things that you could envision doing with this sort of metadata, but no one has support for it.

    So I've wondered almost the exact same thing myself... if in 1970, someone added extended attributes/streams to Unix, what would it look like today?

    (Of course, prompted by the spring thaw that's in progress, I also wonder about things like "what would the world be like if water's heat of fusion were a quarter of what it is?"...)
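    The 'cat file1 > file2' failure mode above can be made concrete with a toy Python model (the class and function names are invented for illustration, not any real filesystem API): a file is data plus named attributes, and any tool that only reads the byte stream silently drops the attributes.

    ```python
    # Toy model of why alternate streams / extended attributes get lost:
    # tools that only know about the "data" fork drop everything else.

    class File:
        """A file modeled as a data fork plus named extended attributes."""
        def __init__(self, data, attrs=None):
            self.data = data
            self.attrs = dict(attrs or {})

    def cp_aware(src):
        # A metadata-aware copy preserves both the data and the attributes.
        return File(src.data, src.attrs)

    def cat_redirect(src):
        # 'cat file1 > file2' reads only the byte stream, so the
        # attributes never make it into the new file.
        return File(src.data)

    orig = File(b"hello", {"user.origin": "https://example.com/"})
    assert cp_aware(orig).attrs == orig.attrs
    assert cat_redirect(orig).attrs == {}   # streams silently dropped
    assert cat_redirect(orig).data == orig.data
    ```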
  • by Inmatarian ( 814090 ) on Thursday March 15, 2007 @04:26PM (#18367237)
    http://en.wikipedia.org/wiki/Internet_Mail_2000 [wikipedia.org]

    The name is crappy, but the concept is a really good start, and it's a shame it never caught on. Basically, an email's subject and body are split: the subject is sent to the receiver, and the body is stored on the sender's server. When the receiver gets the subject notification, they connect to the sender's server and download the body.

    The point of this strange scheme is to crush spammers under the weight of their own To: lists, since they would face millions of incoming connections. The burden of storage falls on the sender, not the receiver.

    That should be one of the technologies Web 11.0 should implement. Somebody call up Al Gore and tell him this.
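    The pull model the comment describes can be sketched in a few lines of Python (the classes and method names below are invented for illustration and are not from the Internet Mail 2000 specification): the sender's server keeps the body and only pushes a notification, so each recipient costs the sender a stored body and an incoming fetch.

    ```python
    # Hypothetical sketch of the Internet Mail 2000 pull model:
    # the body stays on the sender's server; the receiver gets only
    # a subject notification and fetches the body on demand.

    class SenderServer:
        def __init__(self):
            self._bodies = {}
            self._next_id = 0

        def send(self, receiver, subject, body):
            msg_id = self._next_id
            self._next_id += 1
            self._bodies[msg_id] = body          # storage burden stays here
            receiver.notify(self, msg_id, subject)

        def fetch(self, msg_id):
            return self._bodies[msg_id]          # one connection per recipient

    class Receiver:
        def __init__(self):
            self.inbox = []                      # only (server, id, subject)

        def notify(self, server, msg_id, subject):
            self.inbox.append((server, msg_id, subject))

        def read(self, i):
            server, msg_id, subject = self.inbox[i]
            return subject, server.fetch(msg_id)

    s, r = SenderServer(), Receiver()
    s.send(r, "Hello", "The full body stays on the sender's server until read.")
    assert r.read(0) == ("Hello", "The full body stays on the sender's server until read.")
    ```

    A spammer in this model pays twice: once to store a body per campaign, and again to serve a fetch for every recipient who opens the message.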
  • by bcrowell ( 177657 ) on Thursday March 15, 2007 @05:40PM (#18368209) Homepage

    I agree completely. However, what this article is talking about is redesigning the lowest-level workings of the Internet and its protocols, not relatively high-level stuff like e-mail. IMO what's really broken are the high-level protocols, e-mail in particular. Another thing that, with hindsight, is clearly a mistake is HTTP, XMLHttpRequest, and all that. It's clear now that many people want to run something like a GUI application through something like a web browser, but the protocols were never designed to allow that. Rather than putting a bag on the side of HTTP to allow Ajax apps, the right thing to do would have been to leave HTTP alone and create a completely different application and protocol to do what people are now trying, painfully, to do with browsers and HTTP.

    Another problem is the creep of proprietary formats for audio and video. MP3 is still heavily patent-encumbered, and the licensing terms don't make it legal for a Linux distro, say, to distribute as many copies of an MP3 library as it likes. Video is even worse, because the closest thing to an open codec is Theora, and Theora doesn't work well enough to be practical. What has really turned out to be popular is wrapping videos in Flash apps (the way YouTube does), which piles proprietary cruft on top of proprietary cruft.

    We have a whole bunch of technologies that do similar things:

    • Java applets are free as in everything (now that Sun's Java implementation is GPL'd), but users hate them because it takes so long to start up a VM. The Java applet security model is also too tight for some purposes.
    • Ajax is a botch. It's way too hard to get an Ajax app to work on all browsers in a way that's consistent with what people expect from a GUI app. For example, where I work, we have a new Ajax app we're required to use for filing certain paperwork, and it doesn't allow cut and paste. The solution that's been proposed is that we print the old documents on paper and send them to a summer intern, who will re-key them.
    • Flash is theoretically open in many ways, but in reality it depends on far too many patent- or license-encumbered pieces to be appropriate for OSS.
  • by alexfromspace ( 876144 ) on Thursday March 15, 2007 @07:19PM (#18369279) Homepage Journal
    When I looked at the title of the article I had a strong surge of hope, followed by a sudden concern for job security and visions of decreasing demand for highly skilled professionals. Well, after looking over the white paper I was feeling completely secure, and once again disappointed.

    I see most of the propositions as things that need to get done, but overall it looks like just another patch, although a huge one. The majority of it deals with reevaluating the design of the physical-layer components and their integration; grandiose as that is, the rest looks like a list of bugs that need to be fixed.

    Seriously, in order to rebuild the Internet from scratch, most if not all of the software dealing with networking would have to be rewritten to go from the 5-layer model to the more proper 7-layer model. That would mean rewriting huge chunks of Linux, Unix, and Apache, throwing out billions of lines of code, and eventually seeing a significant decline in demand for both hardware and software. On the positive note, it might also cripple windoze, dealing it a death blow.

    It is nice to see that Stanford is at least willing to reexamine the subject, since we pretty much have them to thank for being stuck with 5 layers :), ouch :)!

  • by TropicalCoder ( 898500 ) on Thursday March 15, 2007 @10:26PM (#18370751) Homepage Journal

    I found the concept of rebuilding the Internet from scratch quite exciting. Now that we have some thirty years of experience with the old one, what a difference we could make with a new one, armed with a much better understanding of how to build a network that can sustain continuing evolution into the future.

    There are a few essential things missing from the Stanford proposal. I didn't see anything to suggest that they are looking for this to be a truly international collaboration; if it isn't, that would be a very short-sighted omission. Also needed is the inclusion of social scientists capable of making value judgments about how the proposed new Internet can encourage social inclusion and break down the digital divide, and of political scientists who can suggest how it can enhance democracy and international harmony.

    Obviously, as the article stated, there will be resistance from current stakeholders who depend on the Internet remaining as it is. Advocates of net neutrality are understandably concerned, but it doesn't have to be the way they imagine. Imagine every packet having fields in the header that indicate its particular needs, whether that is guaranteed delivery latency, low jitter, or priority level (even varying packet sizes may be useful), with every packet priced accordingly. Those of you who download entire movies via BitTorrent would be able to save money by just dropping the packet delivery priority. Really, if you want a certain movie, it usually doesn't matter whether you get it today, tomorrow, or next week. Imagine if you could set the priority, and the corresponding price per packet, so low that delivery takes a whole week but costs you only pennies.

    The thing is, the current Internet IS broken. The article states that current economics can't sustain it as it is, without going into much detail. They do offer as evidence, however, that six of the seven biggest ISPs have had to restructure in an attempt to sustain profitability. Our society (and, more to the point, our economy) grows more dependent on the Internet day by day, but we dare not depend on it as we do. In its current state, it is just too vulnerable. It seems quite possible that some country could declare war and launch endless DoS and other attacks to such a degree that it could cripple our economy.

    Imagine if our telephones worked the way the Internet works now. Over 90% of the phone calls we received would be somebody trying to sell us something. We would get calls from people in Nigeria asking our help in reclaiming fortunes. When we called our bank, we might actually end up talking to a phisherman trying to steal our money without realizing it. There would be periods when we simply couldn't call out because of endless incoming calls in a denial-of-service attempt. I am sure many readers could take this analogy a long way, but I have made my point. In my opinion, only good can come from the Stanford research, if they are open to broader input.
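    The per-packet priority-and-pricing idea a few paragraphs up might be sketched like this in Python; the header fields and the pricing rule are purely hypothetical, invented here to illustrate the mechanism, not taken from the Stanford proposal.

    ```python
    # Hypothetical QoS header: each packet declares its delivery needs,
    # and price scales with the priority requested. All field names and
    # the pricing formula are made up for illustration.

    import struct

    # priority (0-255), jitter class, max latency in ms, payload length
    HEADER = struct.Struct("!BBHI")

    def make_packet(payload: bytes, priority: int, jitter_class: int,
                    max_latency_ms: int) -> bytes:
        header = HEADER.pack(priority, jitter_class, max_latency_ms, len(payload))
        return header + payload

    def price(packet: bytes, base_cents_per_kb: float = 0.01) -> float:
        priority, _, _, length = HEADER.unpack_from(packet)
        # Bulk traffic (priority 0) is nearly free; interactive,
        # latency-sensitive traffic pays a multiple of the base rate.
        return base_cents_per_kb * (length / 1024) * (1 + priority)

    # A BitTorrent-style bulk packet vs. a voice-style packet of equal size:
    bulk  = make_packet(b"x" * 1024, priority=0, jitter_class=0, max_latency_ms=60000)
    voice = make_packet(b"x" * 1024, priority=7, jitter_class=1, max_latency_ms=20)
    assert price(voice) == 8 * price(bulk)   # pay 8x for urgency
    ```

    Under a scheme like this, dropping a movie download to priority 0 is exactly the "takes a week but costs pennies" trade-off the comment imagines.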
