Vint Cerf on Internet Challenges 202
chamilto0516 writes "Phil Windley, a nationally recognized expert in using information technology, drove up to the Univ. of Utah recently to hear this year's Organick Lecture by Vint Cerf, one of the inventors of the Internet. In his notes, Vint talks about, 'Where is the Science in CS?' He also goes on to talk about real potential trouble spots with the Internet, and there is a bit on Interplanetary Internet (IPN). Apparently, the flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes."
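A quick back-of-the-envelope sketch of why TCP flow control breaks down at those latencies: a sender can have at most one window of unacknowledged data in flight, so throughput is bounded by window size divided by round-trip time. The figures below (a classic 64 KiB window, a 40-minute one-way delay) are illustrative assumptions, not measurements from the talk:

```python
# TCP throughput is capped at (window size / RTT): the sender must stop
# and wait for acknowledgements once a full window is in flight.

def max_tcp_throughput(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput in bytes/sec for a given window and RTT."""
    return window_bytes / rtt_seconds

window = 65535  # classic TCP window without window scaling (RFC 793 era)

# Terrestrial link, 80 ms round trip: roughly 800 KB/s -- fine.
print(max_tcp_throughput(window, 0.08))

# Earth-Mars with a 40-minute one-way delay: RTT = 80 min = 4800 s,
# which caps throughput at under 14 bytes per second.
print(max_tcp_throughput(window, 80 * 60))
```

Whatever the actual link bandwidth is, an unmodified TCP sender simply cannot use it at interplanetary round-trip times.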
Well, yeah. (Score:4, Informative)
One of my favorite kernel comments.... (Score:5, Informative)
well thank goodness we have the internets... (Score:1, Informative)
Re:let's get two out of the way (Score:5, Informative)
To quote a site that bothers to keep the quote around for Google's sake: And he did take initiative in creating the Internet. In fact, he pushed funding for it through a congress that was convinced that anything attached to the military (and keep in mind that NSF and DARPA *are* connected to the military) was "the enemy". I heard Gore speak back then, and he was passionate about the creation of a national research network and how important it was.
The Internet is here with us today as much because of the funding as because of the science, and Gore was the money man.
Personally, I find some of his politics a bit extreme, but like or hate liberal politics, you have to admit that the media dropped the ball by not calling Bush on this.
Re:Interplanetary TCP?? (Score:1, Informative)
Why isn't this modded funny? UDP is even worse than TCP: UDP provides no guarantees for message delivery and a UDP sender retains no state on UDP messages once sent onto the network. (For this reason UDP is sometimes expanded to "Unreliable Datagram Protocol".)
Source [wikipedia.org]
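To see the "fire and forget" semantics concretely, here's a minimal loopback sketch using Python's socket module. The sender performs no handshake, gets no acknowledgement, and keeps no state about the datagram after `sendto()` returns (loopback happens to be reliable, so the datagram does arrive here):

```python
# Minimal demonstration of UDP's stateless, unacknowledged delivery.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"telemetry", ("127.0.0.1", port))  # no handshake, no retries

data, addr = recv.recvfrom(1024)
print(data)

send.close()
recv.close()
```

Over a lossy link that datagram could silently vanish and neither side would ever know, which is exactly the trade-off being discussed.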
Vint Cerf: Value of the net vs. cost of the net (Score:5, Informative)
He talked extensively about how the layered architecture of the internet poses a serious challenge to business models. The fact that any application can communicate through any physical medium (of sufficient bandwidth) was great for interoperability, but hard on businesses that provide the physical layer.
The problem is that all of the value is in the application layer -- people want to run software, download movies, chat with friends, etc. Whether the data flows on copper, fiber, or RF is irrelevant to the end-user and the layered architecture ensures that this is irrelevant. In contrast, a lot of the cost is in that "irrelevant" physical layer -- the last mile is still very expensive (we can hope WiMax reduces this problem). This gulf between cost and value forces physical infrastructure providers into being commodity providers facing severe cost competition. If the end-user doesn't care how their data is carried, then they tend to treat bandwidth as a commodity.
I think he was wearing his MCI hat at the time of this talk and was influenced by the beginnings of the dot-com crash. MCI's subsequent bankruptcy was not surprising. Understanding this issue explains why telecom companies don't want municipal wifi and insist that you only network your cellphone through their networks. The only way to make infrastructure pay is to bind the high-value software application layer to the high-cost hardware layer. But this strategy violates the entire layered model and enrages consumers.
Especially for telemetry data (Score:4, Informative)
Over a 100Mb LAN the difference is effectively nothing, but once you involve slow and lossy networks the difference is considerable. The impact is great enough over terrestrial radio nets and is a zillion times worse interplanetary.
Let's say you have a rover that sends a position message once a second. What you're really interested in, typically, is the most up to date info. If you're using TCP, then you won't get the up to date info until the retries etc. have been done to get the old info through (i.e. it's noon, but the noon data is not being sent because we're still doing the resends to get the 8 am data through). This means that the up to date info gets delayed. With UDP the lost data is just ignored and the up to date data arrives when it should.
Of course ftp might still be a useful way to shift large files etc., but often the UDP equivalents (e.g. tftp instead of ftp) will be more appropriate.
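The "latest value wins" pattern described above can be sketched as a receiver that tags each datagram with a sequence number and silently drops anything older than what it has already seen. The function and packet names here are made up for illustration:

```python
# Keep only the newest telemetry: stale or duplicate datagrams are dropped
# instead of being retransmitted, so fresh data is never held up by old data.

def latest_only(datagrams):
    """Yield (seq, payload) pairs, skipping anything older than already seen."""
    newest = -1
    for seq, payload in datagrams:
        if seq > newest:        # stale/duplicate packets are simply ignored
            newest = seq
            yield seq, payload

# Out-of-order arrival: the delayed 8 am packets show up after the noon
# data and are discarded rather than retried.
arrivals = [(1, "pos@8:00"), (5, "pos@12:00"), (2, "pos@8:01"), (6, "pos@12:01")]
print(list(latest_only(arrivals)))
# [(1, 'pos@8:00'), (5, 'pos@12:00'), (6, 'pos@12:01')]
```

This is the opposite trade-off from TCP, which would have delayed the noon position until every 8 am packet was successfully retransmitted.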
Re:What? (Score:2, Informative)
Re:Someone correct me if this is wrong (Score:5, Informative)
Re:Latency over lightyears... (Score:5, Informative)
Latency is measured in units of time. Lightyears are a measure of distance.
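The conversion between the two is just distance divided by the speed of light; the distances below are rough round figures for illustration:

```python
# One-way, speed-of-light latency for some rough astronomical distances.

C = 299_792_458  # speed of light in vacuum, m/s

def one_way_latency_s(distance_m: float) -> float:
    """Minimum one-way latency in seconds over a given distance."""
    return distance_m / C

# Earth-Moon, ~384,400 km: a bit over a second.
print(one_way_latency_s(3.844e8))

# Earth-Mars near maximum separation, ~400 million km: ~22 minutes one way,
# so a round trip approaches the 40-plus-minute figure from the talk.
print(one_way_latency_s(4.0e11) / 60)
```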
TCP's no good using standard broadcast methods
Huh? If I knew what you meant to say, it'd be easier to show you were wrong...
We need something that'll be as fast as fiber, but will stretch way way longer in distance.
So, like, line-of-sight laser communication?
Current radio's a broadcast. Can't do that, especially with package leakage.
How do you think we're communicating with the Mars rovers now? Or other planetary explorers?
I believe there were some experiments in quantum transmission of data, in which an electron was split and one half sent to Munich, the other sent to Venice, and transmissions were near-instantaneous.
You can instantaneously determine what the other side received, but no information can be transmitted this way.
I see you have a low user-id, and therefore have learned to get modded up for saying stuff that is nonsensical and wrong. I must admit I'm impressed. I earn all my mod points the hard way.
Re:Latency over lightyears... (Score:4, Informative)
Re:let's get two out of the way (Score:5, Informative)
Absolutely not. Gore entered Congress in 1977, well after any point that could reasonably be construed as the "creation" of the ARPAnet/Internet. It's true that he never claimed to have "invented the Internet" but what he did say is still completely untrue.
Re:let's get two out of the way (Score:4, Informative)
According to Cerf, "The first demonstration of the triple network Internet took place in July 1977". He refers to this event as the "Birth of the Internet". Prior to that, researchers could send messages but had to be very familiar with the underlying technology.
In a September 2000 email, Cerf and Kahn give Al Gore much credit in the development of the Internet: http://www.mintruth.com/wiki/index.php?Al%20Gore%
Two excerpts:
Re:One of my favorite kernel comments.... (Score:2, Informative)
TCP: September 1981. Standard 7 [rfc-editor.org]/RFC 793 [rfc-editor.org] (replaces RFC 761 [rfc-editor.org])
FTP: October 1985. Standard 9 [rfc-editor.org]/RFC 959 [rfc-editor.org] (replaces RFC 765 [rfc-editor.org])
Re:Need wormholes (Score:2, Informative)
Basically they suggest that it opens up the possibility of wormhole cameras which can be used to view what's happening anywhere at any time without anyone's knowledge. Privacy is completely destroyed and civilization, um, takes a while to get over that fact. Later in the book other corollary results show up which are even more far out.
It's not a great book in terms of its plot, but it's classic SF
Re:Science out, Engineering in (Score:4, Informative)
My computer has about a billion bits of RAM, even if on average 90% of them are zero.
Re:Doesn't IPv6 fix this? (Score:3, Informative)
TTL (Time To Live) actually has nothing to do with time. It is a number in the packet header which is decremented each time the packet passes through a router. When the TTL field reaches (IIRC) 0 the packet is dropped. You can set the TTL in IPv4 if you want to; normally it is done when dealing with multicast traffic so that the packets don't travel too far out of the network (multicast routing protocols also have an impact on this).
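The hop-count behaviour can be modelled in a few lines; the router names below are hypothetical, and this is only a toy model of the decrement-and-drop rule, not real forwarding logic:

```python
# Toy model of IPv4 TTL: every router decrements the counter, and the
# packet is discarded as soon as it reaches zero.

def route(ttl: int, path: list) -> str:
    """Return where the packet dies, or 'delivered' if it survives the path."""
    for hop in path:
        ttl -= 1
        if ttl <= 0:
            return "dropped at " + hop + " (TTL expired)"
    return "delivered"

path = ["r1", "r2", "r3", "r4"]
print(route(64, path))  # delivered
print(route(3, path))   # dropped at r3 (TTL expired)
```

This decrement-and-drop rule is also what traceroute exploits: it sends probes with TTL 1, 2, 3, ... and records which router reports each expiry.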