Vint Cerf on Internet Challenges 202
chamilto0516 writes "Phil Windley, a nationally recognized expert in using information technology, drove up to the Univ. of Utah recently to hear this year's Organick Lecture by Vint Cerf, one of the inventors of the Internet. In his notes, Vint talks about 'Where is the Science in CS?' He also goes on to talk about real potential trouble spots with the Internet, and there is a bit on the Interplanetary Internet (IPN). Apparently, the flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes."
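To see why a 40-minute latency breaks TCP, a rough back-of-the-envelope calculation helps (the numbers here are illustrative assumptions, not from the lecture): a window-limited TCP sender can never exceed window size divided by round-trip time, no matter how fast the link is.

```python
# Sketch: why TCP's sliding-window flow control fails at interplanetary
# distances. A sender may have at most one window of unacknowledged data
# in flight, so throughput is capped at window / RTT.

def max_throughput(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on window-limited TCP throughput, in bytes/second."""
    return window_bytes / rtt_seconds

window = 64 * 1024          # classic 64 KiB window (no window scaling)

lan_rtt = 0.01              # ~10 ms round trip on a LAN
geo_rtt = 0.5               # ~500 ms via a geostationary satellite bounce
mars_rtt = 2 * 40 * 60      # ~40 min one-way light time -> 80 min RTT

for name, rtt in [("LAN", lan_rtt), ("GEO", geo_rtt), ("Mars", mars_rtt)]:
    print(f"{name}: at most {max_throughput(window, rtt):,.1f} bytes/s")
```

With a 64 KiB window and an 80-minute round trip, the ceiling is around 13 bytes per second regardless of link bandwidth, and that is before any retransmission timeouts fire.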
What? (Score:1, Interesting)
Software Quality (Score:4, Interesting)
Even if CS came up with a scientific solution to improve code quality, it would be an interesting exercise to see if the industry will be willing to absorb the costs associated with such a solution. Especially in an environment where end customers are well-trained to accept and deal with software quality issues.
Doesn't IPv6 fix this? (Score:2, Interesting)
But, as a practical matter, it would work better the way an FTP transfer does: stream the data in blocks and resend any missed blocks later. This would work fairly well for loss-tolerant formats like JPEG or suchlike (a good image format should be able to handle it), but real-time stop/start protocols might get glitched and would have to be replaced.
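The "stream blocks, resend the misses later" idea can be sketched roughly like this (a hypothetical toy, not any real protocol): send every block once, have the receiver report which block numbers never arrived, and retransmit only those on the next pass.

```python
# Toy block-transfer sketch: one full pass over a lossy link, then a
# single repair pass that resends only the blocks the receiver missed.

def split_blocks(data: bytes, size: int) -> list[bytes]:
    """Chop data into fixed-size blocks (last one may be short)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def transfer(blocks, drop=frozenset()):
    """Simulate sending all blocks, losing those in `drop`, then repairing."""
    received = {i: b for i, b in enumerate(blocks) if i not in drop}
    missing = [i for i in range(len(blocks)) if i not in received]
    # One round trip later (40+ minutes to Mars!), resend only the gaps.
    for i in missing:
        received[i] = blocks[i]
    return b"".join(received[i] for i in range(len(blocks)))

data = b"images from the Martian surface " * 8
blocks = split_blocks(data, 16)
assert transfer(blocks, drop={1, 5}) == data  # intact despite two lost blocks
```

The point is that nothing stalls waiting for a per-block acknowledgement; the cost of loss is one extra round trip for the whole batch, not one per block.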
Anyone for MP7? TUFF instead of TIFF?
The other question is: would this be on the same network, or, given the very small number of network nodes concerned, would it be on a separate network that we bridge to and translate as needed, buffering the data streams on each end?
Now, if you had a Martian sandstorm for a few days, that's probably not going to be that helpful, but you get the idea.
Re:What? (Score:5, Interesting)
I actually attended this lecture yesterday, and Vinton disclaimed the "father of the internet" moniker, saying that he and Bob Kahn co-designed the original TCP/IP protocol, but that that work was largely based on the ARPANET design, which was in turn based on packet radio, etc. So yes, the man himself said he was just one of a long list of contributors.
He did joke though that his son once asked if he was the "brother of the Internet".
He also commented that one of the properties of the system that he was quite happy with was the ease with which others could contribute at any level of the system, including building new application layer protocols on top of the basic protocols without needing to go and get permission from someone. People can just go out and write new protocols and build the apps to use them (e.g. BitTorrent). He said he thought that the Internet is largely where it is today because of that openness to the contributions of thousands of people.
Interplanetary TCP HOWTO (Score:4, Interesting)
Realistically, we might see a proxy architecture as follows:
1) All traffic is "queued" at an earth-bound substation. Communication is TCP-reliable to this node; transport layer acknowledgements are degraded to "message received by retransmitter" (end-to-gateway) rather than "message received by Mars" (end-to-end). Since both Earth and Mars are in constant rotation, a "change gateway" message would need to exist to route interplanetary traffic to a different satellite node (think "global handoff").
2) Transmission rates from Earth to Mars are constant, no matter the amount of data to send. Extra link capacity is consumed by large-block forward error correction mechanisms. Conceivably, observed or predicted BERs could drive minimum FEC levels (i.e. the more traffic being dropped, due to the relative positions of the Earth and Mars, the less traffic you'd be willing to send in lieu of additional error correction data).
3) Applications would need to be rewritten towards a queue mentality, i.e. the interplanetary link is conceivably the ultimate "long fat pipe". Aggressively publishing content across the interplanetary gap would become much more popular. Since much content has gone dynamic, one imagines it becoming possible to publish small virtual machines that emulate basic server side behavior within the various proxies.
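Point 1 above can be sketched in a few lines (everything here, including the gateway names, is invented for illustration): the gateway acknowledges "received by retransmitter" immediately, and a "change gateway" handoff drains its queue to another ground station as the planets rotate.

```python
# Hypothetical store-and-forward gateway: end-to-gateway acks now,
# interplanetary transmission later, plus a "global handoff" operation.

import collections

class Gateway:
    def __init__(self, name: str):
        self.name = name
        self.queue = collections.deque()

    def submit(self, message: str) -> str:
        """End-to-gateway: queue and ack immediately, long before Mars sees it."""
        self.queue.append(message)
        return f"ack: received by retransmitter {self.name}"

    def handoff(self, other: "Gateway") -> None:
        """'Change gateway': reroute queued traffic to a different station."""
        other.queue.extend(self.queue)
        self.queue.clear()

    def transmit(self) -> list[str]:
        """Drain the queue across the interplanetary link."""
        sent = list(self.queue)
        self.queue.clear()
        return sent

goldstone = Gateway("Goldstone")
canberra = Gateway("Canberra")
print(goldstone.submit("telemetry request #1"))
goldstone.handoff(canberra)   # Earth rotated; route via a different station
print(canberra.transmit())
```

The sender's TCP connection terminates at the gateway, so ordinary short-RTT flow control works on the Earth-side leg while the deep-space leg runs on its own schedule.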
You'd think all this was useless research, as there's no reason to go to Mars -- but TCP doesn't just fail when asked to go to Mars; it's actually remarkably poor at handling the multi-second lag inherent in Geosat bounces. A lot of the stuff above is just an extension of what we've been forced to do to deal with such contingencies.
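The forward error correction idea in point 2 can be shown in its very simplest form (a sketch, not what a real deep-space link would use): send one XOR parity block per group, so any single lost block in the group can be rebuilt on the far side without waiting an 80-minute round trip for a retransmission.

```python
# Simplest possible FEC: one XOR parity block per group of equal-length
# data blocks. Any single missing block is the XOR of all the survivors.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte blocks together."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data_blocks = [b"alpha___", b"bravo___", b"charlie_"]
parity = xor_blocks(data_blocks)          # transmitted alongside the data

# Suppose block 1 is lost in transit; XOR of everything else recovers it.
survivors = [data_blocks[0], data_blocks[2], parity]
assert xor_blocks(survivors) == data_blocks[1]
```

Trading link capacity for parity like this is exactly the "less traffic in lieu of more error correction" knob described above: the worse the predicted error rate, the more parity blocks per group you'd send.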
--Dan
The only science is debugging code (Score:4, Interesting)
The only time this really happens with computers is in troubleshooting.
Programmers may think in a logical or analytical way, but that's not science. And it's a good thing, too. If programmers weren't allowed to make stuff up as they went along but instead had to use the scientific method for everything they did, not many programs would be completed.
progress? Sorry, but we're working backwards (Score:3, Interesting)
It's definitely a joke -- and the real joke is that it can't even be characterized as "progress". The programming of today is worse than it was a couple of decades ago and is declining steadily. I have talented friends who have dropped out of the industry in disgust over what passes for programming nowadays.
Maybe Vint Cerf should be talking about the evils of "computer science" being taught around Java, or the fact that many CS programs have become little more than glorified job training.
Re:Where is the "science" in CS (Score:1, Interesting)