The Internet Science

Vint Cerf on Internet Challenges

chamilto0516 writes "Phil Windley, a nationally recognized expert in using information technology, drove up to the Univ. of Utah recently to hear this year's Organick Lecture by Vint Cerf, one of the inventors of the Internet. In his notes, Vint asks, 'Where is the Science in CS?' He also goes on to talk about real potential trouble spots with the Internet, and there is a bit on the Interplanetary Internet (IPN). Apparently, the flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes."
  • by jarich ( 733129 ) on Wednesday April 20, 2005 @06:20PM (#12297683) Homepage Journal
    Apparently, the flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes.

    Well... Duh!

    I just assumed everyone ~knew~ we'd be using UDP between planets...

    Sheesh... do I have to send a memo about ~everything???

    • Wow, so we'll finally have a use for TFTP after all!
    • by EmbeddedJanitor ( 597831 ) on Wednesday April 20, 2005 @06:47PM (#12297922)
      tcp's policy (and hence ftp's) is to get every byte through byte-perfect and in sequence, and it will retry until it gets there. udp just throws packets out and hopes they get there.

      Over a 100Mb LAN the difference is effectively nothing, but once you involve slow and lossy networks the difference is considerable. The impact is great enough over terrestrial radio nets and is a zillion times worse interplanetary.

      Let's say you have a rover that sends a position message once a second. What you're really interested in, typically, is the most up-to-date info. If you're using tcp, then you won't get the up-to-date info until the retries etc. have been done to get the old info through (i.e. it's noon, but the noon data is not being sent because we're still doing the resends to get the 8 am data through). This means that the up-to-date info gets delayed. With udp the lost data is just ignored and the up-to-date data arrives when it should.

      Of course ftp (or something like it) might still be a useful way to shift large files etc., but often the udp equivalents (eg. tftp instead of ftp) will be more appropriate.
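
      A minimal sketch of that "latest reading wins" idea (an editorial illustration in Python, not from the comment above; the address, port, and message format are made up):

      import json
      import socket
      import time

      GROUND_STATION = ("127.0.0.1", 9999)   # hypothetical address for the sketch

      def rover_send(sock, seq):
          """Fire-and-forget: one position fix per tick, no retransmission."""
          fix = {"seq": seq, "timestamp": time.time(), "x": 1.0 * seq, "y": 2.0 * seq}
          sock.sendto(json.dumps(fix).encode(), GROUND_STATION)

      def ground_station():
          """Keep only the newest fix; lost or stale datagrams are simply ignored."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind(GROUND_STATION)
          latest = None
          while True:
              data, _ = sock.recvfrom(4096)
              fix = json.loads(data)
              if latest is None or fix["seq"] > latest["seq"]:
                  latest = fix   # newer data replaces old; no retries, no head-of-line blocking

      There is no head-of-line blocking here: a dropped fix never delays the ones that follow it, which is exactly the property the comment describes.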

    • Al Gore?
  • Awful (Score:3, Insightful)

    by erick99 ( 743982 ) <homerun@gmail.com> on Wednesday April 20, 2005 @06:22PM (#12297702)
    What an incredibly poorly written article. There was good content but it was like jogging through a field of boulders......
    • I think you're expecting too much. I've tried posting write-ups of talks before, and even if you take notes furiously, it's going to sound like a disorganized mess.

      I'm a UofU student, and had planned to go to this lecture. Something came up. So I'm thrilled that someone took the time to do this.
  • Well, yeah. (Score:4, Informative)

    by Lally Singh ( 3427 ) on Wednesday April 20, 2005 @06:22PM (#12297706) Journal
    TCP assumes anything over 2 minutes is a lost packet.
  • by Beolach ( 518512 ) <beolach AT juno DOT com> on Wednesday April 20, 2005 @06:23PM (#12297719) Homepage Journal
    /*
    * [...] Note that 120 sec is defined in the protocol as the maximum
    * possible RTT. I guess we'll have to use something other than TCP
    * to talk to the University of Mars.
    * PAWS allows us longer timeouts and large windows, so once implemented
    * ftp to mars will work nicely.
    */
    (from /usr/src/linux/net/inet/tcp.c, concerning the maximum RTT [round-trip time])
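
    Side note (an editorial sketch, not part of the comment above): on Linux you can stretch how long a TCP socket keeps retrying unacknowledged data, e.g. with the TCP_USER_TIMEOUT socket option, but the retransmission timer itself is still capped at roughly that 120-second figure, so a multi-minute RTT never produces a timely acknowledgement. The timeout value below is purely illustrative.

    import socket

    # TCP_USER_TIMEOUT is 18 on Linux; fall back to the raw value if this
    # Python build doesn't expose the constant.
    TCP_USER_TIMEOUT = getattr(socket, "TCP_USER_TIMEOUT", 18)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Keep retrying unacked data for up to an hour before aborting the
    # connection (value in milliseconds).  This changes when TCP gives up,
    # not how quickly it expects acknowledgements to come back.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_USER_TIMEOUT, 60 * 60 * 1000)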
  • Vint Cerf? (Score:3, Funny)

    by Anonymous Coward on Wednesday April 20, 2005 @06:24PM (#12297724)
    It's a person? I thought it was a number at first.
  • by La Camiseta ( 59684 ) <(me) (at) (nathanclayton.com)> on Wednesday April 20, 2005 @06:27PM (#12297752) Homepage Journal
    Apparently, the flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes.

    That's what subspace communication is for. I would hope that a geek of his caliber has at least watched some Star Trek.
  • by G4from128k ( 686170 ) on Wednesday April 20, 2005 @06:43PM (#12297890)
    I heard Vint Cerf speak at an e-business conference (remember when those were popular?).

    He talked extensively about how the layered architecture of the internet poses a serious challenge to business models. The fact that any application can communicate through any physical medium (of sufficient bandwidth) was great for interoperability, but hard on businesses that provide the physical layer.

    The problem is that all of the value is in the application layer -- people want to run software, download movies, chat with friends, etc. Whether the data flows on copper, fiber, or RF is irrelevant to the end-user, and the layered architecture ensures that it stays irrelevant. In contrast, a lot of the cost is in that "irrelevant" physical layer -- the last mile is still very expensive (we can hope WiMax reduces this problem). This gulf between cost and value forces physical infrastructure providers into the position of being commodity providers with severe cost competition. If end-users don't care how their data is carried, then they tend to treat bandwidth as a commodity.

    I think he was wearing his MCI hat at the time of this talk and was influenced by the beginnings of the dot-com crash. MCI's subsequent bankruptcy was not surprising. Understanding this issue explains why telecom companies don't want municipal wifi and insist that you only network your cellphone through their networks. The only way to make infrastructure pay is to bind the high-value software application layer to the high-cost hardware layer. But this strategy violates the entire layered model and enrages consumers.
    • Good point. Layered networks in the end all result in one way of billing: a fixed fee for the connection and data, with maybe some differentiation in the amount of oversubscription. Say everybody would get a 100mbit line, 1:10 oversubscribed, but companies could also choose 1:2, 1:1 and 1Gbit or 10Gbit lines with appropriate pricing. The most cost-effective way of building networks would then be one network operator and multiple service providers on that network. This might however lead to a lack of incentive
  • by MerlynEmrys67 ( 583469 ) on Wednesday April 20, 2005 @06:44PM (#12297900)
    So, this guy's claim to fame is that he worked at Excite@Home and was the CIO of the state of Utah...
    Well, and maybe having his own website up there at phil.whendley.com.

    Seems kind of far from a nationally recognized expert to me. I'd never heard of him - why should I associate his name with a talk that Vint Cerf gave, to which this guy apparently added no value other than driving there and listening?

  • Software Quality (Score:4, Interesting)

    by nokiator ( 781573 ) on Wednesday April 20, 2005 @06:44PM (#12297901) Journal
    It is rather amazing that there appears to be a consensus among industry experts that there has not been any improvement in code quality over the past 30 years or so, despite the development of a vast number of new tools and languages. It is true that the size and scope of the average application has grown by leaps and bounds. But most likely, the primary contributing factor to this kind of quality problem is the prevalent time-to-market pressure in the software industry, which is typically coupled with severe underestimation of the time and resources required for projects.

    Even if CS came up with a scientific solution to improve code quality, it would be an interesting exercise to see whether the industry would be willing to absorb the costs associated with such a solution - especially in an environment where end customers are well-trained to accept and deal with software quality issues.

    • Even if CS came up with a scientific solution to improve code quality, it would be an interesting exercise to see if the industry will be willing to absorb the costs associated with such a solution.

      I think they would, if it were cost effective. Industry spends tons of money and wastes tons of time on "process" that I'm sure they'd rather spend on other stuff.

    • The volume of code being written and the number of programmers writing it have also grown by leaps and bounds. This means that the average intelligence of those programmers has necessarily decreased: we have people writing code today who would have been driving trucks thirty years ago, being managed by people who would have been supervising mailrooms. When this is taken into consideration, it's amazing that code quality hasn't decreased more than it has.
    • by tsotha ( 720379 ) on Wednesday April 20, 2005 @09:15PM (#12298925)
      It is rather amazing that there appears to be a consensus among industry experts that there has not been any improvement in code quality over the past 30 years or so despite the development of a vast number of new tools and languages.

      I've always assumed this was a variation of "In my day, we had to walk 10 miles through the snow just to get the mail..." I've been in this business for 18 years or so, and while I don't think the actual code is any more clever than it used to be, the expectations in terms of time-to-market and quality have definitely changed.

      When I started slinging code you could release business software with no GUI and still compete. You could release software that didn't "play nice" with other applications. You could require users to load special drivers and put arcane commands in their OS configuration. There is simply a larger set of features that have become mandatory, i.e., things you have to have to pass the laugh-test. You may call it bloat, but the fact is I can't remember the last time I cracked a manual - my expectation is that the software is lousy if I can't install and operate it without a manual.

      I don't see the quality changing any time soon. You can never completely test a non-trivial application, and finding those last couple of esoteric bugs incurs an enormous cost. Would you really be willing to pay double the price for, say, MS Office if they removed half the remaining bugs? I wouldn't, especially if I can work around the problems.

      • You may call it bloat, but the fact is I can't remember the last time I cracked a manual - my expectation is that the software is lousy if I can't install and operate it without a manual

        Interestingly, if I can't RTFM _before_ I use software, I call it lousy. On the other hand, that may be due to the fact that I do complex things like multi-million-user email setups rather than trivial desktop use (word processing, file and print sharing, etc.).
      • Would you really be willing to pay double the price for, say, MS Office if they removed half the remaining bugs?

        Oh, hell yes. My clients pay more than the acquisition cost of each copy of Office in support costs.

        Why, just the other day I had an Outlook install where Outlook decided to corrupt its PST file, and the CxO lost quite a bit of productivity until I could get in there to salvage it and wait for it to rebuild, as it was close to the 1.82GB 'limit'.

        That little event cost them as much as Office did,
        • Oh, hell yes. My clients pay more than the acquisition cost of each copy of Office in support costs.

          Sure. But some of the support costs are fixed. Remember, I was comparing the current product at the current price with a product that has half the bugs for twice the price. You still need support, just less of it. Lots of small and medium companies have one person dedicated to this task, so they won't be able to realize much in the way of cost savings if Office gets less buggy. And companies pay some non-trivial cost

  • by kid_wonder ( 21480 ) <<slashdot> <at> <kscottklein.com>> on Wednesday April 20, 2005 @06:47PM (#12297921) Homepage
    40 minutes = 2400 seconds
    Speed of light = 299,792.458 kilometers per second
    Distance from Earth to Mars: 55,700,000 km (minimum) to 401,300,000 km (maximum)

    Time of travel at the speed of light to Mars (at maximum distance): 401,300,000 / 299,792.458 = ~1339 seconds

    Since Mars is supposedly the first place farther away than the Moon that we're likely to go, it seems that we are fine for now.

    Right? Or is there not a way to send data in the form of light, or do radio waves travel slower than light?

    Anyway, someone correct me here
    • "Radio waves are a kind of electromagnetic radiation, and thus they move at the speed of light."

      Just found that on a web site so it must be true
    • by rewt66 ( 738525 ) on Wednesday April 20, 2005 @06:59PM (#12298023)
      I believe that TCP requires an acknowledgement that the other end of the link received the packet. So, using your numbers, that would be 1339 * 2 = 2678 seconds, which is 44.63 minutes (40 minutes in round figures).
    • All electromagnetic radiation travels at the same speed. Different signals are placed into groups like radio and visible light based on their frequency spectra (that is, how the signal varies with time). It's just a convenience; physically they're still more or less the same thing.
    • You're probably spot-on. I don't really care to verify the figures, but yes, radio waves travel at the same speed, wavelength and other minute details aside (does that have an effect? whatever.. short response). Latency is round-trip time, though... so value * 2 (after all, ping is the time between sending a packet and getting the reply BACK). Roughly 40 minutes.
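
    For the curious, a quick back-of-the-envelope sketch of the numbers in this thread (an editorial illustration in Python, not part of any comment; the distances are the rough figures quoted above):

    # Rough one-way and round-trip light time between Earth and Mars,
    # using the approximate distances quoted in the parent comment.
    SPEED_OF_LIGHT_KM_S = 299_792.458
    DISTANCES_KM = {"closest approach": 55_700_000, "maximum separation": 401_300_000}

    for label, km in DISTANCES_KM.items():
        one_way_min = km / SPEED_OF_LIGHT_KM_S / 60
        print(f"{label}: one-way {one_way_min:.1f} min, round trip {2 * one_way_min:.1f} min")

    # closest approach: one-way ~3.1 min, round trip ~6.2 min
    # maximum separation: one-way ~22.3 min, round trip ~44.6 min,
    # far beyond the ~2 minutes TCP is prepared to wait for an acknowledgement.
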
  • I thought, years ago when I was looking at it, that IPv6 had a TTL that was modifiable, and thus wouldn't time out.

    But, as a practical matter, it would work better the way an FTP-style transfer does, where you stream the data in blocks and resend any missed blocks later. This would work fairly well for loss-tolerant formats like JPEG or suchlike (a good image format should be able to handle it), but time-sensitive stop/start protocols might get glitched and would have to be replaced.

    Anyone for MP7? TUFF instead of TIFF?

    The oth
    • I thought, years ago when I was looking at it, that IPv6 had a TTL that was modifiable, and thus wouldn't time out.

      TTL (Time To Live) actually has nothing to do with time. It is a number in the packet header which is decremented each time the packet passes through a router; when the TTL field reaches (IIRC) 0 the packet is dropped. (IPv6 renames the field Hop Limit, which is more accurate.) You can set the TTL in IPv4 if you want to; normally it is done when dealing with multicast traffic so that the packets don't travel too far out of the network multicast routi
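
      A minimal sketch of setting the hop limit by hand (an editorial example, not from the comment above; the values 32 and 1 are arbitrary):

      import socket

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      # TTL is a hop count, not a clock: each router decrements it by one and
      # drops the packet when it hits zero.
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 32)

      # For multicast, the hop limit is set separately and is usually kept small
      # so the traffic stays close to the local network.
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)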

  • Of course, that's got me thinking about Pandora's Star again and now I'm depressed, as it has been 12 freakin' months and Peter F. Hamilton still hasn't completed Judas Unchained.

    But seriously, imagine if CERN discovered a workable way to make microscopic wormholes. All you'd need is one big enough to send a stream of photons through. Hook up your optic fibre and you've got yourself a zero-latency round-the-world communications network. It'd certainly change gaming.
    • Re:Need wormholes (Score:2, Informative)

      by anti-drew ( 72068 )
      Once you're talking about wormholes big enough to send a stream of photons through, there are many other implications. Arthur C. Clarke and Stephen Baxter's The Light of Other Days [amazon.com] is an interesting thought experiment in that direction.

      Basically they suggest that it opens up the possibility of wormhole cameras which can be used to view what's happening anywhere at any time without anyone's knowledge. Privacy is completely destroyed and civilization, um, takes a while to get over that fact. Later in the b
      • It also presumes that you'd have control over "when" the other end of the wormhole was... I personally think that if wormholes were ever feasible they would require large structures on both ends to maintain them, and would facilitate "instant" communications only. You couldn't open a wormhole into the past or the future. But hey, who the hell knows.
  • Shame he works for the #1 spam support company [spamhaus.org] in the world.

    His company adds new spammers on an almost daily basis; just check the dates on the various SBL records.
  • by tyler_larson ( 558763 ) on Wednesday April 20, 2005 @06:57PM (#12298003) Homepage
    Phil Windley, a nationally recognized expert in using information technology...

    Wow. If I had known that he was such a celebrity, I probably would have paid more attention in his Enterprise Systems class at BYU.

    I guess it's nice to learn from someone important who doesn't act like the world revolves around him.

  • Latency (Score:2, Funny)

    by nsuccorso ( 41169 )
    Apparently, the flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes.

    ...as any DirecWay customer can readily attest to...

  • by jgold03 ( 811521 ) on Wednesday April 20, 2005 @07:16PM (#12298119)
    I think people generally don't understand what computer science is. CS isn't a 4-year degree to learn how to program or set up a network. It's about having the theoretical background to be able to analyze and evaluate computer technologies. Classes like automata theory and theoretical data structures are necessary to be able to both 1) apply a real solution to a problem and 2) argue the validity of that solution. There is a lot of science in CS.
  • Vint hates the chaos of evolving systems and identifies all the protocols in flux. Most solutions not offered. Vint will be able to call home from Mars. I skimmed it, sorry.
  • I'm with Captain Janeway in her dislike of temporal mechanics, but this seems like a problem the crew in TOS solved by slinging the Enterprise one way or another around the Sun. Sling the data packets x number of times around the Sun and fling them the appropriate distance into the future, or, possibly, on occasion, into the past. But then again I could be wrong; as noted above I hate temporal mechanics.
  • by Effugas ( 2378 ) * on Wednesday April 20, 2005 @07:42PM (#12298324) Homepage
    Realtime communication with a Martian node is physically impossible. It's simply too far away.

    Realistically, we might see a proxy architecture as follows:

    1) All traffic is "queued" at an earth-bound substation. Communication is TCP-reliable to this node; transport layer acknowledgements are degraded to "message received by retransmitter" (end-to-gateway) rather than "message received by Mars" (end-to-end). Since both Earth and Mars are in constant rotation, a "change gateway" message would need to exist to route interplanetary traffic to a different satellite node (think "global handoff").

    2) Transmission rates from Earth to Mars are constant, no matter the amount of data to send. Extra link capacity is consumed by large-block forward error correction mechanisms. Conceivably, observed or predicted BERs could drive minimum FEC levels (i.e. the more traffic being dropped, due to the relative positions of the Earth and Mars, the less traffic you'd be willing to send in lieu of additional error correction data).

    3) Applications would need to be rewritten towards a queue mentality; the interplanetary link is conceivably the ultimate "long fat pipe". Aggressively publishing content across the interplanetary gap would become much more popular. Since so much content has gone dynamic, one imagines it becoming possible to publish small virtual machines that emulate basic server-side behavior within the various proxies.

    You'd think all this was useless research, as there's no reason to go to Mars -- but TCP doesn't just fail when asked to go to Mars; it's actually remarkably poor at handling the multi-second lag inherent in Geosat bounces. A lot of the stuff above is just an extension of what we've been forced to do to deal with such contingencies.

    --Dan
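
    A very rough sketch of point 1, the custody-transfer idea: the Earth-side gateway acknowledges a message as soon as it has taken responsibility for it, then relays it across the long-delay link on its own schedule. (An editorial illustration in Python; the class and method names are invented, and real systems in this space, such as the delay-tolerant networking / Bundle Protocol work, are far more involved.)

    import queue
    import threading

    class EarthGateway:
        """Accepts messages bound for Mars, acks them locally (end-to-gateway),
        and forwards them over the high-latency link in the background."""

        def __init__(self, uplink_send):
            self._outbound = queue.Queue()
            self._uplink_send = uplink_send   # callable that pushes bytes toward Mars
            threading.Thread(target=self._drain, daemon=True).start()

        def submit(self, message: bytes) -> str:
            """Take custody of the message and acknowledge immediately."""
            self._outbound.put(message)
            return "message received by retransmitter"   # not "message received by Mars"

        def _drain(self):
            # Transmit queued messages at a steady pace; spare link capacity
            # could be spent on forward error correction instead of silence.
            while True:
                self._uplink_send(self._outbound.get())

    Usage would be something like gw = EarthGateway(uplink_send=radio.transmit), where radio.transmit stands in for whatever actually drives the deep-space link.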
  • Gosh, I can't believe there hasn't been a stargate joke yet. Amazing.
  • by rufusdufus ( 450462 ) on Wednesday April 20, 2005 @08:17PM (#12298550)
    For me, it's not science if it doesn't involve the methods of empiricism. Observation, hypothesis, repeat.

    The only time this really happens with computers is troubleshooting.
    Programmers may think in a logical or analytical way, but that's not science. And it's a good thing, too. If programmers weren't allowed to make stuff up as they went along but instead had to use the scientific method for everything they did, not many programs would be completed.
  • > Apparently, the flow control mechanism of TCP
    > doesn't work well when the latency goes to 40
    > minutes.

    UUCP, however, works just fine.

  • [some guy]...a nationally recognized expert in using information technology, drove up to the Univ. of Utah

    Got denied plane tickets by DHS, eh?

    More importantly, did he get to meet Brunvand [wikipedia.org] while there?
  • by MattW ( 97290 ) <matt@ender.com> on Wednesday April 20, 2005 @09:28PM (#12298987) Homepage
    We know almost nothing about making programming more efficient and systems more secure and scalable. He characterizes our progress in programming efficiency as a "joke" compared to hardware.

    It's definitely a joke - and the real joke is that it can't even be characterized as "progress". The programming of today is worse than it was a couple of decades ago and is in consistent decline. I have talented friends who have dropped out of the industry in disgust over what passes for programming nowadays.

    Maybe Vint Cerf should be talking about the evils of "computer science" being taught around Java, or the fact that many CS programs have become little more than glorified job training.
    • Anyone remember when basic knowledge of AND, NAND, OR, etc. logic concepts and binary math ability was a prereq of CS? When I was a kid, we worked in binary, hex, even octal and that was long before hitting CS classes.

      A younger friend of mine on the BSCS track complains that his prof defines two ways of writing in C: his way and the wrong way. He says that given that the prof's methods aren't even close to C's creators' recommendations and look more like the grudging, under-protest work of a C++ junkie, it is m
  • by tres3 ( 594716 ) on Thursday April 21, 2005 @01:56AM (#12300501) Homepage
    Apparently, the flow control mechanism of TCP doesn't work well when the latency goes to 40 minutes.

    That's strange. I thought that issue would have been worked out by RFC 1149 or CPIP. You would think that 40 minute transit times would be a quick ping when using the Carrier Pigeon Internet Protocol [linux.no] (CPIP).
