BIC-TCP 6,000 Times Quicker Than DSL

An anonymous reader writes "North Carolina researchers have developed an Internet protocol, subsequently tested and affirmed by Stanford, that hums along at speeds roughly 6,000 times that of DSL. The system, called BIC-TCP, beat out competing protocols from Caltech, University College London and others. The results were announced at IEEE's annual communications confab in Hong Kong." Update: 03/16 04:46 GMT by T : ScienceBlog suggests this alternate link while their site is down.
  • Bottle Necks (Score:2, Informative)

    by pholower ( 739868 ) <longwoodtrail@3.14159yahoo.com minus pi> on Monday March 15, 2004 @07:54PM (#8573677) Homepage Journal
    It doesn't matter what the bandwidth of your incoming pipe is. What matters is the capacity of the other servers and switches in the "internet cloud." At a rate like that, I would also wonder whether ANY of the infrastructure we have in place could keep up. Seems like something that won't happen for decades.

  • by jimbosworldorg ( 615112 ) <slashdot AT jimbosworld DOT org> on Monday March 15, 2004 @07:54PM (#8573679) Homepage
    An awful lot of propagation delay tends to be equipment-internal rather than wire-length. Until you start talking about REALLY long distances like using satellite-based networking, anyway.
  • by Anonymous Coward on Monday March 15, 2004 @07:54PM (#8573687)
    They are different. DSL is a Layer 5 protocol, a datagram-based
    protocol, that provides for global addressing independent of ATM's
    scheme, routes packets independently of ATM's routing, and is used as
    the foundation for all Internet Protocol communications. IP can be used
    over ATM or over virtually any other link layer.

    DSL is layered on top of IP. It is a Layer 5 protocol, specifically
    designed to provide for "error free" communications. TCP has built-in
    error check, acknowledgment, and retry features.

    BIC-TCP is another Layer 4 protocol that can be layered over IP. It
    provides for error checking but not retry, so it isn't "error free" by
    itself.

    So IP over ATM is a more general description than TCP/IP over ATM or
    DSL/IP over ATM, or FTP over TCP/IP over ATM, or HTTP over BIC-TCP/IP over
    ATM, .... You get the picture.
  • mirror (Score:5, Informative)

    by Anonymous Coward on Monday March 15, 2004 @07:55PM (#8573691)
    The site is slowing down, so here it is...

    New protocol could speed Internet significantly
    Posted on Monday, March 15 @ 14:04:08 EST by bjs

    Researchers in North Carolina have developed a data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic. The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in a set of experiments that determined its stability, scalability and fairness in comparison with other protocols. The study tested six other protocols developed by researchers from schools around the world, including the California Institute of Technology and University College London. BIC can reportedly achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems.

    From North Carolina State University:

    NC State Scientists Develop Breakthrough Internet Protocol

    Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.

    The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in a set of experiments that determined its stability, scalability and fairness in comparison with other protocols. The study tested six other protocols developed by researchers from schools around the world, including the California Institute of Technology and University College London.

    Dr. Injong Rhee, associate professor of computer science, said BIC can achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems. While this might translate into music downloads in the blink of an eye, the true value of such a super-powered protocol is a real eye-opener.

    Rhee and NC State colleagues Dr. Khaled Harfoush, assistant professor of computer science, and Lisong Xu, postdoctoral student, presented a paper on their findings in Hong Kong at Infocom 2004, the 23rd meeting of the Institute of Electrical and Electronics Engineers Communications Society, on Thursday, March 11.

    Many national and international computing labs are now involved in large-scale scientific studies of nuclear and high-energy physics, astronomy, geology and meteorology. Typically, Rhee said, "Data are collected at a remote location and need to be shipped to labs where scientists can perform analyses and create high-performance visualizations of the data." Visualizations might include satellite images or climate models used in weather predictions. Receiving the data and sharing the results can lead to massive congestion of current networks, even on the newest wide-area high-speed networks such as ESNet (Energy Sciences Network), which was created by the U.S. Department of Energy specifically for these types of scientific collaborations.

    The problem, Rhee said, is the inherent limitations of regular TCP. "TCP was originally designed in the 1980s when Internet speeds were much slower and bandwidths much smaller," he said. "Now we are trying to apply it to networks that have several orders of magnitude more available bandwidth." Essentially, we're using an eyedropper to fill a water main. BIC, on the other hand, would open the floodgate.

    Along with postdoctoral student Xu, Rhee has been working on developing BIC for the past year, although Rhee said he has been researching network congestion solutions for at least a decade. The key to BIC's speed is that it uses a binary search approach - a fairly common way to search databases - that allows for rapid detection of maximum network capacities with minimal loss of information. "What takes TCP two hours to determine, BIC can do in less than one second," Rhee said.
  • by ClayJar ( 126217 ) on Monday March 15, 2004 @07:55PM (#8573693) Homepage

    To quote the part that says what the article is actually about:

    The key to BIC's speed is that it uses a binary search approach - a fairly common way to search databases - that allows for rapid detection of maximum network capacities with minimal loss of information. "What takes TCP two hours to determine, BIC can do in less than one second," Rhee said.
  • by lingqi ( 577227 ) on Monday March 15, 2004 @07:59PM (#8573746) Journal
    What they mean is that the current TCP protocol becomes a bottleneck in high-bandwidth applications, so a new protocol was designed that would be efficient up to ~6000x DSL speed (just a pot-shot guess, up to 9 Gb/s?). It has nothing to do with pushing data down the POTS line; it's just that if one day you had a fat pipe to your house, this new protocol would make use of it properly, unlike today's TCP.

    It's a stupid comparison, but I guess they expect people to not have an idea what 9 Gb/s is...
  • by bigsexyjoe ( 581721 ) on Monday March 15, 2004 @08:04PM (#8573788)
    Actually I'll just put the abstract below. If you want to read their paper, code, and other goodies, click here [ncsu.edu]

    High-speed networks with large delays present a unique environment where TCP may have a problem utilizing the full bandwidth. Several congestion control proposals have been suggested to remedy this problem. In these protocols, mainly two properties have been considered important: TCP friendliness and bandwidth scalability. That is, a protocol should not take away too much bandwidth from TCP while utilizing the full bandwidth of high-speed networks. We present another important constraint, namely, RTT (round trip time) unfairness, where competing flows with different RTTs may consume vastly unfair bandwidth shares. Existing schemes have a severe RTT unfairness problem because the window increase rate gets larger as the window grows - ironically the very reason that makes them more scalable. The problem occurs distinctly with drop tail routers, where packet loss can be highly synchronized. BIC-TCP is a new protocol that ensures a linear RTT fairness under large windows while offering both scalability and bounded TCP-friendliness. The protocol combines two schemes called additive increase and binary search increase. When the congestion window is large, additive increase with a large increment ensures linear RTT fairness as well as good scalability. Under small congestion windows, binary search increase is designed to provide TCP friendliness.
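The abstract's two phases, additive increase and binary search increase, can be sketched in a few lines. This is a simplified illustration, not the authors' code: the window values, the Smax/Smin step caps, and the Reno-style comparison loop are assumptions chosen for the demo.

```python
def bic_increase(cwnd, max_win, s_max=32.0, s_min=0.01):
    """One RTT of BIC-style growth: binary-search toward the midpoint
    between the current window and the last window that saw a loss."""
    step = (max_win - cwnd) / 2.0
    if step > s_max:        # far from the max: additive increase, capped step
        step = s_max
    elif step < s_min:      # essentially converged: creep forward slowly
        step = s_min
    return cwnd + step

# RTTs to re-grow from 100 to ~10,000 packets, vs. classic +1 MSS per RTT.
bic_rounds, w = 0, 100.0
while w < 9999.0:
    w = bic_increase(w, 10000.0)
    bic_rounds += 1

reno_rounds, w = 0, 100.0
while w < 9999.0:
    w += 1.0                # standard congestion avoidance: +1 per RTT
    reno_rounds += 1

print(bic_rounds, reno_rounds)  # the binary search converges far sooner
```

In the real protocol the growth also falls back to plain TCP behaviour under small windows, which is where the bounded TCP-friendliness in the abstract comes from.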

  • by jimbosworldorg ( 615112 ) <slashdot AT jimbosworld DOT org> on Monday March 15, 2004 @08:09PM (#8573839) Homepage
    Every hop adds several milliseconds for processing time - and considerably more if the router in question is getting hit at the upper limit of its rated throughput (and thus having to buffer-and-wait instead of immediately routing packets).

    Speed-of-light is 186,000,000 meters per second - from (Cincinnati) Ohio to Minneapolis is roughly 1600km by highway, which would leave you with a wire-speed delay of only 16ms round-trip.

    The extra 34ms you get on a well routed network generally tends to be time spent getting passed through intermediate routers along the way. Each router *does* add a noticeable amount of delay all of its own, apart from wire delay.

  • More Information (Score:3, Informative)

    by Percy_Blakeney ( 542178 ) on Monday March 15, 2004 @08:12PM (#8573854) Homepage
  • by colman77 ( 689696 ) on Monday March 15, 2004 @08:12PM (#8573858)
    This article is much clearer. http://www.csc.ncsu.edu/faculty/rhee/export/bitcp/
  • by trompete ( 651953 ) on Monday March 15, 2004 @08:16PM (#8573899) Homepage Journal
    Way too many zeros on the KMs :) --> 300,000 KM/s
  • by Anonymous Coward on Monday March 15, 2004 @08:18PM (#8573919)
    Well, if you would RTFA, then you would see that the point of the BIC protocol is to decrease the bottlenecks created by TCP on specialized high-speed data transmission networks (not your DSL line), such as those used by universities and other research institutions. Even if you have a bundle of fiber to your house with an optical computer, your throughput will still be limited by TCP congestion.

    The comparison to DSL and modems is meant to demonstrate the speed increase gained from BIC to people with no specialized knowledge of network architecture. That would be your average layperson. So your current Gigabit network will no longer be slowed by TCP.
  • by tenchima ( 625569 ) on Monday March 15, 2004 @08:27PM (#8573992)

    I think DSL was originally a predominantly ATM transport layer.

    DSL is the physical layer, with the network topology now typically running PPP, either PPPoA (over ATM) or PPPoE (over Ethernet).

    I guess this BIC-TCP is a new option to go on top of the PPP.

  • by j1m+5n0w ( 749199 ) on Monday March 15, 2004 @08:29PM (#8574002) Homepage Journal
    htmlified link [ncsu.edu] to pdf of paper and webpage [ncsu.edu] for the lazy

    Lisong Xu, Khaled Harfoush, and Injong Rhee, "Binary Increase Congestion Control for Fast Long-Distance Networks", To appear in Infocom 2004.

    jim
  • by bigginal ( 210452 ) on Monday March 15, 2004 @08:29PM (#8574006)
    I'm in Dr. Rhee's CSC316 (Data Structures for Computer Scientists) course. He absolutely knows his stuff, but he can be very hard to understand sometimes. His website is here [ncsu.edu], with a picture of the guy that doesn't really do him justice. When he walks into the classroom, I swear he looks like one of those laid-back teachers who will just let you slide through the course, but he *really* makes you learn the material, inside and out.

    Anyway, if you're interested in a link to the original article hosted off of the NCSU servers, it is here [ncsu.edu].

    -bigginal
  • Summary of Paper (Score:5, Informative)

    by HopeOS ( 74340 ) on Monday March 15, 2004 @08:41PM (#8574092)
    First, the actual paper [ncsu.edu] is more informative. The crux of the argument is as follows.

    If you have a fat pipe, say 1 to 10 Gb/s, standard TCP will not fully utilize the bandwidth because the congestion control algorithm throttles the rate. As packets move and there are no errors, the rate increases, but not nearly fast enough. In particular, it takes 1.5 hours of error-free data transfer to reach full capacity, and a single error will cut the connection's bandwidth in half.

    BIC-TCP uses a different algorithm for congestion control that is more effective at these speeds.

    End of news flash.

    -Hope
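The hours-to-reach-capacity figure follows from simple arithmetic: in congestion avoidance, standard TCP grows its window by roughly one segment per round trip, so reaching the bandwidth-delay product takes BDP/MSS round trips. A back-of-envelope sketch, with the 10 Gb/s rate, 100 ms RTT, and 1500-byte segment size all being illustrative assumptions:

```python
rate_bps = 10e9      # assumed long-haul pipe: 10 Gb/s
rtt_s = 0.1          # assumed cross-continent round trip: 100 ms
mss_bytes = 1500     # typical Ethernet-sized segment

bdp_bytes = rate_bps / 8 * rtt_s     # bytes in flight needed to fill the pipe
segments = bdp_bytes / mss_bytes     # window size, measured in segments
recovery_s = segments * rtt_s        # growing +1 segment per RTT

print(f"window = {segments:,.0f} segments; ramp-up = {recovery_s / 3600:.1f} hours")
```

That works out to a couple of hours of error-free transfer, the same ballpark as the figure above, and a single loss halves the window and restarts much of the climb.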
  • by Anonymous Coward on Monday March 15, 2004 @08:49PM (#8574165)
    Mike's got it right.

    Why existing mechanisms don't work well: in order to be fair to other (TCP) network users, TCP only slowly increases how much bandwidth it sends. Even if you have a 100 Mb connection, if you've got a 100 ms ping time to somewhere, then in the first 100 ms TCP will only send 1500 bytes or so. In the next 100 ms, 3000. Depending on details, after that it may go exponential - 6k, 12k, 24k - but only to a limited point (often 32k or 64k bytes per RTT), after which it grows linearly at 1500 bytes per RTT.

    So what does that mean? Even with the exponential growth mode, if RTT is 100ms, you'll send less than 128kb in the first second your TCP connection is live, and it'll only grow by about 15kbps per second after that. So TCP transmitting as fast as it can for a minute sends well under 600kb, even if you have the whole 100 Mbps pipe available.
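The per-RTT growth described above is easy to simulate. The 64 KB exponential-phase cap, 1500-byte segments, and 100 ms RTT below follow the parent comment's assumptions; exact totals depend on details the comment leaves open.

```python
def first_second_bytes(rtts=10, cap=65536, mss=1500):
    """Bytes a fresh TCP connection sends in its first `rtts` round trips:
    per-RTT quota doubles up to `cap`, then grows by one MSS per RTT."""
    per_rtt, total = mss, 0
    for _ in range(rtts):
        total += per_rtt
        if per_rtt < cap:
            per_rtt = min(per_rtt * 2, cap)   # slow-start doubling
        else:
            per_rtt += mss                    # linear congestion avoidance
    return total

# With a 100 ms RTT, ten round trips fit in the first second.
print(first_second_bytes())  # a few hundred KB; a 100 Mbps pipe could carry 12.5 MB
```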
  • by ezzzD55J ( 697465 ) <slashdot5@scum.org> on Monday March 15, 2004 @08:54PM (#8574204) Homepage
    No, indeed. All this protocol does is increase the rate at which a transport layer protocol (such as TCP) adjusts to available bandwidth over a very fast physical medium (without bumping into the congestion limit, otherwise it would be easy ;)). Numbers such as 6000 and words such as DSL are crazy.
  • real mirror (Score:3, Informative)

    by silicon1 ( 324608 ) <david1@@@wolf-web...com> on Monday March 15, 2004 @08:54PM (#8574205) Homepage Journal
    site is slow so I mirrored it: mirror [silicon.wack.us]
  • by starm_ ( 573321 ) on Monday March 15, 2004 @08:57PM (#8574221)
    I have downloaded at 400kB/s on my computer.

    6000 times that is 2400MB/s

    This is faster than conventional RAM. A PC would not be able to accept the data at that speed fast enough to store it in RAM!

    The headline is obviously sensationalism.

    There exist fast optical carriers, but they serve purposes that are very different from what DSL lines are meant for. These are the kind of lines that connect cities together and are not to be compared to DSL.
  • by photon317 ( 208409 ) on Monday March 15, 2004 @08:59PM (#8574237)

    The idea behind researching higher-speed protocols is that if you took plain old TCP and ran it on a line 6000x faster than DSL, you would find that the workings of the protocol itself would become the performance bottleneck in the system. These guys are thinking ahead and writing the protocols we'll need on future faster networks. The blurb _is_ kinda moronic in how it compares a protocol to DSL, but at the same time it is truthful. It would have made more sense if they had made it clearer that the protocol was designed to fully utilize an as-yet-unknown broadband service 6000x faster than DSL that might exist in the future.
  • by zenyu ( 248067 ) on Monday March 15, 2004 @09:08PM (#8574298)
    Of course, for a small fee, I'll let you use my super-duper protocol that offers virtually unlimited bandwidth - a buttzillion times faster than DSL.. it's called UDP. (UDP is very low overhead, no transmission windows, or ACK's -- or guarantees of being received.. You can stuff them onto a line as fast as it will take them.)

    Yeah, but then you'll really want to be familiar with these new TCP congestion models, since you'll need to implement something. A few years ago I had to connect a few computers on a relatively fat but very noisy pipe; after realizing TCP was using a tiny fraction of the pipe, I switched to UDP. That led to 2 months of reimplementing TCP (cuz you really can't stuff packets as fast as it will take them) with a little FEC (forward error correction) thrown in to deal with the noisiness. It was a good review of old signals and systems fundamentals, but if I had to do it over again I'd have simply refused to work with the supercomputer until they rewired my network connection to it. I can't imagine that would have taken a quarter of the time. IANANE; if I were, I might have enjoyed implementing a network protocol.
  • by 680x0 ( 467210 ) <vicky @ s t e e d s . c om> on Monday March 15, 2004 @09:11PM (#8574313) Journal
    Yes, you can adjust the max. receive (MRU) and transmit (MTU) packet sizes. Usually the MRU isn't adjusted; you just accept as big a packet as you can. The MTU can be adjusted manually (by the sysadmin) or automatically (PMTUD - path MTU discovery).

    However, adjusting the MTU has little to do with speed, as the Window Size (how much data can be transmitted before being acknowledged by the far end) is specified in number of bytes (in TCP). I suppose it could have some effect on speed, as when you send a packet that exceeds the MTU, it gets "segmented" into multiple IP packets, each with its own packet header overhead (and if any get lost, the whole bunch have to get retransmitted).

    What this new protocol deals with, however, is dynamically varying the window-size. Current TCP does that, but apparently not in as efficient a manner as this.

    So all this "x thousand times faster than DSL" is just complete bullshit. You'll never get any faster speeds than the slowest link between point A and point B. This new protocol simply tries to use the Y bits per second available more efficiently. And you won't notice the inefficiency of current TCP at the speeds most DSL/cable/dialup users have available.

    Some tech journalists are just idiots.

  • by Frennzy ( 730093 ) on Monday March 15, 2004 @09:12PM (#8574318) Homepage
    That's also the speed of light in a vacuum.

    Electrical and optical signals travelling down copper or FO pathways (as well as microwaves through the air) have a reduced propagation speed. A good rule of thumb is about .7c.
  • Re:Cheap Bandwidth (Score:5, Informative)

    by alannon ( 54117 ) on Monday March 15, 2004 @09:15PM (#8574329)
    You appear to be rather confused.
    Modems that plug into your regular telephone line send a signal over a POTS (Plain-Old Telephone Service) phone line. This signal first goes to your telco's closest routing box, then to your telco's closest branch office. From there it gets routed to wherever your phone call was made to, etc... The technology used to route these signals is limited to a maximum THEORETICAL capacity of less than 64kbps because certain (or all) legs of the telephone network are analogue, not digital. That 'theoretical' rate is based on how much noise a typical telephone call has in it. There is simply no way to pass a denser signal through the line than that, according to our understanding of physics and math.
    The only similarity that DSL has to POTS internet connections is that the physical wires to your house are compatible and that (sometimes) the two technologies can be used over a single pair of them. Once the signal of a DSL line gets to its very first junction, it has nothing in common with your phone line any longer. It gets sent to a DSLAM bank at your nearest telco site, then sent into the larger regional DSL network and then finally routed out into the internet at large.
    What this means, basically, is: 1) there is a good reason why modem speeds haven't increased at all since 56kbps modems came out -- it's physically impossible for them to go faster. 2) DSL technology is transitory -- it only exists because people already have wires from their telco coming into their homes. I predict that slowly, over the next 10 years, we'll see telecommunications turn on its head. Instead of internet service being delivered over phone lines, we will have phone service delivered over internet connections. These lines may take the form of twisted-pair wires as used in DSL, multiple twisted-pair wire groups as used in ethernet, coaxial wires currently used in cable-tv/cable-modem service, or fiber-optic cables. The only thing I can guarantee is that they won't be routed through the telephone network before being passed into the internet.
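The "physics and math" limit mentioned above can be put in numbers: Shannon's capacity C = B * log2(1 + SNR) for the analog voiceband, and the 8 kHz x 8-bit PCM sampling of the digital trunk that hard-caps a voice channel at 64 kbps. The bandwidth and SNR figures below are rough assumptions, not measurements.

```python
import math

bandwidth_hz = 3100    # assumed voiceband, roughly 300-3400 Hz
snr_db = 38            # assumed signal-to-noise ratio of a decent POTS line

snr_linear = 10 ** (snr_db / 10)
shannon_bps = bandwidth_hz * math.log2(1 + snr_linear)  # analog-path ceiling

pcm_bps = 8000 * 8     # digital trunk: 8000 samples/s at 8 bits each

print(f"Shannon limit = {shannon_bps / 1000:.0f} kbps; PCM trunk = {pcm_bps // 1000} kbps")
```

56k modems sidestep part of the analog ceiling downstream by treating the 64 kbps PCM trunk itself as the channel, which is why they stalled just below it.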
  • by B'Trey ( 111263 ) on Monday March 15, 2004 @09:52PM (#8574610)
    It does make sense. (Sort of.) Traction is necessary for acceleration, not speed. Notice a top fuel dragster. Very wide wheels in the back - lots of traction for the wheels that hook the power to the ground. Very narrow, bicycle-looking wheels in the front - minimal friction to slow the car down. If you want to go faster, but don't care how long it takes you to get up to speed (low acceleration but high speed), you want tires with low traction and thus low friction. If you want to get up to speed very quickly (lots of acceleration), you need lots of traction under the powered wheels.

    Notice the tires on a road bicycle. Very narrow, low friction tires. The human leg doesn't provide a great deal of power, so it's not necessary to have a great deal of traction to prevent spin out. Low friction means you go faster with less work. Now look at a mountain bike. Wide tires that get lots of friction. Great for riding rough, slippery trails at high speed. But not great for riding long distances on the road. Try doing a century on a road bike and then doing it on a mountain bike.

    Of course, on many cars low friction wheels are not going to give you any increase in speed. On my car, the rev limiter kicks in at just over 130. Low friction wheels would let me go that fast with a tad less work but it wouldn't help me go any faster.
  • by Cramer ( 69040 ) on Monday March 15, 2004 @09:53PM (#8574618) Homepage
    DSL is a modulation technology. You can do whatever you want with the bits entering and leaving the modulator/demodulator (mo-dem). Frame Relay and ATM are the predominant "layer2" transports with PPP gaining ground (PPPoKitchenSink is all the rage) and RFC1489(?) bridged ethernet losing ground (which is a shame as it has the lowest protocol overhead of all of them, esp. PPP.)

    What is BIC trying to fix? It certainly isn't "the internet," as most links, on average, run at a fraction of their available bandwidth. TCP can fill up more bandwidth than most people can afford. It looks like the researchers with these insane connections and even more insane data sets want the holy grail of zero protocol overhead and none of the inherent throttling. (TCP limits the number of packets it will transmit before pausing for an ack. As a result, a single TCP connection usually will not consume a gigE link -- 4 connections certainly can.)
  • by Ungrounded Lightning ( 62228 ) on Monday March 15, 2004 @10:02PM (#8574682) Journal
    It would be interesting to know how far out an implementation of such a protocol on a large scale is.

    It already IS implemented.

    Or do you mean a large-scale "rollout"?

    If so, why bother? Unless you have a REALLY fat pipe and need to use it all for one stream, of course. (But not many need to do that, and the ones that do can now install it on both end points.)

    The phrasing of the article is leading to confusion. This is about a PROTOCOL, not about the UNDERLYING TRANSPORT.

    The TCP protocol, with its windows, handshaking turnarounds, and timeouts, imposes its own limit on the speed of the data transfer through it. For decades the limit imposed by TCP was so far above the limits imposed by the data rates of the underlying transport that it wasn't a major issue.

    But now some people are starting to have REALLY fast pipes. And for them TCP is becoming the limiting factor.

    So now researchers have come up with a tweaked version of TCP that won't hit the wall until the pipe is a LOT faster than what YOU can rent from your ISP. (Unless you're renting an OC-192, in which case you might be starting to fall a little short of its capacity. But if you've got OC-48 or below you're fine.)

    When you CAN rent something over 6 Gbps, and you want to routinely use it all for a single TCP connection to get a REALLY FAST fast download, you might want to ask the nice professors for a THIRD generation TCP. B-)

    Meanwhile, if you're on an ordinary connection you're not going to increase your data rate by a factor of 6,000 by switching protocols. You might get a little bit closer to the line rate with this SECOND generation TCP. But that's it.

    Expect to see this gradually start showing up in protocol stacks as an option - automatically configured if both ends know about it and the inventors have come up with a backward-compatible negotiation. That way you'll be able to make better use of fat pipes when you can finally get them.
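For scale, the window a single connection needs to keep a fat pipe full is the bandwidth-delay product. A quick sketch; the 6 Gb/s figure echoes the thread's rough "6000 x DSL" arithmetic, and the 80 ms RTT is an assumption:

```python
def window_needed(rate_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight,
    still unacknowledged, to keep the pipe full."""
    return rate_bps / 8 * rtt_s

# Classic TCP's window field tops out at 64 KB without window scaling.
for rate_bps, label in [(1.5e6, "DSL-ish"), (6e9, "6000 x DSL")]:
    bdp = window_needed(rate_bps, 0.08)   # assumed 80 ms cross-country RTT
    print(f"{label}: {bdp / 1024:,.0f} KB must be in flight")
```

A DSL-speed flow fits comfortably in a classic window; the fat pipe needs tens of megabytes in flight, which is exactly the regime these new congestion schemes target.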
  • by AK Marc ( 707885 ) on Monday March 15, 2004 @10:21PM (#8574814)
    However, adjusting the MTU has little to do with speed,

    It affects the efficiency of a connection. If you have to waste a header for every 20 bytes, then you will have lower data throughput than if you could send packets up to 1500 bytes. Also, separate from the data over the wire speeds, computers generally process on a packet basis. That is, you will max out the CPU on a Windows system at a lower throughput with smaller packet sizes. Though the effect is lessened in *NIX systems because the TCP/IP stack is more efficient.
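That per-packet overhead is easy to quantify. Assuming 40 bytes of combined TCP/IP header per packet (no options), the useful fraction of each wire byte is:

```python
def efficiency(payload_bytes, header_bytes=40):
    """Fraction of bytes on the wire that are actual payload."""
    return payload_bytes / (payload_bytes + header_bytes)

print(f"{efficiency(20):.0%}")    # tiny 20-byte payloads: two-thirds is header
print(f"{efficiency(1460):.0%}")  # full 1500-byte MTU packet: mostly payload
```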
  • by km790816 ( 78280 ) <wqhq3gx02 AT sneakemail DOT com> on Monday March 15, 2004 @10:25PM (#8574839)
    For those that found a dead link, a better article: 'Better' TCP Invented [lightreading.com]

    Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.
  • by hattmoward ( 695554 ) on Monday March 15, 2004 @10:32PM (#8574897)
    What? DSL is a set of similar standards that share two-wire copper and use T1 signalling. (DS1 is T1 signalling over 4 copper wires) It's a link-level protocol and is completely transparent outside the terminations (the modem and the DSLAM, if you will). Saying DSL layers over IP is like tunneling a modem call to a BBS over VOIP. Home DSL implementations function as ethernet bridges, and some include PPPoE for further authentication over Ethernet MACs. At first, I thought you confused UDP with DSL, it is the unreliable protocol that IP stacks provide, but in the second paragraph, you said it is error-free, which UDP is not.
  • by ePhil_One ( 634771 ) on Monday March 15, 2004 @11:17PM (#8575267) Journal
    But now some people are starting to have REALLY fast pipes. And for them TCP is becoming the limiting factor.

    It's pretty darn easy to get really fast pipes. Motherboards ship with Gigabit ethernet now, and Gigabit switches are way down in price. Most companies these days are building their networks on TCP/IP, so this could be a pretty big thing for corporate networks, iSCSI, etc. 10GigE isn't all that far away either.

    TCP/IP is bigger than the internet these days [admittedly, the server is down so I can't read the article]

  • by Loualbano2 ( 98133 ) on Tuesday March 16, 2004 @12:09AM (#8575600)
    You are thinking of RFC1483 [faqs.org] (Multiprotocol Encapsulation over ATM Adaptation Layer 5) which is obsoleted by RFC2684 [faqs.org]

    1483 Bridged does have lower overhead than PPPoA or PPPoE, but it sends broadcasts down the wire, both IP and Ethernet type. Depending on your network this could waste more bandwidth than PPPoX. 1483 Routed solves this, but you need to allocate more IP space to use it.

    It's all a horse a piece.

    ft
  • by pyrrhonist ( 701154 ) on Tuesday March 16, 2004 @12:17AM (#8575655)
    Uh, no.

    Internet Standard #1 (currently RFC-3600 [ietf.org] - November 2003) lists TCP as being Standard #7, which is outlined in RFC-793 [ietf.org]. RFC-793 was published in September 1981. In other words, we are still using the 1981 edition of TCP. RFC-793 contains the following note from Jon Postel:

    This document is based on six earlier editions of the ARPA Internet Protocol
    Specification, and the present text draws heavily from them.

    In addition, the RFC index [isi.edu] lists RFC-675 (December 1974) as, "The first detailed specification of TCP".

    Thus, Dr. Rhee is a little bit off in his assessment.

  • by Ungrounded Lightning ( 62228 ) on Tuesday March 16, 2004 @04:03AM (#8576416) Journal
    But now some people are starting to have REALLY fast pipes. And for them TCP is becoming the limiting factor.

    It's pretty darn easy to get really fast pipes. Motherboards ship with Gigabit ethernet now, and Gigabit switches are way down in price. Most companies these days are building their networks on TCP/IP, so this could be a pretty big thing for corporate networks, iSCSI, etc. 10GigE isn't all that far away either.


    The much higher speeds on a LAN are a good point.

    But.

    "The Wall" for TCP is a lot faster within a building than across a continent.

    The limit comes primarily from round-trip delay - which is much shorter when things are microseconds apart than when they're milliseconds apart at speed-of-light-in-wire-or-fiber.

    The limit also comes from timeouts after lost or corrupted packets - from line flakiness or congestion. But line flakiness is nearly nonexistent on a LAN. As for congestion, if you're using switches rather than hubs it's also not as much of an issue within a building as it is in a cross-continent backbone.
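The round-trip-delay point can be made concrete: with a fixed window, a connection can never exceed window/RTT, no matter how fast the wire is. The 64 KB window and the two RTTs below are illustrative values:

```python
def max_throughput_bps(window_bytes, rtt_s):
    """Ceiling for one connection: a window's worth of data per round trip."""
    return window_bytes * 8 / rtt_s

lan = max_throughput_bps(65536, 0.0002)  # 64 KB window, 0.2 ms LAN round trip
wan = max_throughput_bps(65536, 0.08)    # same window, 80 ms cross-country
print(f"LAN ceiling: {lan / 1e6:,.0f} Mb/s; WAN ceiling: {wan / 1e6:.1f} Mb/s")
```

Same window, 400x the delay, 400x lower ceiling, which is why "the wall" sits so much higher inside a building than across a continent.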
  • by f0rt0r ( 636600 ) on Tuesday March 16, 2004 @04:14AM (#8576441)
    There are 7 layers in the OSI model. The pair of copper wires (the phone line) would be layer 1; the way the DSL modem sends data (there are different ways of modulating it) is layer 2. At layer 3 we finally start getting to the actual data, though not yet a protocol for exchanging it; at least it is up to ATM by then (IP over ATM, I think). At layer 4 we finally have some data exchange going on between peers, and I forget the rest (5, 6, 7). But I am assuming from the article that their protocol is at layer 4.

    Btw, the DoD (Dept. of Defense) model lumps several of the OSI layers together. 3 is network (IP) and 4 is transport (TCP), but you almost never see one of them without the other, so they are popularly called TCP/IP.
  • by sir_cello ( 634395 ) on Tuesday March 16, 2004 @04:53AM (#8576552)
    If we're going to be pedantic:

    - the first version of the TCP specification appeared in 1973 (http://texts05.archive.org/0/texts/FirstPassDraftOfInternationalTransmissionProtocol);
    - subsequent versions were released between 1974 and 1979;
    - the final version of TCP/IP was published by DARPA in January 1980 by which time numerous implementations existed;
    - The Department of Defense standardisation recommendation was made in December 1978 and ratified in April 1980 (http://www.isi.edu/in-notes/ien/ien152.txt);
    - The ARPANET officially switched over from NCP to TCP/IP on 1st January 1983;

    If you want to know about congestion control, look at the work by Sally Floyd - it's her specialty and she sits on the IETF nowadays. Since the original VJ work, many others have investigated various types of changes to TCP's congestion avoidance and control mechanisms: it was a very active area of investigation.

  • by Octorian ( 14086 ) on Tuesday March 16, 2004 @08:54AM (#8577198) Homepage
    First of all, the OSI model needs to be taken with a grain of salt. It does not directly map to the way things really work. Furthermore, there is flexibility allowed for layers of indirection.

    However, from the host's view down, TCP, UDP, or BIC-TCP is layer 4, IP is layer 3, and all that DSL/ATM/etc. stuff fits under layer 2 and below. (Of course, with layers of indirection you can simulate the ATM cloud over IP with MPLS as well, but that is all transparent beyond the provider network.)
