
BIC-TCP 6,000 Times Quicker Than DSL 381

Posted by timothy
from the free-dinner-at-red-lobster dept.
An anonymous reader writes "North Carolina researchers have developed an Internet protocol, subsequently tested and affirmed by Stanford, that hums along at speeds roughly 6,000 times that of DSL. The system, called BIC-TCP, beat out competing protocols from Caltech, University College London and others. The results were announced at IEEE's annual communications confab in Hong Kong." Update: 03/16 04:46 GMT by T : ScienceBlog suggests this alternate link while their site is down.
  • by Null_Packet (15946) * <nullpacket@@@doscher...net> on Monday March 15, 2004 @07:51PM (#8573639)
    It would be interesting to know how far out a large-scale implementation of such a protocol is.
  • Propagation delays (Score:5, Interesting)

    by trompete (651953) on Monday March 15, 2004 @07:52PM (#8573658) Homepage Journal
    Too bad they can't change the speed of light. They can put as much data on the wire as they want, but it will still take 100 ms and 25 hops to get there.
  • hmm (Score:5, Interesting)

    by krisp (59093) * on Monday March 15, 2004 @07:53PM (#8573667) Homepage
    This seems misleading. The article says:
    "What takes TCP two hours to determine, BIC can do in less than one second,"

    Which looks to me like it can figure out the maximum bandwidth of a channel in a fraction of the time it generally takes TCP to do it, so as soon as you start transmitting at 100 Mbit you are using the entire pipe. Sure, it's 6,000 times faster than DSL, but not when it's used over the same DSL pipe. This is for getting data across faster when you have massive bandwidth, not for bringing broadband into homes.
  • by apoplectic (711437) on Monday March 15, 2004 @07:53PM (#8573673)
    Don't worry. It won't be long before you have to suffer through a 30 minute streaming infomercial before getting to your desired bloated web page.
  • by fake_name (245088) on Monday March 15, 2004 @08:01PM (#8573760)
    I second this notion. Here in Australia, broadband users have volume caps beyond which they have to pay for data at rather high rates, or suffer their connections being cut back to the equivalent of a 28.8 modem (depending on your ISP).

    The belief of USA based companies that bandwidth is "free" and that 30 second video clips are an acceptable form of advertising really hurts users in other parts of the world.
  • by madbastd (632125) * on Monday March 15, 2004 @08:04PM (#8573791)
    How can a protocol be rated faster than DSL? Shouldn't the rating be against another protocol?
    DSL is a layer 1 & 2 protocol. TCP and BIC-TCP are layer 4 protocols. So, you're correct that they're not comparing like with like.

    Also, TCP will also reach similar speeds under the right conditions. BIC-TCP will just reach those speeds with less ramp-up time, and over a wider range of conditions. One of those conditions is that it is not running over a DSL line, or a T3 or an OC3 or OC48 or anything that the average internet user will see in the next decade or two. So, the article is wildly over-hyping a minor protocol tweak that is irrelevant to almost all internet users.

  • by Anonymous Coward on Monday March 15, 2004 @08:06PM (#8573813)
    That's bogus. I live down here in Australia - the first country outside the US to join the 'net. Equipment delays between my home router and google.com's front end come to around 30 ms. The delay going through fibre adds another 120 ms to that figure. Using satellite adds another 400 ms or more again. (Can't remember for sure; it's been a long time since I used a satellite link...)

  • Max DSL speed ? (Score:1, Interesting)

    by Anonymous Coward on Monday March 15, 2004 @08:12PM (#8573855)

    I thought DSL's theoretical max was about 8 Mbit/s, which compared to ISDN is pretty hefty, but squeezing gigabits down telephone cable just seems to be the realm of fantasy.
  • by The Happy Camper (750782) on Monday March 15, 2004 @08:12PM (#8573860)

    "TCP was originally designed in the 1980s when Internet speeds were much slower and bandwidths much smaller,"

    Is it me, or did they miss by a couple decades there?
  • by Jayfar (630313) on Monday March 15, 2004 @08:12PM (#8573865)
    Naw, physical layer (layer 1) is copper, glass and silicon.
  • by Anonymous Coward on Monday March 15, 2004 @08:15PM (#8573883)
    They are comparing DSL (a physical transmission protocol) with a congestion-control protocol.
  • The Source (Score:2, Interesting)

    by Urgo (28400) on Monday March 15, 2004 @08:23PM (#8573959) Homepage
    Instead of reading it from a blog, here is the source press release [ncsu.edu] at the college... Still looking for the actual paper, though.
  • by dbrower (114953) on Monday March 15, 2004 @08:26PM (#8573985) Journal
    TCP as we know it was NOT invented in 1974 -- that was the original ARPANET, before the conversion to the IPv4 internet around 1983. Dr. Rhee is closer to being correct on this point than the confused references.

    Much algorithmic change has happened between the days of the 56k ARPANET and multi-gigabit networks also using IP. Van Jacobson's slow start and other ways of working out tradeoffs on bandwidth/delay vs. window size have been fiddled with for years, and arguably TCP as we know it is too compromised by history to work well at high speeds -- at least, that's what Rhee's comment suggests.

    This is really relevant stuff, not to be dismissed by wannabes.

    -dB

  • by cbreaker (561297) on Monday March 15, 2004 @08:41PM (#8574094) Journal
    Even if we saw this tech to our houses, the ISP's would still nerf the connection at some lame ass speed like 768k/128k. I mean, cablemodems are capable of 40Mbit and they usually nerf it down to 1.5 Mbit/256Kbit. And no, most nodes are not saturated; before they capped my cablemodem (which was after three years of using it) I used to see 10Mbit on a regular basis, and easily T-1 speeds on the upstream. I live in a heavily populated area.
  • Re:You forgot pigeon (Score:2, Interesting)

    by Jayfar (630313) on Monday March 15, 2004 @08:42PM (#8574108)
    My bad %-) For the benefit of those who don't RTFRFCs, RFC1149 - Standard for the transmission of IP datagrams on avian carriers [faqs.org]. Laugh, if you must, but RFC1149 has been successfully implemented [linux.no]. Need QOS? No problem - RFC2549 [faqs.org].
  • by ezzzD55J (697465) <slashdot5@scum.org> on Monday March 15, 2004 @08:46PM (#8574134) Homepage
    As far as I can tell..
    • It is a transport-layer protocol, such as TCP, making statements such as "New protocol could speed Internet significantly" (the title on the article page) a bit bogus, but "BIC-TCP 6,000 Times Quicker Than DSL" utterly clueless.
    • It addresses the problem that TCP connections over low latencies get to adjust their windows faster than their higher-latencies buddies sharing a link, causing the lower latency TCP connection to get more of the bandwidth before the link is filled up (and both TCP's back off due to their congestion window).
    • The window size is adjusted using binary search instead of an exponential increase; somehow this makes the new protocol able to adjust its window size to the maximum (representing optimum bandwidth utilisation) faster than regular TCP. Why this is so remains puzzling, because both binary search and TCP (which uses a factor of the previous window size) should reach their window sizes in logarithmic time, as both searches are exponentially fast.

      "What takes TCP two hours to determine, BIC can do in less than one second," Rhee said.

      This is very puzzling indeed, the article doesn't back it up in the least.

    The rest of the article can be summarized as harmless fluff and clueless crud, as far as I'm concerned.
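The "two hours vs. one second" gap makes more sense if you compare recovery after a loss rather than the initial probe: standard TCP congestion avoidance grows the window back additively (one segment per RTT), while a binary search between the last safe window and the known ceiling converges logarithmically. A toy model in Python (my own illustration, not NCSU's algorithm -- the window sizes and the no-loss assumption are made up):

```python
# Toy model: RTTs needed to re-probe from half the window back up to
# link capacity after a single loss event. Assumes probes below the
# true capacity never cause a loss.

def rtts_additive(w_max, start):
    """Standard TCP congestion avoidance: window grows by 1 segment per RTT."""
    return w_max - start

def rtts_binary_search(w_max, start, precision=1):
    """Binary search between start (known safe) and w_max (known ceiling)."""
    lo, hi, rtts = start, w_max, 0
    while hi - lo > precision:
        lo = (lo + hi) // 2   # probe the midpoint; it held, so raise the floor
        rtts += 1
    return rtts

# Roughly the window (in 1500-byte segments) needed to fill a
# 10 Gbit/s path at 100 ms RTT.
W = 80_000
print(rtts_additive(W, W // 2))       # 40000 RTTs: over an hour at 100 ms each
print(rtts_binary_search(W, W // 2))  # 16 RTTs: under 2 seconds
```

That is the same order of magnitude as Rhee's quote: tens of thousands of RTTs (hours at transcontinental latencies) versus a handful.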
  • by -tji (139690) on Monday March 15, 2004 @08:47PM (#8574147) Journal
    It seems to be just a very poor choice of units to quote (he was probably trying to dumb it down to something the interviewer would understand).

    From the text of the article, it sounds like it's an improvement on TCP's congestion control performance (where it widens/narrows its transmission window to allow more packets to be outstanding between ACK's). Apparently they have some big improvements over current TCP, which allow it to fully utilize high bandwidth links. TCP takes time to expand the window and "fill the pipe". With the short-lived TCP sessions used for HTTP, this is not very efficient.

    Of course, for a small fee, I'll let you use my super-duper protocol that offers virtually unlimited bandwidth - a buttzillion times faster than DSL.. it's called UDP. (UDP is very low overhead, no transmission windows, or ACK's -- or guarantees of being received.. You can stuff them onto a line as fast as it will take them.)
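The UDP joke above is easy to demonstrate; a minimal sketch (the port number and payload size are arbitrary, and nothing needs to be listening on it):

```python
import socket

# UDP: no handshake, no window, no ACKs. Each datagram is handed
# straight to the network as fast as this loop runs, and nothing
# guarantees any of them arrive.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * 1460               # about one MSS worth of data
sent = 0
for _ in range(1000):
    sent += sock.sendto(payload, ("127.0.0.1", 9999))
sock.close()
print(sent)  # bytes "transferred" -- none guaranteed delivered
```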
  • by Anonymous Coward on Monday March 15, 2004 @08:51PM (#8574180)
    Sometimes a quantitative change in technology leads to a qualitative change in society. Witness what the emergence of DSL has done to the music and movie industries.

    Even if less than 1/100 of the claimed speeds were widely implemented this would probably signal the end of copyright as we know it.

    Why? Users would be able to exchange a lifetime's worth of movies, software - you name it- in a matter of days or hours.

    As socially disruptive as that might be one can imagine truly incredible new applications that would be far more socially disruptive:

    Every internet user could in effect become a TV broadcaster if they so desired. In charge of not just one channel but many. The best channels, like the best blogs, could become hugely politically and/ or culturally influential. The big TV networks' grip would almost certainly be loosened far more than it already has been by the arrival of the net (I rarely watch TV these days, like many of my friends).

    Even the above is just a microcosm of what could be achieved. Because if speeds of that order could ever be widely implemented, it would be like wiring together millions of neurons: you would end up with behaviour and results totally unexpected from examination of the individual components of the system.

    Knowing all the above, how many people here are willing to bet that if the "powers that be" see such a technology looming on the horizon they will not try to kill it or severely cripple it from the outset? Personally, I believe that if a technology is commercially and technically feasible then, in a market economy, it is almost impossible to stop.
  • by joelja (94959) * on Monday March 15, 2004 @09:02PM (#8574255)
    I suppose it makes sense that the semi-clued can't tell the difference between a transport protocol and a link-layer protocol. The situation is further obscured by the differences between the 4-layer IETF model for protocol stacks and the 7-layer OSI model, both of which are more or less obsolete once link-layer signaling starts affecting what goes on in the upper layers -- as many efforts in standards bodies aim to make it do, or the converse.

    Basically, though, things like BIC-TCP, and a lot of the tuning you can do to just plain old TCP, are there so people with really fat network connections can utilize them in some sane fashion with a comparatively small number of data flows...

    If you happen to have 10 Gbit/s Ethernet or OC-192 POS circuits into your office and need to move data in reasonable amounts of time, this might be welcome news. There's nothing here that amounts to a new link layer, though, or really any technology that's useful in the near- or long-term future to more than a tiny subset of all transport consumers.

    A reasonable desktop machine built today can do a passable job of keeping a gigabit Ethernet link full, which is fine if you have one, but not so useful if you don't. While the computing power I have personally available at home has increased by a factor of around 10,000 in the last decade, the actual speed of my external network connectivity has only increased (and I'm being optimistic here) by a factor of around 100 (to 1.5 Mb/s symmetric). I don't see any evidence to indicate that this is likely to change anytime soon, although if we follow the trend line out another decade, maybe OC-3-style connectivity will really exist to the home. The gap between computing resources and available bandwidth doesn't really seem likely to get any narrower, however. Thus our ability to use data (of any variety) that we have to transport over a network is constrained not by protocol innovation but by the piddling little link-layer connections that connect our homes and workplaces to the rest of the network.
  • by Zareste (761710) on Monday March 15, 2004 @10:14PM (#8574766) Homepage
    No doubt they've already gotten a slew of calls from the MATRIX psychos, eh?
  • by Cramer (69040) on Monday March 15, 2004 @10:41PM (#8574952) Homepage
    PMTU discovery adjusts the maximum segment size (MSS) for a connection in a downward direction. It starts out at the interface MTU. Ethernet is limited to 1500 bytes without some "jumbo frame" voodoo. (Bad things happen when this value is increased on hardware/software that doesn't know how to deal with it.)

    BIC will only make a difference for "distant" nodes -- let's say 10,000 miles (~16,000 km). At that distance, it takes 89.408 ms for the bits to get from end to end, assuming it's a straight shot (no switches, routers, etc.). The initial connection opening will take 3x the link delay for the required SYN, SYN+ACK, ACK. It takes 0.12112 ms to transmit a single packet. TCP will stop at 3 to 5 packets to await an ACK. In that time, 733 additional packets could have been sent. Jumbo frames (9 KB) would increase throughput by ~6x, but there'd still be a 118-packet-time pause. All this boils down to ~480 KB/s with jumbo frames.

    (somebody go check my math.)
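Checking the math: the 0.12112 ms figure corresponds to a 1514-byte Ethernet frame on a 100 Mbit/s link. A quick script (assuming light at roughly 2/3 c in fibre -- an assumption on my part, which gives a somewhat shorter one-way delay than the parent's 89.408 ms):

```python
C_FIBRE = 2.0e8          # m/s, roughly 2/3 the vacuum speed of light
DIST = 16_000e3          # m, ~10,000 miles
LINK = 100e6             # bit/s
FRAME = 1514 * 8         # bits: full Ethernet frame including headers

one_way = DIST / C_FIBRE             # propagation delay, one direction
rtt = 2 * one_way
serialize = FRAME / LINK             # time to clock one frame onto the wire
in_flight = rtt / serialize          # frames the pipe can hold per RTT

# Throughput if the sender stalls after a 5-frame window each RTT
# (5 frames carrying ~1460 bytes of TCP payload each):
throughput = 5 * 1460 / rtt          # bytes/s

print(f"one-way delay:  {one_way * 1e3:.1f} ms")    # 80.0 ms at 2/3 c
print(f"serialization:  {serialize * 1e3:.5f} ms")  # 0.12112 ms
print(f"frames per RTT: {in_flight:.0f}")
print(f"stalled throughput: {throughput / 1024:.1f} KB/s")
```

The small-window throughput comes out in the tens of KB/s, which is why a large window (or jumbo frames) matters so much on long fat pipes.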
  • Quick description (Score:5, Interesting)

    by ziegast (168305) on Monday March 15, 2004 @11:56PM (#8575523) Homepage
    Seen on his website...

    BI-TCP is a new protocol that ensures a linear RTT fairness under large
    windows while offering both scalability and bounded TCP-friendliness.
    The protocol combines two schemes called additive increase and binary
    search increase. When the congestion window is large, additive increase
    with a large increment ensures linear RTT fairness as well as good
    scalability. Under small congestion windows, binary search increase is
    designed to provide TCP friendliness.


    My interpretation: This protocol would transfer data more efficiently than TCP/IP's teeny-tiny packets and quickly figure out the correct packet size to maximize transfer speed. For similar reasons that a congested ATM network shreds the performance of multiple large TCP/IP data transfers, BI-TCP works better than TCP/IP at higher speeds. If you don't have OC-oh-my-god between your end-points, TCP/IP will continue to work fine for you.
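The quoted description combines two growth rules: binary-search increase toward the last loss point, with the per-RTT step capped so that far from the target it degenerates into additive increase. A toy rendering (my simplification of the description above, not NCSU's actual implementation; the constants are illustrative assumptions):

```python
S_MAX = 32   # largest per-RTT increment: additive-increase regime
S_MIN = 1    # finest probe granularity near the target

def next_cwnd(cwnd, w_max):
    """One RTT of BIC-style growth toward the last known loss point w_max."""
    if cwnd < w_max:
        step = (w_max - cwnd) / 2             # binary search toward w_max
        step = min(max(step, S_MIN), S_MAX)   # clamp: additive when far away
    else:
        step = S_MIN                          # past the old maximum: probe gently
    return cwnd + step

print(next_cwnd(100, 1000))   # far from target: capped additive step -> 132
print(next_cwnd(990, 1000))   # near target: binary-search step -> 995.0
print(next_cwnd(1000, 1000))  # at/past target: slow probing -> 1001
```

The clamp is what gives the "linear RTT fairness" the quote mentions: large flows all grow by S_MAX per RTT regardless of how far below capacity they are, while the binary-search half takes over close to the target for TCP friendliness.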
