BIC-TCP 6,000 Times Quicker Than DSL

An anonymous reader writes "North Carolina researchers have developed an Internet protocol, subsequently tested and affirmed by Stanford, that hums along at speeds roughly 6,000 times that of DSL. The system, called BIC-TCP, beat out competing protocols from Caltech, University College London and others. The results were announced at IEEE's annual communications confab in Hong Kong." Update: 03/16 04:46 GMT by T : ScienceBlog suggests this alternate link while their site is down.
  • by Null_Packet ( 15946 ) * <<ten.rehcsod> <ta> <tekcapllun>> on Monday March 15, 2004 @06:51PM (#8573639)
    It would be interesting to know how far out an implementation of such a protocol on a large scale is.
    • by ackthpt ( 218170 ) * on Monday March 15, 2004 @07:16PM (#8573905) Homepage Journal
      It would be interesting to know how far out an implementation of such a protocol on a large scale is.

      As we all know, pr0n drives the technology bubble. Indicate that the average luser could watch internet pr0n real time over a 56K modem and it's just a matter of time.

    • by Ungrounded Lightning ( 62228 ) on Monday March 15, 2004 @09:02PM (#8574682) Journal
      It would be interesting to know how far out an implementation of such a protocol on a large scale is.

      It already IS implemented.

      Or do you mean a large-scale "rollout"?

      If so, why bother? Unless you have a REALLY fat pipe and need to use it all for one stream, of course. (But not many need to do that, and the ones that do can now install it on both end points.)

      The phrasing of the article is leading to confusion. This is about a PROTOCOL, not about the UNDERLYING TRANSPORT.

      The TCP protocol, with its windows, handshaking turnarounds, and timeouts, imposes its own limit on the speed of the data transfer through it. For decades the limit imposed by TCP was so far above the limits imposed by the data rates of the underlying transport that it wasn't a major issue.

      But now some people are starting to have REALLY fast pipes. And for them TCP is becoming the limiting factor.

      So now researchers have come up with a tweaked version of TCP that won't hit the wall until the pipe is a LOT faster than what YOU can rent from your ISP. (Unless you're renting an OC-192, in which case you might be starting to fall a little short of its capacity. But if you've got OC-48 or below you're fine.)

      When you CAN rent something over 6 Gbps, and you want to routinely use it all for a single TCP connection to get a REALLY FAST download, you might want to ask the nice professors for a THIRD generation TCP. B-)

      Meanwhile, if you're on an ordinary connection you're not going to increase your data rate by a factor of 6,000 by switching protocols. You might get a little bit closer to the line rate with this SECOND generation TCP. But that's it.

      Expect to see this gradually start showing up in protocol stacks as an option - automatically configured if both ends know about it and the inventors have come up with a backward-compatible negotiation. That way you'll be able to make better use of fat pipes when you can finally get them.
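
      A rough back-of-the-envelope sketch in Python (my own, assuming the classic 64 KB window with no RFC 1323 scaling) of why one connection hits that wall no matter how fast the line is:

          def tcp_throughput_ceiling(window_bytes, rtt_seconds):
              # One TCP connection can never move more than a full window
              # per round trip, no matter how fast the underlying pipe is.
              return window_bytes / rtt_seconds

          window = 64 * 1024   # bytes: the 16-bit window field, unscaled (assumed)
          rtt = 0.100          # 100 ms cross-country round trip (assumed)

          mbps = tcp_throughput_ceiling(window, rtt) * 8 / 1e6
          print(f"ceiling: {mbps:.1f} Mbit/s")   # ~5.2 Mbit/s, far below OC-48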
      • by Jodka ( 520060 ) on Monday March 15, 2004 @09:55PM (#8575074)
        "That way you'll be able to make better use of fat pipes when you can finally get them."

        According to an email I received today they are already available and no prescription is required.
      • But now some people are starting to have REALLY fast pipes. And for them TCP is becoming the limiting factor.

        It's pretty darn easy to get really fast pipes. Motherboards ship with Gigabit Ethernet now, and Gigabit switches are way down in price. Most companies these days are building their networks on TCP/IP, so this could be a pretty big thing for corporate networks, iSCSI, etc. 10GigE isn't all that far away either.

        TCP/IP is bigger than the internet these days [admittedly, the server is down so I can't read the arti

        • by Ungrounded Lightning ( 62228 ) on Tuesday March 16, 2004 @03:03AM (#8576416) Journal
          But now some people are starting to have REALLY fast pipes. And for them TCP is becoming the limiting factor.

          It's pretty darn easy to get really fast pipes. Motherboards ship with Gigabit Ethernet now, and Gigabit switches are way down in price. Most companies these days are building their networks on TCP/IP, so this could be a pretty big thing for corporate networks, iSCSI, etc. 10GigE isn't all that far away either.


          The much higher speeds on a LAN are a good point.

          But.

          "The Wall" for TCP is a lot faster within a building than across a continent.

          The limit comes primarily from round-trip delay - which is much shorter when things are microseconds apart than when they're milliseconds apart at speed-of-light-in-wire-or-fiber.

          The limit also comes from timeouts after lost or corrupted packets - from line flakiness or congestion. But line flakiness is nearly nonexistent on a LAN. As for congestion, if you're using switches rather than hubs it's also not as much of an issue within a building as it is in a cross-continent backbone.
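
          The same window arithmetic, with LAN-scale versus continent-scale RTTs plugged in (a sketch; the unscaled 64 KB window is an assumption):

              WINDOW = 64 * 1024  # bytes in flight per round trip (assumed)

              for name, rtt in [("LAN", 100e-6), ("cross-continent", 100e-3)]:
                  mbps = WINDOW / rtt * 8 / 1e6
                  print(f"{name}: {mbps:.1f} Mbit/s ceiling")
              # LAN: ~5243 Mbit/s ceiling; cross-continent: ~5.2 Mbit/s ceiling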
      • When you CAN rent something over 6 Gbps, and you want to routinely use it all for a single TCP connection

        Dude, my computer can barely talk to my RAM that fast... I don't need the next Paris Hilton video quite that quickly.
    • No doubt they've already gotten a slew of calls from the MATRIX psychos, eh?
    • by km790816 ( 78280 ) <wqhq3gx02@@@sneakemail...com> on Monday March 15, 2004 @09:25PM (#8574839)
      For those that found a dead link, a better article: 'Better' TCP Invented [lightreading.com]

      Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.
  • by dodald ( 195775 ) * on Monday March 15, 2004 @06:51PM (#8573641) Homepage
    How can a protocol be rated faster than DSL? Shouldn't the rating be against another protocol? Did I miss something in the article?
    • by wankledot ( 712148 ) on Monday March 15, 2004 @06:54PM (#8573681)
      Comparing it to DSL (and POTS) was really stupid, IMO, but they needed something that would connect with the average reader (I guess). They don't say anything about what kind of physical/data-link layers the thing runs on. Comparing it to DSL and modems leads the novice reader to think it works on POTS lines, which I'm sure is not the case.

      Neat stuff, stupid stupid article.

      • I think DSL originally ran over a predominantly ATM transport layer

        DSL being the physical layer, with PPP running on top as either PPPoA (ATM) or PPPoE (Ethernet).

        I guess this BIC-TCP is a new topology option to go with the PPP.

        • by Cramer ( 69040 ) on Monday March 15, 2004 @08:53PM (#8574618) Homepage
          DSL is a modulation technology. You can do whatever you want with the bits entering and leaving the modulator/demodulator (mo-dem). Frame Relay and ATM are the predominant "layer2" transports, with PPP gaining ground (PPPoKitchenSink is all the rage) and RFC1489(?) bridged ethernet losing ground (which is a shame, as it has the lowest protocol overhead of all of them, especially compared to PPP).

          What is BIC trying to fix? It certainly isn't "the internet" as most links, on average, run at a fraction of their available bandwidth. TCP can fill up more bandwidth than most people can afford. It looks like the researchers with these insane connections and even more insane data sets want the holy grail of zero protocol overhead and none of the inherent throttling. (TCP limits the number of packets it will transmit before pausing for an ack. As a result, a single TCP connection usually will not consume a gigE link -- 4 connections certainly can.)
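
          To put numbers on that last aside (a sketch, not a measurement; the 64 KB window and the 2 ms RTT are assumptions):

              import math

              LINK = 1e9                      # GigE, bits per second
              window, rtt = 64 * 1024, 0.002  # assumed window (bytes) and RTT (s)

              per_conn = window / rtt * 8     # ~262 Mbit/s ceiling per connection
              print(math.ceil(LINK / per_conn))  # 4 parallel connections to fill it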
          • You are thinking of RFC1483 [faqs.org] (Multiprotocol Encapsulation over ATM Adaptation Layer 5) which is obsoleted by RFC2684 [faqs.org]

            1483 Bridged does have lower overhead than PPPoA or PPPoE, but it sends broadcasts down the wire, both IP and Ethernet type. Depending on your network this could waste more bandwidth than PPPoX. 1483 Routed solves this, but you need to allocate more IP space to use it.

            It's all a horse a piece.

            ft
        • There are 7 layers in the OSI model. The pair of copper wires (phone line) would be layer 1; the way the DSL modem sends data (there are different ways of modulating the data) is layer 2. At layer 3 we finally start getting to the actual data, but not yet a protocol for exchanging it - that's where ATM sits (IP over ATM, I think). At layer 4 we finally get some data exchange going on between peers, and I forget the rest (5, 6, 7). But I am assuming from the article their protocol is at the l
          • People keep saying TCP/IP, great, but they forget UDP.

            I wouldn't want an internet without UDP/port 53 (DNS); that wouldn't be much fun, trust me, although maybe I could remember the IP addresses of Google if I really wanted to.

            That would help.
    • by dreamchaser ( 49529 ) on Monday March 15, 2004 @06:54PM (#8573686) Homepage Journal
      That was my first thought. Isn't that like saying that they've invented gasoline that goes faster than a car?
      • by wankledot ( 712148 ) on Monday March 15, 2004 @06:56PM (#8573719)
        Better yet, they've invented tires for the space shuttle that are capable of going 100k times faster than regular tires. I want some of those tires for my Pinto! They'll make it that much faster!
        • by iminplaya ( 723125 ) on Monday March 15, 2004 @07:35PM (#8574054) Journal
          I want some of those tires for my Pinto! They'll make it that much faster!

          Yeah, maybe you'll be able to out run the fire in your gas tank. :-)
      • by Dun Malg ( 230075 ) on Monday March 15, 2004 @06:58PM (#8573735) Homepage
        That was my first thought. Isn't that like saying that they've invented gasoline that goes faster than a car?

        Or like saying they've invented a vehicle that goes faster than a NASCAR racetrack.

      • I *think* what they're trying to say is that BIC-TCP can utilize high-speed networks a lot better than plain-vanilla TCP/IP. But I don't know what the heck DSL is supposed to have to do with it; the physical *medium* consumer DSL uses (copper POTS lines) sure as hell isn't going to support a 9Gbps connection...
        • by Rorschach1 ( 174480 ) on Monday March 15, 2004 @07:06PM (#8573807) Homepage
          But I don't know what the heck DSL is supposed to have to do with it; the physical *medium* consumer DSL uses (copper POTS lines) sure as hell isn't going to support a 9Gbps connection...


          Sure it will... provided you're not more than three feet from the central office.

        • the physical *medium* consumer DSL uses (copper POTS lines) sure as hell isn't going to support a 9Gbps connection

          Sure, and the earth is flat. Did anyone believe that you could go faster than 56k before they unleashed DSL? Now that DSL is out, why couldn't they come up with another technology that would go 6k times faster?

          Open up!
      • Yep, or as it popped into my mind: "You've never heard of the Millennium Falcon? It's the ship that made the Kessel Run in less than 12 parsecs."
    • I was wondering the same thing. I go to NCSU, and I can't seem to find anything that conclusively says whether this is a hardware standard or a software protocol; as far as I can tell it's a software protocol. I'd really like to see some more technical info about it.
    • What the article seems to be trying to say is that this protocol works better than TCP/IP does on a heavily-used connection with bandwidth on the order of 6000 times that of a typical DSL line.

      Nothing to see here, move along... it won't get grits to your home any faster.

    • by Cynikal ( 513328 ) on Monday March 15, 2004 @06:57PM (#8573728) Homepage
      that's what I was gonna say... last I heard, DSL was a physical connection method..

      in other news AMD has developed a new architecture 80 billion times faster than grapefruit
    • by lingqi ( 577227 ) on Monday March 15, 2004 @06:59PM (#8573746) Journal
      What they mean is that the current TCP protocol becomes a bottleneck in high-bandwidth applications, so a new protocol was designed that would be efficient up to ~6000x DSL speed (just a pot-shot guess, up to 9Gb/s?). It has nothing to do with pushing data down the POTS line; it's just that if one day you had a fat pipe to your house, this new protocol would make use of it properly, unlike today's TCP.

      It's a stupid comparison, but I guess they expect people to not have an idea what 9Gb/s is...
      • by starm_ ( 573321 ) on Monday March 15, 2004 @07:57PM (#8574221)
        I have downloaded at 400kB/s on my computer.

        6000 times that is 2400MB/s

        This is faster than conventional RAM. A PC would not even be able to store the data in RAM at that speed!

        The headline is obviously sensationalism.

        There exist fast optical carriers, but they serve purposes very different from what DSL lines are meant for. These are the kind of lines that connect cities together, and they are not to be compared to DSL.
        • Eventually, mark my words, we will all have fat, fat OC-192-style connections coming into our houses and our computers will be participating in many and varied p2p networks swapping whatever the hell it is we're swapping all the time, because people will want to do this shit and technology marches on so they can keep selling stuff to people who keep forking over fistfuls of cash so they can buy it, by which I mean you and I. People are greedy and so they will keep inventing new stuff so they can sell it, it
    • by morcheeba ( 260908 ) * on Monday March 15, 2004 @07:03PM (#8573773) Journal
      Yep, it looks like the article makes no sense at all.

      Dr. Rhee [ncsu.edu], who made that comparison, also made another factual error: "TCP was originally designed in the 1980s when Internet speeds were much slower and bandwidths much smaller" -- TCP was actually invented in 1974. [about.com] Not that major, but you wouldn't expect a guy who "has been researching network congestion solutions for at least a decade" to miss the mark by so much.

      Hopefully the reporter was confused, but since it was a press release, you'd think that it would have had time to go through some review.
      • by dbrower ( 114953 ) on Monday March 15, 2004 @07:26PM (#8573985) Journal
        TCP as we know it was NOT invented in 1974 -- that was the original ARPANET, before the conversion to the IPv4 internet around 1983. Dr. Rhee is closer to being correct on this point than the confused references.

        Much algorithmic change has happened between the days of the 56k ARPANET and multi-gigabit networks also using IP. Van Jacobson's slow start and other ways of working out tradeoffs on bandwidth/delay vs. window size have been fiddled with for years, and arguably TCP as we know it is too compromised by history to work well at high speeds -- at least, that's what Rhee's comment suggests.

        This is really relevant stuff, not to be dismissed by wannabes.

        -dB

        • Uh, no.

          Internet Standard #1 (currently RFC-3600 [ietf.org] - November 2003) lists TCP as being Standard #7, which is outlined in RFC-793 [ietf.org]. RFC-793 was published in September 1981. In other words, we are still using the 1981 edition of TCP. RFC-793 contains the following note from Jon Postel:

          This document is based on six earlier editions of the ARPA Internet Protocol
          Specification, and the present text draws heavily from them.

          In addition, the RFC index [isi.edu] lists RFC-675 (December 1974) as, "The first detailed specifica

        • If we're going to be pedantic:

          - the first version of the TCP specification appeared in 1973 (http://texts05.archive.org/0/texts/FirstPassDraftOfInternationalTransmissionProtocol);
          - subsequent versions were released between 1974 and 1979;
          - the final version of TCP/IP was published by DARPA in January 1980 by which time numerous implementations existed;
          - The Department of Defense standardisation recommendation was made in December 1978 and ratified in April 1980 (http://www.isi.edu/in-notes/ien/ien152.txt)

    • The idea behind researching higher-speed protocols is that if you took plain old TCP and ran it on a line 6000x faster than DSL, you would find that the workings of the protocol itself would become the performance bottleneck in the system. These guys are thinking ahead and writing the protocols we'll need on future faster networks. The blurb _is_ kinda moronic in how it compares a protocol to DSL, but at the same time it is truthful. It would have made more sense if they had made it clearer that the prot
  • oops (Score:4, Funny)

    by poot_rootbeer ( 188613 ) on Monday March 15, 2004 @06:52PM (#8573653)

    Looks like the server just got Slashdotted 6,000 times faster than normal.
  • by ptelligence ( 685287 ) on Monday March 15, 2004 @06:52PM (#8573656)
    Use it to host your blog server... immediately? You've been slashdotted.
  • Propagation delays (Score:5, Interesting)

    by trompete ( 651953 ) on Monday March 15, 2004 @06:52PM (#8573658) Homepage Journal
    Too bad they can't change the speed of light. They can put as much data on the wire as they want, but it will still take 100 ms and 25 hops to get there.
    • by jimbosworldorg ( 615112 ) <slashdot@j i m b o s w o rld.org> on Monday March 15, 2004 @06:54PM (#8573679) Homepage
      An awful lot of propagation delay tends to be equipment-internal rather than wire-length. Until you start talking about REALLY long distances like using satellite-based networking, anyway.
      • One that pisses me off is how the networks aren't bridged well. My ping to Ohio from Minneapolis was 40 MS with Comcast, and now it is 120 MS with RoadRunner. My packets are pretty tired by the time they get to San Jose and then back to Ohio. What sort of delays are you talking about inside of the devices? Most devices can start pass-through once they have the destination address out of the header. This is not true for store-and-forward devices though :(.
        • by jimbosworldorg ( 615112 ) <slashdot@j i m b o s w o rld.org> on Monday March 15, 2004 @07:09PM (#8573839) Homepage
          Every hop adds several milliseconds for processing time - and considerably more if the router in question is getting hit at the upper limit of its rated throughput (and thus having to buffer-and-wait instead of immediately routing packets).

          Speed of light is 186,000 miles per second (about 3x10^8 m/s) - from (Cincinnati) Ohio to Minneapolis is roughly 1600km by highway, which would leave you with a wire-speed delay of only about 11ms round-trip.

          The extra ~30ms you get on a well routed network generally tends to be time spent getting passed through intermediate routers along the way. Each router *does* add a noticeable amount of delay all of its own, apart from wire delay.

          • by Frennzy ( 730093 )
            That's also the speed of light in a vacuum.

            Electrical and optical signals travelling down copper or FO pathways (as well as microwaves through the air) have a reduced propagation speed. A good rule of thumb is about .7c.
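
            Plugging that rule of thumb into the Ohio-Minneapolis example upthread (a sketch):

                C = 3.0e8  # m/s, speed of light in vacuum

                def wire_rtt_ms(km_one_way, velocity_factor=0.7):
                    # Round-trip wire delay at the reduced propagation speed.
                    return 2 * km_one_way * 1000 / (velocity_factor * C) * 1000

                print(f"{wire_rtt_ms(1600):.0f} ms")  # ~15 ms round trip for ~1600 km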
        • by SETIGuy ( 33768 ) on Monday March 15, 2004 @08:45PM (#8574549) Homepage
          My ping to Ohio from Minneapolis was 40 MS with Comcast, and now it is 120 MS with RoadRunner.


          40 Megasiemens? Don't you also need to know the capacitance and inductance of the connection in order to figure out the ping time from that?

          % units

          You have: 40 MS
          You want: years
          conformability error
          40000000 A^2 s^3 / kg m^2
          31556926 s
    • They can put as much data on the wire as they want, but it will still take 100 ms and 25 hops to get there.

      That's the point, though; they're trying to put data on the wire more often than before. TCP doesn't start out by saturating the wire, but instead slowly "tests the water" and transfers data more and more frequently until it is confident it has saturated the line.

      This protocol, on the other hand, figures out the capacity of the line faster, and thus can saturate it more quickly. The difference b

  • hmm (Score:5, Interesting)

    by krisp ( 59093 ) * on Monday March 15, 2004 @06:53PM (#8573667) Homepage
    This seems misleading. The article says:
    "What takes TCP two hours to determine, BIC can do in less than one second,"

    Which looks to me like it can figure out the maximum bandwidth of a channel in a fraction of the time it generally takes TCP to do it, so as soon as you start transmitting at 100mbit you are using the entire pipe. Sure, it's 6000 times faster than DSL, but it's not when it is used over the same DSL pipe. This is for getting data across faster when you have massive bandwidth, not for bringing broadband into homes.
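
    One plausible reading of the "less than one second" claim, sketched in Python (the probe function here is a hypothetical stand-in; really probing means sending a window's worth of packets and watching for loss):

        def probe(window, capacity=83_333):
            # Hypothetical: does a window this big survive without loss?
            return window <= capacity

        lo, hi, rtts = 1, 2**20, 0       # known-good floor, assumed-bad ceiling
        while hi - lo > 1:               # each probe costs roughly one RTT
            mid = (lo + hi) // 2
            lo, hi = (mid, hi) if probe(mid) else (lo, mid)
            rtts += 1
        print(lo, rtts)                  # finds ~83,333 packets in ~20 RTTs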
  • by ejaw5 ( 570071 ) on Monday March 15, 2004 @06:53PM (#8573669)
    Nerd: I've developed a program that downloads porn from the internet a million times faster than normal

    Marge: Who would need that much porn?

    Homer: [drools]...oohhh..1 million times faster..
  • by YanceyAI ( 192279 ) *
    Many national and international computing labs are now involved in large-scale scientific studies of nuclear and high-energy physics, astronomy, geology and meteorology. Typically, Rhee said, "Data are collected at a remote location and need to be shipped to labs where scientists can perform analyses and create high-performance visualizations of the data."

    They forgot to mention Steam.

  • This is a very impressive development... but I have to wonder. Current home computers would have no chance of processing data fast enough to keep up with that speed. I wonder how long it would take to get to the point where they could?

    However, the idea is exciting... imagine! Internet at the speed of computer.
  • Bottlenecks (Score:2, Informative)

    by pholower ( 739868 )
    It doesn't matter what the bandwidth of your incoming pipe is; it only matters what the connections of the other servers and switches in the "internet cloud" are. At a rate like that, I would also wonder if ANY of the infrastructure we have in place would be able to keep up. Seems like something that wouldn't happen for decades.

  • Ack... (Score:2, Redundant)

    Someone needs a clue-bashing with the OSI model. A new internet protocol that's faster than DSL?? So... it negates the use of a physical transmission system... or... what?
  • by TheOnlyCoolTim ( 264997 ) <tim.bolbrock@nOspam.verizon.net> on Monday March 15, 2004 @06:54PM (#8573683)
    There's 640 kbps DSL and there's 3 Mbps DSL...

    I want it in LOC/sec.

    Tim
  • mirror (Score:5, Informative)

    by Anonymous Coward on Monday March 15, 2004 @06:55PM (#8573691)
    Slowing down so here it is...

    New protocol could speed Internet significantly
    Posted on Monday, March 15 @ 14:04:08 EST by bjs

    Researchers in North Carolina have developed a data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic. The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in a set of experiments that determined its stability, scalability and fairness in comparison with other protocols. The study tested six other protocols developed by researchers from schools around the world, including the California Institute of Technology and the University College of London. BIC can reportedly achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems.

    From North Carolina State University:

    NC State Scientists Develop Breakthrough Internet Protocol

    Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.

    The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in a set of experiments that determined its stability, scalability and fairness in comparison with other protocols. The study tested six other protocols developed by researchers from schools around the world, including the California Institute of Technology and the University College of London.

    Dr. Injong Rhee, associate professor of computer science, said BIC can achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems. While this might translate into music downloads in the blink of an eye, the true value of such a super-powered protocol is a real eye-opener.

    Rhee and NC State colleagues Dr. Khaled Harfoush, assistant professor of computer science, and Lisong Xu, postdoctoral student, presented a paper on their findings in Hong Kong at Infocom 2004, the 23rd meeting of the Institution of Electrical and Electronics Engineers Communications Society, on Thursday, March 11.

    Many national and international computing labs are now involved in large-scale scientific studies of nuclear and high-energy physics, astronomy, geology and meteorology. Typically, Rhee said, "Data are collected at a remote location and need to be shipped to labs where scientists can perform analyses and create high-performance visualizations of the data." Visualizations might include satellite images or climate models used in weather predictions. Receiving the data and sharing the results can lead to massive congestion of current networks, even on the newest wide-area high-speed networks such as ESNet (Energy Sciences Network), which was created by the U.S. Department of Energy specifically for these types of scientific collaborations.

    The problem, Rhee said, is the inherent limitations of regular TCP. "TCP was originally designed in the 1980s when Internet speeds were much slower and bandwidths much smaller," he said. "Now we are trying to apply it to networks that have several orders of magnitude more available bandwidth." Essentially, we're using an eyedropper to fill a water main. BIC, on the other hand, would open the floodgate.

    Along with postdoctoral student Xu, Rhee has been working on developing BIC for the past year, although Rhee said he has been researching network congestion solutions for at least a decade. The key to BIC's speed is that it uses a binary search approach - a fairly common way to search databases - that allows for rapid detection of maximum network capacities with minimal loss of information. "What takes TCP two hours to determine, BIC can do in les
  • by ClayJar ( 126217 ) on Monday March 15, 2004 @06:55PM (#8573693) Homepage

    To quote the part that says what the article is actually about:

    The key to BIC's speed is that it uses a binary search approach - a fairly common way to search databases - that allows for rapid detection of maximum network capacities with minimal loss of information. "What takes TCP two hours to determine, BIC can do in less than one second," Rhee said.
    • by zalas ( 682627 ) on Monday March 15, 2004 @07:13PM (#8573867) Homepage
      I think a better summary would be that this is not entirely a new protocol. Rather, it's a variant on TCP with changes to the window-increase portion of the code. Basically, they claim that there currently exists an unfairness in the allocation of bandwidth between two connections sharing a pipe: having different round-trip times causes them to share the bandwidth unfairly. Their protocol supposedly alleviates this problem on high-bandwidth pipes, whereas TCP does not.
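
      A toy simulation of that unfairness (my own sketch, not from the paper): two additive-increase flows behind a drop-tail bottleneck, with synchronized halving when the queue overflows. The short-RTT flow grows its window more often and ends up with roughly sixteen times the throughput.

          CAPACITY = 1000                  # bottleneck size, packets in flight
          w = {"short": 1.0, "long": 1.0}  # congestion windows
          rtt = {"short": 1, "long": 4}    # the long flow's RTT is 4x longer

          for tick in range(20000):
              for f in w:
                  if tick % rtt[f] == 0:
                      w[f] += 1.0          # additive increase, once per RTT
              if w["short"] + w["long"] > CAPACITY:
                  for f in w:
                      w[f] /= 2            # synchronized drop-tail loss
          print({f: round(w[f] / rtt[f]) for f in w})  # throughput ~ w/RTT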
  • A new protocol that's 150,000 times the speed of current modems? Uh... I think the reviewer got a little mixed up here. There's the max theoretical speed of the transmission line, and then there's the speed at which the protocol can transmit over that line. While I'm sure it can make modems faster by transmitting more bytes, it's not going to make modems 150,000 times faster.
  • by Ars-Fartsica ( 166957 ) on Monday March 15, 2004 @06:56PM (#8573712)
    When did a protocol become "faster" than a transmission technology?

    The article is /.'d so I can't figure out what this means - what transmission media/hardware are they using? I can make plain old TCP/IP 600,000 times faster than "DSL speeds" if I have hardware that meets that specification.

  • by DaveRobb ( 139653 ) on Monday March 15, 2004 @06:59PM (#8573744)
    This article somewhat erroneously compares the speed of "DSL" vs the speed of "BIC-TCP". DSL is a link-layer protocol. BIC-TCP is a transport-layer protocol. These are different things. See http://www.webopedia.com/quick_ref/OSI_Layers.asp [webopedia.com] for details.

    The question I'd love to ask the authors would be "so, what happens when I run BIC-TCP over a DSL modem? Does it suddenly become 6000 times faster?" I don't think so.
    Connections are still going to be constrained by the underlying link speed, and the internet will not become thousands of times faster overnight because of this.

    Sure, BIC-TCP looks like it's more efficient than TCP and that's a good thing, but the gains this protocol provides over TCP are in scalability when using suitably big links.

  • Yeah but... (Score:5, Funny)

    by gilmet ( 601408 ) on Monday March 15, 2004 @06:59PM (#8573748) Homepage
    Does it beat out AOL 9.0 Topspeed technology?
  • And so... (Score:5, Funny)

    by Tuxedo Jack ( 648130 ) on Monday March 15, 2004 @07:00PM (#8573755) Homepage
    This becomes just another fast way to piss the RIAA off.
  • by RajivSLK ( 398494 ) on Monday March 15, 2004 @07:01PM (#8573758)
    I have developed a super fast car that is 6,000 times quicker than your driveway, a delicious orange that is 6,000 times tastier than your tongue, and a new form of water that is 6,000 times wetter than your garden hose!

    Please send lots of money in the form of grants to
    super inventor guy
    123 fake street
    v3n3r9
  • by Animats ( 122034 ) on Monday March 15, 2004 @07:02PM (#8573769) Homepage
    They've discovered gigabit Ethernet! Wow!
  • by rock_climbing_guy ( 630276 ) on Monday March 15, 2004 @07:03PM (#8573776) Journal
    It's 6000x faster than MSN DSL, isn't it?
  • by bigsexyjoe ( 581721 ) on Monday March 15, 2004 @07:04PM (#8573788)
    Actually I'll just put the abstract below. If you want to read their paper, code, and other goodies, click here [ncsu.edu]

    High-speed networks with large delays present a unique environment where TCP may have a problem utilizing the full bandwidth. Several congestion control proposals have been suggested to remedy this problem. In these protocols, mainly two properties have been considered important: TCP friendliness and bandwidth scalability. That is, a protocol should not take away too much bandwidth from TCP while fully utilizing the full bandwidth of high-speed networks. We present another important constraint, namely, RTT (round trip time) unfairness, where competing flows with different RTTs may consume vastly unfair bandwidth shares. Existing schemes have a severe RTT unfairness problem because the window increase rate gets larger as the window grows - ironically the very reason that makes them more scalable. The problem occurs distinctly with drop tail routers where packet loss can be highly synchronized. BIC-TCP is a new protocol that ensures a linear RTT fairness under large windows while offering both scalability and bounded TCP-friendliness. The protocol combines two schemes called additive increase and binary search increase. When the congestion window is large, additive increase with a large increment ensures linear RTT fairness as well as good scalability. Under small congestion windows, binary search increase is designed to provide TCP friendliness.
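
    The combination of those two schemes reads roughly like this in code (my paraphrase of the abstract, not the authors' implementation; S_MAX is an assumed constant):

        S_MAX = 32  # assumed cap on the per-RTT window increment

        def bic_increase(cwnd, w_max):
            # One RTT of growth toward w_max, the window size at the last loss.
            if cwnd < w_max:
                # Binary-search toward the midpoint, capped at S_MAX; the cap
                # makes growth additive (and RTT-fair) when far below w_max.
                step = min((w_max - cwnd) / 2, S_MAX)
            else:
                step = S_MAX  # past the old maximum: probe for new capacity
            return cwnd + max(step, 1)

        cwnd, w_max = 100, 80_000
        for _ in range(4):
            cwnd = bic_increase(cwnd, w_max)
            print(round(cwnd))  # 132, 164, 196, 228: additive while far away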

  • So What.... (Score:5, Funny)

    by Sophrosyne ( 630428 ) on Monday March 15, 2004 @07:08PM (#8573833) Homepage
    I've invented a pen that can write 6000 times faster than a pencil.
    (fine print: super human strength required, in order to reach maximum speed alterations of the laws of physics may be necessary.)
  • More Information (Score:3, Informative)

    by Percy_Blakeney ( 542178 ) on Monday March 15, 2004 @07:12PM (#8573854) Homepage
  • by colman77 ( 689696 ) on Monday March 15, 2004 @07:12PM (#8573858)
    This article is much clearer. http://www.csc.ncsu.edu/faculty/rhee/export/bitcp/
  • by IvyMike ( 178408 ) on Monday March 15, 2004 @07:26PM (#8573979)
    Ok, the article (especially the "6000x faster than DSL") doesn't make a whole lot of sense. Here's my take on it: they're talking about a new congestion avoidance mechanism.

    Here's a super-simplified version of the problem they're trying to solve: Imagine you have a 3Mbps link to your ISP, as do 49 of your neighbors. However, your ISP has a 45Mbps T3 link to the outside internet. What happens when everybody on your ISP tries to download the Half-Life 2 demo at the same time, creating a need for 150 Mbps at the ISP uplink? This is called congestion.

    There are various solutions that you can use for congestion avoidance; you may have heard of TCP Vegas and Reno [psu.edu] (I'm linking to the PDF document, because it contains a lot of math. This should also be a signal to you about how ridiculously simplified my explanation above is). Obviously, when there is congestion, somebody's got to wait, but determining who and how is not as easy as it might seem.

    The new part of the problem is: today's fast networks have very different bandwidth and latency ratios to the networks of even five years ago. Vegas and Reno congestion avoidance algorithms don't work as well as they used to under these conditions. This paper presents a solution that does work well on today's high-speed networks. (Maybe somebody with more expertise could pipe in here with a discussion of "why the existing mechanisms don't work well, and how the new solutions address the problem"?)

    I believe Slashdot has already covered FAST [caltech.edu], which I think is a different solution to the same problems.
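
    For a feel of what those mechanisms do, here is a heavily simplified Reno-style loop (my sketch, not the algorithms from the linked paper): exponential slow start up to a threshold, linear growth afterwards, halving on loss.

        def reno_step(cwnd, ssthresh, loss):
            if loss:
                return max(cwnd / 2, 1), max(cwnd / 2, 1)  # multiplicative decrease
            if cwnd < ssthresh:
                return cwnd * 2, ssthresh                  # slow start: double per RTT
            return cwnd + 1, ssthresh                      # avoidance: +1 per RTT

        cwnd, ssthresh = 1, 64
        for _ in range(10):
            cwnd, ssthresh = reno_step(cwnd, ssthresh, loss=False)
        print(cwnd)  # exponential up to 64, then linear: 68 after 10 RTTs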
  • by mynameis (mother ... ( 745416 ) on Monday March 15, 2004 @07:26PM (#8573981)

    A mistake of this magnitude really calls for the removal of ALL of his geek-points, immediate surrender of any ssh keys, termination of all accounts on any non-windows machines, immediate discontinuation of WEP encryption, reversion to SSID "netgear", and unrestricted enablement of "File & Printer Sharing".

    Unless he can demonstrate how a Honda can get more people somewhere than the highway they now use... Well, actually, more like the license plate and turn signals on a Honda, but I'll let him off easy :)

  • by bigginal ( 210452 ) on Monday March 15, 2004 @07:29PM (#8574006)
    I'm in Dr. Rhee's CSC316 (Data Structures for Computer Scientists) course. He absolutely knows his stuff, but he can be very hard to understand sometimes. His website is here [ncsu.edu], with a picture of the guy that doesn't really do him justice. When he walks into the classroom, I swear he looks like one of those laid-back teachers who will just let you slide through the course, but he *really* makes you learn the material, inside and out.

    Anyway, if you're interested in a link to the original article hosted off of the NCSU servers, it is here [ncsu.edu].

    -bigginal
  • Clarification (Score:5, Insightful)

    by Percy_Blakeney ( 542178 ) on Monday March 15, 2004 @07:31PM (#8574022) Homepage
    It seems that the protocol is meant to decrease the amount of time it takes to fully utilize a certain (large) amount of bandwidth. TCP isn't designed to quickly utilize huge amounts of bandwidth, so they are compensating for that. To quote from their site:

    In order for TCP to increase its window for full utilization of 10Gbps with 1500-byte packets, it requires over 83,333 RTTs [round trip times]. With 100ms RTT, it takes approximately 1.5 hours...

    If I understand correctly, they are not making the inherent speed faster, they are just making the protocol able to understand the nature of the bandwidth more quickly, thus improving its ability to efficiently utilize the bandwidth. Thus, instead of requiring 1.5 hours to ramp up, theirs might take a few seconds or minutes.

    My guess is that you aren't going to see huge gains from this for the average person; you'd need scads and scads of bandwidth in order to really need something like this -- TCP doesn't have any problem saturating a small 56kbps link.
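
    The quoted figure checks out, roughly (same assumptions as their example: +1 packet per RTT, 1500-byte packets, 100 ms RTT):

        link_bps = 10e9   # 10 Gbps pipe
        pkt_bits = 1500 * 8
        rtt      = 0.100

        window_pkts = link_bps * rtt / pkt_bits  # packets in flight to fill the pipe
        print(f"{window_pkts:,.0f} RTTs, {window_pkts * rtt / 3600:.1f} hours")
        # ~83,333 RTTs, ~2.3 hours of error-free ramp-up from a cold start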

  • Summary of Paper (Score:5, Informative)

    by HopeOS ( 74340 ) on Monday March 15, 2004 @07:41PM (#8574092)
    First, the actual paper [ncsu.edu] is more informative. The crux of the argument is as follows.

    If you have a fat pipe, say 1 to 10Gb/s, standard TCP will not fully utilize the bandwidth because the congestion control algorithm throttles the rate. As packets move and there are no errors, the rate increases, but not nearly fast enough. In particular, it takes 1.5 hours of error-free data transfer to reach full capacity, and a single error will cut the connection's bandwidth in half.
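
    Rough arithmetic behind the "single error" part (same assumptions as the NCSU example: 10 Gbps, 1500-byte packets, 100 ms RTT, +1 packet per RTT):

        full_window = int(10e9 * 0.100 / (1500 * 8))  # ~83,333 packets in flight
        after_loss  = full_window // 2                # one loss halves the window

        recovery_rtts = full_window - after_loss      # climbing back at +1 per RTT
        print(f"{recovery_rtts * 0.100 / 3600:.1f} hours to recover from one loss")
        # ~1.2 hours -- the same ballpark as the 1.5-hour figure quoted upthread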

    BIC-TCP uses a different algorithm for congestion control that is more effective at these speeds.

    End of news flash.

    -Hope
  • by cbreaker ( 561297 ) on Monday March 15, 2004 @07:41PM (#8574094) Journal
    Even if we saw this tech brought to our houses, the ISPs would still nerf the connection at some lame-ass speed like 768k/128k. I mean, cable modems are capable of 40Mbit and they usually nerf it down to 1.5Mbit/256Kbit. And no, most nodes are not saturated; before they capped my cable modem (which was after three years of using it) I used to see 10Mbit on a regular basis, and easily T-1 speeds on the upstream. I live in a heavily populated area.
  • BIC-TCP (Score:4, Funny)

    by iminplaya ( 723125 ) on Monday March 15, 2004 @07:43PM (#8574117) Journal
    "connects first time, every time."
  • by ezzzD55J ( 697465 ) <slashdot5@scum.org> on Monday March 15, 2004 @07:46PM (#8574134) Homepage
    As far as I can tell..
    • It is a transport-layer protocol, such as TCP, making statements such as "New protocol could speed Internet significantly" (the title on the article page) a bit bogus, but "BIC-TCP 6,000 Times Quicker Than DSL" utterly clueless.
    • It addresses the problem that TCP connections over low latencies get to adjust their windows faster than their higher-latency buddies sharing a link, causing the lower-latency TCP connection to get more of the bandwidth before the link is filled up (and both TCPs back off due to their congestion windows).
    • The window size is adjusted using binary search instead of an exponential increase; somehow this makes this new protocol able to adjust its window size to the maximum (representing optimum bandwidth utilisation) faster than regular TCP. Why this is remains puzzling, because both binary search and TCP (which uses a factor of the previous window size) should reach their window sizes in logarithmic time, as both searches are exponentially fast. (A toy comparison follows at the end of this comment.)

      "What takes TCP two hours to determine, BIC can do in less than one second," Rhee said.

      This is very puzzling indeed; the article doesn't back it up in the least.

    The rest of the article can be summarized as harmless fluff and clueless crud, as far as I'm concerned.
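
    One possible resolution of the puzzle in the third bullet, as a toy comparison (my own numbers, reusing the window size from the NCSU example): standard TCP is only exponential during slow start; after a loss it creeps back linearly, while a binary search between the halved window and the old maximum stays logarithmic.

        target = 83_333                    # packets in flight needed to fill the pipe

        reno_rtts = target - target // 2   # Reno: +1 per RTT from the halved window

        bic_rtts, lo, hi = 0, target // 2, target
        while hi - lo > 1:                 # binary search back up to the old maximum
            lo = (lo + hi) // 2            # assume each midpoint probe succeeds
            bic_rtts += 1

        print(reno_rtts, bic_rtts)         # ~41,667 RTTs vs ~16 RTTs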
  • by -tji ( 139690 ) on Monday March 15, 2004 @07:47PM (#8574147) Journal
    It seems to be just a very poor choice of units to quote (he was probably trying to dumb it down to something the interviewer would understand).

    From the text of the article, it sounds like it's an improvement on TCP's congestion control performance (where it widens/narrows its transmission window to allow more packets to be outstanding between ACKs). Apparently they have some big improvements over current TCP, which allow it to fully utilize high-bandwidth links. TCP takes time to expand the window and "fill the pipe". With the short-lived TCP sessions used for HTTP, this is not very efficient.

    Of course, for a small fee, I'll let you use my super-duper protocol that offers virtually unlimited bandwidth - a buttzillion times faster than DSL... it's called UDP. (UDP is very low overhead, no transmission windows or ACKs -- or guarantees of being received. You can stuff packets onto a line as fast as it will take them.)
    • Of course, for a small fee, I'll let you use my super-duper protocol that offers virtually unlimited bandwidth - a buttzillion times faster than DSL... it's called UDP. (UDP is very low overhead, no transmission windows or ACKs -- or guarantees of being received. You can stuff packets onto a line as fast as it will take them.)

      Yeah, but then you'll really want to be familiar with these new TCP congestion models since you'll need to implement something. A few years ago I had to connect a few computers on a r
  • real mirror (Score:3, Informative)

    by silicon1 ( 324608 ) <david1&wolf-web,com> on Monday March 15, 2004 @07:54PM (#8574205) Homepage Journal
    site is slow so I mirrored it: mirror [silicon.wack.us]
  • by joelja ( 94959 ) * on Monday March 15, 2004 @08:02PM (#8574255)
    I suppose it makes sense that the semi-clued can't tell the difference between a transport protocol and a link-layer protocol. The situation is further obscured by the differences between the 4-layer IETF model for protocol stacks and the 7-layer OSI model, both of which are more or less obsolete once you have things like link-layer signaling affecting what goes on in upper layers - as many efforts in standards bodies aim to do just that, or the converse.

    Basically though, things like BIC-TCP, and a lot of tuning that you can do to just plain old TCP, are there so people with really fat network connections can utilize them in some sane fashion with a comparatively small number of data flows...

    If you happen to have 10Gb/s Ethernet or OC-192 POS circuits into your office and need to move data in reasonable amounts of time, this might be welcome news. There's nothing in here that amounts to a new link layer, though, or really any technology that's useful in the near or long term to more than a tiny subset of all transport consumers.

    A reasonable desktop machine built today can do a passable job of keeping a gigabit Ethernet link full, which is fine if you have one, but not so useful if you don't. While the computing power I have personally available to me at home has increased by a factor of around 10,000 or so in the last decade, the actual speed of my external network connectivity has only increased (and I'm being optimistic here) by a factor of around 100 (to 1.5Mb/s symmetric). I don't see any evidence that would indicate this is likely to change anytime soon, although if we follow the trend line out another decade maybe OC-3-style connectivity will really exist to the home. The gap between computing resources and available bandwidth doesn't really seem likely to get any narrower, however. Thus our ability to use data (of any variety) that we have to transport over a network is constrained not by protocol innovation but by the piddling little link-layer connections that connect our homes and workplaces to the rest of the network.
  • by CrustyBread ( 762569 ) on Monday March 15, 2004 @08:20PM (#8574358)
    Just sometimes, a quantitative change in technology leads to a qualitative change in society. Witness what the emergence of DSL has done to the music and movie industries.

    Even if less than 1/100 of the claimed speeds were widely implemented, this would probably signal the end of copyright as we know it.

    Why? Users would be able to exchange a lifetime's worth of movies, software - you name it- in a matter of days or hours.

    As socially disruptive as that might be, one can imagine truly incredible new applications that would be far more socially disruptive:

    Every internet user could in effect become a TV broadcaster if they so desired. In charge of not just one channel but many. The best channels, like the best blogs, could become hugely politically and/ or culturally influential. The big TV networks' grip would almost certainly be loosened far more than it already has been by the arrival of the net (I rarely watch TV these days, like many of my friends).

    Even the above is just a microcosm of what could be achieved. Because if speeds of that order could ever be widely implemented, it would be like wiring together millions of neurons: you would end up with behaviour and results totally unexpected from examination of individual components of the system.

    Knowing all the above, how many people here are willing to bet that if the "Powers that be" see such a technology looming on the horizon, they will not try to kill it or severely cripple it from the outset? Personally, I believe that if a technology is commercially and technically feasible then, in a market economy, it is almost impossible to stop.
  • by aled ( 228417 ) on Monday March 15, 2004 @08:25PM (#8574387)
    My /dev/null is still faster in uploads.
  • Quick description (Score:5, Interesting)

    by ziegast ( 168305 ) on Monday March 15, 2004 @10:56PM (#8575523) Homepage
    Seen on his website...

    BI-TCP is a new protocol that ensures a linear RTT fairness under large
    windows while offering both scalability and bounded TCP-friendliness.
    The protocol combines two schemes called additive increase and binary
    search increase. When the congestion window is large, additive increase
    with a large increment ensures linear RTT fairness as well as good
    scalability. Under small congestion windows, binary search increase is
    designed to provide TCP friendliness.


    My interpretation: This protocol would transfer data more efficiently than TCP/IP's teeny-tiny-window approach, quickly figuring out the correct window size to maximize transfer speed. For similar reasons that a congested ATM network shreds the performance of multiple large TCP/IP data transfers, BI-TCP works better than TCP/IP at higher speeds. If you don't have OC-oh-my-god between your end-points, TCP/IP will continue to work fine for you.
  • by TA ( 14109 ) on Tuesday March 16, 2004 @07:16AM (#8577039)
    Shame on whoever put that title ("Faster than DSL") on this posting. This is much worse than comparing apples and oranges; it's like saying "a Ferrari is faster than a tarmac road".

    DSL is a low-level protocol for utilizing the copper going to your house, and nothing in BIC-TCP is going to increase that speed.

    BIC-TCP is a solution for the more and more common problem of really high bandwidth (say, up to hundreds of megabits, or gigabits per second) combined with relatively long round-trip times - like having a fiber from one continent to another, or high-speed satellite links. With standard TCP/IP your transmission rate will basically be limited to 2^window_size_in_bits/RTT_in_seconds (see http://www.ietf.org/rfc/rfc1323.txt). Try some calculations and you'll find that this sucks majorly. BIC-TCP is meant as a way out of this problem. It won't make your copper go faster.
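
    Trying some of those calculations (a sketch; this assumes the unscaled 16-bit window, i.e. 2^16 bytes outstanding per round trip):

        for rtt_ms in (10, 100, 500):
            bps = 2**16 * 8 / (rtt_ms / 1000)   # one window drained per RTT
            print(f"RTT {rtt_ms} ms: {bps / 1e6:.2f} Mbit/s max")
        # 52.43, 5.24, 1.05 Mbit/s -- it indeed sucks majorly on a fat long-haul pipe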
