BIC-TCP 6,000 Times Quicker Than DSL
An anonymous reader writes "North Carolina researchers have developed an Internet protocol, subsequently tested and affirmed by Stanford, that hums along at speeds roughly 6,000 times that of DSL. The system, called BIC-TCP, beat out competing protocols from Caltech, University College London and others. The results were announced at IEEE's annual communications confab in Hong Kong." Update: 03/16 04:46 GMT by T : ScienceBlog suggests this alternate link while their site is down.
Time to Implementation? (Score:5, Interesting)
Re:Time to Implementation? (Score:5, Funny)
As we all know, pr0n drives the technology bubble. Indicate that the average luser could watch internet pr0n real time over a 56K modem and it's just a matter of time.
Re:Time to Implementation? (Score:3, Insightful)
I suppose it's possible some people say pr0n instead of porn to try
Re:Time to Implementation? (Score:3, Insightful)
That was sort of my point.
A protocol as described has no real bearing on how fat your pipe is.
If you run the protocol over a T1 or DSL connection, you aren't going to see any obvious difference in speed.
It already IS implemented. (Score:5, Informative)
It already IS implemented.
Or do you mean a large-scale "rollout"?
If so, why bother? Unless you have a REALLY fat pipe and need to use it all for one stream, of course. (But not many need to do that, and the ones that do can now install it on both end points.)
The phrasing of the article is leading to confusion. This is about a PROTOCOL, not about the UNDERLYING TRANSPORT.
The TCP protocol, with its windows, handshaking turnarounds, and timeouts, imposes its own limit on the speed of the data transfer through it. For decades the limit imposed by TCP was so far above the limits imposed by the data rates of the underlying transport that it wasn't a major issue.
But now some people are starting to have REALLY fast pipes. And for them TCP is becoming the limiting factor.
So now researchers have come up with a tweaked version of TCP that won't hit the wall until the pipe is a LOT faster than what YOU can rent from your ISP. (Unless you're renting an OC-192, in which case you might be starting to fall a little short of its capacity. But if you've got OC-48 or below you're fine.)
When you CAN rent something over 6 Gbps, and you want to routinely use it all for a single TCP connection to get a REALLY FAST download, you might want to ask the nice professors for a THIRD generation TCP. B-)
Meanwhile, if you're on an ordinary connection you're not going to increase your data rate by a factor of 6,000 by switching protocols. You might get a little bit closer to the line rate with this SECOND generation TCP. But that's it.
Expect to see this gradually start showing up in protocol stacks as an option - automatically configured if both ends know about it and the inventors have come up with a backward-compatible negotiation. That way you'll be able to make better use of fat pipes when you can finally get them.
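To put rough numbers on where "the wall" sits, here's a back-of-the-envelope sketch in Python. The 64 KB window, the 80 ms round trip and the link list are my own illustrative assumptions (and real stacks with window scaling do better); it just shows why a DSL user sees no change while a fat pipe hits the protocol ceiling.

# Back-of-the-envelope: a single classic TCP flow tops out at roughly
# window / RTT, and you also can't beat the line rate. All numbers below
# are illustrative assumptions (64 KB window, no window scaling, 80 ms RTT).

WINDOW_BYTES = 64 * 1024

def tcp_ceiling_mbps(window_bytes, rtt_s):
    # Protocol-imposed ceiling for one flow, in Mbit/s.
    return window_bytes * 8 / rtt_s / 1e6

links_mbps = {
    "DSL (1.5 Mbit/s)": 1.5,
    "OC-48 (2488 Mbit/s)": 2488.0,
    "OC-192 (9953 Mbit/s)": 9953.0,
}

rtt = 0.080  # cross-country round trip, assumed
for name, line_rate in links_mbps.items():
    ceiling = tcp_ceiling_mbps(WINDOW_BYTES, rtt)
    print(f"{name}: TCP ceiling {ceiling:.1f} Mbit/s -> "
          f"effective {min(line_rate, ceiling):.1f} Mbit/s")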
Re:It already IS implemented. (Score:5, Funny)
According to an email I received today they are already available and no prescription is required.
Re:It already IS implemented. (Score:3, Informative)
It's pretty darn easy to get really fast pipes. Motherboards ship with Gigabit Ethernet now, and Gigabit switches are way down in price. Most companies these days are building their networks on TCP/IP, so this could be a pretty big thing for corporate networks, iSCSI, etc. 10GigE isn't all that far away either.
TCP/IP is bigger than the internet these days [admittedly, the server is down so I can't read the arti
TCP can run a lot faster within a building. (Score:4, Informative)
It's pretty darn easy to get really fast pipes. Motherboards ship with Gigabit Ethernet now, and Gigabit switches are way down in price. Most companies these days are building their networks on TCP/IP, so this could be a pretty big thing for corporate networks, iSCSI, etc. 10GigE isn't all that far away either.
The much higher speeds on a LAN are a good point.
But.
"The Wall" for TCP is a lot faster within a building than across a continent.
The limit comes primarily from round-trip delay - which is much shorter when things are microseconds apart than when they're milliseconds apart at speed-of-light-in-wire-or-fiber.
The limit also comes from timeouts after lost or corrupted packets - from line flakiness or congestion. But line flakiness is nearly nonexistent on a LAN. As for congestion, if you're using switches rather than hubs it's also not as much of an issue within a building as it is in a cross-continent backbone.
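To put numbers on that, here's a rough sketch. It assumes Reno-style recovery (halve the window on loss, regain about one segment per round trip) and illustrative link speeds and RTTs; nothing here comes from the paper.

# Rough sketch of why "the wall" is much farther away on a LAN: after a loss,
# classic TCP halves its window and wins it back at about one segment per RTT,
# so recovery takes roughly (window needed / 2) * RTT. Link speeds, RTTs and
# the 1460-byte segment size are illustrative assumptions.

MSS = 1460  # payload bytes per segment

def recovery_seconds(link_bps, rtt_s):
    segments_to_fill_pipe = link_bps * rtt_s / 8 / MSS
    return (segments_to_fill_pipe / 2) * rtt_s

print("LAN:  1 Gbit/s, 0.2 ms RTT -> %.4f s" % recovery_seconds(1e9, 0.0002))
print("WAN: 10 Gbit/s, 100 ms RTT -> %.0f s (over an hour)" % recovery_seconds(10e9, 0.1))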
Re:It already IS implemented. (Score:3, Funny)
Dude, my computer can barely talk to my RAM that fast . . . . I don't need the next Paris Hilton video quite that quickly.
Re:Time to Implementation? (Score:3, Interesting)
Re:Time to Implementation? (Score:5, Informative)
Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.
Protocol faster than DSL? (Score:5, Insightful)
Re:Protocol faster than DSL? (Score:5, Insightful)
Neat stuff, stupid stupid article.
Re:Protocol faster than DSL? (Score:3, Informative)
I think DSL was originally a predominantly ATM transport layer
DSL being the physical layer, with a network topology now available over PPP as either PPPoA (ATM) or PPPoE (Ethernet).
I guess this BIC-TCP is a new topology option to go with the PPP.
Re:Protocol faster than DSL? (Score:5, Informative)
What is BIC trying to fix? It certainly isn't "the internet" as most links, on average, run at a fraction of their available bandwidth. TCP can fill up more bandwidth than most people can afford. It looks like the researchers with these insane connections and even more insane data sets want the holy grail of zero protocol overhead and none of the inherent throttling. (TCP limits the number of packets it will transmit before pausing for an ack. As a result, a single TCP connection usually will not consume a gigE link -- 4 connections certainly can.)
Re:Protocol faster than DSL? (Score:3, Informative)
1483 Bridged does have lower overhead than PPPoA or PPPoE, but it sends broadcasts down the wire, both IP and Ethernet type. Depending on your network this could waste more bandwidth than PPPoX. 1483 Routed solves this, but you need to allocate more IP space to use it.
It's all a horse a piece.
ft
Re:Protocol faster than DSL? (Score:3, Informative)
Re:Protocol faster than DSL? (Score:3, Insightful)
I wouldn't want an internet without UDP/port 53 (DNS); that wouldn't be much fun, trust me, although maybe I could remember the IP addresses of Google if I really wanted to.
That would help.
Re:Protocol faster than DSL? (Score:5, Funny)
Re:Protocol faster than DSL? (Score:5, Funny)
Re:Protocol faster than DSL? (Score:5, Funny)
Yeah, maybe you'll be able to out run the fire in your gas tank.
Re:They have tires like that (Score:4, Informative)
Notice the tires on a road bicycle. Very narrow, low friction tires. The human leg doesn't provide a great deal of power, so it's not necessary to have a great deal of traction to prevent spin out. Low friction means you go faster with less work. Now look at a mountain bike. Wide tires that get lots of friction. Great for riding rough, slippery trails at high speed. But not great for riding long distances on the road. Try doing a century on a road bike and then doing it on a mountain bike.
Of course, on many cars low friction wheels are not going to give you any increase in speed. On my car, the rev limiter kicks in at just over 130. Low friction wheels would let me go that fast with a tad less work but it wouldn't help me go any faster.
Re:Protocol faster than DSL? (Score:5, Insightful)
Or like saying they've invented a vehicle that goes faster than a NASCAR racetrack.
Re:Protocol faster than DSL? (Score:5, Insightful)
Re:Protocol faster than DSL? (Score:5, Funny)
Sure it will... provided you're not more than three feet from the central office.
Re:Protocol faster than DSL? (Score:3, Insightful)
Sure, and the earth is flat. Did anyone believe that you could go faster than 56k before they unleashed DSL? Now that DSL is out, why couldn't they come up with another technology that would go 6k times faster?
Open up!
Re:Protocol faster than DSL? (Score:3, Funny)
Re:Protocol faster than DSL? (Score:2)
Re:Protocol faster than DSL? (Score:2)
What the article seems to be trying to say is that this protocol works better than TCP/IP does on a heavily-used connection with bandwidth roughly 6,000 times that of a typical DSL line.
Nothing to see here, move along... it won't get grits to your home any faster.
Re:Protocol faster than DSL? (Score:5, Funny)
in other news AMD has developed a new architecture 80 billion times faster than grapefruit
Re:Protocol faster than DSL? (Score:5, Funny)
Warrants some (lots of) explanation (Score:5, Informative)
It's a stupid comparison, but I guess they expect people to not have an idea what 9 Gb/s is...
Re:Warrants some (lots of) explanation (Score:5, Informative)
6000 times that is 2400 MB/s.
This is faster than conventional RAM. A PC would not be able to accept data at that speed fast enough to store it in RAM!
The headline is obviously sensationalism.
There exist fast optical carriers, but they serve purposes very different from what DSL lines are meant for. These are the kind of lines that connect cities together and are not to be compared to DSL.
Re:Warrants some (lots of) explanation (Score:3, Funny)
Re:Warrants some (lots of) explanation (Score:5, Informative)
However, adjusting the MTU has little to do with speed, as the Window Size (how much data can be transmitted before being acknowledged by the far end) is specified in number of bytes (in TCP). I suppose it could have some effect on speed, as when you send a packet that exceeds the MTU, it gets "segmented" into multiple IP packets, each with its own packet header overhead (and if any get lost, the whole bunch have to get retransmitted).
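For what it's worth, here's the carving-into-packets arithmetic as a quick sketch. The 1500-byte MTU and 40 bytes of IPv4+TCP headers (no options) are the usual textbook assumptions, not anything from the article.

# Sketch of the segmentation arithmetic: a big send gets carved into
# MTU-sized pieces, each carrying its own headers. The 1500-byte MTU and
# 40 bytes of IPv4+TCP header (no options) are the usual textbook assumptions.
import math

MTU = 1500
HEADERS = 40              # 20-byte IP header + 20-byte TCP header
MSS = MTU - HEADERS       # 1460 payload bytes per packet

def packets_and_overhead(payload_bytes):
    packets = math.ceil(payload_bytes / MSS)
    wire_bytes = payload_bytes + packets * HEADERS
    return packets, wire_bytes / payload_bytes - 1

packets, overhead = packets_and_overhead(1_000_000)
print(f"1 MB send -> {packets} packets, about {overhead:.1%} header overhead")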
What this new protocol deals with, however, is dynamically varying the window-size. Current TCP does that, but apparently not in as efficient a manner as this.
So all this "x thousand times faster than DSL" is just complete bullshit. You'll never get any faster speeds than the slowest link between point A and point B. This new protocol simply tries to use whatever bits per second are available more efficiently. And you won't notice the inefficiency of the current TCP at speeds most DSL/cable/dialup users have available.
Some tech journalists are just idiots.
Re:Warrants some (lots of) explanation (Score:3, Interesting)
BIC will only make a difference for "distant" nodes -- let's say 10,000 miles (~16,000 km). At that distance, it takes 89.408 ms for the bits to get from end to end assuming it's a straight shot (no switches, routers, et
Re:Protocol faster than DSL? (Score:5, Insightful)
Dr. Rhee [ncsu.edu], who made that comparison, also made another factual error: "TCP was originally designed in the 1980s when Internet speeds were much slower and bandwidths much smaller" -- TCP was actually invented in 1974. [about.com] Not that major, but you wouldn't expect a guy who "has been researching network congestion solutions for at least a decade" to miss the mark by so much.
Hopefully the reporter was confused, but since it was a press release, you'd think that it would have had time to go through some review.
Re:Protocol faster than DSL? (Score:5, Interesting)
Much algorithmic change has happened between the days of the 56k ARPANET and multi-gigabit networks also using IP. Van Jacobson's slow start and other ways of working out tradeoffs on bandwidth/delay vs. window size have been fiddled with for years, and arguably TCP as we know it is too compromised by history to work well at high speeds -- at least, that's what Rhee's comment suggests.
This is really relevant stuff, not to be dismissed by wannabes.
-dB
Re:Protocol faster than DSL? (Score:3, Informative)
Internet Standard #1 (currently RFC-3600 [ietf.org] - November 2003) lists TCP as being Standard #7, which is outlined in RFC-793 [ietf.org]. RFC-793 was published in September 1981. In other words, we are still using the 1981 edition of TCP. RFC-793 contains the following note from Jon Postel:
In addition, the RFC index [isi.edu] lists RFC-675 (December 1974) as, "The first detailed specifica
Re:Protocol faster than DSL? (Score:3, Informative)
- the first version of the TCP specification appeared in 1973 (http://texts05.archive.org/0/texts/FirstPassDraftOfInternationalTransmissionProtocol);
- subsequent versions were released between 1974 and 1979;
- the final version of TCP/IP was published by DARPA in January 1980 by which time numerous implementations existed;
- The Department of Defense standardisation recommendation was made in December 1978 and ratified in April 1980 (http://www.isi.edu/in-notes/ien/ien152.txt)
Re:Protocol faster than DSL? (Score:3, Informative)
The idea behind researching higher-speed protocols is that if you took plain old TCP and ran it on a line 6000x faster than DSL, you would find that the workings of the protocol itself would become the performance bottleneck in the system. These guys are thinking ahead and writing the protocols we'll need on future faster networks. The blurb _is_ kinda moronic in how it compares a protocol to DSL, but at the same time it is truthful. It would have made more sense if they had made it clearer that the prot
Re:Protocol faster than DSL? (Score:4, Informative)
Re:Protocol faster than DSL? (Score:3, Informative)
However, from the host's view down, TCP, UDP, or BIC-TCP is layer 4, IP is layer 3, and all that DSL/ATM/etc. stuff fits under the guise of Layer 2 and below. (of course with layers of indirection, you can simulate the ATM cloud over IP
oops (Score:4, Funny)
Looks like the server just got Slashdotted 6,000 times faster than normal.
New Protocol???!!!! (Score:5, Funny)
Propagation delays (Score:5, Interesting)
Re:Propagation delays (Score:5, Informative)
Re:Propagation delays (Score:2)
Re:Propagation delays (Score:5, Informative)
Speed of light is 186,000 miles per second (about 300,000 km/s) in a vacuum, and signals in wire or fiber travel at roughly two-thirds of that - from (Cincinnati) Ohio to Minneapolis is roughly 1600 km by highway, which would leave you with a wire-speed delay of only about 16 ms round-trip.
The extra 34 ms you get on a well-routed network generally tends to be time spent getting passed through intermediate routers along the way. Each router *does* add a noticeable amount of delay all of its own, apart from wire delay.
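Here's that same arithmetic as a sketch: round-trip wire delay at roughly two-thirds of c, plus a per-router delay that is purely a guess on my part.

# Round-trip wire delay plus a guessed per-router delay. 2/3 of c is the
# usual rule of thumb for propagation speed in fiber or copper.

PROP_M_PER_S = 2.0e8   # ~2/3 of the speed of light

def rtt_ms(one_way_km, routers_each_way=0, per_router_ms=1.0):
    wire_ms = 2 * one_way_km * 1000 / PROP_M_PER_S * 1000
    return wire_ms + 2 * routers_each_way * per_router_ms

print("Cincinnati-Minneapolis, ~1600 km, wire only: %.0f ms" % rtt_ms(1600))
print("Same path through 8 routers each way (1 ms apiece, a guess): %.0f ms"
      % rtt_ms(1600, routers_each_way=8))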
Re:Propagation delays (Score:3, Informative)
Electrical and optical signals travelling down copper or FO pathways (as well as microwaves through the air) have a reduced propagation speed. A good rule of thumb is about
Re:Propagation delays (Score:5, Funny)
40 Megasiemens? Don't you also need to know the capacitance and inductance of the connection in order to figure out the ping time from that?
Re:Propagation delays (Score:3, Informative)
Re:Propagation delays (Score:3, Insightful)
That's the point, though; they're trying to put data on the wire more often than before. TCP doesn't start out by saturating the wire, but instead slowly "tests the water" and transfers data more and more frequently until it is confident it has saturated the line.
This protocol, on the other hand, figures out the capacity of the line faster, and thus can saturate it more quickly. The difference b
Re:Propagation delays: quantum teleporting (Score:3, Funny)
hmm (Score:5, Interesting)
"What takes TCP two hours to determine, BIC can do in less than one second,"
Which looks to me like it can figure out the maximum bandwidth of a channel in a fraction of the time it generally takes TCP to do it, so as soon as you start transmitting at 100 Mbit you are using the entire pipe. Sure, it's 6000 times faster than DSL, but it's not when it is used over the same DSL pipe. This is for getting data across faster when you have massive bandwidth, not for bringing broadband into homes.
ob. simpsons ref. (Score:5, Funny)
Marge: Who would need that much porn?
Homer: [drools]...oohhh..1 million times faster..
CS (Score:2)
They forgot to mention Steam.
Yikes! How can a home user tell? (Score:2, Insightful)
However, the idea is exciting... imagine! Internet at the speed of computer.
Bottle Necks (Score:2, Informative)
Ack... (Score:2, Redundant)
Sorta. It's based on photo-optics. (Score:4, Funny)
It's called the Bonfire-Utilizing Light System Hardware Infrastructure Technology (aka BULSHIT).
Re:Ack... (Score:3, Funny)
Great measurement (Score:4, Funny)
I want it in LOC/sec.
Tim
Re:Great measurement (Score:3, Funny)
mirror (Score:5, Informative)
New protocol could speed Internet significantly
Posted on Monday, March 15 @ 14:04:08 EST by bjs
Researchers in North Carolina have developed a data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic. The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in a set of experiments that determined its stability, scalability and fairness in comparison with other protocols. The study tested six other protocols developed by researchers from schools around the world, including the California Institute of Technology and the University College of London. BIC can reportedly achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems.
From North Carolina State University:
NC State Scientists Develop Breakthrough Internet Protocol
Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.
The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in a set of experiments that determined its stability, scalability and fairness in comparison with other protocols. The study tested six other protocols developed by researchers from schools around the world, including the California Institute of Technology and the University College of London.
Dr. Injong Rhee, associate professor of computer science, said BIC can achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems. While this might translate into music downloads in the blink of an eye, the true value of such a super-powered protocol is a real eye-opener.
Rhee and NC State colleagues Dr. Khaled Harfoush, assistant professor of computer science, and Lisong Xu, postdoctoral student, presented a paper on their findings in Hong Kong at Infocom 2004, the 23rd meeting of the Institute of Electrical and Electronics Engineers Communications Society, on Thursday, March 11.
Many national and international computing labs are now involved in large-scale scientific studies of nuclear and high-energy physics, astronomy, geology and meteorology. Typically, Rhee said, "Data are collected at a remote location and need to be shipped to labs where scientists can perform analyses and create high-performance visualizations of the data." Visualizations might include satellite images or climate models used in weather predictions. Receiving the data and sharing the results can lead to massive congestion of current networks, even on the newest wide-area high-speed networks such as ESNet (Energy Sciences Network), which was created by the U.S. Department of Energy specifically for these types of scientific collaborations.
The problem, Rhee said, is the inherent limitations of regular TCP. "TCP was originally designed in the 1980s when Internet speeds were much slower and bandwidths much smaller," he said. "Now we are trying to apply it to networks that have several orders of magnitude more available bandwidth." Essentially, we're using an eyedropper to fill a water main. BIC, on the other hand, would open the floodgate.
Along with postdoctoral student Xu, Rhee has been working on developing BIC for the past year, although Rhee said he has been researching network congestion solutions for at least a decade. The key to BIC's speed is that it uses a binary search approach - a fairly common way to search databases - that allows for rapid detection of maximum network capacities with minimal loss of information. "What takes TCP two hours to determine, BIC can do in les
Strange Jab... (Score:4, Funny)
Summary: BIC-TCP is an efficient TCP successor (Score:5, Informative)
To quote the part that says what the article is actually about:
Re:Summary: BIC-TCP is an efficient TCP successor (Score:5, Insightful)
Comparing apples and oranges (Score:2, Redundant)
Apples and Oranges? (Score:5, Insightful)
The article is /.'d so I can't figure out what this means - what transmission media/hardware are they using? I can make plain old TCP/IP 600,000 times faster than "DSL speeds" if I have hardware that meets that specification.
DSL speed vs IP speed (Score:5, Insightful)
The question I'd love to ask the authors would be "so, what happens when I run BIC-TCP over a DSL modem? Does it suddenly become 6000 times faster?" I don't think so.
Connections are still going to be constrained by the underlying link speed, and the internet will not become thousands of times faster overnight because of this.
Sure, BIC-TCP looks like it's more efficient than TCP and that's a good thing, but the gains this protocol provides over TCP are in scalability when using suitably big links.
Yeah but... (Score:5, Funny)
And so... (Score:5, Funny)
In other news.. (Score:5, Funny)
Please send lots of money in the form of grants to
super inventor guy
123 fake street
v3n3r9
Gigabit Ethernet? (Score:4, Funny)
They left out a word (Score:5, Funny)
Let's slashdot the researchers site too (Score:5, Informative)
High-speed networks with large delays present a unique environment where TCP may have a problem utilizing the full bandwidth. Several congestion control proposals have been suggested to remedy this problem. In these protocols, mainly two properties have been considered important: TCP friendliness and bandwidth scalability. That is, a protocol should not take away too much bandwidth from TCP while fully utilizing the full bandwidth of high-speed networks. We present another important constraint, namely, RTT (round trip time) unfairness where competing flows with different RTTs may consume vastly unfair bandwidth shares. Existing schemes have a severe RTT unfairness problem because the window increase rate gets larger as window grows - ironically the very reason that makes them more scalable. The problem occurs distinctly with drop tail routers where packet loss can be highly synchronized. Bic-TCP is a new protocol that ensures a linear RTT fairness under large windows while offering both scalability and bounded TCP-friendliness. The protocol combines two schemes called additive increase and binary search increase. When the congestion window is large, additive increase with a large increment ensures linear RTT fairness as well as good scalability. Under small congestion windows, binary search increase is designed to provide TCP friendliness.
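To make the "binary search increase plus additive increase" idea a little more concrete, here is a much-simplified per-RTT sketch based only on my reading of that abstract. The constants, names and the 0.8 back-off factor are assumptions, not the authors' code.

# Much-simplified BIC-style window growth, per the abstract above: do a binary
# search toward the window where the last loss happened (w_max), but never grow
# by more than SMAX segments per RTT, which makes growth additive while the
# target is far away. SMAX, SMIN and the 0.8 back-off are assumed values.

SMAX = 32     # largest increment per RTT, in segments
SMIN = 0.01   # smallest useful increment

def next_window(cwnd, w_max):
    if cwnd < w_max:
        step = min((w_max - cwnd) / 2, SMAX)   # binary search, capped -> additive
    else:
        step = SMAX                            # probing past the old maximum
    return cwnd + max(step, SMIN)

# Toy run: a loss happened at a window of 4000 segments; back off and regrow.
w_max = 4000.0
cwnd = 0.8 * w_max
rtts = 0
while cwnd < w_max - 1:
    cwnd = next_window(cwnd, w_max)
    rtts += 1

print(f"Back near w_max in {rtts} RTTs "
      f"(plain +1-per-RTT TCP would need ~{int(0.2 * w_max)} RTTs)")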
So What.... (Score:5, Funny)
(fine print: super-human strength required; in order to reach maximum speed, alterations of the laws of physics may be necessary.)
More Information (Score:3, Informative)
This one makes more sense (Score:4, Informative)
Mike's oversimplified take on things. (Score:5, Insightful)
Here's a super-simplified version of the problem they're trying to solve: Imagine you have a 3Mbps link to your ISP, as do 49 of your neighbors. However, your ISP has a 45Mbps T3 link to the outside internet. What happens when everybody on your ISP tries to download the Half-Life 2 demo at the same time, creating a need for 150 Mbps at the ISP uplink? This is called congestion.
There are various solutions that you can use for congestion avoidance; you may have heard of TCP Vegas and Reno [psu.edu] (I'm linking to the PDF document, because it contains a lot of math. This should also be a signal to you about how ridiculously simplified my explanation above is). Obviously, when there is congestion, somebody's got to wait, but determining who and how is not as easy as it might seem.
The new part of the problem is: today's fast networks have very different bandwidth and latency ratios to the networks of even five years ago. Vegas and Reno congestion avoidance algorithms don't work as well as they used to under these conditions. This paper presents a solution that does work well on today's high-speed networks. (Maybe somebody with more expertise could pipe in here with a discussion of "why the existing mechanisms don't work well, and how the new solutions address the problem"?)
I believe slashdot has already covered FAST [caltech.edu], which I believe is a different solution to the same problems.
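To put numbers on the toy scenario above (50 subscribers, 3 Mbps each, one 45 Mbps T3), the back-of-the-envelope version looks like this; perfectly equal sharing is an idealization, deciding who actually waits is the congestion-control problem.

# Oversubscription arithmetic from the example above: 50 subscribers at
# 3 Mbit/s each behind one 45 Mbit/s uplink.

SUBSCRIBERS = 50
ACCESS_MBPS = 3.0
UPLINK_MBPS = 45.0

demand = SUBSCRIBERS * ACCESS_MBPS
fair_share = min(ACCESS_MBPS, UPLINK_MBPS / SUBSCRIBERS)

print(f"Demand {demand:.0f} Mbit/s vs uplink {UPLINK_MBPS:.0f} Mbit/s")
print(f"Ideal fair share: {fair_share:.1f} Mbit/s per subscriber "
      f"({fair_share / ACCESS_MBPS:.0%} of each access line)")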
timothygate, a dark day for 'geeks' (Score:5, Funny)
A mistake of this magnitude really calls for the removal of ALL of his geek-points, immediate surrender of any ssh keys, termination of all accounts on any non-windows machines, immediate discontinuation of WEP encryption, reversion to SSID "netgear", and unrestricted enablement of "File & Printer Sharing".
Unless he can demonstrate how a Honda can get more people somewhere than the highway they now use... Well actually more like the license plate and turn signals on a Honda, but I'll let him off easy :)
Rhee is my CSC316 teacher (Score:4, Informative)
Anyway, if you're interested in a link to the original article hosted off of the NCSU servers, it is here [ncsu.edu].
-bigginal
Clarification (Score:5, Insightful)
In order for TCP to increase its window for full utilization of 10Gbps with 1500-byte packets, it requires over 83,333 RTTs [round trip times]. With 100ms RTT, it takes approximately 1.5 hours...
If I understand correctly, they are not making the inherent speed faster, they are just making the protocol able to understand the nature of the bandwidth more quickly, thus improving its ability to efficiently utilize the bandwidth. Thus, instead of requiring 1.5 hours to ramp up, theirs might take a few seconds or minutes.
My guess is that you aren't going to see huge gains from this for the average person; you'd need scads and scads of bandwidth in order to really need something like this -- TCP doesn't have any problem saturating a small 56 kbps link.
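The arithmetic behind that quote, roughly: 10 Gbit/s, 1500-byte packets, 100 ms RTT, plus-one-packet-per-RTT growth versus a binary-search-style probe. The exact hours and seconds depend on details spelled out in the paper, so this only lands in the same ballpark as the quoted figures.

# How many round trips it takes to open the window far enough to fill a
# 10 Gbit/s, 100 ms RTT pipe with 1500-byte packets: plus-one-per-RTT growth
# vs. a binary-search-style probe. Ballpark only; the paper's exact figures
# depend on details (delayed ACKs, initial window) not modeled here.
import math

LINK_BPS = 10e9
PKT_BYTES = 1500
RTT_S = 0.1

window_pkts = LINK_BPS * RTT_S / 8 / PKT_BYTES

linear_rtts = window_pkts                        # +1 packet per RTT
binary_rtts = math.ceil(math.log2(window_pkts))  # halve the remaining gap each RTT

print(f"Window needed: ~{window_pkts:,.0f} packets")
print(f"Linear growth: ~{linear_rtts:,.0f} RTTs, ~{linear_rtts * RTT_S / 3600:.1f} hours")
print(f"Binary search: ~{binary_rtts} RTTs, ~{binary_rtts * RTT_S:.1f} seconds")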
Summary of Paper (Score:5, Informative)
If you have a fat pipe, say 1 to 10 Gb/s, standard TCP will not fully utilize the bandwidth because the congestion control algorithm throttles the rate. As packets move and there are no errors, the rate increases, but not nearly fast enough. In particular, it takes 1.5 hours of error-free data transfer to reach full capacity, and a single error will cut the connection's bandwidth in half.
BIC-TCP uses a different algorithm for congestion control that is more effective at these speeds.
End of news flash.
-Hope
It's not like it matters... (Score:4, Interesting)
BIC-TCP (Score:4, Funny)
Attempt at distilling technical info (Score:5, Interesting)
"What takes TCP two hours to determine, BIC can do in less than one second," Rhee said.
This is very puzzling indeed; the article doesn't back it up in the least.
It's a congestion control improvement (Score:4, Interesting)
From the text of the article, it sounds like it's an improvement on TCP's congestion control performance (where it widens/narrows its transmission window to allow more packets to be outstanding between ACK's). Apparently they have some big improvements over current TCP, which allow it to fully utilize high bandwidth links. TCP takes time to expand the window and "fill the pipe". With the short-lived TCP sessions used for HTTP, this is not very efficient.
Of course, for a small fee, I'll let you use my super-duper protocol that offers virtually unlimited bandwidth - a buttzillion times faster than DSL.. it's called UDP. (UDP is very low overhead, no transmission windows, or ACK's -- or guarantees of being received.. You can stuff them onto a line as fast as it will take them.)
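The joke has a kernel of truth: a UDP sender really does just fire datagrams with no handshake, no window and no ACKs. A bare-bones sketch, where the destination address, port and packet count are made up for illustration:

# Bare-bones UDP blaster: no handshake, no window, no ACKs, no guarantee
# anything arrives. The destination address/port are made up for illustration.
import socket

DEST = ("203.0.113.10", 9999)   # documentation-range IP, purely illustrative
payload = b"x" * 1400           # stay under a typical 1500-byte MTU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(1000):
    sock.sendto(payload, DEST)  # fire and forget
sock.close()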
Re:It's a congestion control improvement (Score:3, Informative)
Yeah, but then you'll really want to be familiar with these new TCP congestion models since you'll need to implement something. A few years ago I had to connect a few computers on a r
real mirror (Score:3, Informative)
hmm... confusion reigns among the semi-journalists (Score:4, Interesting)
Basically though, things like BIC-TCP, and a lot of the tuning you can do to just plain old TCP, are there so people with really fat network connections can utilize them in some sane fashion with a comparatively small number of data flows...
If you happen to have 10 Gb/s Ethernet or OC-192 POS circuits into your office and need to move data in reasonable amounts of time, this might be welcome news. There's nothing in here that amounts to a new link layer, though, or really any technology that's useful in the near or long term future to more than a tiny subset of all transport consumers.
A reasonable desktop machine built today can do a passable job of keeping a gigabit Ethernet link full, which is fine if you have one, but not so useful if you don't. While the computing power I have personally available to me at home has increased by a factor of around 10,000 or so in the last decade, the actual speed of my external network connectivity has only increased (and I'm being optimistic here) by a factor of around 100 (to 1.5 Mb/s symmetric). I don't see any evidence that this is likely to change anytime soon, although if we follow the trend line out another decade maybe OC-3-style connectivity will really exist to the home. The gap between computing resources and available bandwidth doesn't really seem likely to get any narrower, however. Thus our ability to use data (of any variety) that we have to transport over a network is necessarily constrained not by protocol innovation but by the piddling little link-layer connections that connect our homes and workplaces to the rest of the network.
If this technology gets off the ground ... (Score:4, Interesting)
Even if less than 1/100 of the claimed speeds were widely implemented this would probably signal the end of copyright as we know it.
Why? Users would be able to exchange a lifetime's worth of movies, software - you name it- in a matter of days or hours.
As socially disruptive as that might be, one can imagine truly incredible new applications that would be far more socially disruptive:
Every internet user could in effect become a TV broadcaster if they so desired. In charge of not just one channel but many. The best channels, like the best blogs, could become hugely politically and/ or culturally influential. The big TV networks' grip would almost certainly be loosened far more than it already has been by the arrival of the net (I rarely watch TV these days, like many of my friends).
Even the above is just a microcosm of what could be achieved. Because if speeds of that order could ever be widely implemented it would be like wiring together millions of neurons: you would end up with behaviour and results totally unexpected from examination of individual components of the system.
Knowing all the above, how many people here are willing to bet that if the "Powers that be" see such a technology looming on the horizon they will not try to kill it or severely cripple it from the outset? Personally, I believe that if a technology is commercially and technically feasible then, in a market economy, it is almost impossible to stop.
You think that's good? (Score:3, Funny)
Quick description (Score:5, Interesting)
BI-TCP is a new protocol that ensures a linear RTT fairness under large windows while offering both scalability and bounded TCP-friendliness. The protocol combines two schemes called additive increase and binary search increase. When the congestion window is large, additive increase with a large increment ensures linear RTT fairness as well as good scalability. Under small congestion windows, binary search increase is designed to provide TCP friendliness.
My interpretation: This protocol ramps up more aggressively than plain TCP and quickly figures out the right window size to maximize transfer speed. For similar reasons that a congested ATM network shreds the performance of multiple large TCP/IP data transfers, BI-TCP works better than TCP/IP at higher speeds. If you don't have OC-oh-my-god between your end-points, TCP/IP will continue to work fine for you.
Totally false title (Score:3, Insightful)
posting. This is much worse than comparing apples and oranges; it's like saying "a Ferrari is faster than a tarmac road". DSL is a low-level protocol for utilizing the copper going to your house, and nothing in BIC-TCP is going to increase that speed.
BIC-TCP is a solution for the more and more common problem of really high bandwidth (say, up to hundreds of megabits, or gigabits per sec.), combined with relatively long round trip times. Like e.g. having a fiber from one continent to another, or high speed satellite links. With standard TCP/IP your transmission rate will basically be limited to 2^window_size_in_bits / RTT_in_seconds (see http://www.ietf.org/rfc/rfc1323.txt). Try some calculations and you'll find that this sucks majorly. BIC-TCP is meant as a way out of this problem. It won't make your copper go faster.
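Plugging numbers into that formula, a quick sketch: the bare 16-bit window field first, then with the RFC 1323 scale option; the RTT values are illustrative. Once scaling is on, the header field stops being the limit and congestion control (the part BIC-TCP changes) becomes the bottleneck.

# Throughput ceiling ~ window / RTT. The TCP header's window field is 16 bits
# (65,535 bytes max); RFC 1323 window scaling can shift it left by up to 14
# bits. RTT values below are illustrative.

def ceiling_mbps(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s / 1e6

for rtt_ms in (20, 100, 300):
    rtt = rtt_ms / 1000
    plain = ceiling_mbps(2**16, rtt)
    scaled = ceiling_mbps(2**16 << 14, rtt)
    print(f"RTT {rtt_ms:3d} ms: {plain:6.1f} Mbit/s unscaled, "
          f"{scaled:11.1f} Mbit/s with max scaling")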
Re:please don't do this. (Score:2, Interesting)
Re:please don't do this. (Score:5, Interesting)
The belief of US-based companies that bandwidth is "free" and that 30-second video clips are an acceptable form of advertising really hurts users in other parts of the world.
Re:Cheap Bandwidth (Score:3, Funny)
"Detroit" has a 100mpg carburetor that oil companies suppress to maintain price.
"Alternative" medicine is rejected by "Western" medicine to preserve medical monopolies.
Solar power if so cheap and efficient that it can easily provide for all our energy needs, but is preven
Re:Cheap Bandwidth (Score:5, Informative)
Modems that plug into your regular telephone line send a signal over a POTS (Plain-Old Telephone Service) phone line. This signal first goes to your telco's closest routing box, then to your telco's closest branch office. From there it gets routed to wherever your phone call was made to, etc... The technology used to route these signals is limited to a maximum THEORETICAL capacity of less than 64kbps because certain (or all) legs of the telephone network are analogue, not digital. That 'theoretical' rate is based on how much noise a typical telephone call has in it. There is simply no way to pass a denser signal through the line than that, according to our understanding of physics and math.
The only similarity that DSL has to POTS internet connections is that the physical wires to your house are compatible and that (sometimes) the two technologies can be used over a single pair of them. Once the signal of a DSL line gets to its very first junction, it has nothing in common with your phone line any longer. It gets sent to a DSLAM bank at your nearest telco site, then sent into the larger regional DSL network and then finally routed out into the internet at large.
What this means, basically, is 1) there is a good reason why modem speeds haven't increased at all since 56kbps modems came out -- it's physically impossible for them to go faster. 2) DSL technology is transitory -- it only exists because people currently have wires from their telco already coming into their homes. I predict that slowly, over the next 10 years, we'll see telecommunications turn on its head. Instead of internet service being delivered over phone lines, we will have phone service delivered over internet connections. These lines may take the form of twisted-pair wires as is used in DSL, multiple twisted-pair wire groups as are used in Ethernet, coaxial wires currently used in cable-tv/cable-modem service, or fiber-optic cables. The only thing I can guarantee is that they won't be routed through the telephone network before being passed into the internet.
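The "how much noise a typical telephone call has in it" limit mentioned above is basically the Shannon capacity of the analogue voice path; here it is with textbook numbers. The bandwidth and SNR figures are rough assumptions, and the 56k figure itself also leans on the mostly-digital trunk side of the network, so treat this purely as an illustration of why a fully analogue path tops out in the tens of kbit/s.

# Shannon capacity of an analogue channel: C = B * log2(1 + S/N).
# A POTS voice path passes roughly 300-3400 Hz; the SNR values below are
# rough assumptions, not measurements.
import math

BANDWIDTH_HZ = 3100.0   # ~3400 Hz upper edge minus ~300 Hz lower edge

for snr_db in (30, 35, 40):
    snr = 10 ** (snr_db / 10)
    capacity_kbps = BANDWIDTH_HZ * math.log2(1 + snr) / 1000
    print(f"SNR {snr_db} dB -> ceiling ~{capacity_kbps:.1f} kbit/s")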