Bufferbloat — the Submarine That's Sinking the Net
gottabeme writes "Jim Gettys, one of the original X Window System developers and an editor of the HTTP/1.1 spec, has posted a series of articles on his blog detailing his research into the relatively unknown problem of bufferbloat. Bufferbloat affects the entire Internet, slowly worsening as RAM prices drop and buffers enlarge, and it causes latency and jitter to spike, especially for home broadband users. Unchecked, this problem will continue to degrade the usability of interactive applications like VOIP and gaming, and because it is so widespread, it will take years of engineering and education efforts to resolve. Like 'frogs in heating water,' few people are even aware of the problem. Can bufferbloat be fixed before the Internet and 3G networks become nearly unusable for interactive apps?"
Correction: JIM GETTYS (Score:4, Informative)
http://en.wikipedia.org/wiki/X_Window_System
Really? (Score:2, Insightful)
Latency is bad? Bigger buffers = more latency?
Re: (Score:3, Informative)
Re: (Score:3)
Yes, buffers can introduce latency (Score:5, Informative)
Latency is bad? Bigger buffers = more latency?
Buffers increasing latency is not exactly a new phenomenon. It has been observed and accounted for in designs for quite some time. For example, back in the day serial chips essentially had a buffer of one byte. The CPU fed data one byte at a time as the buffer became available, and latency was pretty low since data was transmitted immediately. As more capable serial chips became available, larger buffers were introduced. A newer chip might have a larger buffer, but it might also not transmit data as soon as it had a single byte. It was common to have two programmable thresholds for beginning a transmission: (1) when a certain amount of data had accumulated in the buffer, or (2) when a certain amount of time had elapsed. So if a "packet" to transmit was small enough, it might sit in the buffer until (2) fired, hence more latency with larger buffers. Software that cared generally issued flush commands to cause anything in the buffer to be sent immediately.
Network cards and/or the operating system may try to similarly accumulate data before transmitting a packet.
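As a rough illustration of the two-threshold policy described above (transmit when enough bytes have accumulated, or when enough time has passed), here is a minimal Python sketch. The class name, thresholds, and the send callback are invented for illustration; they are not taken from any real driver.

```python
import time

class ThresholdBuffer:
    """Toy model of a transmit buffer that flushes when either a byte
    threshold or a time threshold is reached (values are illustrative)."""

    def __init__(self, send, byte_threshold=16, time_threshold=0.010):
        self.send = send                    # callable that actually transmits bytes
        self.byte_threshold = byte_threshold
        self.time_threshold = time_threshold
        self.buf = bytearray()
        self.oldest = None                  # arrival time of oldest buffered byte

    def write(self, data: bytes):
        if not self.buf:
            self.oldest = time.monotonic()
        self.buf.extend(data)
        self._maybe_flush()

    def poll(self):
        # Called periodically; flushes if buffered data has waited too long.
        self._maybe_flush()

    def flush(self):
        # Explicit flush: what latency-sensitive software issued to avoid waiting.
        if self.buf:
            self.send(bytes(self.buf))
            self.buf.clear()
            self.oldest = None

    def _maybe_flush(self):
        if len(self.buf) >= self.byte_threshold:
            self.flush()                                   # threshold (1): enough data
        elif self.buf and time.monotonic() - self.oldest >= self.time_threshold:
            self.flush()                                   # threshold (2): waited too long
```

A small write sits in the buffer until the time threshold fires - exactly the extra latency described above - unless software calls flush() explicitly.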
Re: (Score:3)
With network buffers, what JG seems to be saying is that this does not happen. As packets arrive at whatever the choke point is in the circuit, there is no method for telling the sender to stop sending - the
Re:Yes, buffers can introduce latency (Score:5, Insightful)
It doesn't help that massive numbers of people actively insist on breaking protocols which specifically exist to alleviate some of these types of problems. [wikipedia.org]
Far too many people ignorantly block all ICMP traffic. As a result, the network path between the two communicating hosts is forced to buffer more data as the destination host becomes saturated. Worse, this type of filtering tends to compound quickly, which in turn creates exactly the kind of bufferbloat he's describing.
I wish people would understand there is a difference between "No route to host" and a black hole. When you find a black hole, chances are really good you've found a host. As such, purposely breaking protocols to get an imagined increase in security only breaks the Internet as a whole when it becomes a widespread tactic. And before people start rattling off that it opens a whole new can of worms, please realize that unlike in the past, stateful firewalls are extremely common today - so no.
Definition, please (Score:5, Insightful)
I'm so glad the term has been defined so that I know what the hell we're talking about here. Oh wait, no it hasn't.
Okay, then I'll RTFA. Oh wait, two screens worth of text later and it still hasn't.
I'd like to change the topic now to the submarine that's sinking the English language: jargonbloat.
First link in the first article (Score:5, Insightful)
Just start RTFAing: "In my last post I outlined the general bufferbloat problem."
Follow the link:
"Each of these initial experiments were been designed to clearly demonstrate a now very common problem: excessive buffering in a network path. I call this bufferbloat
Re: (Score:2)
I hate it when someone feels the need to come up with a piece of meaningless jargon when "excessive packet buffering" would have been much more descriptive and required less explanation.
Re: (Score:2)
Re:First link in the first article (Score:4, Interesting)
Demands for definition are a bit pompous...
A bit?
Even more pompous is making a post about it when everyone can clearly see, "bufferbloat" is shorter than constantly saying something tedious like, "excessive packet buffering in the entirety of a network path."
Perhaps this will help the uninitiated. The article describes a wide problem of excessive packet buffering in the entirety of a network path, which has been dubbed, "bufferbloat."
Re: (Score:3)
A better definition could be:
"A user saturating its broadband connection by transferring 20GB files and not taking care of using the --bwlimit (limit bandwidth) option with rsync"
I have been using it for ages to prevent this very problem.
Other types of traffic shaping can be done, with Linux tc as an example, but it is always best to do it at the application level when possible.
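To make "do it at the application level" concrete, here is a minimal token-bucket sketch in Python of the kind of self-imposed bandwidth cap that rsync's --bwlimit provides. The class, the rate numbers, and the commented-out socket usage are illustrative assumptions, not rsync's actual implementation.

```python
import time

class TokenBucket:
    """Crude token-bucket rate limiter: call throttle(n) before sending n bytes."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def throttle(self, nbytes):
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep until enough tokens accumulate, instead of queueing in the network.
            time.sleep((nbytes - self.tokens) / self.rate)

# Example: cap an upload at ~90% of a 1 Mbit/s uplink so the modem's buffer stays empty.
limiter = TokenBucket(rate_bytes_per_sec=0.9 * 125_000, burst_bytes=16_000)
# Hypothetical send loop (read_chunks and sock are placeholders):
# for chunk in read_chunks(f, 1400):
#     limiter.throttle(len(chunk))
#     sock.send(chunk)
```

Keeping the cap just below line rate means the sender, not the modem's oversized buffer, is where packets wait.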
Re: (Score:3)
The reason you traffic shape to 90% line speed is to stop your upstream from buffering at all. It's all just a workaround for the fact that your ISP has large buffers.
The fact that we nerds can configure Linux routers to avoid the issue doesn't mean it's a non-issue for everyone else.
Re: (Score:3)
Buffering of packets; "network path" can't be referring to anything else.
Re: (Score:3)
Re:Buffering of what? (Score:5, Informative)
Within a router it would be the actual IP data packets that are being buffered. A standard router has a number of network interfaces (token ring, Ethernet, wireless, ISDN, whatever...). Each network interface is a piece of hardware that is memory-mapped to allow the CPU to send and receive packets. Each hardware device also has a small onboard memory buffer to store the most recently received or transmitted packets (every protocol layer down to the MAC source and destination addresses, IP addresses and sequence numbers, as well as the data). Depending on the system and packet size, that could be anything between 1 and 16 packets.
The usual implementation was to have each hardware device generate an interrupt whenever some data had been received and to transfer the data from internal memory to a common pool in system RAM. The latter was divided up into pre-allocated blocks, with a few large blocks (>1000 bytes) and many smaller blocks (512 bytes). Someone might have done a statistical analysis of the theoretical distribution of the sizes of packets being sent through the network. Most of the time this worked out, but problems sometimes cropped up. If all the smaller blocks were in use, then the larger blocks were used instead. For efficiency, these wouldn't be passed on through the system until the entire block had been filled with data, so with a stream of 128-byte packets it would take eight of them before a larger block was filled. On some systems, packet sizes were increased to 4K or even 8K. A constant high-speed stream of small packets was the most likely thing to trigger this.
Also, many of the hardware devices would simply overwrite the contents of one unprocessed data packet with the contents of the latest arrival if it wasn't collected fast enough. So that could really mess up sequence numbers.
Re: (Score:3)
For example if there is a brief burst of packets higher than a router's outbound connection bandwidth supports, the router has two options:
a) buffering or queuing up the packets (assuming enough buffer space), till the output queue empties.
b) dropping the packets.
UDP latency doesn't go up if UDP packets are dropped - since there are no retransmissions.
But if TCP packets are dropped too often just becaus
Re: (Score:3)
The solution is for the network providers to have enough internal bandwidth so that THEIR buffers rarely start to fill up and there is minimal packet loss, AND for the user to do traffic shaping and policing at their connection (and the servers too).
There is no such thing as enough bandwidth. It will always fill. You need to allow the built-in mechanisms to recognize when it is full. And while I agree that traffic shaping is nice (and easy with firewalls like m0n0wall), it is not in most home routers. Besides, expecting most users to do this properly when they cannot even patch their systems is folly at best.
Re:Definition, please (Score:5, Informative)
For what it's worth, TFS seems to be linking into the middle of the story, so maybe that's part of my problem. Still, it's really annoying to be told about this new problem, with a new jargon word, that's going to make the sky fall any day now, without knowing just what the hell it is.
The previous article seems to explain things a little better: http://gettys.wordpress.com/2010/12/03/introducing-the-criminal-mastermind-bufferbloat/ [wordpress.com]
Re: (Score:3, Interesting)
Things change at large scale (Score:5, Informative)
How much bandwidth can I have, though? Take the link between my desktop and a Slashdot server; is the correct answer "1GBit/s, no more" (speed of my network card)? Is it "20MBit/s, no more" (speed of my current Internet connection)? Is it "0.5MBit/s, no more" (my fair share of this office's Internet connection)? In practice, you need the answer to change rapidly, depending on network conditions - maybe I can have the full 20MBit/s if no-one else is using the Internet, maybe I should slow down briefly while someone else handles their e-mail.
TCP doesn't slam the network; it starts off slowly (TCP slow start currently sends just two packets initially), and gradually ramps up as it finds that packets aren't dropped. When packet drop happens, it realises that it's pushing too hard, and drops back. If there's been no packet drop for a while, it goes back to trying to ramp up. RFC 5681 [ietf.org] talks about the gory details. It's possible (bar idiots with firewalls that block it) to use ECN (explicit congestion notification) [ietf.org] instead of packet drop to indicate congestion, but the presence of people who think that ECN-enabled packets should be dropped (regardless of whether congestion has happened) means that you can't implement ECN on the wider Internet.
This works well in practice, given sane buffers; it dynamically shares the link bandwidth, without overflowing it. Bufferbloat destroys this, because TCP no longer gets the feedback it expects until the latency is immense. As a result, instead of sending typically 20MBit/s (assuming I'm the only user of the connection), and occasionally trying 20.01MBit/s, my TCP stack tries 20.01MBit/s, finds it works (thanks to the queue), speeds up to 20.10MBit/s, and still no failure, until it's trying to send at (say) 25MBit/s over a 20MBit/s bottleneck. Then packet loss kicks in, and brings it back down to 20MBit/s, but now the link latency is 5 seconds, not 5 milliseconds.
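To make the parent's numbers concrete, here is a small, idealized Python simulation (not the real TCP state machine) of an additive-increase/multiplicative-decrease sender feeding a fixed-rate bottleneck with a FIFO queue. All the constants are invented for illustration. With a small queue the sender is cut back almost immediately; with a bloated queue the loss signal arrives late and queueing delay grows to seconds.

```python
# Idealized toy model of AIMD feeding a fixed-rate bottleneck. Numbers are illustrative:
# 1700 packets/s of 1500-byte packets is roughly a 20 Mbit/s link.

def simulate(queue_limit_pkts, bottleneck_pps=1700, seconds=60):
    rate = bottleneck_pps        # sender's current rate in packets/s
    queue = 0.0                  # packets sitting in the bottleneck buffer
    worst_delay = 0.0
    for _ in range(seconds * 10):             # simulate in 100 ms steps
        dt = 0.1
        queue += (rate - bottleneck_pps) * dt
        queue = max(queue, 0.0)
        if queue > queue_limit_pkts:          # buffer full: tail drop -> loss signal
            queue = queue_limit_pkts
            rate *= 0.5                       # multiplicative decrease
        else:
            rate += 10                        # additive increase per step
        worst_delay = max(worst_delay, queue / bottleneck_pps)
    return worst_delay

for limit in (50, 10_000):                    # roughly 30 ms vs. 6 s of potential queueing delay
    print(f"queue limit {limit:6d} pkts -> worst queueing delay {simulate(limit):.2f} s")
```

The sender's rate barely differs between the two runs; what explodes with the big buffer is the delay experienced by everything sharing the link.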
Re:Definition, please (Score:4, Insightful)
Solutions which require the internet's infrastructure to be replaced (all the routers and switches and so forth) have been proposed for many years, and never go anywhere. The only one I'm aware of is IPv6, and look how slowly that beast has taken off. That said, TCP sawtooth isn't as bad as you make it out to be - in most cases. Whenever a packet is dropped, the TCP connection drops its speed to around half, then gradually ramps up to where it was previously. You don't get 100% of your bandwidth utilization, but you do get to automatically adjust to changing network conditions. And as the number of TCP connections over one pipe increases, you get closer and closer to max utilization rates.
TCP fails when:
-competing against UDP, which has no congestion control and will clog a line even if every UDP packet is dropped
-there is interference in the line causing packet corruption, which TCP interprets as congestion
-competing against Microsoft products, which have TCP stacks that are tweaked to grab more than their fair share of the bandwidth
My understanding is that TCP congestion control generally isn't applied to backbones - I believe that ISPs throttle your traffic before sending it over an optic link so as not to overbook its capacity. You're probably just competing with your household, and possibly people on your block - can someone verify this?
Re:Definition, please (Score:5, Insightful)
Actually, I blame the submitter. It is well known that Slashdot "editors" don't edit. They merely choose the least worthless articles out of the slush pile and push the button, sometimes using copy and paste to combine two similar submissions. Even my above link was still to the middle of the story, but it explains the core concept best.
I also place a teensy bit of blame on the blogger, for not linking the first use of the word to the previous article. But he couldn't expect to get linked into the middle of the series.
Re: (Score:2, Insightful)
Yeah, I see this a lot with nerds. It's pretty fucking annoying when someone launches in a long winded dissertation on some obscure subject, without even bothering to put an introductory paragraph at the top giving even the briefest overview of what the fuck they're even talking about. I shouldn't have to read fifteen paragraphs just to get a basic birds-eye view of what the problem is, a framework which I can then proceed to fill in by reading into the details.
Re:Definition, please (Score:5, Insightful)
They know something you don't, they want you to know it, and they want to keep it that way for as long as possible...
Re:Definition, please (Score:4, Informative)
There are two reasons I can think of why people write like that. One is they're poor communicators, the second is they want to appear intelligent.
It seems there are two kinds of stories posted here lately -- science and tech stories written for the non-nerd by non-nerds, like one last week that explained what a CPU was (!), and stories like this that coin new jargon and don't explain it, or use an acronym that most folks here will misunderstand, like using BT to refer to British Telecom when most of us think of BitTorrent when we see BT.
Maybe I'm just getting old.
Re: (Score:2)
Blue Öyster Cult (Score:3)
In Canada references to the Bank of Canada in news stories have lately been abbreviated to BOC.
That's because unlike "Federal Reserve" and "Federal Express", "Bank of Canada" doesn't have a snappy, pronounceable contraction (Fed and FedEx respectively).
When I read "BOC to raise interest rates" I always wonder why the Blue Oyster Cult is doing that.
No, that'd be "BÖC to raise interest rates". BÖC was probably the first rock band to incorporate a gratuitous diaeresis [wikipedia.org] in its name. The root problem here is that BOC's [wikipedia.org] dis am bigger than yours [wikipedia.org].
Re: (Score:2)
There's a link to the definition in the first four words of the article. Do you want every piece of writing to repeat the definitions of every term it uses?
Re:Definition, please (Score:5, Insightful)
Yeah, I see this a lot with nerds. It's pretty fucking annoying when someone launches in a long winded dissertation on some obscure subject, without even bothering to put an introductory paragraph at the top giving even the briefest overview of what the fuck they're even talking about.
It's a series of blog articles. He presumes you've been following his series of articles in which he introduces the topic and experimentally validates his assertions. If you didn't get the introduction, blame your own laziness or the failure of the poster to also provide a link to the first blog post in the series.
Basically you're complaining because you jumped to the middle of a book and then bitched that the chapter you started reading doesn't have an introduction. Most people will wonder what the hell is wrong with you. To then attack the author for others' failings is bizarre to say the least. And all this ignores that blogs are frequently written as familiar and casual reading, which also entirely invalidates your general tone.
Re: (Score:3)
Many mechanisms have been proposed (Even I'm pro
Re: (Score:3)
He's written a whole series on this over the course of months, if he doesn't explain it a long way into the series then blame the slashdot summary, not the guy doing the research/testing and telling the world about it.
Re: (Score:2)
But that's just a guess.
Re:Definition, please (Score:5, Insightful)
You asked, I just provided:
http://gettys.wordpress.com/what-is-bufferbloat-anyway/
Good question.
Bufferbloat is the cause of much of the poor performance and human pain using today’s internet. It can be the cause of a form of congestion collapse of networks, though with slightly different symptoms than that of the 1986 NSFnet collapse. There have been arguments over the best terminology for the phenomena. Since that discussion reached no consensus on terminology, I invented a term that might best convey the sense of the problem. For the English language purists out there, formally, you are correct that “buffer bloat” or “buffer-bloat” would be more appropriate.
I’ll take a stab at a formal definition:
Bufferbloat is the existence of excessively large (bloated) buffers in systems, particularly network communication systems.
Systems suffering from bufferbloat will have bad latency under load under some or all circumstances, depending on if and where the bottleneck in the communication’s path exists. Bufferbloat encourages congestion of networks; bufferbloat destroys congestion avoidance in transport protocols such as HTTP, TCP, Bittorrent, etc. Without active queue management, these bloated buffers will fill, and stay full.
More subtly, poor latency, besides being painful to users, can cause complete failure of applications and/or networks, and extremely aggravated people suffering with them.
Bufferbloat is seldom detected during the design and implementation of systems: engineers, methodical as they are, seldom if ever test latency under load systematically, and today's memory is so cheap that buffers are often added without thought of the consequences. It can be hidden in many different parts of network systems.
You see manifestations of bufferbloat today in your operating systems, your home network, your broadband connections, possibly your ISP’s and corporate networks, at busy conference wireless networks, and on 3G networks.
Bufferbloat is a mistake we’ve all made together.
We’re all Bozos on This Bus.
Re: (Score:3)
But I am not the author, so perhaps he can chime in.
Re:Definition, please (Score:5, Informative)
I'll attempt to translate.
TCP has to be able to estimate how fast* it can send data, because there's no way it can know definitively the link speed, capacity, and reliability between your system and a remote system. It does this by progressively getting faster until it starts detecting transmission problems between the two systems, at which point it backs off and slows down. Ideally, you hit a nice equilibrium at some point.
On a proper network, if some router along the path is at capacity, either internally, or along one of its outgoing paths, it should drop the packets it can't handle in a timely fashion. This seems counterintuitive at first, but remember that TCP handles the guaranteed transmission already - it will retransmit packets that didn't arrive. If the router is holding these packets in a buffer, and sending them along once the links clear up, i.e. "when it gets around to it", the packets will reach their destination with hugely inflated latency. This in turn confuses TCP, as it can't get a reliable estimate of link capacity, and the whole speed negotiation falls apart. The latency becomes wild and unpredictable as packets are sometimes buffered, sometimes not, but they always reach their destination, so TCP thinks it's sending at an acceptable rate. So now you've got all the endpoints conversing through this router that's claiming, "No problem, I can handle it!" when it really can't, and the problem just compounds itself as the router gets slammed harder and harder.
By getting timely notification of dropped packets, TCP can say, "Oh, I'm transmitting too fast for this link, time to shrink the sliding window and slow down." This both smooths out latency, and minimizes further dropped packets, not just for the two hosts involved, but for everyone else transmitting through the affected routes as well. This is how it's supposed to work, but excessive buffering of packets within routers prevents it from happening.
Moral: Dropped packets are perfectly normal and in fact required for TCP to manage its own speed and latency. Stop trying to buffer and guarantee packet delivery - TCP is handling that already.
(Disclaimer: I'm a DBA, not a network engineer. Feel free to clarify or correct anything I've mucked up.)
* "Fast" in this case means "How many packets should I send at once before stopping to wait for acknowledgment of those packets getting where they're going". "Faseter" equates to "more of them".
Re:Definition, please (Score:4, Funny)
Name wrong (Score:3, Informative)
He's Jim Gettys, not Getty.
Awsum, TTY in your name (Score:5, Funny)
Jim Getty, one of the original X Window System developers and editor of the HTTP/1.1 spec
I'd murder four people just to have TTY in my name. Five if I could capitalize them, and postfix with a number. I'd name my son Dev.
You'd get a business card with something like Dev GeTTY1, Armadillo Avenue 64, Seattle, Washington
Re:Awsum, TTY in your name (Score:4, Funny)
So you are the reason I keep getting this in my logs "getty keeps dying. There may be a problem".
Re:Awsum, TTY in your name (Score:4, Funny)
Naming Your Son Dev (Score:2)
Theoretically, could this be mitigated with ATM? (Score:2)
If we all switched to ATM (Asynchronous transfer mode [wikipedia.org]), would this issue be fixed, regardless of the size of RAM available at the endpoints? Yes, yes I realize that this would be utterly impractical; my question is theoretical.
Re: (Score:2)
If we all switched to ATM, I'd find you in your sleep and murder you.
TBH though.. MPLS sorta tries to split the difference in the 'good' ways. Especially if you drink the Kool Aid (tm) and have the budget to spend on rolling it out.
Comment removed (Score:5, Insightful)
Re: (Score:2)
I agree. I was having the same issues on a much smaller connection until I set up QOS. Now, I never have issues with any of my stuff.
Re: (Score:2)
There's much more to it than that - the connection gets maxed out too easily, or it maxes out way below where it should, the reason being that too much is buffered. Too much buffering = lots of latency = TCP/IP latency and bandwidth calculations go out the window and you can't get the transfer speed you ought to.
Or so I understand it.
Re: (Score:2)
In my experience most software will handle some higher latencies just fi
Re:pegged connection == latency, who'd of thunk it (Score:5, Informative)
Several issues:
1. People who aren't networking engineers don't know about QoS, or don't know/want to know how to configure it.
2. QoS used that way is a hack to work around an issue that doesn't have to be there in the first place
3. How do you determine the maximum throughput? It's not necessarily the official line's speed. The nice thing about TCP is that it's supposed to figure out on its own how much bandwidth there is. You're proposing a regression to having to tell the system by hand.
4. QoS is most effective on stuff you're sending, but in the current consumer-oriented internet most people download a lot more than they upload.
Re:pegged connection == latency, who'd of thunk it (Score:4, Informative)
but in the current consumer-oriented internet most people download a lot more than they upload.
Because the current consumer infrastructure forces it onto you. I would happily seed my torrents all year long, except I only have 1/12th the uploading bandwidth as I have for downloading. Since I need some of it for other things, uploading becomes impractical.
It's easy to blame the consumer, but there's a certain model imposed on him from the start.
It will be a hack (Score:5, Insightful)
2. QoS used that way is a hack to work around an issue that doesn't have to be there in the first place
3. How do you determine the maximum throughput? It's not necessarily the official line's speed. The nice thing about TCP is that it's supposed to figure out on its own how much bandwidth there is. You're proposing a regression to having to tell the system by hand.
4. QoS is most effective on stuff you're sending, but in the current consumer-oriented internet most people download a lot more than they upload.
While the Internet in-theory is beautiful, our modern implementation really is a series of layered hacks. And the solution to Bufferbloat is going to be another hack. You're crazy if you think that the solution to the Bufferbloat 'problem' is going to be some fundamental redesign of the TCP protocol (how would you force 10 people to use it?), or the total re-architecture of millions of consumer devices to remove buffering. You're also crazy if you think the ISPs and backbone providers are going to stand by while this thing kills the Internet.
So the question is: which hack will it be? The GP poster already identified one that works well enough --- using QoS to control flows. Your final objection, about content providers stressing connections, is the real one. But there's probably a good hack to deal with it --- or more likely a series of hacks, some at the content providers themselves (e.g., Netflix), some in the backbone, and some at your ISP. It won't be elegant, but it will keep this problem from ever becoming anything more than a few cranky blog posts.
Re: (Score:3)
The problem is that maxing your connection from one site is causing everything else you do on your connection to be delayed / dropped as well, because it ends up queued behind anything that got buffered mid-transit from the first site. With a smaller buffer the large transfer would start to drop packets and back off sooner, allowing packets from other sources to "hop the queue".
Re:pegged connection == latency, who'd of thunk it (Score:5, Interesting)
As an extreme example, say you request a 1GB file from a download site. That site has a monster internet connection, and manages to transmit the entire file in 1 second. The file makes it to the ISP at that speed, who then buffers the packets for slow transmission over your ADSL link, which will take 1 hour. During that time you try to browse the web, and your PC tries to do a dns lookup. The request goes out ok, but the response gets added to the buffer on the ISP side of your internet connection, so you won't get it until your original transfer completes. How's 1 hour for latency?
The situation is only not that bad because:
A: Most download sites serve so many people at once and/or rate limit so they won't saturate most people's connections
B: Most buffers in network hardware are still quite small
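To put numbers on the (deliberately extreme) example above: the extra latency seen by that DNS response is just the bytes queued ahead of it divided by the rate of the slow link. A quick Python sketch, with the link rate picked arbitrarily:

```python
# Queueing delay = data buffered ahead of you / drain rate of the slow link.
def queue_delay_seconds(bytes_queued, link_bits_per_sec):
    return bytes_queued * 8 / link_bits_per_sec

adsl = 2_000_000  # ~2 Mbit/s downstream, illustrative

# A tiny DNS reply stuck behind 1 GB of buffered download:
print(queue_delay_seconds(1_000_000_000, adsl) / 60)   # ~66 minutes
# The same reply stuck behind a sane 64 KB of buffering:
print(queue_delay_seconds(64 * 1024, adsl) * 1000)     # ~262 ms
```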
Re:pegged connection == latency, who'd of thunk it (Score:4)
Re:pegged connection == latency, who'd of thunk it (Score:4, Funny)
So naturally, I instantly get modded down.
Re:pegged connection == latency, who'd of thunk it (Score:5, Insightful)
This is an excellent explanation of what issues are happening here. I can clearly see that this is an issue, and the problem is something that over time will impact everybody.
The problem is really focused on trying to deal with differences in bandwidth between computers... always a problem but in this case trying to match up slow connections with fast connections is particularly difficult. Since memory is cheap, a 1 GB buffer certainly can be found in some devices now and perhaps much more. I don't see this example as being really too far off the mark in the near future.... which is the point being raised and why buffer bloat is such a big deal.
More to the point, some of the complaints that triggered the "quality of service" debate are rooted in this problem. As mentioned in the original article triggering this whole slashdot thread, setting up "quality of service" priorities only creates multiple buffer queues.... it doesn't solve the problem of the monster queue to begin with. That is why the author of the blog post suggests that the debate over network neutrality is not based upon the real problem that is facing network engineering and why it is a political solution in search of a problem.
It takes awhile to "grok" this problem, but once you do it becomes obvious why this is such a huge deal.
Re:pegged connection == latency, who'd of thunk it (Score:4, Informative)
OK, not on the (intentionally ridiculous) scale used in the example, but people are doing something very similar to what you describe, even though they "can't do that". http://slashdot.org/article.pl?sid=10/11/26/1729218 [slashdot.org]
Re:pegged connection == latency, who'd of thunk it (Score:4, Informative)
There is no 'bufferbloat because RAM is getting cheaper'. What he is seeing is what happens when you want to saturate your link. ... ...you get either a buffered or a dropped packet.
Yes, and if a link is saturated, there should be packet drops, which TCP senses; it then automatically throttles back to reduce the required bandwidth and avoid saturation. But what is happening is that these huge buffers are holding packets that would otherwise be dropped, so TCP doesn't get the feedback it needs to detect saturation. It continues transmitting at full speed, believing it has uncongested pipes, which in turn keeps filling the buffers, and so on.
Because of the buffers, most of these packets are eventually getting through, but maybe in seconds instead of tens or low hundreds of milliseconds. Thus you're getting huge latency.
Jitter is caused by the buffers eventually filling or TCP timing out (registering packet loss), dropping the rate for a little bit, the buffers draining, then TCP upping the rate again as the buffers refill, hiding the saturation, until they're full again. Rinse and repeat.
It's related to the "bloat" of buffering (due to the increasing affordability of RAM and the "more of a good thing must be better than a little of a good thing - QED" mindset) because, if the size of the buffer is kept below a certain point related to the pipe bandwidth and number of traffic streams, it tends to act just as a temporary "buffer" against spikes in the traffic (the intention of buffering), and can't cause the scenario above, having insufficient capacity to overload the bandwidth just from buffer contents alone. Above this threshold, the latency issues and back-and-forth thrashing noted above occurs. The bigger the buffers, the worse the effect.
And it's not just a "well, keep your traffic below x mbit if you're on ADSL2" issue, because it happens anywhere a high capacity pipe interfaces with a low capacity or otherwise congested (of any capacity) pipe. This might be your ISP's backbone which is getting hit by several thousand people downloading the latest WOW patch simultaneously, causing your 300kbps Skype call to go to hell through latency and jitter. If the ISP's equipment had smaller buffers, the servers would be throttling back as packet loss occurred. You'd probably still be losing packets, but they'd be detected and re-transmitted pretty quickly and you possibly wouldn't notice the latency or have jitter.
What he is seeing is what happens when you want to saturate your link.
So, no, what you get with appropriate buffers is your TCP connection moderating itself to the appropriate link capacity and availability, and latency remaining approximately the same (relative to what you're seeing in bufferbloat, but worse than an uncongested link, obviously).
With bufferbloat, your bandwidth appears to remain about the same, but your latency balloons massively and you get jitter effects as above.
Re: (Score:3)
There are no layers involved. It's a high level description of something that ignores lots of stuff and "oh shock horror" even gets some details wrong.
Did they ever tell you in school that electrons orbit the nucleus in shells? Oh shit! They lied! Clearly they made no sense at all!
The example is describing what the packets do when you request the file. And yes its exaggerating things and yes it's simplifying things. But that's how you describe things to people who don't know the technical details when those
Understanding fail. (Score:3)
I can see why you're posting as an AC, because you don't understand the difference between an HTTP request and the TCP connection that fulfills it. There is no requesting of packets; the request is made via HTTP, and the receiver then ACKnowledges TCP packets from the server, which may send more quickly than it receives ACKs so as to increase throughput--this then fills buffers and causes cascading latency.
You are compounding the problem by spreading misinformation. Please stop and go educate yourself.
Re:pegged connection == latency, who'd of thunk it (Score:5, Funny)
Really, what's the problem here?
You really don't see the problem? How can you be so naive? Maybe you're new to this. All signs point to the fact that there is a problem.
Of course the problem is not obvious. The article itself says it'll completely surprise us. They know we won't believe it at first. But that's why we must believe it, or else it's Armageddon.
Would you risk an Armageddon, because of your inability to understand and see?
And that's, in short, why we must attack Iraq.
Wait, what were we talking about :P?
Re: (Score:2)
It's kinda like the Fed printing another 600 billion and refusing to raise interest rates, while at the same time saying everything is fine and the economy is improving.
Re: (Score:2)
Re: (Score:2)
It's not about *you* buffering - it's about the machine in the middle buffering. When that machine buffers instead of drops, your TCP connection will never become aware that it has to play nice and lower its transmission window.
Re: (Score:2)
And his point is that said queue is so excessively long, it's screwing up TCP's congestion avoidance. Those queues mean delay. Serious delay.
Ahhh HAH! (Score:2)
cringley explains (Score:5, Interesting)
http://www.cringely.com/2011/01/2011-predictions-one-word-bufferbloat-or-is-that-two-words/ [cringely.com]
Re: (Score:2)
So, let me get this straight... (Score:5, Insightful)
RAM is cheap.
High speed uplink is not cheap.
Peering agreements are manipulative, expensive, and sometimes extortionate.
So...
The poorly designed, poorly peered, under allocated back haul links can't handle the traffic that routers want to push through them -- but since RAM is cheap, operators just add RAM to the buffers so that when those back-haul lines slow down for a second the packets can get pushed through.
And we're blaming the buffer for the problem?
Re:So, let me get this straight... (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Your Linux (and Mac and Windows) laptops also suffer from bufferbloat.
As does your home router and 802.11 network.
As do some ISP's.
Don't think the problem is just in your broadband. It's all over.
Re: (Score:3)
The really interesting bit would be: what would the Internet be without those buffers? Packet loss at 80%?
Re:So, let me get this straight... (Score:4, Insightful)
Yes, lack of buffers would be bad, as even a trivial delay would result in a packet getting dropped. Oversized buffers are also bad, as they simply delay the packet getting dropped, preventing congestion control from reacting in a timely manner. The buffers need to be sized appropriately relative to the link speed and typical latency.
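The classic rule of thumb for "sized appropriately" is the bandwidth-delay product: buffer roughly one round-trip time's worth of data at the link rate (there are refinements for links carrying many flows, but it gives the right order of magnitude). A quick sketch with illustrative rates and RTTs:

```python
# Bandwidth-delay product: roughly how much buffer a link needs to stay busy
# without adding more than ~one RTT of queueing delay.
def bdp_bytes(link_bits_per_sec, rtt_seconds):
    return link_bits_per_sec * rtt_seconds / 8

print(bdp_bytes(1_000_000, 0.100))      # 1 Mbit/s, 100 ms RTT  -> 12,500 bytes
print(bdp_bytes(20_000_000, 0.050))     # 20 Mbit/s, 50 ms RTT  -> 125,000 bytes
print(bdp_bytes(1_000_000_000, 0.100))  # 1 Gbit/s, 100 ms RTT  -> 12.5 MB
```

The trouble described in the articles is that many devices ship with buffers orders of magnitude larger than any plausible BDP for the speed they actually run at.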
Re:So, let me get this straight... (Score:4, Insightful)
No.
Packet losses would be handled by adjusting to the conditions.
Look at the trace Gettys posted in the referenced article. Lots of dup packets. Get rid of those, and there's some bandwidth that can be *used*. And allowing TCP to adjust to prevailing conditions should result in less packet loss. It might seem to be less bandwidth also, but we may be in a vicious circle of increasing bandwidth to solve a problem that is NOT bandwidth. Packet loss by itself is a symptom, not a problem.
Re:So, let me get this straight... (Score:5, Informative)
Re: (Score:3)
A lack of a buffer is bad, as even if a packet had to be buffered for a microsecond, it would still be dropped as it couldn't be transmitted immediately.
But a too large buffer is also bad, as it delays the packet being dropped until it times out, preventing TCP's congestion control functionality from rapidly responding to the congestion (by shrinking the window size). Eventually, it will detect the packets getting dropped as they time out, but in the meantime (possibly several seconds), it continues merril
You have not RTFA or not UTFA.. (Score:5, Informative)
What Jim is saying is that TCP flows try to train themselves to the dynamically available bandwidth, such that there is a minimum of dropped packets, retransmits, etc.
But in order for TCP to do this, packets must be dropped _fast_.
When TCP was designed, the assumptions about the price of ram (and thus, the amount of onboard memory in all the devices in the virtual circuit) were different -- namely, buffers were going to be smaller, fill up faster, and send "i'm full" messages backwards much sooner.
What the experimentation has determined is that many network devices will buffer 1 megabyte or MORE of traffic before finally dropping something and telling the tcp originator to slow down. And yet with a 1 meg buffer and a rate of 1 megabyte per second.. it will take 1 second simply to drain the buffer.
The pervasive presence of large buffers all along the TCP virtual circuit, and the unspecified or tail-drop behavior of these large queues, means that TCP's ability to rate-limit is effectively nullified. In situations where the link is highly utilized, many degenerate behaviors occur, such that the overall link has extremely high latency and bulk traffic causes interesting traffic to be randomly dropped.
Personally, I used pf/altQ on OpenBSD to try and manage this somewhat.. but it's a dicey business.
Re: (Score:2)
Yeah, that is how I read it. The presence of large buffers causes the 'controlling protocols' to go haywire, thus network transfer efficiency hurtles out of the window.
Concerning Boiled Frogs (Score:5, Informative)
If you put a frog in a pot of water and slowly raise the temperature it will try to jump out before the water reaches a temperature that is fatal to the frog.
Re:Concerning Boiled Frogs (Score:5, Funny)
Re:Concerning Boiled Frogs (Score:5, Funny)
If you put a frog in a pot of water and don't even bother boiling it, the frog will jump out anyway.
If you were to find a frog in its natural habitat where it's happy to sit all day waiting for food to drift past and boil that environment slowly, you might actually have an experiment on your hands... and an ethics committee on your tail.
Boiling a lagoon is left as an exercise for the reader.
Re:Concerning Boiled Frogs (Score:5, Funny)
Someone with networking chops (Score:2)
...chime in please. It seems like the solution to this is potentially all user-side, and controllable? Adjust the buffers in your devices if you can, or perhaps find a way to reduce the TCP buffer in your modern operating system?
Re: (Score:3)
Re: (Score:3)
Hmm (Score:2)
ECN - Explicit Congestion Notification (Score:3)
And this is a known problem, and fairly intuitive (Score:4, Interesting)
Let me summarize the problem being observed: on a given interface, if you have more buffer memory on the transmit side than is needed, it can induce latency. As an example, consider a 1Mb/s link. If you want buffering to add at most 0.1s of latency at high load, then you want 1Mb/s * 0.1s = 12,500 bytes of buffer. If instead you have 1MB of buffer, then you have 8 seconds of buffer, and you've triggered the "bufferbloat" issue.
Part of the problem is that buffer sizes tend to be set based on the top speed a piece of hardware can drive: size a 1Gb/s interface to buffer 0.1s, run it at 100Mb/s, and it now has 1s worth of buffer. In most home deployments, where a router may have a 1Gbps upstream, maybe four 100Mb/s physical connections, and a 54Mbps wireless radio, you probably have a shared buffer for all the interfaces. The result is that when using the 54Mb/s wireless you can easily oversaturate the buffer, while the buffer size may be just right for the 100Mb/s interfaces.
What is the solution to this? Realistically, the alternative is to drop packets that have resided in the buffer longer than a configured amount of time, which causes its own performance issues. Net result: TCP would slow down for a period of time, then speed up again, producing a sawtooth behavior. This would cause periodic issues for other protocols as well, i.e. VOIP would see dropped packets every time TCP ramps up again, etc.
Solution: Don't download porn when you are trying to do VOIP calls.
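Here is a minimal sketch of the time-based drop idea described above: a queue that refuses to deliver anything that has sat longer than a target time, so the resulting drops become the congestion signal. This is a toy with an invented threshold, not any particular published AQM algorithm.

```python
import collections
import time

class TimeBoundedQueue:
    """FIFO that refuses to deliver packets older than max_sojourn seconds."""

    def __init__(self, max_sojourn=0.100):
        self.max_sojourn = max_sojourn
        self.q = collections.deque()        # holds (enqueue_time, packet) pairs

    def enqueue(self, packet):
        self.q.append((time.monotonic(), packet))

    def dequeue(self):
        # Drop anything that has waited longer than the target; the resulting
        # loss is the congestion signal TCP needs in order to slow down.
        while self.q:
            enq_time, packet = self.q.popleft()
            if time.monotonic() - enq_time <= self.max_sojourn:
                return packet
        return None
```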
Jim Gettys did the world a great service with this (Score:5, Interesting)
I discovered this series of blog posts about 2 months ago, when he accidentally published one of his blog posts prematurely. I started reading it and followed the links and saw that this was like a sleuth tale - if I had started reading with his very first blog post on the topic, I would have had no idea where he was going with it. Now as to why this contribution by Jim Gettys does the world a great service:
Hats off to Jim Gettys. Thanks for your service.
The Sky is Falling.....NOT! (Score:3)
TCP contains some of the most incredible heuristic algorithms I've ever seen. Each algorithm (Slow Start, RTT estimation, SACK, etc.) is relatively simple, but together they work incredibly well at keeping data flowing across heterogeneous networks. They work so well that I've seen TCP overcome broken ethernet drivers and make them appear to work. Unfortunately, as someone who used to look at TCP traces for a living, I can tell you it can be really hard to work backwards from packet traces to figure out what is going on in the TCP/IP stack, because there can be so much going on at the same time. This means that Wireshark in the hands of a weekend hacker can easily lead to erroneous conclusions. If you follow this link [lartc.org] and go to section 14.5, Random Early Detection (RED), you can see that the issue is already known and there are already solutions to mitigate the problem.
Relax and take a deep breath. Now you can move on to something more important......... like where you're going to spend your eternity
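For reference, the core of the RED scheme mentioned above is just a drop probability that ramps up with the average queue depth; here is a stripped-down sketch (thresholds invented, and the EWMA averaging and packet-counting refinements of real RED omitted):

```python
import random

def red_should_drop(avg_queue, min_th=5, max_th=15, max_p=0.10):
    """Simplified Random Early Detection decision for one arriving packet."""
    if avg_queue < min_th:
        return False                          # queue short: never drop
    if avg_queue >= max_th:
        return True                           # queue long: always drop
    # In between: drop with probability rising linearly toward max_p.
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

# Example: a moderately full queue gets the occasional early drop,
# signalling senders to back off before the buffer ever fills.
print(sum(red_should_drop(10) for _ in range(10_000)) / 10_000)  # ~0.05
```

The point is that a few deliberate early drops keep the queue short, instead of letting it fill completely and hold every flow's packets for seconds.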
Re: (Score:2)
That's not what the article talks about. He talks about TCP packet buffering, not video buffering. The point is that queueing up tons of packets on a link with a small amount of bandwidth gives horrible latency. I'm not really knowledgeable about that stuff, so I can't tell whether he is correct that there is a problem, though.
Re:I think buffers are a good thing (Score:5, Interesting)
Re:QoS (Score:4, Informative)
After reading TFSeries, the problem is excessive buffering (as in 1-10 or more seconds worth of data) screwing up TCP/IP's automatic bandwidth detection. QoS helps a little bit by getting the important packets (especially ACKs) through, but high-bandwidth TCP connections are still going nuts when they hit a slower link with excessive buffering.
And one of the major offenders is Linux commonly defaulting to a txqueuelen of 1000.
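To see why a txqueuelen of 1000 matters, consider the worst-case delay a full transmit queue adds: packets times packet size divided by link rate. A quick sketch (the MTU and link rates here are illustrative):

```python
# Worst-case delay a full transmit queue can add: packets * bytes_per_packet / rate.
def txqueue_delay_seconds(txqueuelen, mtu_bytes, link_bits_per_sec):
    return txqueuelen * mtu_bytes * 8 / link_bits_per_sec

print(txqueue_delay_seconds(1000, 1500, 1_000_000))     # 1 Mbit/s uplink  -> 12 s
print(txqueue_delay_seconds(1000, 1500, 100_000_000))   # 100 Mbit/s link  -> 0.12 s
```

A queue length that is harmless on a fast LAN interface becomes many seconds of potential delay on a slow broadband uplink.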
Re: (Score:2)
Comment removed (Score:5, Interesting)
Re:Looks like a hype (Score:5, Insightful)
You haven't read the article (or the many others around on LWN.net on the same topic). Basically, large buffers in networking gear, from the DSL routers on your home network through to ISPs, mean that interactivity is *shite*. You might download gigabytes, but in terms of interactive applications it's useless, and we're facing ever-increasing latency and problems through trying too hard to cope with errors and delays (e.g. huge buffers that keep resending instead of just letting packets drop and having TCP sort it out by retransmission). TCP windows never shrink, because errors are buffered and retried so much by intermediate devices that any sort of window scaling is worthless: the sender never *sees* any packet loss.
Same devices, smaller buffers, and everything works fine and "faster" / "more responsive" all around. It actually would *save* money on new devices, because you don't need some huge artificial buffer; you can just drop the occasional packet. But the problem is so deeply embedded in run-of-the-mill hardware that it's almost impossible to escape at the moment, and thus EVERYONE from large businesses to home users is running on a completely sub-optimal setup because of it. Almost every networking device made in the last few years has buffers so large that they cause problems with interactivity, bandwidth control, QoS, etc. It's NOT just that a "faster connection" solves the problem - the fraction of optimal service we get is steadily decreasing as buffers grow, even though links keep improving. That's the point. And it *is* caused by memory prices, because memory is so cheap that a huge thoughtless buffer costs no more than a tiny, thought-out buffer.
Re: (Score:3)
Larger buffers do not really decrease congestion as far as TCP is concerned: with a large buffer, TCP will simply send more/faster until the buffer overflows. The congestion will simply manifest a tiny bit later, but much, much more severely.
Re: (Score:2)