Bufferbloat — the Submarine That's Sinking the Net

gottabeme writes "Jim Gettys, one of the original X Window System developers and editor of the HTTP/1.1 spec, has posted a series of articles on his blog detailing his research on the relatively unknown problem of bufferbloat. Bufferbloat affects the entire Internet, slowly worsening as RAM prices drop and buffers enlarge, and it causes latency and jitter to spike, especially for home broadband users. Unchecked, this problem may continue to degrade the usability of interactive applications like VoIP and gaming, and, being so widespread, will take years of engineering and education effort to resolve. Like 'frogs in heating water,' few people are even aware of the problem. Can bufferbloat be fixed before the Internet and 3G networks become nearly unusable for interactive apps?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Really? (Score:2, Insightful)

    by Anonymous Coward on Friday January 07, 2011 @09:08AM (#34789802)

    Latency is bad? Bigger buffers = more latency?

  • Definition, please (Score:5, Insightful)

    by Megane ( 129182 ) on Friday January 07, 2011 @09:10AM (#34789810)

    I'm so glad the term has been defined so that I know what the hell we're talking about here. Oh wait, no it hasn't.

    Okay, then I'll RTFA. Oh wait, two screens worth of text later and it still hasn't.

    I'd like to change the topic now to the submarine that's sinking the English language: jargonbloat.

  • by mangu ( 126918 ) on Friday January 07, 2011 @09:14AM (#34789854)

    Just start RTFAing: "In my last post I outlined the general bufferbloat problem."

    Follow the link:

    "Each of these initial experiments were been designed to clearly demonstrate a now very common problem: excessive buffering in a network path. I call this bufferbloat

  • by Shakrai ( 717556 ) on Friday January 07, 2011 @09:18AM (#34789874) Journal

    I read TFA and I'm not seeing the problem. He can't duplicate this issue unless he maxes out his connection, and then his latency goes to hell. No shit, Sherlock: that's what happens when your pipe is full and packets have to wait in the queue to be transmitted. Am I stupid, or could he avoid this issue entirely by using QoS and/or rate-limiting his connection to something less than 100% of its maximum throughput? I have QoS at the office that keeps our connection from pegging (it's limited to around 75% on the download and 90% on the upload) and have never once encountered an issue with latency or jitter. At home I only throttle the upload (to 90% of maximum) and have successfully run VPNs, BitTorrent uploads and VoIP calls all at the same time without any headaches.

    Really, what's the problem here?
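
    A minimal sketch of the throttling idea above: a token bucket that shapes outbound traffic to a fraction of the line rate, so the modem's own (bloated) buffer never fills. The rates, burst size, and class name are illustrative, not taken from any comment here.

        import time

        class TokenBucket:
            """Admit packets only while tokens last; refill at a fixed rate."""

            def __init__(self, rate_bytes_per_s, burst_bytes):
                self.rate = rate_bytes_per_s   # sustained rate, e.g. 90% of uplink
                self.capacity = burst_bytes    # largest burst we will pass at once
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, packet_bytes):
                now = time.monotonic()
                # Refill in proportion to elapsed time, capped at bucket capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_bytes:
                    self.tokens -= packet_bytes
                    return True    # send immediately
                return False       # hold until tokens accrue; queue stays short

        # Shape a 1 Mbit/s (125,000 byte/s) uplink to 90%, as in the comment above.
        shaper = TokenBucket(rate_bytes_per_s=0.9 * 125000, burst_bytes=3000)

    Keeping the queue on your side of the bottleneck is exactly what makes this QoS trick work: you control a short queue you manage, instead of the deep one in the modem or at the ISP.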

  • by shiftless ( 410350 ) on Friday January 07, 2011 @09:24AM (#34789914)

    Yeah, I see this a lot with nerds. It's pretty fucking annoying when someone launches into a long-winded dissertation on some obscure subject without even bothering to put an introductory paragraph at the top giving even the briefest overview of what the fuck they're talking about. I shouldn't have to read fifteen paragraphs just to get a basic bird's-eye view of what the problem is, a framework which I can then proceed to fill in by reading the details.

  • by drooling-dog ( 189103 ) on Friday January 07, 2011 @09:29AM (#34789948)

    They know something you don't, they want you to know it, and they want to keep it that way for as long as possible...

  • by CFD339 ( 795926 ) <{andrewp} {at} {thenorth.com}> on Friday January 07, 2011 @09:31AM (#34789966) Homepage Journal

    RAM is cheap.
    High speed uplink is not cheap.
    Peering agreements are manipulative, expensive, and sometimes extortionate.

    So...

    The poorly designed, poorly peered, under-allocated backhaul links can't handle the traffic that routers want to push through them -- but since RAM is cheap, operators just add RAM to the buffers so that when those backhaul links slow down for a second the packets can still get pushed through.

    And we're blaming the buffer for the problem?

  • by phantomcircuit ( 938963 ) on Friday January 07, 2011 @09:55AM (#34790174) Homepage
    TCP assumes that packets will be dropped when there is congestion; if they aren't, the congestion control algorithms fail (hard).
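
    A toy model of why that failure shows up as latency rather than loss: at a fixed drain rate, the standing delay a full buffer adds grows linearly with its depth, so the first drop (TCP's only congestion signal here) arrives seconds too late. The figures below are hypothetical, not from Gettys's traces.

        # A 1 Mbit/s bottleneck draining a full tail-drop FIFO of 1500-byte packets.
        LINK_RATE = 125000   # bytes per second (1 Mbit/s)
        PACKET = 1500        # bytes per full-size packet

        for buffer_packets in (20, 200, 2000):   # sane buffer -> bloated buffer
            # A full buffer drains at the link rate, so every arriving packet
            # waits behind everything already queued ahead of it.
            delay_s = buffer_packets * PACKET / LINK_RATE
            print("%5d-packet buffer adds %7.0f ms of queueing delay"
                  % (buffer_packets, delay_s * 1000))

    With the 2000-packet buffer, a packet sits in the queue for 24 seconds before a drop is even possible, long after TCP should have backed off.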
  • by Megane ( 129182 ) on Friday January 07, 2011 @09:55AM (#34790176)

    Actually, I blame the submitter. It is well known that Slashdot "editors" don't edit. They merely choose the least worthless articles out of the slush pile and push the button, sometimes using copy and paste to combine two similar submissions. Even my above link was still to the middle of the story, but it explains the core concept best.

    I also place a teensy bit of blame on the blogger, for not linking the first use of the word to the previous article. But he couldn't expect to get linked into the middle of the series.

  • by jg ( 16880 ) on Friday January 07, 2011 @10:10AM (#34790318) Homepage

    You asked, I just provided:

    http://gettys.wordpress.com/what-is-bufferbloat-anyway/

    Good question.

    Bufferbloat is the cause of much of the poor performance and human pain using today’s internet. It can be the cause of a form of congestion collapse of networks, though with slightly different symptoms than that of the 1986 NSFnet collapse. There have been arguments over the best terminology for the phenomenon. Since that discussion reached no consensus on terminology, I invented a term that might best convey the sense of the problem. For the English language purists out there, formally, you are correct that “buffer bloat” or “buffer-bloat” would be more appropriate.

    I’ll take a stab at a formal definition:

    Bufferbloat is the existence of excessively large (bloated) buffers in systems, particularly network communication systems.

    Systems suffering from bufferbloat will have bad latency under load under some or all circumstances, depending on whether and where a bottleneck exists in the communication path. Bufferbloat encourages congestion of networks; bufferbloat destroys congestion avoidance in transport protocols such as TCP (and in the protocols built on top of it, such as HTTP and BitTorrent). Without active queue management, these bloated buffers will fill, and stay full.

    More subtly, poor latency, besides being painful to users, can cause complete failure of applications and/or networks, and leave the people suffering with them extremely aggravated.

    Bufferbloat is seldom detected during the design and implementation of systems: engineers, methodical as they are, seldom if ever test latency under load systematically, and today’s memory is so cheap that buffers are often added without thought for the consequences. Bufferbloat can hide in many different parts of a network system.

    You see manifestations of bufferbloat today in your operating systems, your home network, your broadband connections, possibly your ISP’s and corporate networks, at busy conference wireless networks, and on 3G networks.

    Bufferbloat is a mistake we’ve all made together.

    We’re all Bozos on This Bus.
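
    The "active queue management" Gettys mentions above is the standard remedy. Here is a sketch in the spirit of RED (Random Early Detection); the thresholds and smoothing weight are illustrative defaults, not values from his post.

        import random

        class REDQueue:
            """Random Early Detection: drop with rising probability *before*
            the buffer fills, so senders back off while latency is still low."""

            def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
                self.queue = []
                self.avg = 0.0
                self.min_th = min_th      # below this average depth: never drop
                self.max_th = max_th      # above this average depth: always drop
                self.max_p = max_p        # drop probability reached at max_th
                self.weight = weight      # EWMA smoothing factor

            def enqueue(self, pkt):
                # Track a smoothed (EWMA) queue length, not the instantaneous one.
                self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
                if self.avg < self.min_th:
                    drop = False
                elif self.avg >= self.max_th:
                    drop = True
                else:
                    p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                    drop = random.random() < p
                if not drop:
                    self.queue.append(pkt)
                return not drop   # False signals an early drop; TCP backs off

    The point is that drops (or ECN marks) begin while the queue is still short, so TCP throttles back before the buffer fills and latency stays bounded.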

  • by ledow ( 319597 ) on Friday January 07, 2011 @10:12AM (#34790342) Homepage

    You haven't read the article (or the many others around on LWN.net on the same topic). Basically, large buffers in networking gear, from the DSL router on your home network through to your ISP's, mean that interactivity is *shite*. You might download GBs, but for interactive applications it's useless, and we're facing ever-increasing latency and problems through trying too hard to cope with errors and delays (e.g. huge buffers that keep resending instead of just letting packets drop and having TCP sort it out by retransmission). TCP windows never shrink, because errors are buffered and retried so much by intermediate devices that any window scaling is worthless: the sender never *sees* any packet loss.

    Same devices, smaller buffers: everything works fine and is "faster" / "more responsive" all around. It would actually *save* money on new devices, because you don't need some huge artificial buffer; you can just drop the occasional packet. But the problem is so deeply embedded in run-of-the-mill hardware that it's almost impossible to escape at the moment, and thus EVERYONE, from large businesses to home users, is running a completely sub-optimal setup because of it. Almost every networking device made in the last few years has buffers so large that they cause problems with interactivity, bandwidth control, QoS, etc. It's NOT just that a "faster connection" solves the problem: the fraction of optimal service we get is steadily decreasing as buffers grow, even though the links themselves keep improving. That's the point. And it *is* caused by memory prices, because memory is so cheap that a huge thoughtless buffer costs no more than a tiny, thought-out one.

  • This is an excellent explanation of the issues at play here. I can clearly see that this is a problem, and one that over time will impact everybody.

    The problem is really about dealing with differences in bandwidth between computers... always an issue, but matching up slow connections with fast connections is particularly difficult. Since memory is cheap, a 1 GB buffer can certainly be found in some devices now, and perhaps much more. I don't see this example as being too far off the mark in the near future, which is the point being raised and why bufferbloat is such a big deal.

    More to the point, some of the complaints that triggered the "quality of service" debate are rooted in this problem. As mentioned in the original article that triggered this whole Slashdot thread, setting up "quality of service" priorities only creates multiple buffer queues; it doesn't get rid of the monster queue to begin with. That is why the author of the blog post suggests that the network neutrality debate isn't grounded in the real problem facing network engineering, and why it is a political solution in search of a problem.

    It takes awhile to "grok" this problem, but once you do it becomes obvious why this is such a huge deal.

  • by GooberToo ( 74388 ) on Friday January 07, 2011 @10:42AM (#34790626)

    Yeah, I see this a lot with nerds. It's pretty fucking annoying when someone launches in a long winded dissertation on some obscure subject, without even bothering to put an introductory paragraph at the top giving even the briefest overview of what the fuck they're even talking about.

    It's a series of blog articles. He presumes you've been following the series, in which he introduces the topic and experimentally validates his assertions. If you didn't get the introduction, blame your own laziness, or the failure of the poster to provide a link to the first blog post in the series.

    Basically you're complaining because you jumped to the middle of a book and then bitched that the chapter you started reading doesn't have an introduction. Most people will wonder what the hell is wrong with you. To then attack the author for others' failings is bizarre, to say the least. And all this ignores that blogs are frequently written as familiar, casual reading, which also entirely invalidates your general tone.

  • It will be a hack (Score:5, Insightful)

    by dachshund ( 300733 ) on Friday January 07, 2011 @11:02AM (#34790836)

    2. QoS used that way is a hack to work around an issue that doesn't have to be there in the first place
    3. How do you determine the maximum throughput? It's not necessarily the official line's speed. The nice thing about TCP is that it's supposed to figure out on its own how much bandwidth there is. You're proposing a regression to having to tell the system by hand.
    4. QoS is most effective on stuff you're sending, but in the current consumer-oriented internet most people download a lot more than they upload.

    While the Internet in theory is beautiful, our modern implementation really is a series of layered hacks. And the solution to Bufferbloat is going to be another hack. You're crazy if you think that the solution to the Bufferbloat 'problem' is going to be some fundamental redesign of the TCP protocol (how would you force everyone to use it?), or the total re-architecture of millions of consumer devices to remove buffering. You're also crazy if you think the ISPs and backbone providers are going to stand by while this thing kills the Internet.

    So the question is: which hack will it be? The GP poster already identified one that works well enough: using QoS to control flows. Your final objection, about content providers stressing connections, is the real one. But there's probably a good hack to deal with it too, or more likely a series of hacks: some at the content providers themselves (e.g., Netflix), some in the backbone, and some at your ISP. It won't be elegant, but it will keep this problem from ever becoming anything more than a few cranky blog posts.

  • by compro01 ( 777531 ) on Friday January 07, 2011 @11:44AM (#34791486)

    Yes, lack of buffers would be bad, as even a trivial delay would result in a packet getting dropped. Oversized buffers are also bad, as they simply delay the packet getting dropped, preventing congestion control from reacting in a timely manner. The buffers need to be sized appropriately relative to the link speed and typical latency.
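
    That sizing rule is the classic bandwidth-delay product. A back-of-the-envelope sketch, with purely illustrative link figures:

        # Rule of thumb: buffer ~= bandwidth-delay product (link rate x RTT).
        def bdp_bytes(link_bits_per_s, rtt_s):
            return link_bits_per_s / 8 * rtt_s

        # Hypothetical example: a 10 Mbit/s link with a 100 ms typical RTT.
        buf = bdp_bytes(10e6, 0.100)
        print("BDP: %.0f bytes, about %.0f full-size packets" % (buf, buf / 1500))
        # -> 125000 bytes, about 83 packets: a far cry from the multi-megabyte
        #    buffers shipping in consumer gear.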

  • by GooberToo ( 74388 ) on Friday January 07, 2011 @12:49PM (#34792494)

    It doesn't help that massive numbers of people actively insist on breaking protocols which specifically exist to alleviate some of these types of problems. [wikipedia.org]

    Far too many people ignorantly block all ICMP traffic. As a result, the network path between the two communicating hosts is forced to buffer more data as the destination host becomes saturated. Worse, this type of filtering tends to compound quickly, which in turn creates exactly the kind of bufferbloat he's describing.

    I wish people would understand there is a difference between "No route to host" and a black hole. When you find a black hole, chances are really good you've found a host. As such, purposely breaking protocols for an imagined increase in security only breaks the Internet as a whole when it becomes a widespread tactic. And before people start rattling off that it opens a whole new can of worms, please realize that, unlike in the past, stateful firewalls are extremely common today - so no.

  • by rickb928 ( 945187 ) on Friday January 07, 2011 @01:01PM (#34792732) Homepage Journal

    No.

    Packet losses would be handled by adjusting to the conditions.

    Look at the trace Gettys posted in the referenced article. Lots of dup packets. Get rid of those, and there's some bandwidth that can be *used*. And allowing TCP to adjust to prevailing conditions should result in less packet loss. It might seem to be less bandwidth also, but we may be in a vicious circle of increasing bandwidth to solve a problem that is NOT bandwidth. Packet loss by itself is a symptom, not a problem.

  • by myrdos2 ( 989497 ) on Friday January 07, 2011 @02:07PM (#34793822)

    Solutions which require the internet's infrastructure to be replaced (all the routers and switches and so forth) have been proposed for many years, and never go anywhere. The only one I'm aware of is IPv6, and look how slowly that beast has taken off. That said, TCP sawtooth isn't as bad as you make it out to be - in most cases. Whenever a packet is dropped, the TCP connection drops its speed to around half, then gradually ramps up to where it was previously. You don't get 100% of your bandwidth utilization, but you do get to automatically adjust to changing network conditions. And as the number of TCP connections over one pipe increases, you get closer and closer to max utilization rates.

    TCP fails when:
    -competing against UDP, which has no congestion control and will clog a line even if every UDP packet is dropped
    -there is interference in the line causing packet corruption, which TCP interprets as congestion
    -competing against Microsoft products, which have TCP stacks that are tweaked to grab more than their fair share of the bandwidth

    My understanding is that TCP congestion control generally isn't applied to backbones - I believe that ISPs throttle your traffic before sending it over an optic link so as not to overbook its capacity. You're probably just competing with your household, and possibly people on your block - can someone verify this?
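
    The sawtooth described above is easy to reproduce in a toy additive-increase/multiplicative-decrease loop; the capacity figure here is made up, just to show the halve-then-ramp behaviour:

        # AIMD: the congestion window grows by one segment per RTT and is
        # halved on loss, producing TCP's familiar sawtooth.
        capacity = 100          # path capacity in segments per RTT (illustrative)
        cwnd = 1.0
        utilisation = []

        for rtt in range(200):
            utilisation.append(min(cwnd, capacity) / capacity)
            if cwnd > capacity:  # queue overflows and a packet is dropped
                cwnd /= 2        # multiplicative decrease
            else:
                cwnd += 1        # additive increase, one segment per RTT

        print("mean utilisation: %.0f%%" % (100 * sum(utilisation) / len(utilisation)))

    A single flow hovers around 75-85% of capacity in this model; as noted above, more concurrent flows interleave their sawteeth and push aggregate utilisation closer to 100%.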
