Controlling Bufferbloat With Queue Delay

CowboyRobot writes "We all can see that the Internet is getting slower. According to researchers, the cause is persistently full buffers, and the problem is only made worse by the increasing availability of cheap memory, which is then immediately filled with buffered data. The metaphor is grocery store checkout lines: a cramped system where one individual task can block many other tasks waiting in line. But you can avoid the worst problems by having someone actively manage the checkout queues, and this is the solution for bufferbloat as well: AQM (Active Queue Management). However, AQM (and the metaphor) breaks down in the modern age, when queues are long and implementation is not so straightforward. Kathleen Nichols at Pollere and Van Jacobson at PARC have a new solution that they call CoDel (Controlled Delay), which has several features that distinguish it from other AQM systems. 'A modern AQM is just one piece of the solution to bufferbloat. Concatenated queues are common in packet communications with the bottleneck queue often invisible to users and many network engineers. A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.'"
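For readers who want a feel for what "controlled delay" means in practice, here is a minimal, simplified sketch of the dequeue-side logic the CoDel work describes: track how long each packet sat in the queue, and once that sojourn time has stayed above a small target for a full interval, start dropping, with drops spaced by an inverse-square-root control law. This is an illustrative toy, not Nichols and Jacobson's reference code; the constants follow their published defaults, and all names are made up.

    # Simplified, illustrative sketch of CoDel's dequeue logic (not the
    # authors' reference implementation). Constants are the published
    # defaults: 5 ms target delay, 100 ms interval.
    import time
    from collections import deque
    from math import sqrt

    TARGET = 0.005     # acceptable standing queue delay, seconds
    INTERVAL = 0.100   # delay must exceed TARGET this long before dropping

    class CoDelQueue:
        def __init__(self):
            self.q = deque()             # (enqueue_time, packet) pairs
            self.first_above_time = 0.0  # when sojourn time first exceeded TARGET
            self.dropping = False
            self.count = 0               # drops since entering dropping state
            self.drop_next = 0.0

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            while self.q:
                enq_time, packet = self.q.popleft()
                now = time.monotonic()
                sojourn = now - enq_time
                if sojourn < TARGET or not self.q:
                    # Delay is fine (or queue nearly empty): leave dropping state.
                    self.first_above_time = 0.0
                    self.dropping = False
                    return packet
                if self.first_above_time == 0.0:
                    # Delay just crossed the target; give it one interval to recover.
                    self.first_above_time = now + INTERVAL
                    return packet
                if now < self.first_above_time:
                    return packet
                # Delay has stayed above target for a full interval.
                if not self.dropping:
                    self.dropping = True
                    self.count = 1
                    self.drop_next = now + INTERVAL
                    return packet
                if now >= self.drop_next:
                    # Drop this packet and schedule the next drop sooner each
                    # time (INTERVAL / sqrt(count)), CoDel's control law.
                    self.count += 1
                    self.drop_next = now + INTERVAL / sqrt(self.count)
                    continue
                return packet
            return None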
  • by Zondar ( 32904 ) on Wednesday May 09, 2012 @12:58AM (#39937955)

    Yep, same cause. They attempted to minimize packet loss by increasing the buffers in their network. The user experience was horrible.

    http://blogs.broughturner.com/2009/10/is-att-wireless-data-congestion-selfinflicted.html

  • And why... (Score:5, Interesting)

    by Anonymous Coward on Wednesday May 09, 2012 @01:01AM (#39937973)

    Is the internet getting slower? (laggier)

    because the simplest pages are HUGE BLOATED MONSTROSITIES!

    Between flash and ads. And every single page loading crap from all around the world as their 'ad partners', hit counters, click counters, +1 this, like this, digg this, and all the other stupid social media crap that has invaded the web. All this shit that serves no purpose to anyone other than some marketers. And EVERY SINGLE PAGE has to have a 'comment' section and other totally useless shit tacked on as well.

    Just this little page here on slashdot. With less than a dozen replies. Tops 80k so far. And that's with everything being blocked that can be.

    slower? laggier? no... the signal to noise ratio is sucking major ass.

  • by Logic and Reason ( 952833 ) on Wednesday May 09, 2012 @01:23AM (#39938051)

    We all can see that the Internet is getting slower.

    Can we? I'd suggest that most people are unaware of any such trend, perhaps because it has happened too gradually and too unevenly. Indeed:

    A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.

    Exactly. Consumers don't know or care about low latency, so the market doesn't deliver it (that plus lack of competition among ISPs in general, but that's another kettle of fish).

    We need a simple, clear way for ISPs to measure latency. It needs to boil down to a single number that ISPs can report alongside bandwidth and that non-techies can easily understand. It doesn't need to be completely accurate, and can't be: ISPs will exaggerate just like they do with bandwidth, just like auto manufacturers do with fuel efficiency, etc. What matters is that ISPs can't outright make up numbers, so that a so-called "40 ms" connection will reliably have lower average latency than a "50 ms" connection. That should be enough for the market to start putting competitive pressure on ISPs.

    What kind of measure could be used for this purpose? Perhaps some kind of standardized latency test suite, like what the Acid tests were to web standards compliance? Certainly there would be significant additional difficulties, but could it be done?
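One possible shape for such a test, sketched very roughly below: saturate the link with a bulk transfer and report a single percentile of round-trip time measured during the load, which is essentially what "latency under load" tests do. The probe host, load URL, sample count, and the choice of the 95th percentile are all placeholder assumptions, not an existing standard.

    # Rough sketch of a single-number "latency under load" measurement.
    # PROBE_HOST, LOAD_URL, and the reported percentile are placeholders.
    import socket
    import threading
    import time
    import urllib.request

    PROBE_HOST = ("example.com", 443)            # assumed reference server
    LOAD_URL = "http://example.com/largefile"    # assumed bulk-download URL
    SAMPLES = 50

    def tcp_rtt(addr, timeout=2.0):
        """Approximate one round trip as the time to complete a TCP handshake."""
        start = time.monotonic()
        with socket.create_connection(addr, timeout=timeout):
            pass
        return (time.monotonic() - start) * 1000.0   # milliseconds

    def saturate_link(url, stop):
        """Keep the downlink busy so any oversized buffers actually fill."""
        while not stop.is_set():
            try:
                urllib.request.urlopen(url, timeout=5).read(1 << 20)
            except OSError:
                pass

    stop = threading.Event()
    threading.Thread(target=saturate_link, args=(LOAD_URL, stop), daemon=True).start()

    rtts = sorted(tcp_rtt(PROBE_HOST) for _ in range(SAMPLES))
    stop.set()

    # The headline number an ISP could publish: 95th-percentile loaded RTT.
    print("loaded latency (p95): %.1f ms" % rtts[int(0.95 * len(rtts))])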

  • by tlambert ( 566799 ) on Wednesday May 09, 2012 @01:33AM (#39938101)

    The boundary they are transiting is one between a fast network and a slower network, similar to what you see between a head-end at a LATA or broadband distribution point and leaf nodes like people's houses, or, on the other end, on the pipe into a NOC with multi-gigabit interconnects much bigger than the pipes into or out of the NOC.

    The obvious answer is the same as it was in 1997 when it was first implemented on the Whistle InterJet: lie about the available window size on the slow end of the link so as to keep the fast end of the link from becoming congested by having all its buffers filled up with competing traffic.

    In this way, even if you have tasks which would otherwise eat all of your bandwidth (at the time, it was mostly FTP and SMTP traffic), you can still set aside enough buffer space on the fast side of the router on the other end of the slow link to let ssh or HTTP traffic streams make it through. Beats the heck out of things like AltQ, which do absolutely nothing to prevent a system with a fast link that has data to send you from crap-flooding the upstream router, so that it has no free buffers to receive any other traffic, and which it can't possibly hope to shove down the smaller pipe at the rate it's coming in the large one.

    Ideally this would be cooperatively managed, as was suggested at one point by Jeff Mogul (though that is likely barred by the lack of a trust relationship between the upstream and downstream routers, if nothing else). Think of it as if half your router lives at each end of the link, instead of both halves living on one end.

    It's the job of the border device that knows there's a pipe-size differential to control what it asks for from the upstream side in terms of the outstanding buffer space inbound packets can consume (and likewise to lie about the upstream windows to the downstream, higher-speed LAN on the other end of the slow link). A back-of-the-envelope sketch of that window calculation follows this comment.

    I'm pretty sure Julian Elischer tried to push the patches for lying about window size out to FreeBSD as part of Whistle contributing Netgraph to FreeBSD.

    While people are looking at that, they might also want to reconsider the TCP Rate Halving work at CMU, and the LRP implementation from Peter Druschel's group out of Rice University.

    -- Terry
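The "lie about the window" trick described in the comment above comes down to bandwidth-delay arithmetic: never advertise more receive window than the slow link can drain within an acceptable delay budget. Below is a back-of-the-envelope sketch of that calculation with made-up link numbers; it is not the Whistle InterJet code, just an illustration of the idea behind it.

    # Illustrative only: size the advertised TCP window so the slow link
    # can drain it within a chosen delay budget. Numbers are made up.
    def clamped_window(slow_link_bps, target_delay_s, mss=1460):
        """Largest advertised window (bytes) that keeps the standing queue
        delay at or below target_delay_s on a slow_link_bps link."""
        window = int(slow_link_bps / 8 * target_delay_s)
        # Round down to whole segments, but never advertise less than one MSS.
        return max(mss, (window // mss) * mss)

    # Example: a 3 Mbit/s downlink with a 100 ms delay budget.
    print(clamped_window(3_000_000, 0.100))   # 36500 bytes (25 full-size segments)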
