
Controlling Bufferbloat With Queue Delay

CowboyRobot writes "We all can see that the Internet is getting slower. According to researchers, the cause is persistently full buffers, and the problem is only made worse by the increasing availability of cheap memory, which is then immediately filled with buffered data. The metaphor is grocery store checkout lines: a cramped system where one individual task can block many other tasks waiting in line. But you can avoid the worst problems by having someone actively managing the checkout queues, and this is the solution for bufferbloat as well: AQM (Active Queue Management). However, AQM (and the metaphor) breaks down in the modern age when queues are long and implementation is not quite so straightforward. Kathleen Nichols at Pollere and Van Jacobson at PARC have a new solution that they call CoDel (Controlled Delay), which has several features that distinguish it from other AQM systems. 'A modern AQM is just one piece of the solution to bufferbloat. Concatenated queues are common in packet communications with the bottleneck queue often invisible to users and many network engineers. A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.'"
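For readers who want a concrete feel for the "controlled delay" idea the article describes, here is a minimal Python sketch of the core mechanism: stamp each packet on enqueue, watch how long packets actually sit in the queue, and start dropping once that sojourn time has stayed above a small target for a whole interval. The constants and class name are illustrative assumptions, not the authors' reference code, and the real algorithm spaces its drops out with a control law rather than dropping continuously.

import time
from collections import deque

TARGET = 0.005     # 5 ms of standing queue delay is tolerated (illustrative value)
INTERVAL = 0.100   # delay must stay above TARGET this long before we react

class ControlledDelayQueue:
    """Toy sojourn-time AQM sketch; not the published CoDel reference code."""

    def __init__(self):
        self.q = deque()
        self.above_since = None   # when the delay first went above TARGET

    def enqueue(self, pkt):
        self.q.append((time.monotonic(), pkt))   # remember arrival time

    def dequeue(self):
        while self.q:
            arrived, pkt = self.q.popleft()
            sojourn = time.monotonic() - arrived  # how long it sat in the buffer
            if sojourn < TARGET or not self.q:
                self.above_since = None           # delay acceptable, or queue has drained
                return pkt
            if self.above_since is None:
                self.above_since = time.monotonic()
                return pkt                        # give senders one interval to react
            if time.monotonic() - self.above_since < INTERVAL:
                return pkt
            # Delay has been above TARGET for a full INTERVAL: drop this packet
            # to signal the senders, and look at the next one.
            continue
        return None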
  • s/slower/laggier/ (Score:5, Insightful)

    by diamondmagic ( 877411 ) on Tuesday May 08, 2012 @11:30PM (#39937855) Homepage

    The Internet is not getting slower. It is becoming laggier. Come on, people, learn the difference.

    • by gstrickler ( 920733 ) on Tuesday May 08, 2012 @11:47PM (#39937915)

      And smaller buffers will help. Larger buffers do almost nothing to increase throughput, but they can increase latency. Having buffers isn't a problem. Having buffers that are too large is a problem.

      • by DarkOx ( 621550 )

        Depends...

        Suppose you have a router that has link A connected at 10Mbps, link B at 10Mbps, and link C at 300Kbps. You have a host on the far end of A sending packets to something on the far end of C. The traffic is highly bursty. TCP does reliability end to end, so if the host on the end of C misses packets because the router discarded them, that is all traffic that has to run across link A again, which cuts down the available bandwidth for A to B. If the router had a large buffer the burst of traffic from A f

        • by phlinn ( 819946 )
          Buffers are so large that A never receives an ACK from the destination, and resends anyways.
        • by swalve ( 1980968 )
          I think the problem is that the buffers are getting in the way of the TCP retransmission process, and confusing it. Without (huge) buffers, the sender throttles down pretty much instantly. With huge buffers, the window slides back and forth, dropping packets every time the buffer fills up.

          I think one of the problems is that some UDP traffic prefers some buffering (media streaming), and that's the bulk of traffic these days. What we probably need to do is take a different look at QOS, or at least have c
      • I disagree. It's fine for buffers to be very big, practically "infinite" even. Buffer size does not have to be linked with latency at all. The reason it is currently is because most routers are doing it wrong ;).

        If you really want to address latency what routers should do is keep track of how long packets have been in the router (in clock ticks, milliseconds or even microseconds) and use that with QoS stuff (and maybe some heuristics) to figure out which packets to send first, or to drop.
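        A bare-bones illustration of that "age in the router" approach might look like the following Python sketch (hypothetical names and constants; a real router would do this per output port in the forwarding path, not in Python):

        import time
        from collections import deque

        MAX_RESIDENCE = 0.050   # illustrative cap: drop anything queued longer than 50 ms
        queue = deque()

        def enqueue(pkt):
            queue.append((time.monotonic(), pkt))   # stamp the packet with its arrival time

        def dequeue():
            while queue:
                arrived, pkt = queue.popleft()
                if time.monotonic() - arrived > MAX_RESIDENCE:
                    continue          # too old: dropping it beats forwarding stale lag
                return pkt            # still fresh: forward it
            return None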

        For example, "bulk/
        • You're trying to solve a problem created by using large buffers. Especially when you have routers with different bandwidth links, it's important to limit the buffer size. Size should be limited per port.

          Active buffer management and QoS can be useful, but those are primarily "solutions" to the problem of high latency caused by buffer bloat. Stop creating the buffer bloat in the first place by limiting the size of buffers.

          • by TheLink ( 130905 )

            Uh. The real problem is latency, not buffer size. Right? That's what people are complaining about - packets are taking longer than they should.

            You propose solving the latency problem by reducing buffer size.

            I propose solving the latency problem by reducing the maximum time packets are allowed to spend in a router (AKA latency).

            Which do you think actually deals with the real issue? If the problem is latency, why not deal with it rather than go off on loosely related tangents?

            BTW I finally read the article an

            • I propose solving the latency problem by not creating it with bloated buffers in the first place. Small buffers do help with throughput, and they do it with low latency. Large buffers add almost no additional throughput, but they do add latency. Solving a problem created by poor design just adds complexity.

              • It isn't that easy. Small buffers actually result in a high level of instability that can leave them completely empty as often as it leaves them completely full. If the buffer is too small, performance goes completely to hell on a permanent basis. It becomes highly chaotic.

                There are plenty of situations where you want a bigger buffer to handle a very short-term burst of information, but where that same big buffer becomes a liability when the information is continuously in excess of available bandwidth.

                Simila

                • So what do you propose to fix the problem?
                • by TheLink ( 130905 )

                  Is this not a fancy latency-based algorithm? http://queue.acm.org/detail.cfm?id=2209336 [acm.org]

                  What do you propose that solves the problem better than that algorithm? And what is "right" by your definition?

                  As for my proposal, it's more of an approach - since it seemed to me that a lot of people were not using the "time/age in router"[1] of a packet to help determine whether to discard it or not. To back that up, just look at all the RED papers, and all the talk about "bufferbloat" ( where bloat=size of buffers). N

                • by swalve ( 1980968 )
                  So isn't the solution to drop the packets sooner and let the endpoints deal with it immediately, rather than buffering and muddying up the congestion signal? I'm not well educated on my information theory, but I would bet there is a mathematical relationship between input speed, output speed and some time factor that will determine the optimum buffer size. Below that and you get chaos, above it you get bloat.
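                  For what it's worth, the classic rule of thumb for that "optimum" size is the bandwidth-delay product: buffer roughly one round-trip time's worth of data at the output link's rate (later research on core routers argues this can be divided by the square root of the number of flows). A back-of-the-envelope Python calculation with made-up numbers:

                  link_rate_bps = 10_000_000   # 10 Mb/s output link (example figure)
                  rtt_seconds = 0.100          # 100 ms typical round-trip time (example figure)

                  bdp_bytes = link_rate_bps * rtt_seconds / 8
                  print(f"rule-of-thumb buffer: {bdp_bytes:,.0f} bytes "
                        f"(~{bdp_bytes / 1500:.0f} full-size packets)")
                  # -> 125,000 bytes, roughly 83 packets; much bigger than this mostly adds delay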
      • Yeah, buffers we have a lot of buffers haha -- Godfather part II more or less

        • It was meant to be funny and no one had rated it except overrated. No sense of humor this MOD. Well I'm glad I don't make money doing comedy because....there are many people that have buffered me out.

    • Re:s/slower/laggier/ (Score:5, Informative)

      by Xtifr ( 1323 ) on Tuesday May 08, 2012 @11:51PM (#39937927) Homepage

      Yup, and another error in TFS is:

      According to researchers, the cause is persistently full buffers.

      should be "a cause".

      Lame, misleading summaries are par for the course around here, though. But look on the bright side--it helps keep us on our toes, sorting sense from nonsense, and helps us spot the real idiots in the crowd. :)

      At least this one had a link to a fairly reliable source. It wasn't just blog-spam to promote some idiot's misinterpretation of the facts. Might have been nice to also provide a link to bufferbloat.net [bufferbloat.net] or Wikipedia on bufferbloat [wikipedia.org] for background information, but what can you do?

      • by julesh ( 229690 )

        To be fair, this isn't exactly the first /. article discussing bufferbloat, so presumably both submitter and editor assumed we already knew what it was.

        • by Xtifr ( 1323 )

          That may--barely--excuse the lack of context-providing links. It doesn't excuse the misstatements and misinformation in the summary.

      • TFA is actually one of the first coherent explanations of bufferbloat I've seen. Bufferbloat.net tells me they can fix my chaotic and laggy network performance, alright, fine. But... how?

        • by Xtifr ( 1323 )

          TFA was great--TFS not so much. I agree that bufferbloat.net doesn't seem to be as useful as I thought on a first glance, but the Bufferbloat FAQ [wordpress.com] would have been a good resource.

    • but lag kills people
  • by rogueippacket ( 1977626 ) on Tuesday May 08, 2012 @11:43PM (#39937899)
    Today, there is no incentive for an ISP to consider spending money on this. For their private customers, they sell QoS, which guarantees their customers a better queuing method. Extremely profitable. For consumers, it makes sense to simply continue investing in infrastructure. Adding capacity from the street to the CO not only eliminates the issue, but also allows the ISP to provide better, more profitable services. In short, we will likely see better queuing methods integrated with future routers. This may be one of them, but only time will tell, and nobody will discard all of their equipment today to get it. The issue is just too minor while capacity remains cheap and QoS profitable.
    • by skids ( 119237 )

      In short, we will likely see better queuing methods integrated with future routers

      Not holding my breath, given the age and demonstrated effectiveness of SFQ variants and their non-presence in modern routing platforms.

      What TFA left me wondering was whether their algorithm will prove resilient to being combined with prioritization and connection-based/host-based/service-based fairness strategies and various ECN mechanisms.

    • by Ungrounded Lightning ( 62228 ) on Wednesday May 09, 2012 @02:10AM (#39938527) Journal

      Today, there is no incentive for an ISP to consider spending money on this. For their private customers, they sell QoS, which guarantees their customers a better queuing method. Extremely profitable. For consumers, it makes sense to simply continue investing in infrastructure.

      You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.

      The packets leaving a router, once it has figured out where they go, are stored in a buffer, waiting their turn on the appropriate output interface. While there are a lot of details about the selection of which packet leaves when, you can ignore it and still understand this particular issue: Just assume they all wait in a single first-in-first-out queue and leave in the order they were processed.

      If the buffer is full when a new packet is routed, there's nothing to do but drop it (or perhaps some other packet previously queued - but something has to go). If there are more packets to go than bandwidth to carry them, they can't all go.

      TCP (the main protocol carrying high-volume data such as file transfers) attempts to fully utilize the bandwidth of the most congested hop on its path and divide it evenly among all the flows passing through it. It does this by speeding up until packets drop, then slowing down and ramping up again - and doing it in a way that is systematic so all the TCP links end up with a fair share. (Packet drop was the only congestion signal available when TCP was defined.)

      So the result is that the traffic going out router interfaces tends to increase until packets occasionally drop. This keeps the pipes fully utilized. But if buffer overflow is the only way packets are dropped, it also keeps the buffers full.

      A full buffer means a long line, and a long delay between the time a packet is routed and the time it leaves the router. Adding more memory to the output buffer just INCREASES the delay. So it HURTS rather than helping.

      The current approach to fixing this is Van Jacobson's previous work: RED (Random Early Drop/Discard). In addition to dropping packets when the buffer gets full, a very occasional randomly-chosen packet is dropped when the queue is getting long. The queue depth is averaged - using a rule related to typical round-trip times - and the random dropping increases with the depth. The result is that the TCP sessions are signalled early enough that they back off in time to keep the queue short while still keeping the output pipe full. The random selection of packets to drop means TCP sessions are signalled in proportion to their bandwidth and all back off equally, preserving fairness. The individual flows don't have any more packets dropped on average - they just get signalled a little sooner. Running the buffers nearly empty rather than nearly full cuts round-trip time and leaves the bulk of the buffers available to forward - rather than drop - sudden bursts of traffic.
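      As a rough illustration of the mechanism just described (simplified; this is not any vendor's implementation, and classic RED also spaces its drops based on how many packets have arrived since the last one), the early-drop decision looks roughly like this in Python:

      import random

      MIN_TH = 5       # below this average depth, never drop early (illustrative)
      MAX_TH = 15      # at or above this average depth, always drop early (illustrative)
      MAX_P  = 0.02    # early-drop probability as the average approaches MAX_TH
      WEIGHT = 0.002   # EWMA weight: averages depth over many packet times

      avg_depth = 0.0

      def should_drop_early(current_depth):
          """Decide whether to drop an arriving packet before the buffer is full."""
          global avg_depth
          avg_depth = (1 - WEIGHT) * avg_depth + WEIGHT * current_depth
          if avg_depth < MIN_TH:
              return False              # queue is short on average: never drop early
          if avg_depth >= MAX_TH:
              return True               # queue is persistently long: drop
          # In between: drop with a probability that rises with the average depth.
          p = MAX_P * (avg_depth - MIN_TH) / (MAX_TH - MIN_TH)
          return random.random() < p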

      ISPs have a VERY LARGE incentive to do this. Nearly-full queues increase turnaround time of interactive sessions, creating the impression of slowness, and dropping bursty traffic creates the impression of flakiness. This is very visible to customers and doing it poorly leaves the ISP at a serious competitive disadvantage to a competitor that does it well.

      So ISPs require the manufacturers of their equipment to have this feature. Believe me, I know about this: Much of the last 1 1/2 years at my latest job involved implementing a hardware coprocessor to perform the Van Jacobson RED processing in a packet processor chip, to free the sea of RISC cores from doing this work in firmware and save their instructions for other work on the packets.

      • This is very visible to customers and doing it poorly leaves the ISP at a serious competitive disadvantage to a competitor that does it well.

        Assuming you are a fellow USian, what competition?

        • by jon3k ( 691256 )
          If you live in the average U.S. 3rd tier (or better) city: DSL vs Cable vs 4G
          • That would be the DSL provider with long queues and lots of lag, the cable provider with long queues and lots of lag, and the "4G" operator (more like 3.25G speed) with long queues, lots of lag, signal strength issues, and a 100MB monthly cap.

            If this theory of competition between the duopoly worked, cable and DSL would both have better customer service and lower rates...

      • by Twinbee ( 767046 )
        How does RED fare against AQM or CoDel?
      • by HighBit ( 689339 )

        You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.

        To be fair, it's about both.

        Large queues are a problem, but they can be mitigated by adding more capacity (bandwidth). It doesn't matter how deep the queue can be if it's never used -- it doesn't matter how many packets can be queued if there's enough bandwidth to push every packet out as soon as it's put in the queue.

        That said, your point about AQM being a valid solution to congestion is, of course, right on:

        To avoid large (tens of milliseconds or more) queue backlogs on congested links, you use Active Que

        • I agree with much of what you say. But:

          Large queues are a problem, but they can be mitigated by adding more capacity (bandwidth). It doesn't matter how deep the queue can be if it's never used -- it doesn't matter how many packets can be queued if there's enough bandwidth to push every packet out as soon as it's put in the queue.

          Unfortunately, TCP ramps up until there IS congestion. Raise the capacity of the congested link and it either becomes congested at the higher capacity or some other link becomes t

      • by TheLink ( 130905 )

        RED seems to be a primitive hack job to me.

        My proposal is this: http://tech.slashdot.org/comments.pl?sid=2837433&cid=39941029 [slashdot.org]

        Lastly, if RED doesn't take packet size into account when it drops, then it hurts lower-bandwidth channels with small packets disproportionately more than the ones with big packets, and most latency-sensitive applications use small packets. When communicating across, say, the Pacific Ocean, unnecessary packet loss can hurt a lot more (2 × 50 milliseconds?).

      • by Idbar ( 1034346 )
        While I don't like that these guys are pushing their algorithm without showing comparisons to other well-known AQM mechanisms out there, I'd have to agree with the grandparent that ISPs probably have no interest in fixing this. Rather, I'd assume that with large pipes in their infrastructure, queueing is likely not happening inside their network.

        This is mostly a bottleneck and user-experience issue. The ones with the best interest in implementing this are router manufacturers, to provide a better user e
      • More fiber and less memory, sure. The buffers are probably in place to separate different customers and their quality of service, since there is big money in that.

    • The algorithm can probably be retrofitted onto some/most equipment [via a firmware upgrade]. Since it's not a protocol change, it doesn't affect other routers in the path (i.e., there's no mandate that it be implemented all-or-nothing). The incentive is that after you charge your premium for QoS, you actually have to deliver it.
  • by Zondar ( 32904 ) on Tuesday May 08, 2012 @11:58PM (#39937955)

    Yep, same cause. They attempted to minimize packet loss by increasing the buffers in their network. The user experience was horrible.

    http://blogs.broughturner.com/2009/10/is-att-wireless-data-congestion-selfinflicted.html

    • A long time ago when the earth was greener someone promoted the concept of an internet with ZERO packet loss.

      My InterTubes are BETTER because I HAVE ZERO LOSS!!!

      Oddly enough such a business model turned out to be unsustainable due to
      (1) it's financially expensive (between one thing and another)
      (2) doing this the less expensive way (ie by slathering on bigger buffers) introduces excessive latency (for some customer designated value of "excessive")

      For the life of me I don't understand how ANYBODY can be a
      • Correction: There is no get-rich-quick scheme with a high probability of success. There are a few (like the lottery) which may get you rich quick, but with only a small probability.
  • And why... (Score:5, Interesting)

    by Anonymous Coward on Wednesday May 09, 2012 @12:01AM (#39937973)

    Is the internet getting slower? (laggier)

    because the simplest pages are HUGE BLOATED MONSTROSITIES!

    Between flash and ads. And every single page loading crap from all around the world as their 'ad partners', hit counters, click counters, +1 this, like this, digg this, and all the other stupid social media crap that has invaded the web. All this shit that serves no purpose other than to benefit some marketers. And EVERY SINGLE PAGE has to have a 'comment' section and other totally useless shit tacked on as well.

    Just this little page here on slashdot. With less than a dozen replies. Tops 80k so far. And that's with everything being blocked that can be.

    slower? laggier? no... the signal to noise ratio is sucking major ass.

    • And every single page loading crap from all around the world as their 'ad partners', hit counters, click counters, +1 this, like this, digg this, and all the other stupid social media crap that has invaded the web.

      Amen, bro. I hate that crap being sprinkled all over. Even without the +1 buttons there's too many pages framed with various sidebars and menus.

    • If you used to have a 56kbit modem, and now you have a 10Mbit connection, that's going up by a factor of 200. A classic html page was maybe 5kB (no images), so now it should be allowed to be 1MB large. If you had a few images then, that would account for a youtube video now.

      • The total size is not the only thing that matters. What matters is the fact that most pages make requests to as many as 10 domains and 50 URLs when they load. That means multiple DNS requests, multiple connections, etc. There are also a lot of pages that load stuff through javascript and/or css, which adds another stage or two of loading.
        • Re: (Score:2, Informative)

          by Anonymous Coward

          What matters is not (if we put privacy aside) that 10 domains are requested, but that there are 10 (mostly) different routes to 10 different servers. If a single one of these routes or servers is slow, the website will load slowly as well.

        • I agree with this, especially with everyone catching on to tabbed browsing that updates all of the tabs all of the time with no break. Just add more fat pipes. Simple answer.

    • What click counters, "+1 this" or "digg this" are you talking about? I guess you failed step 2 of installing a browser.

    • --the signal to noise ratio is sucking major ass.--

      Agreed and didn't Google remove Boolean operations from their search engine? It's also harder to filter the noise.

      Facebook now seems to have most of the noise but I would say YouTube uses the most bandwidth although it's a close call.

  • Maybe Slashdot should get one of these AQM things........
  • by TiggertheMad ( 556308 ) on Wednesday May 09, 2012 @12:18AM (#39938037) Journal
    We all can see that the Internet is getting slower.

    Can we? It looks like it has been getting faster to me....

    According to researchers, the cause is persistently full buffers,

    What researchers? What buffers? Server buffers? Router buffers? Local browser buffers? Your statements are so vague as to be useless.

    and the problem is only made worse by the increasing availability of cheap memory, which is then immediately filled with buffered data.

    Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim. This summary is so awful, I don't even want to read whatever article you linked to.
    • by LiENUS ( 207736 )
      Router buffers, and full router buffers are a bad thing; they're talking about buffering your packets, not buffering the contents of your disk. Here, I found you a good article to read about bufferbloat: http://queue.acm.org/detail.cfm?id=2209336 [acm.org]
    • by Xtifr ( 1323 ) on Wednesday May 09, 2012 @12:36AM (#39938111) Homepage

      It is definitely a terrible summary, but the ACM article it links to is actually quite interesting. (You do know what the ACM is, don't you?) And bufferbloat has nothing to do with discs, so your objection is completely off base. It certainly would have helped if the summary had given you any idea what bufferbloat is, of course, so I understand your confusion. But it's a real thing. The problem is that the design of TCP/IP includes built-in error correction and speed adjustments. Large buffers hide bottlenecks, making TCP/IP overcorrect wildly back and forth, resulting in bursty, laggy transmission.

      • by Anonymous Coward

        The checkout line analogy is somewhat flawed also. A better example may be a situation where one has to move soda from soda fountains to a thirsty restaurant crowd with picky drinking habits (and the beverages go stale quickly). One can use bigger or smaller pitchers to move the drinks, but any particular customer can only drink so much at a time. You may get lucky and have a customer take a couple drinks at once, but more likely the server will end up throwing away the almost full pitcher because the drink

    • by Imagix ( 695350 )

      Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance.

      You're thinking of caching, not buffering.

    • by 1s44c ( 552956 )

      Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim. This summary is so awful, I don't even want to read whatever article you linked to.

      I call bullshit on people calling bullshit on things without putting in the effort to even try and understand the story. It's about buffering in network devices causing excessive lag as TCP/IP wasn't built to handle large amounts of invisible store and forward buffering between endpoints. Huge caches don't always help performance, it depends on the nature of the thing being cached.

      'i call bullshit' is a stupid phrase too.

    • Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim

      What a coincidence.... try setting your cache to 10GB and surf for a few weeks, let it fill up. Then try turning off cache in your browser, and see how much faster things load.... if you're on a remotely broadband connection (more than about 1mbit), the difference will be enormous. With a broadband connection, it's faster to fetch the page than it is to search through your cache to see if you have a copy of the page locally and then load it.

      When it's so noticeable at an individual browser level, what makes

  • by Logic and Reason ( 952833 ) on Wednesday May 09, 2012 @12:23AM (#39938051)

    We all can see that the Internet is getting slower.

    Can we? I'd suggest that most people are unaware of any such trend, perhaps because it has happened too gradually and too unevenly. Indeed:

    A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.

    Exactly. Consumers don't know or care about low latency, so the market doesn't deliver it (that plus lack of competition among ISPs in general, but that's another kettle of fish).

    We need a simple, clear way for ISPs to measure latency. It needs to boil down to a single number that ISPs can report alongside bandwidth and that non-techies can easily understand. It doesn't need to be completely accurate, and can't be: ISPs will exaggerate just like they do with bandwidth, just like auto manufacturers do with fuel efficiency, etc. What matters is that ISPs can't outright make up numbers, so that a so-called "40 ms" connection will reliably have lower average latency than a "50 ms" connection. That should be enough for the market to start putting competitive pressure on ISPs.

    What kind of measure could be used for this purpose? Perhaps some kind of standardized latency test suite, like what the Acid tests were to web standards compliance? Certainly there would be significant additional difficulties, but could it be done?
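    One crude sketch of such a test in Python, purely to show the shape of the idea: the reference hosts, the sample counts, and the use of TCP connect time as a stand-in for path latency are all assumptions, and a serious test would also measure while the line is loaded, since that is when bufferbloat shows up.

    import socket
    import time

    REFERENCE_HOSTS = ["example.com", "example.net", "example.org"]   # placeholder targets

    def connect_rtt_ms(host, port=80, timeout=2.0):
        """Time one TCP handshake as a rough latency sample, in milliseconds."""
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.monotonic() - start) * 1000

    samples = []
    for _ in range(20):
        for host in REFERENCE_HOSTS:
            try:
                samples.append(connect_rtt_ms(host))
            except OSError:
                pass                     # unreachable target: skip this sample
        time.sleep(0.5)

    if samples:
        samples.sort()
        p95 = samples[int(0.95 * (len(samples) - 1))]
        print(f"95th-percentile latency: {p95:.0f} ms over {len(samples)} samples")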

    • by rioki ( 1328185 )

      Using a latency value for your connection is kind of nonsense, since it makes a big difference whether I connect to a server in my country or on the other side of the world. (That is why CDNs even exist.) Especially since your ISP can only influence their own end, and maybe the peering partners they operate with.

      But honestly, I think a basic rating model could be devised, similar to energy efficiency ratings. Something with grades from A-F, based on how much latency they introduce.

    • by Hatta ( 162192 )

      We need a simple, clear way for ISPs to measure latency.

      Ping?

    • by davecb ( 6526 )

      Reporting a simple average latency "across the Rogers network", in the case of my particular ISP, would be easy to understand.

      To use a car analogy, it's just like the signs before construction zones that say "Time to Warden Ave, 7 minutes, normally 4".

      --dave

  • by tlambert ( 566799 ) on Wednesday May 09, 2012 @12:33AM (#39938101)

    The boundary they are transiting is one between a fast network and a slower network, similar to what you see at a head-end at a LATA or broadband distribution point and leaf nodes like people's houses, or, on the other end, on the pipe into a NOC with multi-gigabit interconnects much bigger than the pipes into or out of the NOC.

    The obvious answer is the same as it was in 1997 when it was first implemented on the Whistle InterJet: lie about the available window size on the slow end of the link so as to keep the fast end of the link from becoming congested by having all its buffers filled up with competing traffic.

    In this way, even if you have tasks which would otherwise eat all of your bandwidth (at the time, it was mostly FTP and SMTP traffic), you can still set aside enough buffer space on the fast side of the router on the other end of the slow link to let ssh or HTTP traffic streams make it through. Beats the heck out of things like AltQ, which do absolutely nothing to prevent a system with a fast link that has data to send you from crap-flooding the upstream router so that it has no free buffers to receive any other traffic, and which it can't possibly hope to shove down the smaller pipe at the rate it's coming in the large one.

    Ideally this would be cooperatively managed, as was suggested at one point by Jeff Mogul (which is likely barred due to the lack of a trust relationship between the upstream and downstream routers, if nothing else). Think of it like half your router lives at each end of the link wire, instead of both sides living on one end.

    It's the job of the device on the border that happens to know there's a pipe size differential to control what it asks for from the upstream side in terms of the outstanding buffer space it's possible for inbound packets to consume (and to likewise lie about the upstream windows to the downstream higher speed LAN on the other end of the slow link).
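    A back-of-the-envelope version of that calculation, with illustrative numbers (a real border device would rewrite the window field of the TCP headers it forwards, and for full throughput you would not clamp below the slow link's rate times the path round-trip time):

    # How small a receive window should the border box advertise upstream so the
    # fast side can never keep more than ~20 ms of data queued for the slow link?
    slow_link_bps  = 300_000    # e.g. a 300 kb/s access link (illustrative)
    target_delay_s = 0.020      # standing queue we are willing to tolerate

    window_bytes = int(slow_link_bps * target_delay_s / 8)
    print(f"advertise a window of about {window_bytes} bytes")   # 750 bytes here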

    I'm pretty sure Julian Elischer tried to push the patches for lying about window size out to FreeBSD as part of Whistle contributing Netgraph to FreeBSD.

    While people are looking at that, they might also want to reconsider the TCP Rate Halving work at CMU, and the LRP implementation from Peter Druschel's group out of Rice University.

    -- Terry

    • by jg ( 16880 )

      It is *any* transition from fast to slow, including from your computer to your wireless link or, vice versa, from your home router to your computer.

      Bufferbloat is an equal opportunity destroyer of time.

    • The problem is you can't just 'fix' TCP. Even if you make it work with both IPv4 TCP and IPv6 TCP, you're still missing the other protocols out there (SCTP, UDP, etc.), and more importantly, you can NEVER fix protocols that are being encrypted (i.e. TCP inside VPN packets). The protocols that you aren't 'fixing' will just continue barging their way through.

      the only workable solution is to drop packets and assume the protocols themselves will adjust (which they will, if they are working correctly, and if they'

    • It's a part of the solution but not a complete solution. Boundary problems are different from center-of-network problems. Playing with TCP window sizes only works well at the edges and only in the outgoing direction (the 'inflight' sysctls that we've had forever), but does not completely solve the problem because you always need to add one or two additional packets above and beyond the calculated sweet spot to absorb changes in latency from other links and give the algorithm time to respond whenever reali

  • by Animats ( 122034 ) on Wednesday May 09, 2012 @12:47AM (#39938165) Homepage

    It's not clear from the paper whether packet dropping is per-flow, in some fair sense, or per link. There's a brief mention of fairness, but it isn't explored. It sounds like the new approach has no built-in fair queuing.

    Without fair queuing, whoever sends the most gets the most data through. Windows (especially) starts up TCP connections by sending as many packets as it can at connection opening. There used to be a convention in TCP called "slow start", where new connections started up sending only two packets, increasing if the round trip time turned out to be good. That was too pessimistic. But Windows now starts out by blasting out 25 or so packets at once. This hogs the available bandwidth through everything with FIFO queues.

    If the routers at choke points (where bandwidth out is much less than bandwidth in, like the entry to a DSL line) do fair queuing by flow, the problem gets dealt with there, as the excessive sending fights with itself, trailing packets on the biggest flows are sent last, and everything works out OK.
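    A minimal Python sketch of that per-flow idea (simplified round-robin; real schemes like SFQ or DRR hash flows into a fixed number of buckets and account for packet sizes, and the packet attributes here are hypothetical):

    from collections import OrderedDict, deque

    flows = OrderedDict()    # flow id -> queue of packets waiting for the output link

    def enqueue(pkt):
        # Hypothetical flow id: the usual (src, dst, ports, protocol) 5-tuple.
        fid = (pkt.src, pkt.dst, pkt.sport, pkt.dport, pkt.proto)
        flows.setdefault(fid, deque()).append(pkt)

    def dequeue():
        # Visit flows round-robin, so a flow that sends a burst only delays itself.
        for fid, q in list(flows.items()):
            flows.move_to_end(fid)        # rotate this flow to the back of the order
            if q:
                return q.popleft()
            del flows[fid]                # forget flows that have drained
        return None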

    "Bufferbloat" is only a problem when a small flow gets stuck behind a big one. A flow getting stuck behind the preceding packets of the same flow is fine; you want those packets delivered. Packet reordering is better than packet dropping, although more computationally expensive. Most CIsco routers offer it on slower links. Currently, this means links below 2Mb/s [cisco.com], which is very slow by modern standards. That's why we still have kludgy solutions like RED. This new thing is a better kludge, though.

    • If the routers at choke points (where bandwidth out is much less than bandwidth in, like the entry to a DSL line) do fair queuing by flow, the problem gets dealt with there, as the excessive sending fights with itself, trailing packets on the biggest flows are sent last, and everything works out OK.

      Yeah, that's a good idea, and also offers a solution for QoS sensitive services like VoIP.

      On the other hand, when deciding on the size of the queue, it should also be considered whether there are alternative routes. When no alternatives exist, a larger queue might be warranted.

      • by jg ( 16880 )

        AQM's don't usually look at the contents of what they drop/mark.

        We expect CoDel to be running on any bulk data queue; voip traffic, properly classified, would be in an independent queue, and not normally subject to policing by a CoDel.

        While 10 years ago a decent AQM like CoDel might have been able to get latencies down to where they should be for most applications, browsers' abuse of TCP, in concert with hardware features such as smart nics that send line rate bursts of packets from single TCP streams, has made

          • We expect CoDel to be running on any bulk data queue; voip traffic, properly classified, would be in an independent queue, and not normally subject to policing by a CoDel.

          I thought we're discussing algorithms that are net-neutrality compatible.

          in concert with hardware features such as smart nics that send line rate bursts of packets from single TCP streams

          I don't understand what you mean by single TCP stream and how the NIC does this multiplexing. I mean the router sees the different TCP port numbers. Anyway, fair queueing should be done not per TCP stream but per source host.

    • by jg ( 16880 )

      The article's subtitle is: "A modern AQM is just one piece of the solution to bufferbloat." We certainly expect to be doing fair queuing and classification in addition to AQM in the edge of the network (e.g. your laptop, home router and broadband gear). I don't expect fair queuing to be necessary in the "core" of the network.

      I'll also say that an adaptive AQM is an *essential* piece of the solution to bufferbloat, and a piece we've had no good solution to (until, we think, now).

      That's why this article rep

    • by Idbar ( 1034346 )
    Nothing is clear. Not the thought behind the proposed algorithm, nor the comparison against other well-known AQM algorithms.

      And they haven't published a journal paper with their thought process, just weak results. I'm not sure what all the fuss is about, when this problem has been studied (as Active Queue Management, not the newly coined "bufferbloat") for more than 15 years, with many significant results.

      What needs to be done is for manufacturers to allow implementation and support of AQM and ECN, n
      • by Animats ( 122034 )

        I tend to agree. Really, all you can do when forwarding packets is drop packets or reorder them. We need to see the complete details of the rules for doing that.

        (You can also ask the sender to slow down, which is what "explicit congestion notification" is about. It may not help all that much when the traffic is dominated by short-lived connections, which is what this paper claims to address.)

    • by jd ( 1658 )

      The "correct" strategy would seem to be to combine methods - have one of the fair queueing algorithms (eg: Hierarchical Fair Service Curve) and have a packet-dropping scheme on each queue. That's great for TCP, but UDP takes space too. Fortunately, there are algorithms designed for multimedia traffic (GREEN, BLACK, PURPLE and WHITE) and I'm guessing at least one of these can take care of the UDP side of things.

      I would not get too hung up on RED or variants (eg: GRED, WRED) - although it's the most common al

  • This seems like such an unstable system that it's practically a security issue. Could someone, in theory, purposely send bad traffic to as many internet relays (or whatever) as possible, causing them to stall and shutting down huge chunks of the internet?
    • by 1s44c ( 552956 )

      This seems like such an unstable system that it's practically a security issue. Could someone, in theory, purposely send bad traffic to as many internet relays (or whatever) as possible, causing them to stall and shutting down huge chunks of the internet?

      You want to DDoS core routers? You would need an insane amount of bandwidth. The only company that has enough malware on enough computers is Microsoft; they demonstrated this with SQL Slammer.

  • My internets fine (Score:4, Insightful)

    by rhade ( 709207 ) on Wednesday May 09, 2012 @01:32AM (#39938357)
    We all can see that the Internet is getting slower

    *Citation needed* Have you tried turning your modem off and on again?
  • by clickclickdrone ( 964164 ) on Wednesday May 09, 2012 @02:11AM (#39938533)
    My first thought after reading the story was 'Hope whoever patents those ideas doesn't charge too much for them.'
    • by Xtifr ( 1323 )

      Heh, I know exactly what you mean. The same thought definitely crossed my mind. Fortunately, if you read carefully, you'll see that they seem to be releasing their code as open source.

      "The open source project CeroWrt is using OpenWrt to explore solutions to bufferbloat. A CoDel implementation is in the works, after which real-world data can be studied. We plan to make our ns-2 simulation code available, as well as some further results."

      Not a guarantee, but it sounds promising.

  • If your ISP doesn't respect your TOS values then you're only ever going to get best effort service.

    Changing the technology of the queuing in the network won't help because your traffic is all going into the 'whatever is left' queue at the bottom of the priority stack (or next to the bottom of the priority stack if your ISP has implemented worse than best effort queuing to control p2p and worm traffic) below the provider's own traffic (ie voice, video services they sell) and any business traffic where the bu

    • I should expand this to say that even if your provider respects your TOS that the other ISPs in the path of traffic probably are not respecting it.

      I should also add that Joe Consumer is generally not trusted with regards to TOS and the provider(s) will protect themselves (and business customers) against accidental or deliberate mismarking of traffic (ie you setting your p2p traffic to TOS network control).

      • If users are allowed to set their own ToS/QoS, everything would just be set to the highest available priority. Even if the users didn't know how, there would be plenty of companies willing to sell a TOS-setting program as 'InterNet Accelerator 2012' - and plenty of people willing to buy it if someone said it worked, even if they don't know how.
  • > But you can avoid the worst problems by having someone actively managing the checkout
    > queues, and this is the solution for bufferbloat as well: AQM (Active Queue Management).

    Can someone please implement this system at Heathrow to reduce the queues there?

  • From TFA-
    "...the longer delays are not excessive and permit link utilizations to be maintained at high levels."
    So we're adding another layer of management and still have "not excessive" delays. Awesome, sign me up.

  • Has Van Jacobson's research on "bufferbloat" ever been replicated? Because I'm pretty sure "the cause is persistently full buffers" is only "according to researcher" singular, unlike the claim of the submitter.

    (The linked article, by Jacobson and a collaborator, cites two sources: one is Jacobson's original article, and the other is by Jacobson and the same collaborator.)

    I'm not saying he's wrong; I'm saying this isn't very scientific.

    • by swalve ( 1980968 )
      I don't think it needs to be scientifically replicated; it sort of defines itself. If you have fast lines but slow round-trip times, your packets are by definition getting stuck in buffers.
