Controlling Bufferbloat With Queue Delay
CowboyRobot writes "We all can see that the Internet is getting slower. According to researchers, the cause is persistently full buffers, and the problem is only made worse by the increasing availability of cheap memory, which is then immediately filled with buffered data. The metaphor is grocery store checkout lines: a cramped system where one individual task can block many other tasks waiting in line. But you can avoid the worst problems by having someone actively manage the checkout queues, and this is the solution for bufferbloat as well: AQM (Active Queue Management). However, AQM (and the metaphor) breaks down in the modern age, when queues are long and implementation is not quite so straightforward. Kathleen Nichols at Pollere and Van Jacobson at PARC have a new solution that they call CoDel (Controlled Delay), which has several features that distinguish it from other AQM systems. 'A modern AQM is just one piece of the solution to bufferbloat. Concatenated queues are common in packet communications with the bottleneck queue often invisible to users and many network engineers. A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.'"
s/slower/laggier/ (Score:5, Insightful)
The Internet is not getting slower. It is becoming laggier. Come on, people, learn the difference.
Re:s/slower/laggier/ (Score:5, Insightful)
And smaller buffers will help. Larger buffers do almost nothing to increase throughput, but they can increase latency. Having buffers isn't a problem. Having buffers that are too large is a problem.
Re: (Score:3)
Depends...
Suppose you have a router that has link A connected at 10Mb/s, link B at 10Mb/s, and link C at 300Kb/s. You have a host on the far end of A sending packets to something on the far end of C. The traffic is highly bursty. TCP does reliability end to end, so if the host on the end of C misses packets because the router discarded them, that is all traffic that has to run across link A again, which cuts down the available bandwidth for A to B. If the router had a large buffer the burst of traffic from A f
Re: (Score:2)
Re: (Score:2)
I think one of the problems is that some UDP traffic prefers some buffering (media streaming), and that's the bulk of traffic these days. What we probably need to do is take a different look at QOS, or at least have c
Buffer size is not the real problem (Score:2)
If you really want to address latency what routers should do is keep track of how long packets have been in the router (in clock ticks, milliseconds or even microseconds) and use that with QoS stuff (and maybe some heuristics) to figure out which packets to send first, or to drop.
For example, "bulk/
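Something like this minimal sketch of the age-in-router idea (Python; the traffic classes and age budgets are made-up illustrations, and a real router would do this per interface in the fast path):

import time
from collections import deque

# Illustrative per-class limits on how long a packet may sit in the router.
MAX_AGE = {"interactive": 0.005, "bulk": 0.200}   # seconds

queue = deque()   # each entry: (enqueue_time, traffic_class, packet)

def enqueue(packet, traffic_class):
    queue.append((time.monotonic(), traffic_class, packet))

def dequeue():
    """Return the next packet to send, discarding anything that has already
    waited longer than its class allows instead of delivering it late."""
    while queue:
        enq_time, traffic_class, packet = queue.popleft()
        if time.monotonic() - enq_time <= MAX_AGE[traffic_class]:
            return packet
        # else: too old, drop it and look at the next packet
    return None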
Re: (Score:2)
You're trying to solve a problem created by using large buffers. Especially when you have routers with different bandwidth links, it's important to limit the buffer size. Size should be limited per port.
Active buffer management and QoS can be useful, but those are primarily "solutions" to the problem of high latency caused by buffer bloat. Stop creating the buffer bloat in the first place by limiting the size of buffers.
Re: (Score:2)
Uh. The real problem is latency, not buffer size. Right? That's what people are complaining about - packets are taking longer than they should.
You propose solving the latency problem by reducing buffer size.
I propose solving the latency problem by reducing the maximum time packets are allowed to spend in a router (AKA latency).
Which do you think actually deals with the real issue? If the problem is latency, why not deal with it rather than go off on loosely related tangents?
BTW I finally read the article an
Re: (Score:2)
I propose solving the latency problem by not creating it with bloated buffers in the first place. Small buffers do help with throughput, and they do it with low latency. Large buffers add almost no additional throughput, but they do add latency. Solving a problem created by poor design just adds complexity.
Re: (Score:2)
It isn't that easy. Small buffers actually result in a high level of instability that can leave them completely empty as often as it leaves them completely full. If the buffer is too small, performance goes completely to hell on a permanent basis. It becomes highly chaotic.
There are plenty of situations where you want a bigger buffer to handle a very short-term burst of information, but where that same big buffer becomes a liability when the information is continuously in excess of available bandwidth.
Simila
Re: (Score:1)
Re: (Score:2)
Is this not a fancy latency-based algorithm? http://queue.acm.org/detail.cfm?id=2209336 [acm.org]
What do you propose that solves the problem better than that algorithm? And what is "right" by your definition?
As for my proposal, it's more of an approach - since it seemed to me that a lot of people were not using the "time/age in router"[1] of a packet to help determine whether to discard it or not. To back that up, just look at all the RED papers, and all the talk about "bufferbloat" (where bloat = size of buffers). N
Re: (Score:2)
Re: (Score:1)
Yeah, buffers we have a lot of buffers haha -- Godfather part II more or less
Re: (Score:2)
It was meant to be funny, and no one had moderated it except as "overrated." No sense of humor, this mod. Well, I'm glad I don't make money doing comedy, because... there are many people that have buffered me out.
Re:s/slower/laggier/ (Score:5, Informative)
Yup, and another error in TFS is:
According to researchers, the cause is persistently full buffers.
should be "a cause".
Lame, misleading summaries are par for the course around here, though. But look on the bright side--it helps keep us on our toes, sorting sense from nonsense, and helps us spot the real idiots in the crowd. :)
At least this one had a link to a fairly reliable source. It wasn't just blog-spam to promote some idiot's misinterpretation of the facts. Might have been nice to also provide a link to bufferbloat.net [bufferbloat.net] or Wikipedia on bufferbloat [wikipedia.org], as well, for background information, but what can you do?
Re: (Score:2)
To be fair, this isn't exactly the first /. article discussing bufferbloat, so presumably both submitter and editor assumed we already knew what it was.
Re: (Score:2)
That may--barely--excuse the lack of context-providing links. It doesn't excuse the misstatements and misinformation in the summary.
Re: (Score:2)
TFA is actually one of the first coherent explanations of bufferbloat I've seen. Bufferbloat.net tells me they can fix my chaotic and laggy network performance, alright, fine. But... how?
Re: (Score:2)
TFA was great--TFS not so much. I agree that bufferbloat.net doesn't seem to be as useful as I thought on a first glance, but the Bufferbloat FAQ [wordpress.com] would have been a good resource.
Re: (Score:2)
Re: (Score:1)
Well, when I think of 'slow' I think of Mb/sec. In this respect, no, the Internet has not gotten slower; in fact, it has gotten 'faster'. However, when I think of 'laggy' I think of 'time it takes to load a webpage'. And since webpages and pretty much all files have been getting larger ...
Re: (Score:2)
Re: (Score:1)
To finish off this rant, I want to shake my tiny fist of rage at Gawker Media sites, and all similar sites, which seem to require javascript to even read the text in a primarily text article. Plain text dependent upon javascript! Oh, the humanity! Since when is it okay to bastardize the DOM and sane page design concepts to the point that you can hide all of the body text from a user that doesn't want to go around allowing every rogue script some asshole implements to run on his/her box(en)?
Re: (Score:2)
if we weren't so pedantic, the buffers wouldn't have the need to store anything.
insightful modifier is insightful.
Re: (Score:2)
Why people think "performance" means "throughput" is something I'll never understand. Throughput is _always_ secondary to latency, and really only becomes interesting when it becomes a latency number (ie "I need higher throughput in order to process these jobs in 4 hours instead of 8" - notice how the real issue was again about _latency_).
-- Linus Torvalds
Where's the incentive? (Score:3)
Re: (Score:2)
In short, we will likely see better queuing methods integrated with future routers
Not holding my breath, given the age and demonstrated effectiveness of SFQ variants and their non-presence in modern routing platforms.
What TFA left me wondering was whether their algorithm will prove resilient to being combined with prioritization and connection-based/host-based/service-based fairness strategies and various ECN mechanisms.
Re:Where's the incentive? (Score:5, Informative)
Today, there is no incentive for an ISP to consider spending money on this. For their private customers, they sell QoS, which guarantees their customers a better queuing method. Extremely profitable. For consumers, it makes sense to simply continue investing in infrastructure.
You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.
The packets leaving a router, once it has figured out where they go, are stored in a buffer, waiting their turn on the appropriate output interface. While there are a lot of details about the selection of which packet leaves when, you can ignore it and still understand this particular issue: Just assume they all wait in a single first-in-first-out queue and leave in the order they were processed.
If the buffer is full when a new packet is routed, there's nothing to do but drop it (or perhaps some other packet previously queued - but something has to go). If there are more packets to go than bandwidth to carry them, they can't all go.
TCP (the main protocol carrying high-volume data such as file transfers) attempts to fully utilize the bandwidth of the most congested hop on its path and divide it evenly among all the flows passing through it. It does this by speeding up until packets drop, then slowing down and ramping up again - and doing it in a way that is systematic so all the TCP links end up with a fair share. (Packet drop was the only congestion signal available when TCP was defined.)
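In toy form, the sawtooth looks like this (Python; real TCP adds slow start, timeouts, SACK and so on - this is only the speed-up-until-drop, back-off, ramp-again behavior described above):

def on_ack(cwnd):
    # additive increase: allow one more segment in flight each round trip
    return cwnd + 1

def on_drop(cwnd):
    # multiplicative decrease: a drop is the congestion signal, so halve
    return max(1, cwnd // 2)

cwnd = 1
for rtt, lost in enumerate([False] * 10 + [True] + [False] * 5):
    cwnd = on_drop(cwnd) if lost else on_ack(cwnd)
    print(f"RTT {rtt}: cwnd = {cwnd} segments")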
So the result is that the traffic going out router interfaces tends to increase until packets occasionally drop. This keeps the pipes fully utilized. But if buffer overflow is the only way packets are dropped, it also keeps the buffers full.
A full buffer means a long line, and a long delay between the time a packet is routed and the time it leaves the router. Adding more memory to the output buffer just INCREASES the delay. So it HURTS rather than helping.
The current approach to fixing this is Van Jacobson's previous work: RED (Random Early Drop/Discard). In addition to dropping packets when the buffer gets full, a very occasional randomly-chosen packet is dropped when the queue is getting long. The queue depth is averaged - using a rule related to typical round-trip times - and the random dropping increases with the depth. The result is that the TCP sessions are signalled early enough that they back off in time to keep the queue short while still keeping the output pipe full. The random selection of packets to drop means TCP sessions are signalled in proportion to their bandwidth and all back off equally, preserving fairness. The individual flows don't have any more packets dropped on average - they just get signalled a little sooner. Running the buffers nearly empty rather than nearly full cuts round-trip time and leaves the bulk of the buffers available to forward - rather than drop - sudden bursts of traffic.
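Roughly, the drop decision looks like this sketch (the thresholds, weight and probability below are illustrative; real RED implementations add refinements such as counting packets since the last drop):

import random

MIN_TH, MAX_TH = 5, 15     # thresholds on the *averaged* queue depth, in packets
MAX_P = 0.02               # drop probability as the average approaches MAX_TH
WEIGHT = 0.002             # EWMA weight for averaging the instantaneous depth

avg_depth = 0.0

def red_should_drop(current_depth):
    """Early-drop decision: average the queue depth and drop an arriving
    packet with a probability that grows as the average moves from MIN_TH
    toward MAX_TH."""
    global avg_depth
    avg_depth = (1 - WEIGHT) * avg_depth + WEIGHT * current_depth
    if avg_depth < MIN_TH:
        return False                     # queue is short: never drop early
    if avg_depth >= MAX_TH:
        return True                      # queue is long: drop every arrival
    p = MAX_P * (avg_depth - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p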
ISPs have a VERY LARGE incentive to do this. Nearly-full queues increase turnaround time of interactive sessions, creating the impression of slowness, and dropping bursty traffic creates the impression of flakeyness. This is very visible to customers and doing it poorly leaves the ISP at a serious competitive disadvantage to a competitor that does it well.
So ISPs require the manufacturers of their equipment to have this feature. Believe me, I know about this: Much of the last 1 1/2 years at my latest job involved implementing a hardware coprocessor to perform the Van Jacobson RED processing in a packet processor chip, to free the sea of RISC cores from doing this work in firmware and save their instructions for other work on the packets.
Huh? (Score:2)
Assuming you are a fellow USian, what competition?
Re: (Score:2)
Re: (Score:2)
That would be the DSL provider with long queues and lots of lag, the cable provider with long queues and lots of lag, and the "4G" operator (more like 3.25G speed) with long queues, lots of lag, signal strength issues, and a 100MB monthly cap.
If this theory of competition between the duopoly worked, cable and DSL would both have better customer service and lower rates...
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.
To be fair, it's about both.
Large queues are a problem, but they can be mitigated by adding more capacity (bandwidth). It doesn't matter how deep the queue can be if it's never used -- it doesn't matter how many packets can be queued if there's enough bandwidth to push every packet out as soon as it's put in the queue.
That said, your point about AQM being a valid solution to congestion is, of course, right on:
To avoid large (tens of milliseconds or more) queue backlogs on congested links, you use Active Que
Re: (Score:2)
I agree with much of what you say. But:
Large queues are a problem, but they can be mitigated by adding more capacity (bandwidth). It doesn't matter how deep the queue can be if it's never used -- it doesn't matter how many packets can be queued if there's enough bandwidth to push every packet out as soon as it's put in the queue.
Unfortunately, TCP ramps up until there IS congestion. Raise the capacity of the congested link and it either becomes congested at the higher capacity or some other link becomes t
Re: (Score:2)
RED seems to be a primitive hack job to me.
My proposal is this: http://tech.slashdot.org/comments.pl?sid=2837433&cid=39941029 [slashdot.org]
Lastly, if RED doesn't take packet size into account when it drops, then it hurts lower-bandwidth channels with small packets disproportionately more than the ones with big packets, and most latency-sensitive applications use small packets. When communicating across, say, the Pacific Ocean, unnecessary packet loss can hurt a lot more (2 * 50 milliseconds?).
Re: (Score:2)
This is largely a bottleneck and user-experience issue. The ones with the most interest in implementing this are router manufacturers, to provide a better user e
Re: (Score:2)
More fiber and less memory, sure. The buffers are probably in place to separate different customers and their quality of service, since there is big money in that.
Re: (Score:2)
Re: (Score:2)
Remember AT&T and their 9 second 3G ping times (Score:5, Interesting)
Yep, same cause. They attempted to minimize packet loss by increasing the buffers in their network. The user experience was horrible.
http://blogs.broughturner.com/2009/10/is-att-wireless-data-congestion-selfinflicted.html
Re: (Score:3)
My InterTubes are BETTER because I HAVE ZERO LOSS!!!
Oddly enough, such a business model turned out to be unsustainable because
(1) it's financially expensive (between one thing and another)
(2) doing this the less expensive way (ie by slathering on bigger buffers) introduces excessive latency (for some customer-designated value of "excessive")
For the life of me I don't understand how ANYBODY can be a
Re: (Score:3)
And why... (Score:5, Interesting)
Is the internet getting slower? (laggier)
because the simplest pages are HUGE BLOATED MONSTROSITIES!
Between flash and ads. And every single page loading crap from all around the world as their 'ad partners', hit counters, click counters, +1 this, like this, digg this, and all the other stupid social media crap that has invaded the web. All this shit that serves no purpose other than to benefit some marketers. And EVERY SINGLE PAGE has to have a 'comment' section and other totally useless shit tacked on as well.
Just this little page here on slashdot. With less than a dozen replies. Tops 80k so far. And that's with everything being blocked that can be.
slower? laggier? no... the signal to noise ratio is sucking major ass.
Re: (Score:2)
And every single page loading crap from all around the world as their 'ad partners', hit counters, click counters, +1 this, like this, digg this, and all the other stupid social media crap that has invaded the web.
Amen, bro. I hate that crap being sprinkled all over. Even without the +1 buttons there's too many pages framed with various sidebars and menus.
Re: (Score:2)
If you used to have a 56kbit modem, and now you have a 10Mbit connection, that's going up by a factor of 200. A classic HTML page was maybe 5kB (no images), so now it should be allowed to be 1MB large. If you had a few images then, that would account for a YouTube video now.
Re: (Score:3)
Re: (Score:2, Informative)
What matters is not (if we put privacy aside) that 10 domains are requested, but that there are 10 (mostly) different routes to 10 different servers. If a single one of these routes or servers is slow, the website will load slowly as well.
Re: (Score:2)
I agree with this, especially with everyone catching on to tabbed browsing that updates all of the tabs all of the time with no break. Just add more fat pipes. Simple answer.
Re: (Score:2)
What click counters, "+1 this" or "digg this" are you talking about? I guess you failed step 2 of installing a browser.
Re: (Score:2)
--the signal to noise ratio is sucking major ass.--
Agreed, and didn't Google remove Boolean operators from their search engine? That makes it harder to filter the noise, too.
Facebook now seems to have most of the noise, but I would say YouTube uses the most bandwidth, although it's a close call.
Maybe (Score:2)
Re:Maybe (Score:4, Funny)
They also have ATF (Assisted Troll Flagging) as a kind of belt-n-suspenders thing.
Summary so awful, it just hurts. (Score:4, Insightful)
Can we? It looks like it has been getting faster to me....
According to researchers, the cause is persistently full buffers,
What researchers? What buffers? Server buffers? Router buffers? Local browser buffers? Your statements are so vague as to be useless.
and the problem is only made worse by the increasing availability of cheap memory, which is then immediately filled with buffered data.
Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim. This summary is so awful, I don't even want to read whatever article you linked to.
Re: (Score:1)
Re:Summary so awful, it just hurts. (Score:5, Informative)
It is definitely a terrible summary, but the ACM article it links to is actually quite interesting. (You do know what the ACM is, don't you?) And bufferbloat has nothing to do with discs, so your objection is completely off base. It certainly would have helped if the summary had given you any idea what bufferbloat is, of course, so I understand your confusion. But it's a real thing. The problem is that the design of TCP/IP includes built-in error correction and speed adjustments. Large buffers hide bottlenecks, making TCP/IP overcorrect wildly back and forth, resulting in bursty, laggy transmission.
Re: (Score:1)
The checkout line analogy is somewhat flawed also. A better example may be a situation where one has to move soda from soda fountains to a thirsty restaurant crowd with picky drinking habits (and the beverages go stale quickly). One can use bigger or smaller pitchers to move the drinks, but any particular customer can only drink so much at a time. You may get lucky and have a customer take a couple drinks at once, but more likely the server will end up throwing away the almost full pitcher because the drink
Re: (Score:3)
Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance.
You're thinking of caching, not buffering.
Re: (Score:2)
Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim. This summary is so awful, I don't even want to read whatever article you linked to.
I call bullshit on people calling bullshit on things without putting in the effort to even try and understand the story. It's about buffering in network devices causing excessive lag as TCP/IP wasn't built to handle large amounts of invisible store and forward buffering between endpoints. Huge caches don't always help performance, it depends on the nature of the thing being cached.
'i call bullshit' is a stupid phrase too.
Re: (Score:2)
Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim
What a coincidence.... try setting your cache to 10GB and surf for a few weeks, let it fill up. Then try turning off cache in your browser, and see how much faster things load.... if you're on a remotely broadband connection (more than about 1mbit), the difference will be enormous. With a broadband connection, it's faster to fetch the page than it is to search through your cache to see if you have a copy of the page locally and then load it.
When it's so noticeable at an individual browser level, what makes
Numbers & market incentives (Score:5, Interesting)
We all can see that the Internet is getting slower.
Can we? I'd suggest that most people are unaware of any such trend, perhaps because it has happened too gradually and too unevenly. Indeed:
A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.
Exactly. Consumers don't know or care about low latency, so the market doesn't deliver it (that plus lack of competition among ISPs in general, but that's another kettle of fish).
We need a simple, clear way for ISPs to measure latency. It needs to boil down to a single number that ISPs can report alongside bandwidth and that non-techies can easily understand. It doesn't need to be completely accurate, and can't be: ISPs will exaggerate just like they do with bandwidth, just like auto manufacturers do with fuel efficiency, etc. What matters is that ISPs can't outright make up numbers, so that a so-called "40 ms" connection will reliably have lower average latency than a "50 ms" connection. That should be enough for the market to start putting competitive pressure on ISPs.
What kind of measure could be used for this purpose? Perhaps some kind of standardized latency test suite, like what the Acid tests were to web standards compliance? Certainly there would be significant additional difficulties, but could it be done?
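As one possible ingredient of such a suite, even something as simple as timing small TCP handshakes and reporting a percentile would be hard to game outright (Python sketch; the host name is a placeholder, and a real test would also saturate the link while probing so that queueing delay shows up in the number):

import socket, statistics, time

HOST, PORT = "latency-test.example.net", 80   # hypothetical measurement server
SAMPLES = 50

def probe_once():
    start = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass                    # completing the handshake costs roughly one round trip
    return (time.monotonic() - start) * 1000    # milliseconds

rtts = sorted(probe_once() for _ in range(SAMPLES))
print(f"median {statistics.median(rtts):.0f} ms, "
      f"90th percentile {rtts[int(0.9 * (SAMPLES - 1))]:.0f} ms")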
Re: (Score:2)
Using a latency value for your connection is kind of nonsense, since it makes a big difference whether I connect to a server in my country or on the other side of the world. (That is why CDNs even exist.) Especially since your ISP can only influence their end and maybe the peering partners they operate with.
But honestly, I think a basic rating model could be devised, similar to energy-efficiency ratings. Something with grades from A to F, based on how much latency they introduce.
Re: (Score:2)
We need a simple, clear way for ISPs to measure latency.
Ping?
Re: (Score:2)
Reporting a simple average latency "across the Rogers network," in the case of my particular ISP, would be easy to understand.
To use a car analogy, it's just like the signs before construction zones that say "Time to Warden Ave, 7 minutes, normally 4".
--dave
Re: (Score:2)
Their problem setup is a speed boundary transition (Score:4, Interesting)
The boundary they are transiting is one between a fast network and a slower network, similar to what you see at a head-end at a LATA or broadband distribution point and leaf nodes like people's houses, or, on the other end, on the pipe into a NOC with multi-gigabit interconnects much bigger than the pipes into or out of the NOC.
The obvious answer is the same as it was in 1997 when it was first implemented on the Whistle InterJet: lie about the available window size on the slow end of the link so as to keep the fast end of the link from becoming congested by having all its buffers filled up with competing traffic.
In this way, even if you have tasks which would otherwise eat all of your bandwidth (at the time, it was mostly FTP and SMTP traffic), you can still set aside enough buffer space on the fast side of the router on the other end of the slow link to let ssh or HTTP traffic streams make it through. Beats the heck out of things like AltQ, which do absolutely nothing to prevent a system with a fast link that has data to send from crap-flooding the upstream router so that it has no free buffers to receive any other traffic, and which it can't possibly hope to shove down the smaller pipe at the rate it's coming in the large one.
Ideally this would be cooperatively managed, as was suggested at one point by Jeff Mogul (which is likely barred due to the lack of a trust relationship between the upstream and downstream routers, if nothing else). Think of it as if half your router lives at each end of the link wire, instead of both halves living on one end.
It's the job of the device on the border that happens to know there's a pipe-size differential to control what it asks for from the upstream side in terms of the outstanding buffer space it's possible for inbound packets to consume (and to likewise lie about the upstream windows to the downstream higher-speed LAN on the other end of the slow link).
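The arithmetic behind the clamp is just the bandwidth-delay product of the slow link plus a little headroom (Python sketch with illustrative numbers):

def clamped_window(slow_link_bps, rtt_seconds, headroom=1.25):
    """Never advertise more window than the bottleneck link can carry in one
    round trip, so nothing piles up in the fast-side buffers."""
    bdp_bytes = (slow_link_bps / 8) * rtt_seconds
    return int(bdp_bytes * headroom)

# e.g. a 300 kb/s link with a 60 ms round trip:
print(clamped_window(300_000, 0.060))   # ~2800 bytes, about two full-size packets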
I'm pretty sure Julian Elischer tried to push the patches for lying about window size out to FreeBSD as part of Whistle contributing Netgraph to FreeBSD.
While people are looking at that, they might also want to reconsider the TCP Rate Halving work at CMU, and the LRP implementation from Peter Druschel's group out of Rice University.
-- Terry
Re: (Score:2)
It is *any* transition from fast to slow, including from your computer to your wireless link, or, in the other direction, from your home router to your computer.
Bufferbloat is an equal opportunity destroyer of time.
Re: (Score:1)
the problem is you can't just 'fix' TCP. even if you make it work with both IPv4 TCP and IPv6 TCP, you're still missing the other protocols out there (SCTP, UDP, etc, etc), and more importantly, can NEVER fix protocols that are being encrypted (ie TCP inside VPN packets). the protocols that you aren't 'fixing' will just continue barging their way through.
the only workable solution is to drop packets and assume the protocols themselves will adjust (which they will, if they are working correctly, and if they'
Re: (Score:3)
It's a part of the solution but not a complete solution. Boundary problems are different from center-of-network problems. Playing with TCP window sizes only works well at the edges and only in the outgoing direction (the 'inflight' sysctls that we've had forever), but does not completely solve the problem because you always need to add one or two additional packets above and beyond the calculated sweet spot to absorb changes in latency from other links and give the algorithm time to respond whenever reali
Paper is ambiguous about what gets dropped (Score:4, Insightful)
It's not clear from the paper whether packet dropping is per-flow, in some fair sense, or per link. There's a brief mention of fairness, but it isn't explored. It sounds like the new approach has no built-in fair queuing.
Without fair queuing, whoever sends the most gets the most data through. Windows (especially) starts up TCP connections by sending as many packets as it can at connection opening. There used to be a convention in TCP called "slow start", where new connections started up sending only two packets, increasing if the round trip time turned out to be good. That was too pessimistic. But Windows now starts out by blasting out 25 or so packets at once. This hogs the available bandwidth through everything with FIFO queues.
If the routers at choke points (where bandwidth out is much less than bandwidth in, like the entry to a DSL line) do fair queuing by flow, the problem gets dealt with there, as the excessive sending fights with itself, trailing packets on the biggest flows are sent last, and everything works out OK.
"Bufferbloat" is only a problem when a small flow gets stuck behind a big one. A flow getting stuck behind the preceding packets of the same flow is fine; you want those packets delivered. Packet reordering is better than packet dropping, although more computationally expensive. Most CIsco routers offer it on slower links. Currently, this means links below 2Mb/s [cisco.com], which is very slow by modern standards. That's why we still have kludgy solutions like RED. This new thing is a better kludge, though.
Re: (Score:2)
If the routers at choke points (where bandwidth out is much less than bandwidth in, like the entry to a DSL line) do fair queuing by flow, the problem gets dealt with there, as the excessive sending fights with itself, trailing packets on the biggest flows are sent last, and everything works out OK.
Yeah, that's a good idea, and also offers a solution for QoS sensitive services like VoIP.
On the other hand, when deciding on the size of the queue, it should also be considered whether there are alternative routes. When no alternative exists, a larger queue might be warranted.
Re: (Score:2)
AQMs don't usually look at the contents of what they drop/mark.
We expect CoDel to be running on any bulk data queue; VoIP traffic, properly classified, would be in an independent queue, and not normally subject to policing by a CoDel.
While 10 years ago a decent AQM like CoDel might have been able to get latencies down to where they should be for most applications, browsers' abuse of TCP, in concert with hardware features such as smart NICs that send line-rate bursts of packets from single TCP streams, has made
Re: (Score:2)
We expect CoDel to be running on any bulk data queue; VoIP traffic, properly classified, would be in an independent queue, and not normally subject to policing by a CoDel.
I thought we're discussing algorithms that are net-neutrality compatible.
in concert with hardware features such as smart nics that send line rate bursts of packets from single TCP streams
I don't understand what you mean by single TCP stream and how the NIC does this multiplexing. I mean the router sees the different TCP port numbers. Anyway, fair queueing should be done not per TCP stream but per source host.
Re: (Score:3)
The article's subtitle is: "A modern AQM is just one piece of the solution to bufferbloat." We certainly expect to be doing fair queuing and classification in addition to AQM in the edge of the network (e.g. your laptop, home router and broadband gear). I don't expect fair queuing to be necessary in the "core" of the network.
I'll also say that an adaptive AQM is an *essential* piece of the solution to bufferbloat, and a piece we've had no good solution to (until, we think, now).
That's why this article rep
Re: (Score:2)
And they haven't published a journal paper with their thought process, just weak results. I'm not sure what all the fuss is about, when this problem has been studied (as Active Queue Management, not the newly coined "Bufferbloat") for more than 15 years, with many significant results.
What needs to be done is for manufacturers to allow implementation and support of AQM and ECN, n
Re: (Score:2)
I tend to agree. Really, all you can do when forwarding packets is drop packets or reorder them. We need to see the complete details of the rules for doing that.
(You can also ask the sender to slow down, which is what "explicit congestion notification" is about. It may not help all that much when the traffic is dominated by short-lived connections, which is what this paper claims to address.)
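For what it's worth, the gist of the rules as the article describes them is small enough to sketch (an illustration in Python, not the authors' reference code; the roughly 5 ms target and 100 ms interval are the defaults the article discusses):

import time
from math import sqrt

TARGET   = 0.005    # acceptable standing queue delay (~5 ms)
INTERVAL = 0.100    # the delay must persist this long (~100 ms) before acting

dropping = False          # are we currently in the dropping state?
drop_count = 0            # drops since entering the dropping state
first_above = 0.0         # deadline by which the delay must clear, else we drop
next_drop = 0.0           # earliest time of the next drop while dropping

def on_dequeue(enqueue_time):
    """Decide, at dequeue time, whether to drop this packet based purely on
    how long it sat in the queue (its sojourn time)."""
    global dropping, drop_count, first_above, next_drop
    now = time.monotonic()
    sojourn = now - enqueue_time

    if sojourn < TARGET:
        dropping = False          # delay is fine: reset and forward the packet
        first_above = 0.0
        return False

    if not dropping:
        if first_above == 0.0:
            first_above = now + INTERVAL      # start the persistence clock
        elif now >= first_above:
            dropping = True                   # delay persisted a full interval
            drop_count = 1
            next_drop = now + INTERVAL / sqrt(drop_count)
            return True
        return False

    if now >= next_drop:                      # still too slow: drop again,
        drop_count += 1                       # a little sooner each time
        next_drop = now + INTERVAL / sqrt(drop_count)
        return True
    return False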
Re: (Score:2)
The "correct" strategy would seem to be to combine methods - have one of the fair queueing algorithms (eg: Hierarchical Fair Service Curve) and have a packet-dropping scheme on each queue. That's great for TCP, but UDP takes space too. Fortunately, there are algorithms designed for multimedia traffic (GREEN, BLACK, PURPLE and WHITE) and I'm guessing at least one of these can take care of the UDP side of things.
I would not get too hung up on RED or variants (eg: GRED, WRED) - although it's the most common al
attack much? (Score:1)
Re: (Score:2)
This seems like such an unstable system that it's practically a security issue. Could someone, in theory, purposely send bad traffic to as many internet relays (or whatever) as possible, causing them to stall and shutting down huge chunks of the internet?
You want to DDoS core routers? You would need an insane amount of bandwidth. The only company that has enough malware on enough computers is Microsoft; they demonstrated this with SQL Slammer.
My internets fine (Score:4, Insightful)
You know as a species you're doing it wrong when (Score:4, Funny)
Re: (Score:2)
Heh, I know exactly what you mean. The same thought definitely crossed my mind. Fortunately, if you read carefully, you'll see that they seem to be releasing their code as open source.
"The open source project CeroWrt is using OpenWrt to explore solutions to bufferbloat. A CoDel implementation is in the works, after which real-world data can be studied. We plan to make our ns-2 simulation code available, as well as some further results."
Not a guarantee, but it sounds promising.
New Q Tech Useless if TOS Ignored (Score:2)
If your ISP doesn't respect your TOS values then you're only ever going to get best effort service.
Changing the technology of the queuing in the network won't help because your traffic is all going into the 'whatever is left' queue at the bottom of the priority stack (or next to the bottom of the priority stack if your ISP has implemented worse than best effort queuing to control p2p and worm traffic) below the provider's own traffic (ie voice, video services they sell) and any business traffic where the bu
Re: (Score:2)
I should expand this to say that even if your provider respects your TOS that the other ISPs in the path of traffic probably are not respecting it.
I should also add that Joe Consumer is generally not trusted with regards to TOS and the provider(s) will protect themselves (and business customers) against accidental or deliberate mismarking of traffic (ie you setting your p2p traffic to TOS network control).
Re: (Score:2)
Re: (Score:2)
I know...but thanks :-)
Alternative uses (Score:1)
> But you can avoid the worst problems by having someone actively managing the checkout
> queues, and this is the solution for bufferbloat as well: AQM (Active Queue Management).
Can someone please implement this system at Heathrow to reduce the queues there?
Covered by "Security Now" pretty well (Score:2)
Jim Getty's posted great info on his blog (Score:2)
Wonderful, just a different type of delay (Score:2)
From TFA-
"...the longer delays are not excessive and permit link utilizations to be maintained at high levels."
So we're adding another layer of management and still have "not excessive" delays. Awesome, sign me up.
bufferbloat science (Score:2)
(The linked article, by Jacobson and a collaborator, cites two sources: one is Jacobson's original article, and the other is by Jacobson and the same collaborator.)
I'm not saying he's wrong; I'm saying this isn't very scientific.
Re: (Score:2)
"Zip Line" (Score:1)
The first time I saw this was back about the mid '60s, when a bank got the idea to equalize wait time this way. It was called "Zip Line".
Re: (Score:2)
Re: (Score:2)
Not really... Tim Horton's has been doing it for years... they just have the queue line up parallel to the banks of cash registers, and can loop it back on itself. I think they can actually have more people in queue than you would with straight lines like the grocery checkout.
Additionally, when one person does take longer to process (say they're paying for their $15 order with pennies), they don't hold up the people in line behind them, because the queue just routes around them.
Re: (Score:3)
Mathematically optimal too, providing all customers/packets take equal time to process. The only problem is that in the real world it requires awkward physical queue layouts.
Yeah, it is irritating when you arrive at the airport at five o'god in the morning to catch a flight leaving at nine, and as the only customer, you have to walk a labyrinth a mile long to get to the way too cheerful check-in assistant who is patiently waiting.
Net result: Added latency.
Then you hit the security line, which is already full, presumably by people who spontaneously spawned from the walls between check-in and security. And in the security line, this one line approach does not appear to help spe
Re: (Score:2)
Tip: Even if you assume that everybody is physically able to do so, in today's US security theatre, that's likely to bring the goons running, or at the very least earn you the dreaded "s" scribbled on your boarding pass.
Re: (Score:2)
Re: (Score:2)
I would understand a "redundant" or perhaps a "troll".
But 'offtopic?'
Can I suggest that the moderators actually READ Slashdot a little more often?
http://tech.slashdot.org/story/11/12/03/0218257/bufferbloat-dark-buffers-in-the-internet [slashdot.org]