Comcast Reduced 'Working Latency' By 90% with AQM. Is This the Future? (apnic.net) 119

Long-time Slashdot reader mtaht writes: Comcast fully deployed bufferbloat fixes across their entire network over the past year, demonstrating 90% improvements in working latency and jitter — which is described in this article by Comcast's Vice President of Technology Policy & Standards. (The article's Cumulative Distribution Function chart is to die for...) But: did anybody notice? Did any other ISPs adopt AQM tech? How many of y'all out there are running smart queue management (sch_cake in linux) nowadays?
But wait — it gets even more interesting...

The Comcast official anticipates even less latency with the newest Wi-Fi 6E standard. (And for home users, the article links to a page recommending "a router whose manufacturer understands the principles of bufferbloat, and has updated the firmware to use one of the Smart Queue Management algorithms such as cake, fq_codel, PIE.")

But then the Comcast VP looks to the future, and where all of this is leading: Currently under discussion at the IETF in the Transport Area Working Group is a proposal for Low Latency, Low Loss Scalable Throughput. This potential approach to achieve very low latency may result in working latencies of roughly one millisecond (though perhaps 1-5 milliseconds initially). As the IETF sorts out the best technical path forward through experimentation and consensus-building (including debate of alternatives), in a few years we may see the beginning of a shift to sub-5 millisecond working latency. This seems likely to not only improve the quality of experience of existing applications but also create a network foundation on which entirely new classes of applications will be built.

While we can certainly think of usable augmented and virtual reality (AR and VR), these are applications we know about today. But what happens when the time to access resources on the Internet is the same as, or close to, the time to access local compute or storage resources? What if the core assumption that developers make about networks — that there is an unpredictable and variable delay — goes away? This is a central assumption embedded into the design of more or less all existing applications. So, if that assumption changes, then we can potentially rethink the design of many applications, and all sorts of new applications will become possible. That is a big deal, and it is exciting to think about the possibilities!

In a few years, when most people have 1 Gbps, 10 Gbps, or eventually 100 Gbps connections in their home, it is perhaps easy to imagine that connection speed is not the only key factor in your performance. We're perhaps entering an era where consistently low working latency will become the next big thing that differentiates various Internet access services and application services/platforms. Beyond that, factors like exceptionally high uptime, proactive/adaptive security, dynamic privacy protection, and other new things will likely also play a role. But keep an eye on working latency — there are a lot of exciting things happening!

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Ungrounded Lightning ( 62228 ) on Saturday December 04, 2021 @09:43PM (#62048237) Journal

    But what happens when the time to access resources on the Internet is the same, or close to the time to access local compute or storage resources? What if the core assumption that developers make about networks — that there is an unpredictable and variable delay — goes away?

    The same thing that always happened when access to distant resources became acceptably fast. Back in the very early days of the internet and mailnet there was this little rhyme about how, as locality became less critical, people tended to select hosts without regard to locality:

    "A host is a host, from coast to coast
      And nobody talks to a host that's close,
      Unless the host that isn't close
      Is busy, hung, or dead!"

    • by AmiMoJo ( 196126 )

      These days most hosts are close thanks to CDNs. The user doesn't select a host, it's done automatically for them.

      • CDNs only improve downloads of cached content. Which is fine for streaming and general web browsing or downloading. But it doesn't do shit for two way communications so things like online gaming and video/audio conferencing aren't helped at all.

        You're never going to see sub millisecond latency as an end user for those types of applications. What Comcast is doing is finding ways to reduce additional latency that gets introduced at network "hops" as a result of buffer bloat.

        • For games that is an issue, but for human real-time communications, the gold standard is sub-100ms. That is the threshold before people start talking over each other. You are going to have trouble playing in a band over Zoom at that latency, but video or audio dialog will be fine.
          • For games, VR, and interactive computer use, the situation is indeed grim. Even 5-10ms (equivalent to a single video frame of latency at 200 to 100fps) creates enough perceptible lag to feel like the scene is 'sloshing'.

            Somewhere around 2-3ms (~400-500fps, with one frame of latency), it becomes fast enough to not LOOK like blatant slosh... but then you fall off the cliff into the uncanny valley, because your BRAIN still knows something isn't quite right.

            The problem is with peripheral vision. If you're focus

  • Comcast goes out of its way to never spend any money in regions where they have an effective monopoly. Their infrastructure has been crumbling, while their fees have been skyrocketing.

    Oh, and AT&T ("DSL") isn't much better. We get saturation advertising for "Gigabit Fiber", but 95% of the SF Bay can't get anything faster than 2 or 3 Mbps.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Can you prove that with data, or are you just making stuff up because you have an axe to grind?

      I work for Comcast, and I can tell you that you are indeed quite wrong. Comcast operates one of the world's most advanced cable networks; that doesn't happen by ignoring it. Remember when everyone started working from home in 2020? The network held up pretty well, didn't it? Yes.

      Thank you, have a nice day.

      • Extremely well done. I’m a very satisfied customer. The only time I had a network outage was when you stopped supporting my modem model. Swapped it out and it’s been great since. In fact you doubled my bandwidth with no additional cost increase.

        Your automated customer service needs work though. The simple task of registering the new modem took several hours since the system doesn’t understand that task.

      • So advanced they have to set caps on it because they can't actually provide what they sell.
      • I live in Silicon Valley (south San Jose) and Comcast/Xfinity here is down at least once a month, usually for 4-5 hours. I don't know why it goes down- and the notifications are very skimpy on details, but it is often enough to really piss me off. Oh, and while I get pretty decent downstream speeds, the upstream speeds are capped at 6M, and that really blows for the $200+ I spend.

        Of course, my only real option is AT&T U-verse that is really DSL here, and like 4M down and 1-ish meg up. If any real compe
        • My Comcast is 60 download and 12 upload and it’s always like that 24/7. I never have outages. However what I do find is the sites I’m trying to visit like slashdot are always going offline.
        • I've always tried to get DSL users especially, to slam a SQM - smart queue management - enabled device in front of their link. It makes even the worst DSL link a lot more tolerable.
      • Can you prove that with data, or are you just making stuff up because you have an axe to grind?

        I work for Comcast, and I can tell you that you are indeed quite wrong. Comcast operates one of the world's most advanced cable networks; that doesn't happen by ignoring it. Remember when everyone started working from home in 2020? The network held up pretty well, didn't it? Yes.

        s/cable network/pots/ and reread what you just said because this is effectively current reality. Comcast is in denial no different than local carriers who thought ADSL would be good enough and therefore didn't have to invest in new last mile technology.

        Where I live some tiny mid sized ISP few have heard of went thru town getting everyone hooked up with fiber. You better believe I switched immediately. I am paying half what I was before (normal non-intro pricing) for 4x more down, 60x more up than Comcas

    • I have 384 kbps right now in Seattle. This story is just pure PR. We need to finally increase speeds since they're only about an order of magnitude faster than dialup from twenty+ years ago.

    • by mtaht ( 603670 )
      The slower your internet, the *more* you benefit from AQM and SQM technologies, especially on DSL. While I'm a huge advocate of using openwrt to fix it, off the shelf the evenroute v3 is pretty darn good for DSL in particular (as it uses sch_cake with the correct DSL compensations). If you can't upgrade your speeds, at least apply SQM to what you have; it will make your experience much more reliable and pleasant. For some more details, see: https://openwrt.org/docs/guide... [openwrt.org]
  • No buffers, no buffer-bloat. It is cheap-ass ISPs that have this problem.

    • "I don't need this, I already have that". Some people don't have the choice of your ISP.
      • Given the choice between bufferbloat and having your bandwidth managed down to avoid it, the latter may be preferable, but the fact that it's necessary is still an indictment of the cheap-ass ISP, not something to brag about.
    • I think if you've got a seriously overprovisioned network, it doesn't matter what algorithm you use to manage buffers since they're always empty.

      • by gweihir ( 88907 )

        I think if you've got a seriously overprovisioned network, it doesn't matter what algorithm you use to manage buffers since they're always empty.

        Well, yes. But it is well-known that TCP/IP _needs_ a seriously overprovisioned network to work well. This is not a new insight in any way. Cheap-ass ISPs just try to squeeze more money out of their customers by skimping on bandwidth. With predictable results.

    • by rundgong ( 1575963 ) on Sunday December 05, 2021 @07:29AM (#62048935)

      No buffers just mean that you can't handle even the smallest burst over your connection rate without dropping packets. That really sounds like a cheap-ass ISP to me.

      Buffers are not there to fix your low connection speed. They are there to handle bursts, and bursts in traffic can happen at any connection speed.

      As long as you have 3 identical ports, it is always possible that two of them are sending to the third at the same time. No amount of over provisioning can fix that problem.

      Buffers are great, and easy, when you have small bursts because then the buffers empty themselves naturally. Buffer bloat arises when you have sustained traffic at max rate because then the buffers won't get a natural chance to empty, even if your traffic is no longer above your connection speed.

      AQM is supposed to let you handle big bursts but at the same time make sure the buffers do not stay full for long periods of time.
      If it works well, it is much better than "no buffers".
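The contrast between "no buffers / tail-drop" and AQM can be seen in a toy discrete-time model: under sustained overload, a deep tail-drop FIFO settles at a huge standing delay, while a CoDel-style rule that drops once queueing delay exceeds a small target keeps the queue shallow. All constants here are made up for illustration; this is a sketch of the mechanism, not a model of any real DOCSIS deployment.

```python
# Toy model, 1 tick = 1 ms. The link drains LINK_RATE packets per tick,
# but ARRIVALS packets per tick are offered (sustained 120% overload).
LINK_RATE = 10      # packets drained per tick
ARRIVALS = 12       # packets offered per tick
BUFFER = 1000       # deep buffer (packets): bufferbloat territory
TARGET_DELAY = 5    # CoDel-like delay target, in ticks

def simulate(aqm, ticks=2000):
    queue = 0
    for _ in range(ticks):
        for _ in range(ARRIVALS):
            # current queue length / drain rate approximates sojourn time
            if aqm and queue / LINK_RATE > TARGET_DELAY:
                continue            # early drop: signal the sender now
            if queue < BUFFER:
                queue += 1          # tail-drop only when completely full
        queue = max(0, queue - LINK_RATE)
    return queue / LINK_RATE        # steady-state queueing delay (ticks)

print("tail-drop FIFO delay:", simulate(aqm=False), "ms")  # ~99 ms standing queue
print("AQM-managed delay:  ", simulate(aqm=True), "ms")    # hovers near the target
```

The point is not that AQM creates bandwidth; both variants drop the same excess load. It only changes *when* the drop happens, which is what keeps the standing queue (and hence latency) near the target instead of near the buffer size.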

      • If you regularly have the problem that your big buffers fill up, you need to provision more bandwidth. That's congestion, not a "burst". Bursts clear up quickly enough to not be a problem, and AQM does nothing to help there. It can't, because the packets are already there and the egress bandwidth is full. What this "AQM" stuff effectively does is it selectively slows non-interactive traffic down by dropping some packets. It doesn't magically make everything low latency without affecting anything else. It's
        • Of course it is no magic solution, but there is no realistic way to provision in such a way that the receiver capacity in one node is greater than the total sender capacity of all the nodes it is connected to. This is of course depending on your specific network scenario, and some problems are solved with just adding more capacity, but in general it is not possible to solve all your network problems with just "more bandwidth". When downloading files it is almost always so that you have more send capacity th

          • no realistic way to provision in such a way that the receiver capacity in one node is greater than the total sender capacity of all the nodes it is connected to

            Nobody's asking for a 1:1 contention ratio. The only requirement is to provision enough capacity to not have regular congestion. The point is that AQM doesn't "fix" congestion. It just throttles bandwidth selectively to hide it. The customers get less than the paid-for bandwidth, but low latency is nice, I guess. The situation you claim to fix is not a burst but congestion: The senders can't cause a problem by filling up big buffers that aren't there. They slow down to the capacity of the receiver, which is

            • The situation you claim to fix is not a burst but congestion

              Correct. The problem is that you want big buffers to handle the bursts. If you just throw out the buffers you lose the benefit that they provide.

              The goal of AQM is to restore this property by selectively throttling traffic so that some traffic doesn't see congestion at all while other traffic is slowed down below the advertised bandwidth

              If it gets slowed down significantly below advertised bandwidth, then it has been poorly implemented.

              The point is that AQM doesn't "fix" congestion.

              Correct, it hides the problems caused by congestion. Your "solution" tries to fix the congestion problem by making the burst problem worse.

              For senders to slow down, their packets need to be dropped, not buffered

              Not true. https://en.wikipedia.org/wiki/... [wikipedia.org]
              With ECN you can slow down the sender before you need to drop packets.

              the reason why TCP doesn't start full blast but takes some time to reach the full capacity of the connection

              And everyone a

              • The internet ate everything else because it didn't put the "intelligence" into the network, it put it at the edge. All these traffic management attempts are the telcos trying to take over the internet from the computer network people. It's not going to work. Ultimately bandwidth is cheaper and better than the hardware you need to ration it if you don't build enough. Bigger buffers are only needed if your network is congested. They don't do anything for bursts, because bursts are short, like the capacity of
                • In terms of unpacking what you have to say as sanely as I can, let me first point you at the paper on the codel AQM which is an easier read than the papers on pie: https://queue.acm.org/detail.c... [acm.org] - and the most relevant ietf standard (RFC 7567, https://datatracker.ietf.org/d... [ietf.org]) which basically specifies that some form of AQM be available for every fast->slow transition.

                  You *are* correct in that a right-sized buffer (say, between 20 and 60ms) operates well at a fixed speed offered. Ethernet (except w
                  • Unfortunately, the underlying bandwidth available in *shared mediums* like wifi, lte, cable, many gpon services, starlink, even dsl, fluctuates, often quite widely, and the only way to right-size the buffer to the rate is via an AQM technology.

                    Yes, some of the effects of the congestion you describe can be mitigated by throttling. It's just more marketing friendly if you describe it your way*. If you have a congested access network and don't do anything about it, it's worse than AQM. If you solve the congestion, you don't need AQM.

                    *) I'll give you the benefit of the doubt and assume that you understand that the point of AQM is to slow down transmissions to a lower speed than the customer paid for in order to prevent a buffer build up.

                    • by mtaht ( 603670 )
                      I'm one of the authors of codel (rfc8289), fq-codel (rfc8290), cake, and pie, as well as a contributor to various congestion control algorithms in the linux kernel, and the director of the bufferbloat project.
                      Not to be too pointed but when you say this " I'll give you the benefit of the doubt and assume that you understand that the point of AQM is to slow down transmissions to a lower speed than the customer paid for in order to prevent a buffer build up."
                      I am mostly inclined to try and point you at the
                    • Comcast applying AQM to their bottleneck will do nothing to help your Wifi. I am not saying that traffic management has no applications, just that the applications are at the edge. If the ISP network is a bottleneck and you're not even using the bandwidth you paid for, then the ISP needs to invest in more bandwidth, not throttle your traffic to keep latency low. It is important to understand that this AQM business is just "throttle the bandwidth hogs" in a new coat, except nowadays the bandwidth hogs are wa
                    • by mtaht ( 603670 )
                      It would help, I suppose, if you a/b'd the differences in performance. The AQM in this deployment study makes uploads not interfere so much with downloads, benefiting all traffic equally, including netflix. There's no "throttling", the amount of uplink bandwidth available to the user is the same, it's just at 1/10th the induced latency. There's a bit more space for acks and bursts and videoconferencing as a result, also.
                      All networks have a bottleneck. The natural behavior of TCP is to saturate that, and
                    • There's no "throttling"

                      Of course there is. Dropping packets is a signal indicating to the sender to slow down. If an ISP drops packets before I saturate the bandwidth I pay for, I'm not getting what I pay for. An ISP can take advantage of the fact that not everybody uses the bandwidth to the fullest at the same time, which is why contention ratios exist, but ultimately the ISP has to provide enough bandwidth to satisfy the bandwidth demand. They took the money and must provide the agreed-upon service. Trying to improve the latenc

      • by gweihir ( 88907 )

        You are correct. If I go above the 1Gbps symmetrical I have, packets get lost. You cannot squeeze more data through a fiber than it can carry.

        Seriously, I have done TCP/IP networking for 35 years now, including some large-scale traffic analysis stuff. Don't presume to lecture me. The article is about buffers in the network, not end-system side buffers. Apparently you missed that little detail. Apparently you also have no clue what the long-standing "buffer bloat" issue really is about. Buffers have no place

        • What I try to do when faced with opinions like yours, is to get them to challenge their assumptions, and go measure, with tools the measure the right things, like tcp RTT, under load, through any of a variety of technologies. Our goto tool is called flent (flent.org), and the rrul test in particular, is a nasty stress test of just about any network, corporate, wifi, your home, your ISP, and using it (in conjunction with wireshark packet captures) to illustrate where the dark buffers exist, is the best way t
  • by backslashdot ( 95548 ) on Saturday December 04, 2021 @10:11PM (#62048289)

    Is this the thing that dude a while back was complaining Starlink didn't do but needed to?

  • When your service suffers from regular intermittent outages, your customers don't give a crap about latency approaching zero.
    • Iffy.

      I have either very short duration outages or pretty horrific latency. It's hard to say which is which.

      Speed is reasonable, if overpriced.

      But low latency means VoIP becomes actually usable instead of some curiosity, as well as a whole host of communication applications I can imagine.

      Various types of networking become more efficient... there is a lot of promise here.

    • Except Comcast is extremely reliable. The only time I had an outage was when they stopped support for my outdated modem. After I upgraded my modem everything has been fine. I get faster than advertised speed 24/7. In fact, at zero cost increase, they doubled my bandwidth in the last year. Very happy customer. 5 out of 5 stars.
      • I have Spectrum, so I cannot speak to Comcast, but I am not actually aware of any outages I've had in the past 1+ years. And as I use it for YouTubeTV, I would notice it immediately if I'm home and the TV is on. It seems infinitely more reliable than the mom 'n pop ISP I switched from.
        • Most of the big ISPs do a good job generally speaking. The complaints you hear about are usually caused by last-mile or other local issues, like shitty wiring in a building interfering with the docsis signal or an overloaded Node. But then there are some (Frontier for example) who make it their business to let everything rot.
  • Wasn't Robert X. Cringely writing about this extensively a few years ago? Bufferbloat was all he could talk about, and it was clear no one was paying attention. Or at least he said they weren't. Well, I was. But I didn't do anything about it. Till now.
    • by mtaht ( 603670 )
      Enough did. Eero (which is what he uses) certainly did. One of the reasons for Eero's success was they adopted fq-codel early. I was very happy - as were their users to see they finally shipped sqm a few weeks back (which combines a shaper with fq-codel) for their eero 6 product: https://www.reddit.com/r/eero/... [reddit.com]
  • Most governments will want to have the same kind of capability to spy on and censor Internet activity that they have today. Doing so with the applications of the future would be insanely resource intensive, even if advances in artificial intelligence make it technically feasible. Will governments step in with legislation to prevent the development of applications that they fear would be outside of their control?

    • Can you expand on how reduced latency and more cloud applications reduces the government's ability for surveillance or censorship? Or is this something specific to bufferbloat?

      • The issue is not to do with various kinds of performance improvements themselves, but the applications they will facilitate in the future. Blocking access to websites your government does not want you to see, and intercepting all email activity is pretty much a solved problem. On the other hand, it is already difficult and resource intensive to spy on audio and video conversations, identifying and recording automatically those of potential interest. Recording and analysing all the activity occurring in virt

  • by Vegan Cyclist ( 1650427 ) on Saturday December 04, 2021 @10:52PM (#62048345) Homepage

    > in a few years, when most people have 1 Gbps

    hahahhahahhhahahhahahahah ahahahahhahahahahaaah hahahahahhahaa hahahahhahahhahahahahha ahhahahahahahahahhahahahaha hahahahha!!!!

    • by Zitchas ( 713512 )

      This was my exact reaction. I don't foresee having even 0.1 Gbps within the next several years, let alone full 1 Gbps.

      I think it's stretching things to even say that "most people will have 1 Gbps easily available in their area," let alone actually have it, within the next couple years.

      • The slowest speed Comcast offers right now is already 0.06 Gbps, which is close to 0.1 Gbps.
        • What's available is a closer to .01 Gbps up here in Maine, unfortunately. Residential plans cap out at ~15Mbps or so. Lots of places you can't even get 1Mbps.
          • Topsham checking in. I am sitting at .831Gbps on the 800Mbps residential plan according to an Ookla speed test.
            I had similar speeds in Bowdoinham with Comcast.

            Are there places in Maine that have crap broadband? Yes, but the areas served by Comcast, which is only around 10 towns, have been pretty good and reliable.

            Full disclosure, not all of Bowdoinham is serviced by Comcast. There are areas in that town that up until recently had no Cable or Broadband options.

            Considering Comcast has a total of 10 towns in M

    • On one hand, your laughter may be warranted. Certainly the majority of people in the world won't have 1 Gbps by then, but what about in developed countries, unlike the USA?

      OK but seriously folks, even in the USA we're starting to get pretty good access to Gbps internet for people who live in urban centers, and even for people who just live near them. However, pretty much all last mile internet in the USA is massively oversubscribed (read: woefully underprovisioned) so you can only get those speeds when your

      • Can't really speak to the oversubscription problem, but as DOCSIS 4.0 rolls out along with full duplex DOCSIS, an awful lot of people should be seeing 1 Gbps and some even will be 10 Gbps along with huge increases in upstream speeds.
        • Well, I'm using DOCSIS 3.0 and I have 400 Mbps service, and sometimes I see it and sometimes I don't so I have no confidence that my ISP (Suddenlink) could provide 1 Gbps reliably, let alone 10.

          I'm happy when I can get 400 Mbps, I can only actually use about 200 on my desktop because I don't have a wire to the router right now so I'm using another router as a bridge, the rest is left for the rest of the household. I wouldn't mind gig but I'm not really willing to pay more to get it.

    • > in a few years, when most people have 1 Gbps

      hahahhahahhhahahhahahahah ahahahahhahahahahaaah hahahahahhahaa hahahahhahahhahahahahha ahhahahahahahahahhahahahaha hahahahha!!!!

      I'm confused, do you not live in a first world country?

  • It sounds like they just mean near zero latency through their router. How will this help when you are connecting to someone 1/3 of the planet away?
    • It means a huge total reduction since there are tons of routers in between.
      • by jabuzz ( 182671 )

        I have just tried pinging a number of servers in Universities in the UK from a server located in data centre in a UK university. Yeah they are all within a few ms of the speed of light in glass on the straight line distance between them. I am not sure how this is going to help reduce latency. You canny change the laws of physics captain. Which is oddly appropriate as the server I was pinging from is in Scotland.

    • by fazig ( 2909523 )
      Why do you think the headline puts 'Working Latency' in quotes?

      It's still relevant, since the latency that's added by the networking hardware can add up to significant amounts. So what this does is shave off some overhead that's added by the network IF everyone in between you and that other person 1/3 of the planet away adheres to this, bringing the delay closer to what is technically possible over optical fiber and/or copper at around 2/3 of the speed of light.

      Hollow-core fibre could be the future and
  • by PopeRatzo ( 965947 ) on Saturday December 04, 2021 @11:32PM (#62048399) Journal

    Could one of you please explain to me in the simplest terms what "AQM" means? I'm tired and can't bring myself to RTFA. Some of you sound like you know what you're talking about, so it would be a big help.

    TIA

    • Think of it this way: you go to a special event and when you get there you find out that there is a three hour wait to get in so you give up and go home. AQM is like an automated system that would call or text you to notify you of the excessive wait time so you would just stay home instead of going to the event and turning around.
    • this [wordpress.com] seems like a good link.

      As I understand it, if there is a multi-hop path to a destination and packets get buffered waiting to traverse one of the hops, then the systems at each end can't do much to detect the issue and modify their behavior. But if the ISP drops packets instead of allowing a queue to build (or has some other congestion notification mechanism), then the endpoints can detect the packet loss and slow their sending rate, reducing the load on the link.

      I suppose if the endpoints knew an expect

    • by Barnoid ( 263111 ) on Sunday December 05, 2021 @12:31AM (#62048505)
      AQM = Active Queue Management. The basic idea is to probabilistically drop packets before the queue becomes full, as opposed to dropping packets only at the tail when the buffer is already full. Apparently, this is particularly beneficial for reducing the latency of bursty traffic.
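That "probabilistic drop before the queue is full" idea can be sketched in the style of the classic RED algorithm: the drop probability ramps up linearly between two queue-length thresholds, so senders get the congestion signal early. The thresholds and peak probability below are invented for illustration, not tuned values.

```python
import random

MIN_TH = 20    # packets: below this, never drop
MAX_TH = 80    # packets: at or above this, always drop (plain tail-drop)
MAX_P = 0.1    # peak early-drop probability at the top of the ramp

def drop_probability(avg_queue_len):
    """RED-style drop probability as a function of average queue length."""
    if avg_queue_len < MIN_TH:
        return 0.0
    if avg_queue_len >= MAX_TH:
        return 1.0
    # linear ramp between the two thresholds
    return MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)

def should_drop(avg_queue_len, rng=random.random):
    return rng() < drop_probability(avg_queue_len)
```

Because the probability is small but nonzero well before the buffer fills, the flows sending the most packets are the most likely to see a drop first, which is what nudges the heaviest senders to slow down early.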
      • Thank you for the explanation.

        On a general note it's interesting how many methods to handle congestion don't actually work out that well in real life. Ethernet pause frames comes to mind.

    • by AmiMoJo ( 196126 )

      Active Queue Management.

      Normally routers use a basic FIFO, and when the buffer gets full they start dropping packets until there is space. Devices sending packets to the router notice that packets are being dropped and reduce the transmission rate in response.

      The problem with the FIFO is that if the buffer is large then packets get stuck in there for a long time, and latency increases. If the buffer is small it limits the maximum speed at which the router can route packets, and causes a lot of packet drops.
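The "large buffer means large latency" half of that trade-off is easy to quantify: the worst-case queueing delay of a FIFO is simply its size divided by its drain rate. A back-of-envelope sketch (numbers are illustrative):

```python
def full_buffer_delay_ms(buffer_bytes, link_bits_per_sec):
    """Worst-case queueing delay of a completely full FIFO, in ms."""
    return buffer_bytes * 8 / link_bits_per_sec * 1000

# A 1 MB buffer draining at 10 Mbit/s holds 800 ms of data:
# classic bufferbloat territory for anything interactive.
print(full_buffer_delay_ms(1_000_000, 10_000_000))  # -> 800.0
```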

  • I produced a video showing what happens when you put a load on my Comcast broadband connection: https://www.youtube.com/watch?... [youtube.com] I also show the latency on WiFi: https://www.youtube.com/watch?... [youtube.com] This is the tool used in the videos, which I got my daughter to work on: https://github.com/sylvia-ou/n... [github.com]
    • great tool, great demo. THX!
      • I tried NetCheck again on my Comcast connection now that Comcast has made some changes to their DOCSIS protocol. I wasn't sure if it would work for me or not because I'm using my own modem and not Comcast, but it looks like a solid improvement. My video demo showing speedtest.net was showing around 105 ms ping for 95th percentile jitter. My new upstream 95th percentile is 61ms and 99th percentile is 64ms which seems to be like before, but my downstream 95th and 99th percentile is 55ms which is much bette
      • The full-throttle test on speedtest.net I did from my desktop with gigabit connection was able to get over 475 Mbps downstream. That pushed the 99th percentile ping jitter from 8ms to 55ms. When I ran speedtest.net from my WiFi laptop, WiFi constrained me to 273 Mbps because I only had 40 MHz of bandwidth. But NetCheck only reported a small bump up to 11ms 99th percentile jitter. So because my WiFi PHY rate was capped, it prevented jitter. If I were to do a large web download from my desktop and used s
  • Cheap ISP managers react to congestion by increasing buffer sizes, which causes bufferbloat. AQM is just trying to keep that strategy alive without actually providing the bandwidth the customers pay for. Any ISP tweaking their buffers instead of upgrading their networks to handle the traffic without congestion is a bad ISP.
    • by mtaht ( 603670 )
      Queue delay happens on every sufficiently long tcp transaction, which responds to a loss by backing off. Overbuffering as many have done, slows that essential signal down. Underbuffering makes it hard to fully utilize the link. Overbuffering used to be at epidemic proportions before the pandemic, but the drive for higher quality videoconferencing (more right-sized buffering) has helped get aqm tech more deployed. I try to explain all this humorously in this piece here: https://blog.apnic.net/2020/01... [apnic.net]
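The backoff dynamic described above can be sketched with a toy AIMD (Reno-style) sender: the loss signal only arrives when the bottleneck buffer finally overflows, so the peak standing queue, and hence the added latency, grows with the buffer size. This is a simplified model with invented constants, not real TCP.

```python
BDP = 100   # path bandwidth-delay product, in packets (illustrative)

def peak_queue_delay(buffer_pkts, rtts=1000):
    """Peak queueing delay (in units of the base RTT) reached before
    loss finally makes the sender back off, for a given buffer size."""
    cwnd, peak_queue = 1.0, 0.0
    for _ in range(rtts):
        queue = max(0.0, cwnd - BDP)   # in-flight data beyond the BDP queues up
        if queue > buffer_pkts:
            cwnd /= 2                  # overflow -> loss -> multiplicative decrease
        else:
            peak_queue = max(peak_queue, queue)
            cwnd += 1                  # additive increase each RTT
    return peak_queue / BDP

print(peak_queue_delay(buffer_pkts=20))    # shallow, right-sized buffer
print(peak_queue_delay(buffer_pkts=500))   # overbuffered: 5x the base RTT added
```

In this sketch the shallow buffer caps the added delay at a fraction of the base RTT, while the bloated buffer lets the sender build a standing queue five RTTs deep before the first loss ever arrives.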
  • What's this target of 5 ms ping times? That would totally suck: a regression of a factor of 10, as speedtest typically informs me of a ping time of 0 ms. On the shell I get 0.4 ms ping times to Google. This is with fiber to the home, a PC wired at Gbit/s to a switch, and then a router to the fibre transceiver.
    • by mtaht ( 603670 )
      "Working conditions" mean "latency under load". Load up that network with a big upload or download, or both, and what does your "ping time" climb to? Here's a pic of what happens to Sonic's fiber (at 100 Mbit) when you do that, compared to the Linux "cake" qdisc: http://www.taht.net/~d/sonic_1... [taht.net]
      I am seeing a LOT of 100 Mbit fiber networks that are seriously overbuffered on uploads, and underbuffered at 1 Gbit (a *target* of 5 ms is better than a 5 ms buffer).
      • Good point, just tried ping to Google whilst doing a speedtest. Ping time went up from 1.5ms to 33ms.
    • On the shell I get 0.4 ms ping times to Google. This is with fiber to the home, a PC wired at Gbit/s to a switch, and then a router to the fibre transceiver.

      If this is true, then both your physical location and your routing are extremely unusual: 0.4 ms at the speed of light equals 120 km of round-trip path, or a one-way distance of 60 km (ping is round trip) to "Google" (whatever that actually means).

      Probably even less, since light in glass is quite a bit slower than light in vacuum.

      • My fiber is less than 3 km from the backbone in Zurich, Switzerland, and Google.com resolves to Google.ch, which is probably in Zurich as well, perhaps even at the same backbone. That said, I currently get 1.5 ms or thereabouts.
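The back-of-the-envelope math in this subthread can be checked with a small sketch. Assumptions: light in fiber travels at roughly 2/3 of c due to the glass's refractive index, and the distances (60 km one way; a rough 7,000 km one way for an intercontinental path) are illustrative:

```python
# Sketch: minimum possible round-trip time over a given one-way distance.
# Assumptions: vacuum speed of light ~299,792 km/s; light in fiber travels
# at roughly 2/3 of that.
C_VACUUM_KM_S = 299_792
FIBER_FACTOR = 2 / 3

def min_rtt_ms(one_way_km, fiber=False):
    speed = C_VACUUM_KM_S * (FIBER_FACTOR if fiber else 1)
    return 2 * one_way_km / speed * 1000

print(round(min_rtt_ms(60), 1))             # 60 km one way in vacuum -> 0.4
print(round(min_rtt_ms(7000, fiber=True)))  # ~7,000 km one way, fiber -> 70
```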
  • I've been a Comcast customer for the past decade.
    Running the DSL Reports test in the past always gave an abysmal score for bufferbloat, but I had not run it in ages.
    I ran it just now and got a bufferbloat score of "A".

    • by mtaht ( 603670 )
      Until now, AQM techniques were the province of early adopters and smart users. The 99.9% of people who would benefit needed the ISP to roll it out, and it seems likely that over the past year few, such as yourself, noticed that their network had gotten subtly better, or why. I was very pleased that Comcast finally told the world about it, and I hope more ISPs *ship* AQM (and SQM) technologies, on by default, in the future.
  • I love any improvement in network predictability, but...

    even if we fix the jitter and increase predictability, light speed is still throwing a wrench in the works. Typical wired network latencies from South America to the US are around 150-180 ms on a good day. You need servers everywhere to fake low latency, which is expensive.

    Even at light speed in a vacuum, you cannot get a round trip under 46 ms; on fiber it is closer to 70 ms. Add a few hops and you get to the numbers we have today.

    If you make an applica

    • by mtaht ( 603670 )
      In the case of web pages, requests are overlapped, but yes, they take too long to load. And long RTTs are problematic for much traffic, but the definition of "working latency" is the queueing delay you get when your system is loaded up.
  • That rarely happens, especially in the slashdot community. Thx!!!!
  • This is great news for Comcast customers because most of them have very low upstream bandwidth.

    I suffered from bufferbloat when I was on legacy DSL, VDSL, and even symmetric 50 Mbps fiber. While not entirely gone, the worst part of bufferbloat as tested by dslreports.com and waveform.com largely went away, going from F to B, on symmetric 300 Mbps fiber overprovisioned to 360 Mbps.

    • While I agree that the bufferbloat problem shows up the most on the slowest connections (especially DSL), modern AQM and fair-queuing technologies can make the real-world impact much, much better. I also agree that at higher bandwidths with proper provisioning the direct need goes away (but the bufferbloat problem shifts to the WiFi, where we have many other fixes shipping).
      That said, *even at GigE*, most users will see some benefit from quality fair queuing, especially coupled with AQM. FQ in parti
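The benefit of fair queuing mentioned above (a bulk flow cannot monopolize the link, so sparse flows like pings and DNS lookups see almost no queue) can be illustrated with a toy round-robin scheduler. This is a conceptual sketch only, not how sch_cake or fq_codel are actually implemented:

```python
from collections import deque

# Toy fair queuing: one FIFO per flow, serviced round-robin, so a sparse
# flow's packet waits behind at most one packet per competing flow rather
# than behind the bulk flow's entire backlog. Conceptual sketch only.
def fq_dequeue_order(flows):
    queues = {name: deque(pkts) for name, pkts in flows.items()}
    order = []
    while any(queues.values()):
        for name, q in queues.items():
            if q:
                order.append((name, q.popleft()))
    return order

# A bulk flow with 4 queued packets vs. a single ping:
order = fq_dequeue_order({"bulk": ["b1", "b2", "b3", "b4"], "ping": ["p1"]})
print(order.index(("ping", "p1")))  # the ping goes out 2nd, not 5th -> 1
```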
  • I'm getting 25-28 ms ping to their gateway on their FTTH service. Clearly not all companies understand or care about something as important as latency.

  • You need a backbone capable of handling near-peak loads, and tier-2 nets that have redundancy and, at minimum, the capacity to handle the RMS of the actual peak loads.

    To avoid congestion caused by a few users hammering the net, have bandwidth guarantees and then any load above that on a best effort.

    If the router can't handle the load, get a better router. Former high-end gear can be reused or cannibalised.

    If you have dark fibre and can't afford to light it all the time, light it up at peak times.

    Also, do less. At t

  • Dropping more packets and sooner. Sounds like great service.
