Better Bandwidth Utilization

jtorin writes "Daniel Hartmeier (of OpenBSD fame) has written a short but interesting article which explains how to make better use of available bandwidth. In short, it gives priority to TCP ACKs over other types of traffic, thereby making it possible to max out both upload and download bandwidth simultaneously. Be sure to check out the nice graphs! Also note the article on OpenBSD Journal. OpenBSD 3.3 beta is now stable enough for daily use, so why not download a snapshot from one of the mirrors and try it out?"
  • by arkanes ( 521690 ) <arkanes@NoSPam.gmail.com> on Wednesday March 05, 2003 @10:57AM (#5440398) Homepage
    If you have a non-shaped asymmetric connection, like most forms of DSL and cable, it's pretty easy to max out your upstream. When you do that, your downstream goes through the floor because your ACKs don't get through. This just says that if your routers prioritize ACKs, your downstream will still be fine even if your upstream is saturated. This isn't exactly new; my cable ISP already does this.
  • Linux solution (Score:3, Informative)

    by eddy ( 18759 ) on Wednesday March 05, 2003 @10:58AM (#5440400) Homepage Journal

    The Linux Advanced Routing & Traffic Control HOWTO [lartc.org] discusses how to achieve the same thing on Linux using QoS. See section 9.2.2.2 [lartc.org] (Sample configuration).
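
    For a rough idea of what that looks like, here is a minimal sketch in the spirit of the HOWTO's cookbook examples. The interface name eth0 is an assumption, and the fixed offsets only hold for IPv4 headers without options:

    # attach a three-band prio qdisc to the upstream interface (eth0 assumed)
    tc qdisc add dev eth0 root handle 1: prio
    # steer small TCP packets that carry only the ACK flag into band 1:1,
    # which prio always dequeues before the other two bands
    tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
        match ip protocol 6 0xff \
        match u8 0x05 0x0f at 0 \
        match u16 0x0000 0xffc0 at 2 \
        match u8 0x10 0xff at 33 \
        flowid 1:1

    Matching on the total length (under 64 bytes) as well as the ACK bit keeps large data-carrying ACKs out of the priority band.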

  • by blkwolf ( 18520 ) on Wednesday March 05, 2003 @10:59AM (#5440412) Homepage
    You can find Daniel's original e-mail on the subject at:
    http://marc.theaimsgroup.com/?l=openbsd-pf&m=104630260218727

    It contains a bit more of the pf rules than the article does, and has all the relevant information you need, except for the nice /.'d graphs.

  • by Anonymous Coward on Wednesday March 05, 2003 @11:00AM (#5440421)
    Bandwidth is fixed. Any number of crappy operating systems can max out bandwidth. What they meant to say is how to reduce latency.
  • by gmuslera ( 3436 ) on Wednesday March 05, 2003 @11:01AM (#5440429) Homepage Journal
    Wondershaper takes a different approach: it uses a scheduler and (de)prioritizes certain ports and hosts. I don't think it currently gives TCP ACKs any special priority, so if this trick works, adding it could improve on the already good job Wondershaper does.
  • by gmuslera ( 3436 ) on Wednesday March 05, 2003 @11:04AM (#5440459) Homepage Journal
    Second revisions are always better. It prioritizes small packets (less than 64 bytes), and I suppose that includes ACKs :)
  • by somethingwicked ( 260651 ) on Wednesday March 05, 2003 @11:05AM (#5440463)
    That "Intro to the Internet" class from college is a little hazy now, but I don't recall it being as simple as the "internet" coming out of the pipe like water.

    Someone far more knowledgeable than myself will get to correct me, but I seem to recall there was a process of:

    Send some stuff-wait for ACK.

    When you get the ACK, send some more.

    By turbocharging the ACKs, you are reducing that lag time.
  • Re:Linux solution (Score:5, Informative)

    by pe1rxq ( 141710 ) on Wednesday March 05, 2003 @11:11AM (#5440499) Homepage Journal
    No it doesn't....
    It is a different solution to a different problem caused by the same thing....

    The cause is the big buffer in the modem, which delays outgoing traffic.
    One problem is that interactive traffic gets, well, less interactive (e.g. echoed characters in a remote shell arrive with a delay). This is solved in the HOWTO you referred to.
    Another problem is that the ACKs for the downstream get delayed, resulting in less downstream data. This is solved in the mentioned article.

    A combination of the two would be really great, and could probably be done on both Linux and OpenBSD (a rough sketch follows below).

    Jeroen
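
    For what it's worth, here is one way such a combination might look in the new pf syntax, swapping the article's priq for cbq so bulk, interactive and ACK traffic each get their own queue. The queue names, percentages, and the ssh-only rule are illustrative assumptions, not taken from the article:

    ext_if="kue0"

    # cap at ~100Kb (below the 128Kb line rate) so the queue builds here,
    # not in the modem; cbq gives each class a guaranteed share
    altq on $ext_if cbq bandwidth 100Kb queue { q_ack, q_ssh, q_def }
    queue q_ack bandwidth 20% priority 7 cbq(borrow)
    queue q_ssh bandwidth 20% priority 5 cbq(borrow)
    queue q_def bandwidth 60% priority 1 cbq(default borrow)

    # last matching rule wins, so the more specific ssh rule comes second
    pass out on $ext_if proto tcp from $ext_if to any flags S/SA \
        keep state queue (q_def, q_ack)
    pass out on $ext_if proto tcp from $ext_if to any port ssh flags S/SA \
        keep state queue (q_ssh, q_ack)

    The second queue in each pair still catches lowdelay packets and empty ACKs, just as in the article's priq example.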
  • Re:The problem is (Score:5, Informative)

    by The Evil Couch ( 621105 ) on Wednesday March 05, 2003 @11:23AM (#5440565) Homepage
    It's a possible way to game the system. However, they can also ignore what the packets are marked as and just boost the priority of the smaller packets, which are almost always control packets. If they bump up everything under 64 bytes, they'd get the same effect, but without the possibility of someone cheating the system like that. Though I'm pretty sure someone else has already done that.
  • by stratjakt ( 596332 ) on Wednesday March 05, 2003 @11:25AM (#5440582) Journal
    And from the explanation in the readme:

    "To make sure that uploads don't hurt downloads,
    we also move ACK packets to the front of the queue."

    It's pretty cool: it throttles your speeds to just under what the maximum should be, so that queueing only happens on your Linux box, and then you can prioritize what you want.
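
    In tc terms, the throttling part boils down to something like the following HTB sketch; the interface name and rates are illustrative, the point being that the ceiling sits a little below the real line rate so packets queue on the Linux box rather than in the modem:

    DEV=eth0                                  # upstream interface (assumed)
    tc qdisc add dev $DEV root handle 1: htb default 20
    # cap the whole link slightly below the modem's line rate
    tc class add dev $DEV parent 1: classid 1:1 htb rate 100kbit ceil 100kbit
    # 1:10 = high priority (ACKs, interactive), 1:20 = bulk/default
    tc class add dev $DEV parent 1:1 classid 1:10 htb rate 40kbit ceil 100kbit prio 0
    tc class add dev $DEV parent 1:1 classid 1:20 htb rate 60kbit ceil 100kbit prio 1

    A filter like the one in the next comment can then steer empty ACKs into the priority class via flowid 1:10.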
  • by spydir31 ( 312329 ) <hastur@noSpaM.hasturkun.com> on Wednesday March 05, 2003 @11:26AM (#5440583) Homepage
    better version, I think
    tc filter add dev $DEV parent 1:0 prio 10 u32 \
    match ip protocol 6 0xff \
    match u8 0x10 0x10 at nexthdr+13 \
    flowid 1:10
  • Slashdotted - Mirror (Score:5, Informative)

    by SILIZIUMM ( 241333 ) on Wednesday March 05, 2003 @11:34AM (#5440641) Homepage
    Since the website seems to be slashdotted now, I've set up a mirror. You can see it here [infinit.net].
  • by Surak ( 18578 ) <surakNO@SPAMmailblocks.com> on Wednesday March 05, 2003 @11:51AM (#5440748) Homepage Journal
    Ummm...

    a) Zmodem is still around, at least in the *nix world. You can get lrzsz from here [www.ohse.de].
    Some telnet clients still support Zmodem, and you can use lrzsz to transfer files via telnet. Personally, I'd rather use ssh as it's a lot more secure, but in cases where you can only use telnet, or when you are on a network you can trust (i.e., not the Internet), you can still use Zmodem.

    b) Zmodem is not, nor has it ever been, a bidirectional protocol -- you can't upload and download at the same time unless you have two different connections. There *were* protocols that would let you do this (Puma comes to mind), but you most decidedly could NOT do this with Zmodem.

  • by somethingwicked ( 260651 ) on Wednesday March 05, 2003 @12:05PM (#5440851)
    Again, to the best of my recollection, what you are suggesting is similar to the approach TFTP and many streaming techniques take:

    Take FTP, strip out the error-checking overhead, and if something doesn't come out right, refresh and download it again.

    For streaming, you get more throughput, and every now and then you might miss a frame, in exchange for the higher quality you can get from the lower overhead.

  • by puzzled ( 12525 ) on Wednesday March 05, 2003 @12:10PM (#5440880) Journal


    It seems to me that a great many /. readers have a cursory knowledge of how TCP/IP works. This is true of almost every other topic and I don't have a generalized solution for ignorance, but in this case a quick read of the first volume of Stevens' excellent TCP/IP Illustrated Series should do the trick.

    Reading that book will give you a foundation for understanding how a single endpoint behaves in an IP network. If you want some understanding of the guts of a large-scale internetwork, I'd suggest the Cisco Press IP Quality of Service book.

    There are a great many things near and dear to /. readers' hearts - the God-given right to steal music by treating a retail DSL/cable connection like a dedicated wholesale circuit being the prime example - that are more easily understood after a read of these two books.

    If you're impatient you can look at my journal - I've covered some of the issues there.
  • TCP Daytona (Score:4, Informative)

    by Patrick ( 530 ) on Wednesday March 05, 2003 @12:15PM (#5440911)
    send pre-emptive ACKs before you get the data, right about when they would be expected.

    The technique you suggest is one of several proposed by Stefan Savage in TCP Congestion Control with a Misbehaving Receiver [washington.edu]. He called it TCP Daytona. :)

  • by Patrick ( 530 ) on Wednesday March 05, 2003 @12:20PM (#5440933)
    Send some stuff-wait for ACK.

    When you get the ACK, send some more.

    By turbocharging the ACKs, you are reducing that lag time

    Not quite. TCP streams use pipelining: you send N packets (N is the "window size"), and each time you get an ACK you send one more. So in the ideal case there's no lag, because the ACK for packet 3 lets you go ahead and send packet 10 (if N=7).

    When a packet (or its ACK) gets dropped, TCP assumes the network is congested, and cuts N in half, and very slowly increases it back to where it was. So after each dropped packet or ACK you have a while during which you're not using the full link. Several drops in a row can reduce your throughput by a factor of 100 or more.

    Prioritizing ACKs doesn't reduce the lag time. It reduces the likelihood that TCP will overreact and reduce its sending rate due to perceived congestion.
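
    To put rough, illustrative numbers on it: steady-state throughput is limited to about (window size) / (round-trip time), so a 64 kB window over a 100 ms round trip tops out near 640 kB/s. A single halving of the window cuts that to about 320 kB/s, and congestion avoidance then grows the window back by only about one segment per round trip, which is why a burst of drops hurts for so long.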

  • by JRHelgeson ( 576325 ) on Wednesday March 05, 2003 @12:22PM (#5440955) Homepage Journal
    For the benefit of all: the following is the article in its entirety, sans the graphics, which can be seen at (provided the servers are working):

    http://www.benzedrine.cx/ackpri-norm.jpg
    http://www.benzedrine.cx/ackpri-priq.jpg

    benzedrine.cx - Prioritizing empty TCP ACKs with pf and ALTQ

    Introduction

    ALTQ is a framework for managing queueing disciplines on network interfaces. It manipulates output queues to enforce bandwidth limits and prioritize traffic based on classification.

    While ALTQ has been part of OpenBSD and enabled by default for several releases, the next release will merge the ALTQ and pf configuration into a single file and let pf assign packets to queues. This both simplifies the configuration and greatly reduces the cost of queue assignment.

    This article presents a simple yet effective example of what the pf/ALTQ combination can be used for. It's meant to illustrate the new configuration syntax and queue assignment. The code used in this example is already available in the -current OpenBSD source branch.

    Problem

    I'm using an asymmetric DSL with 512 kbps downstream and 128 kbps upstream capacity (minus PPPoE overhead). When I download, I get transfer rates of about 50 kB/s. But as soon as I start a concurrent upload, the download rate drops significantly, to about 7 kB/s.

    Explanation

    Even when a TCP connection is used to send data only in one direction (like when downloading a file through ftp), TCP acknowledgements (ACKs) must be sent in the opposite direction, or the peer will assume that its packets got lost and retransmit them. To keep the peer sending data at the maximum rate, it's important to promptly send the ACKs back.

    When the uplink is saturated by other connections (like a concurrent upload), all outgoing packets get delayed equally by default. Hence, a concurrent upload saturating the uplink causes the outgoing ACKs for the download to get delayed, which causes the drop in the download throughput.

    Solution

    The outgoing ACKs related to the download are small, as they don't contain any data payload. Even a fast download saturating the 512 kbps downstream does not require more than a fraction of the upstream bandwidth for the related outgoing ACKs.

    Hence, the idea is to prioritize TCP ACKs that have no payload. The following pf.conf fragment illustrates how to set up the queue definitions and assign packets to the defined queues:

    ext_if="kue0"

    altq on $ext_if priq bandwidth 100Kb queue { q_pri, q_def }
    queue q_pri priority 7
    queue q_def priority 1 priq(default)

    pass out on $ext_if proto tcp from $ext_if to any flags S/SA \
    keep state queue (q_def, q_pri)

    pass in on $ext_if proto tcp from any to $ext_if flags S/SA \
    keep state queue (q_def, q_pri)
    First, a macro is defined for the external interface. This makes it easier to adjust the ruleset when the interface changes.

    Next, altq is enabled on the interface using the priq scheduler, and the upstream bandwidth is specified.
    I'm using 100 kbps instead of 128 kbps as this is the real maximum I can reach (due to PPPoE encapsulation overhead). Some experimentation might be needed to find the best value. If it's set too high, the priority queue is not effective, and if it's set too low, the available bandwidth is not fully used.
    Then, two queues are defined with (arbitrary) names q_pri and q_def. The queue with the lower priority is made the default.

    Finally, the rules passing the relevant connections (statefully) are extended to specify what queues to assign the matching packets to. The first queue specified in the parentheses is used for all packets by default, while the second (and optional) queue is used for packets with ToS (type of service) 'lowdelay' (for instance interactive ssh sessions) and TCP ACKs without payload.

    Both incoming and outgoing TCP connections will pass by those two rules, create state, and all packets related to the connections will be assigned to either the q_def or q_pri queues. Packets assigned to the q_pri queue will have priority and will get sent before any pending packets in the q_def queue.
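
    Once the rules are loaded, one way to confirm that packets are actually landing in the intended queues is to look at pfctl's queue statistics (exact flags may vary slightly between releases):

    # show the loaded queues; -v adds per-queue packet and byte counters
    pfctl -v -s queue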

    Result

    The following test was performed first without and then with the ALTQ rules explained above:

    • -10 to -8 minutes: idle
    • -8 to -6 minutes: download only
    • -6 to -4 minutes: concurrent download and upload
    • -4 to -2 minutes: upload only
    • -2 to 0 minutes: idle

    The first graph shows the results of the test without ALTQ, and the second one with ALTQ:

    Image 1, ACK PRI Normal [benzedrine.cx]

    Image 2, ACK PRI PRIq [benzedrine.cx]

    The improvement is quite significant: the saturated uplink no longer delays the outgoing empty ACKs, and the download rate doesn't drop anymore.

    This effect is not limited to asymmetric links; it occurs whenever one direction of the link is saturated. With an asymmetric link this obviously happens more often.

    Related links

  • by teeker ( 623861 ) on Wednesday March 05, 2003 @12:37PM (#5441101)
    Err... you must have Zmodem confused with something else... it was one-way only. You are right about the widely used part, though, and not only on warez boards but everywhere. In fact it was the only thing going in the later BBS days.

    Maybe Puma or one of those oddball protocols was bidirectional, but that was pretty useless to warez runners back in the day, because everybody knows that real k-k00l warez runners used USRobotics Courier HST 9600 high-speed modems, and those were only fast in one direction. Real warez runners spit on v.32 modems... Ahhh, the good old days ;-) Sorry for the OT, folks...
  • by meridian-gh ( 584679 ) on Wednesday March 05, 2003 @02:02PM (#5441775)
    It's really useful for things like Frame Relay WANs, where you can get mixed and matched speeds all over the place.

    For example, I have the equivalent of a T1 (1.544 Mbps CIR frame) going to Qwest. From Qwest, I have a 256k CIR frame link going to a remote office.

    When the office sends data to me, it's fine. When I send back, there are massive amounts of red frames. Dropped packets mean retransmits, which mean delay. Delay is bad when you are running an interactive application over these links. Think of a garden hose connected to a fire hydrant. The garden hose can dump water into the fire hydrant just fine (assuming the water for the hydrant is turned off elsewhere...). When the fire hydrant turns on, however...

    Now I have QoS maps based on the DLCI for each office, so our link to Qwest is throttled back to match the remote connection and everyone talks happily, instead of blasting the little link into oblivion. Now red frames aren't seen very often, unless the Qwest circuit is saturated and we get chopped back to our base CIR. It makes a difference. Not a huge one, but a noticeable difference.

    Traffic shaping is your friend. It's all about making the most efficient use of what you have. (Or making sure that you still have bandwidth when your roommate is leeching gigs of pr0n...) M

  • by anthony_dipierro ( 543308 ) on Wednesday March 05, 2003 @02:04PM (#5441794) Journal

    So if the network is congested and an ACK SHOULD time out but doesn't, TCP will keep on flooding the network, ruining the pool for everyone.

    No. If the downstream is flooded, the packets won't be received, and no ACK will be sent. ACKs have higher priority, but even that can't make them appear out of thin air.

  • by Froqen ( 36822 ) on Wednesday March 05, 2003 @02:57PM (#5442405)
    Windows XP uses a DDR Fairness technique to solve the same problem; I wonder how the two techniques compare.
    See "QoS for Modems and Remote Access" at this KB article [microsoft.com].
  • by Ded Bob ( 67043 ) on Wednesday March 05, 2003 @03:15PM (#5442586) Homepage
    RTM [freebsd.org] :) Specifically, dummynet [freebsd.org] is the part that does queueing.

    I just need to find out how to do this with ipf [freebsd.org] instead of ipfw [freebsd.org].
  • by golo ( 95789 ) on Wednesday March 05, 2003 @03:21PM (#5442657) Homepage Journal
    I've heard that these guys [allot.com] have implemented the same idea as one of the tricks they use in their traffic shaping/QoS products. They're for WAN links, IIRC, so that any client (in the remote sites) can take advantage of it.
  • ACK Shaping (Score:4, Informative)

    by nimrod_me ( 650667 ) on Wednesday March 05, 2003 @04:15PM (#5443250)
    This is what is known today as "ACK traffic shaping". First on the market, I believe, was Packeteer (www.packeteer.com) with their PacketShaper.

    Unlike most conventional traffic shapers which queue and control the data rate on the outgoing channel, PacketShaper controls the rate of acknowledgements on the reverse channel.

    This is usually used to *slow* traffic. I.e., instead of having the router drop packets (thereby wasting resources until the source TCP realizes the net is congested and reduces its load), it just slows the ACKs and the sender automatically reduces its sending rate.

    Anyway, the really nice thing about the OpenBSD implementation is that they merge their packet filter (pf) with the ALTQ queueing code. Now this is really powerful.

    Sounds like a good time for all BSDs to adopt this new combination instead of relying on less-capable mechanisms. E.g. FreeBSD has ipfw for filtering and dummynet for queue management. I don't know how pf compares with ipfw but ALTQ is definitely better than dummynet.

    Nimrod.
  • throttled (Score:3, Informative)

    by zquestz ( 594249 ) on Wednesday March 05, 2003 @05:09PM (#5443788) Homepage
    Just in case you don't run OpenBSD or Linux (Wondershaper) and are looking for ACK packet priority, you can get throttled from http://www.intrarts.com/throttled.html and have the same functionality on Mac OS X and FreeBSD. It is great to see this information finally getting out to the public, as it offers significant improvements in network performance.
