Boosting Socket Performance on Linux

Cop writes "The Sockets API lets you develop client and server applications that can communicate across a local network or across the world via the Internet. Like any API, you can use the Sockets API in ways that promote high performance -- or inhibit it. This article explores four ways to use the Sockets API to squeeze the greatest performance out of your application and to tune the GNU/Linux® environment to achieve the best results."
  • Be aware (Score:4, Funny)

    by 2.7182 ( 819680 ) on Thursday January 19, 2006 @04:58PM (#14512977)
    Some engineers at Berkeley have been looking at this for a while, but haven't gotten much credit for it.
    • Re:Be aware (Score:4, Insightful)

      by leonmergen ( 807379 ) <lmergen@gmaEEEil.com minus threevowels> on Thursday January 19, 2006 @05:02PM (#14513021) Homepage

Exactly... especially with things like these, it's usually best for the entire internet if you just stick with the defaults... they are defaults for a reason; they might not be the best for you, but they're most likely the best for the internet as a whole.

Reminds me of those people tweaking firefox settings to hammer all kinds of webservers... sure, your browsing might be a slight bit faster, but at the expense of lots of other people's browsing...

      • Re:Be aware (Score:3, Interesting)

        Fasterfox also trips a lot of traps intended to catch content stealing bots.
      • Re:Be aware (Score:2, Interesting)

        by zcat_NZ ( 267672 )
        imbsc but I vaguely recall in the early days of web browsers, they would pull down the base page, and then one image at a time. Netscape opening multiple requests in parallel seemed like a massive abuse of webserver resources at the time, to me at least.
        • Netscape opening multiple requests in parallel seemed like a massive abuse of webserver resources at the time, to me at least.

          I'm glad I'm not the only one who remembers this. We used to call it, "Netrape", because of this behavior.

          I still kind of miss Mosaic.

        • Netscape opening multiple requests in parallel seemed like a massive abuse of webserver resources

...as opposed to Internet Exploder? AAMOF this had more to do with the absence of persistent connections in HTTP 1.0. The server would simply close the socket at its end after servicing a request, so the client had to open a new connection for each new object on the page. That changed in HTTP 1.1, among other reasons because servers were maxing out the number of open connections on the host.
      • Re:Be aware (Score:3, Interesting)

        by gbjbaanb ( 229885 )
        best for the internet as a whole
        are you sure?

        From a paper written by Phil Dykstra, back in 1999.

        "A recent example comes from the Pacific Northwest Gigapop in Seattle which is based on a collection of Foundry gigabit ethernet switches. At Supercomputing '99, Microsoft and NCSA demonstrated HDTV over TCP at over 1.2 Gbps from Redmond to Portland. In order to achieve that performance they used 9000 byte packets and thus had to bypass the switches at the NAP! Let's hope that in the future NAPs don't place 1500
    • But did they get their work patented? Otherwise in these days and times it (depressingly) doesn't seem to count.
    • Re:Be aware (Score:4, Informative)

      by jas0n ( 120727 ) on Friday January 20, 2006 @01:50AM (#14516695)
Looks like a rip-off of an OnLamp [onlamp.com] article from a few months ago, and not a very good one at that! At least the OnLamp [onlamp.com] article explained how to tweak a few more OSes, and the math was correct. And just to add insult to injury, the article on OnLamp was written by one of those Berkeley guys [lbl.gov] ;-)
  • by ChipMonk ( 711367 ) on Thursday January 19, 2006 @05:00PM (#14513003) Journal
    Judging by the response time from IBM's web server, it looks like they have yet to put their advice into practice.
    • Judging by the response time from IBM's web server, it looks like they have yet to put their advice into practice.

      ... or too many Slashdot visitors already did that exact thing... :-)

    • Because it seems to be beginning to crawl under a good ol' fashioned /.ing, here's the article text:

      Boost socket performance on Linux

      Four ways to speed up your network applications

      M. Tim Jones (mtj@mtjones.com), Senior Principal Software Engineer, Emulex

      17 Jan 2006

      The Sockets API lets you develop client and server applications that can communicate across a local network or across the world via the Internet. Like any API, you can use the Sockets API in ways that promote high perf
  • I mean really, I think we understand what you mean by just saying Linux.
  • Hello 1995 (Score:5, Insightful)

    by AKAImBatman ( 238306 ) <akaimbatman AT gmail DOT com> on Thursday January 19, 2006 @05:02PM (#14513020) Homepage Journal
    This reads like an article from the 90's. This being 2006 and all, I would hope that programmers know how to make effective use of TCP/IP sockets. I wonder if maybe they just yanked an article from 1995 and did a search/replace on s/Windows/GNU Linux/g.
    • This being 2006 and all, I would hope that programmers know how to make effective use of TCP/IP sockets.

      One of the great things about computers is they allow different implementations of the same idea. Because of this, someone who knows how to tune the networking on one OS may not know how to on Linux. Also, not everyone has been programming since 1995. Do you also complain when the weather report comes on the local news, because you've seen a weather report before?

      • Re:Hello 1995 (Score:3, Insightful)

        by AKAImBatman ( 238306 )
        One of the great things about computers is they allow different implementations of the same idea. Because of this, someone who knows how to tune the networking on one OS may not know how to on Linux.

        Now if only the article actually covered something specific to Linux, I'd agree with you. About the most useful thing it does is tell you the location of the same parameters that you muck with on every other system in existence. This info has only been around for Linux for, oh, more than a decade. Pick up any bo
    • Re:Hello 1995 (Score:4, Interesting)

      by epiphani ( 254981 ) <epiphani&dal,net> on Thursday January 19, 2006 @05:37PM (#14513284)
Agreed. In fact, as someone who learned socket coding around 1999/2000 (and as a result I don't have a good grasp on how to actively define register variables; compilers do that stuff for you these days), I did all of these things out of habit, and didn't fully understand them until this article.

In the same line - where is the discussion of different FD table polling mechanisms? select() versus poll(), and where's the writeup about Linux's epoll()? I would have been interested in an epoll() article, especially how it compares to FreeBSD's kqueue().
      • Re:Hello 1995 (Score:5, Informative)

        by pthisis ( 27352 ) on Thursday January 19, 2006 @06:39PM (#14513848) Homepage Journal
In the same line - where is the discussion of different FD table polling mechanisms? select() versus poll(), and where's the writeup about Linux's epoll()? I would have been interested in an epoll() article, especially how it compares to FreeBSD's kqueue().

        For the overview, you want Dan Kegel's c10k page:

        http://www.kegel.com/c10k.html [kegel.com]
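
For the impatient, here's a minimal sketch of the epoll interface the parent asks about (error handling elided; listen_fd is assumed to be an already-listening TCP socket):

    /* Minimal epoll event loop (Linux 2.6+). */
    #include <sys/epoll.h>
    #include <sys/socket.h>

    void event_loop(int listen_fd)
    {
        int epfd = epoll_create(64); /* size argument is only a hint */
        struct epoll_event ev, events[64];

        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1); /* block until ready */
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == listen_fd) {
                    /* new connection: register it for read events */
                    int client = accept(listen_fd, NULL, NULL);
                    ev.events = EPOLLIN;
                    ev.data.fd = client;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &ev);
                } else {
                    /* events[i].data.fd is readable: service it here */
                }
            }
        }
    }

Unlike select()/poll(), the kernel remembers the fd set between calls, so the per-call cost doesn't grow with the number of idle connections.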
        • Hello 2003. (Score:5, Interesting)

          by jd ( 1658 ) <imipak@ y a hoo.com> on Thursday January 19, 2006 @08:29PM (#14514651) Homepage Journal
          The paper is 2 years, 2 months old. Many of the arguments will still be valid, but the code in all cases will have evolved considerably. In addition, other code has certainly been developed (there's a hard real-time UDP patch for Linux [uni-hannover.de], for example) and the state of affairs is - if anything - much more muddled today.


          Documentation like this is great and extremely valuable. It would be much more valuable, however, if it remained current. For example, can the ABISS [sourceforge.net] project (which improves block I/O) be used at all? What do the numbers look like, when using profiling tools like Web100 [web100.org] (which profiles TCP communications)?


          Has anyone run the Linux or one of the *BSD kernels through DAKOTA [sandia.gov], KOJAK [fz-juelich.de] or PAPI [utk.edu] to determine where, precisely, bottlenecks are within the kernels? It's easy to theorise, but isn't it cleaner to measure?


Now, I'm not saying these things aren't being done. They probably are, somewhere, by someone, but if the results aren't getting published, we don't really know what impact any given change will have. The current method of evolving operating system code is often a mix of personal theory and subjective experience based on non-random samples of activity. That can't really be a good way to do things, can it?


          If I'm wrong, feel free to say. If I'm right, then maybe it would be a good thing if someone (possibly me) put together some kind of testing kit for measuring Linux kernel performance and actually measured the stats for Linux kernels on some kind of regular basis.

    • Could be. But considering that I live in the past anyway, I find the article particularly useful.

      Vuja de rules!
    • This reads like an article from the 90's. This being 2006 and all, I would hope that programmers know how to make effective use of TCP/IP sockets.

      Actually, given that it's 2006, I would have thought that the socket layer would be smart enough to perform these sorts of "optimizations" for you automatically, by analyzing your usage patterns. There's no reason the programmer should have to deal with any of this crap, except maybe by providing a broad hint such as "Maximize throughput" or "Minimize latency."

      • Actually, given that it's 2006, I would have thought that the socket layer would be smart enough to perform these sorts of "optimizations" for you automatically

        To a certain degree, they are optimized. Since most network activity occurs through a higher level networking API (e.g. HTTP), the network performance is already optimized by the library. It's not all that often that you have to open a direct socket unless you happen to be writing such a library or server.

        Which just further points out how much this a
    • Yes, I thought most of the stuff has been addressed for years too. But I'm confused about this, which is new to me:

      BDP = link_bandwidth * RTT
100 Mbps * 0.050 sec / 8 = 0.625MB = 625KB
      Note: I divide by 8 to convert from bits to bytes communicated.
      So, set your TCP window to the BDP, or 1.25MB.
Where does 0.625MB turn into 1.25MB? If it were double, it might make sense for a send and a receive window, but I doubt that's the case either.

      Is this a typo, or am I missing something in the calculation?

      • It's probably a "better safe than sorry" sort of thing. 0.625mb is just the minimum before you're guaranteed to have poor performance.
      • Where does .625MB turn into 1.25MB? If it was double, it might make sense

        What do you mean "if"?

        TWW

      • Re:Hello 1995 (Score:4, Informative)

        by AKAImBatman ( 238306 ) <akaimbatman AT gmail DOT com> on Thursday January 19, 2006 @10:14PM (#14515340) Homepage Journal
        The Linux kernel automatically doubles the buffer for its own use. In the article:

        Within the Linux 2.6 kernel, the window size for the send buffer is taken as defined by the user in the call, but the receive buffer is doubled automatically. You can verify the size of each buffer using the getsockopt call.


        From the MAN page [linuxmanpages.com]:

        NOTES

        Linux assumes that half of the send/receive buffer is used for internal kernel structures; thus the sysctls are twice what can be observed on the wire.


The article could have explained that better in context. For the most part it's automatic though, so don't worry about it.
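
A quick way to see this for yourself (a sketch; the 625KB figure is just the BDP example from this thread):

    /* Ask for a 625KB receive buffer, then read back what the kernel
       actually reserved; on Linux 2.6 the reported value is roughly
       double the requested one. */
    #include <stdio.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int requested = 625 * 1024;
        int actual = 0;
        socklen_t len = sizeof(actual);

        setsockopt(s, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));
        getsockopt(s, SOL_SOCKET, SO_RCVBUF, &actual, &len);
        printf("asked for %d bytes, kernel reports %d\n", requested, actual);
        return 0;
    }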
        • Linux assumes that half of the send/receive buffer is used for internal kernel structures; thus the sysctls are twice what can be observed on the wire.

          The article could have better explained that in context. For the most part it's automatic though, so don't worry about it.


          Thanks, that is the answer. Hopefully others will see it.

Why is this flagged "Insightful"? I thought it was a very well-written article, and I do a lot of network programming. What is an article about an API designed in 1983, in a language dating back to 1972, supposed to look like? And I doubt the poster actually read it, considering it describes features specific to Linux 2.6 (e.g. I don't think 2.4 actually supported setting SO_{SND,RCV}BUF).
      • Why is this flagged "Insightful"?

        Because most of us know more than you think you do? ;-)

        What should an article about an API designed in 1983 in a language dating back to 1972 supposed to look like?

        Old.

Barring that, definitely not "News for Nerds" or "Stuff that Matters".

        And I doubt the poster actually read it considering it describes features specific to Linux 2.6 (e.g. I don't think 2.4 actually supported setting SO_{SND,RCV}BUF).

You do realize that SO_SNDBUF and SO_RCVBUF are part of the POSIX standard [jaluna.com], don't you?
        • You do realize that SO_SNDBUF and SO_RCVBUF are part of the POSIX standard [jaluna.com], don't you?

          Yeah? So does this mean you think Linux is POSIX compliant? If so, then maybe you should spend more time coding than posting drivel on ./
          • Yeah? So does this mean you think Linux is POSIX compliant?

            For the most part? Yes. It's not fully POSIX compliant, but close enough. Patches exist in the wild that make it 100% POSIX. It's actually been a pretty big thing to the Linux community to reach a compliant state.

            If so, then maybe you should spend more time coding than posting drivel on ./

            I'm sorry, is your point that SO_[SND|RCV]BUF wasn't in 2.4? Or 2.2? Because (as we can see from this pretty manpage [ed.ac.uk] for Linux 2.0) it was. So there's no reason t
          • P.S. I noticed your previous post about physics lectures. You might find this link [fourmilab.ch] to be of great interest. It kind of helps visualize the Special Theory of relativity.

  • Here is the summary:

The Sockets API lets you develop client and server applications that can communicate across a local network or across the world via the Internet. Like any API, you can use the Sockets API in ways that promote high performance -- or inhibit it. This article explores four ways to use the Sockets API to squeeze the greatest performance out of your application and to tune the GNU/Linux® environment to achieve the best results.

    Here is the first paragraph of the article:

    The Sockets
  • by complexmath ( 449417 ) * on Thursday January 19, 2006 @05:15PM (#14513131)
    Tuning socket parameters is great and all, but the real performance problem with socket IO has to do with using select and poll. There are high-performance alternatives (which admittedly tend to vary from OS to OS) that are so far superior that I wouldn't even consider the default methods unless complete code portability were a crucial factor.
    • There are high-performance alternatives (which admittedly tend to vary from OS to OS) that are so far superior that I wouldn't even consider the default methods unless complete code portability were a crucial factor.

      It's funny you should mention this. I was thinking of the class libraries or frameworks, if you will, included with Java, MFC (if it's still used these days), Visual Age, and so on. Does this mean, and are you saying, that the only way to get the best performance from TCP/IP is to roll your own

      • Re:Code Portability (Score:5, Informative)

        by complexmath ( 449417 ) * on Thursday January 19, 2006 @06:00PM (#14513486)
        There was a Boost library in the works to encapsulate all of this rather nicely, but I'm not sure if it ever made it out of beta. ACE is another option, though that tends to be overkill for some projects. I rolled my own class wrapper around this stuff, but then I enjoy library programming.
    • the real performance problem with socket IO has to do with using select and poll
      That is true, but only under workloads where one process has a lot of sockets open. A (somewhat old) article on this subject is here [kegel.com].
      • That is true, but only under workloads where one process has a lot of sockets open. A (somewhat old) article on this subject is here [kegel.com].
        True enough. But how many applications nowadays are written expecting no more than ~32 simultaneous connections?
    • Are select() and poll() really that bad?

Ok, the issue is how many fds you can pass. With select() you are limited to a bitmap's worth (FD_SETSIZE). And performance has never been much of an issue.

      Of course, poll() is a different matter -- if you are passing 100s or thousands of fds.

But what has this got to do with the TCP connection? Not much.

So, you speed up poll() and still write small packets, and Nagle won't write them out immediately... That's about the only connection here.

      Ratboy.
    • by hackstraw ( 262471 ) * on Thursday January 19, 2006 @07:02PM (#14514032)
      Try this:

http://www.xmailserver.org/linux-patches/nio-improve.html [xmailserver.org] (/dev/epoll)

The website is hideous, but there used to be benchmarks against different polling/selecting methods. If I remember correctly, it's kinda trial-and-error, YMMV stuff. It's worth a look.

  • Nothing new (Score:1, Funny)

    by Anonymous Coward
    going from Socket 7 to Socket 462 to Socket 478 boosted it quite a damn bit over the years.
  • Comment removed based on user account deletion
    • Re:GNU/Linux®? (Score:3, Informative)

      by wfberg ( 24378 )
      Most probably it's just IBM policy to always acknowledge some one else's trademarks, so as not to get in trouble. Both GNU (yeah, I know! I knooow..) and Linux are registered trademarks (... of their respective owners, of course..)
    • Because they ARE registered trademarks?

      Duh?
    • Probably because of content provided at http://www.linuxmark.org/ [linuxmark.org]. I'm not 100% sure if that includes GNU/Linux as well, but in terms of the term Linux there is this on that same site: The registered trademark Linux® is used pursuant to a license from Linus Torvalds, owner of the mark in the U.S. and other countries.
...on developerWorks, not the least of which, if I may say so, is the GLib tutorial [ibm.com] I wrote for them this past summer. If you want to learn how to use various GLib collections and utilities - lists, tables, trees, quarks, relations, and all that - check it out. You can even download a nice PDF file for offline perusing.

    Folks who are thinking about writing something technical - give dW a shot. The editors are savvy folks and there's lots of good stuff up there already.

    Oh, and book plug [pmdapplied.com]!
to send signals to a network socket without writing code, but using some ready-made command-line tool (netstat?)? I've looked around for this but can't seem to find anything...
  • Nagle's algorithm (Score:5, Interesting)

    by Jeremi ( 14640 ) on Thursday January 19, 2006 @06:12PM (#14513606) Homepage
    For an application where I want both low latency AND high bandwidth, it's not enough to leave Nagle's algorithm on or off. If I leave it on, I'll get increased bandwidth, but >200ms latency due to the Nagle delay. If I leave it off, I get low latency, but the computer will (typically?) send out one network packet per send() call, which means inefficient use of bandwidth unless the calling code is very careful to call send() only with large amounts of data per call.


    To get around the above problems, I came up with the following scheme: Leave Nagle's algorithm enabled, but create a FlushSocket() function that merely disables Nagle on the socket, then calls send() on the socket with a 0-byte buffer, then enables Nagle again. This apparently forces the TCP stack to immediately send any data that it may have accumulated in its Nagle-buffer. Therefore the only thing the calling code has to remember to do is to call FlushSocket() whenever it has called send() one or more times and doesn't think it will be sending any more data any time soon.


    The above technique seems to work pretty well under Linux, Windows, and OS/X (and is more portable than Linux-specific flags like TCP_CORK, etc), but I haven't seen it documented anywhere. Is that simply an oversight, or is there some nasty downside to this technique that I'm overlooking?

• aren't you just drastically increasing the number of system calls you have to pay for?

      if you have some knowledge about the natural grouping of data, it would be better to just turn Nagle off and do buffering in user space (collect up enough data and send it all in one go)
• if you have some knowledge about the natural grouping of data, it would be better to just turn nagle off and do buffering in user space (collect up enough data and send it all in one go)

          It is not about the "natural grouping" of the data at user space. Most programmers do this "natural grouping" anyway and write the data to the socket in a single buffer only when they want it to be sent. The problem is that sometimes their grouping is not good enough and they perform multiple writes when they could perfor
• I think the best solution would be to have Nagle's on by default to address these issues, and a simple system call flush() that forces the transmission of a segment, to be used whenever you write a small buffer with time-sensitive data.

Not exactly; you'd really want a send() flag to tag data to be sent immediately. No need to slow down your application with another syscall.

          willy
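
For what it's worth, Linux does have something in this spirit: the MSG_MORE flag, which is the inverse of what's proposed; you tag the writes that should wait, and the first un-tagged send() lets the stack transmit. A sketch, assuming a connected TCP socket (the helper name is illustrative):

          #include <stddef.h>
          #include <sys/socket.h>

          /* Illustrative helper: hdr and body should leave as one segment.
             MSG_MORE holds the first write back in the kernel; the final
             un-flagged send() releases everything at once. */
          void send_grouped(int s, const void *hdr, size_t hlen,
                            const void *body, size_t blen)
          {
              send(s, hdr, hlen, MSG_MORE); /* more data coming: don't send yet */
              send(s, body, blen, 0);       /* no flag: transmit now */
          }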
    • Re:Nagle's algorithm (Score:3, Informative)

      by buck68 ( 40037 )
You may be interested in a paper we wrote a few years back [1]. We also started with the premise that some applications require both minimal latency and maximal bandwidth. In our case the application was our own media streaming system. We came up with our own patch to TCP (in Linux). The patch provided a new socket option, which we call TCP_MINBUF. The idea is that you need a certain minimum amount of buffer to allow TCP's congestion window to function, but no more. Indeed, in the paper we show that the
    • To get around the above problems, I came up with the following scheme: Leave Nagle's algorithm enabled, but create a FlushSocket() function that merely disables Nagle on the socket, then calls send() on the socket with a 0-byte buffer, then enables Nagle again.

I tried this in the past and it was not that good because of the added syscalls. In a pure network application, your worst enemy is syscalls. And by avoiding this trick and carefully grouping your data into large writes, you both reduce the number
In the Linux kernel you don't need to do the empty send(). Turning Nagle off causes an immediate flush, so it is enough to strobe Nagle off and then back on.

      At one point I submitted a patch that would add a TCP_FLUSH "option" that saved the TCP_CORK and TCP_NODELAY flag values, called the low-level flush routine, and then reestablished the flags.

      It was rejected. (But I still use it from time to time on my own, love that Open Source. 8-)

      Meanwhile, just drop and restore Nagle as fast as you can, it will save y
In the Linux kernel you don't need to do the empty send(). Turning Nagle off causes an immediate flush, so it is enough to strobe Nagle off and then back on.


        That's a good point -- the only reason the send() is in there was because otherwise the trick doesn't work under MacOS/X. I will #ifndef __linux__ that line in my code though.

Wouldn't it be nice if C programmers were given an option similar to what fflush() does for streams? Something like flush(sd) whenever you need to ignore Nagle's algorithm. That way you could enable and disable nagling dynamically in your program without calling setsockopt() to switch it on and off. Java gives you this option, since you can easily convert a socket to any type of stream you wish, and most Stream objects have a member function flush(). Perhaps I am wrong and such an interface is alre
    • Perhaps I am wrong and such an interface is already provided in C but I personally never found one, while the necessity for it appears to be obvious.


      See my previous post above ("Nagle's Algorithm") for a way to do it.

      • Yes I read your post before and it seems like a nice way to do it. However I am asking for a clean system call solution-flush() that would do this without invoking setsockopt(). Also could you post the code of your Flush function? I find the description a little confusing at some points.
However, I am asking for a clean system-call solution - flush() - that would do this without invoking setsockopt().

          I agree, that would be nice... good luck getting it into the POSIX standard anytime soon though. :^(

          Also could you post the code of your Flush function? I find the description a little confusing at some points.

          Sure, here is the code:

void FlushSocketOutput(int s)
          {
              SetNaglesEnabled(s, false); /* disable Nagle: kicks out buffered data */
              send(s, NULL, 0, 0);        /* zero-byte send to force the flush */
              SetNaglesEnabled(s, true);  /* restore Nagle */
          }
          • That's essentially the solution I would suggest. Note, I have good sockets background, but never needed to do something like this.

            - disable nagle
            - set blocking mode
            - set tcp buffer to 0 bytes
            - write 0 bytes
            - put things back the way they were ...I suspect this would have the fflush()-like functionality he's looking for, not that I've ever tried it!

            Recall that fflush() blocks until the data makes it to disk; I expect he'd want to block until the socket buffers were empt
            • Recall that fflush() blocks until the data makes it to disk; I expect he'd want to block until the socket buffers were empty, too.

              I don't know if that really makes sense for networking though... the reason you'd want fflush() to block until the data makes it to disk is so that once your call to fflush() returns you know that your written data is safe in the event of a crash or power failure. (Although with too-clever hard drive firmware I'm not so sure even that's true anymore!). With networking on the ot

> so what would be the purpose in waiting?

                Beats the hell out of me!

                I've noticed that in the vast majority of instances (but not all) programmers looking for this type of solution are trying to apply a band-aid to a poor design anyhow. I try not to think too hard about poor designs. :)
  • Math error in paper? (Score:3, Informative)

    by Stiletto ( 12066 ) on Thursday January 19, 2006 @07:46PM (#14514385)

    throughput = window_size / RTT

    110KB / 0.050 = 2.2MBps

    If instead you use the window size calculated above, you get a whopping 31.25MBps, as shown here:

    625KB / 0.050 = 31.25MBps


    That's funny, I get 12.5MBps

    ???
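
    (Checking the arithmetic: 625KB / 0.050 sec = 12,500KB/sec = 12.5MBps, so the parent's figure is right. The article's 31.25MBps would require a 1.5625MB window - 2.5x the BDP it just computed.)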
  • I've used Azureus a lot on my Linux box, and one of its features is tunability and graphs. Number of connections, max up and down, etc, and watch the results. Now, I have a very asymmetric line (10:1 ratio). I've noticed that trying to use maximum upload and download at once can create sinewave patterns of slow response that look a lot like resonant feedback, and in extreme cases can wedge the line completely, throughput zero on all net apps. Running uploads at 20K and leaving the top 5K unused gets a far b
I have found that I also get this pattern, and it has a lot to do with overloading the transmit feed on my cable modem. The cable modem works as a strict time-division multiplexer. So to get maximum throughput you want to keep the transmit buffer full, but if you over-fill it, the packets are silently discarded. As your number of connections goes up, the likelihood of overrunning your modem buffer approaches certainty.

I run a Linux firewall, so I put in a six-layer quality-of-service set. I put "ver
This is something a good router should take care of, but very few of them do; not even the customized Linksys routers or Linux routers like Smoothwall do BOTH of the required things.

All you need to do to get outstanding performance on an asymmetric line is the following:

      [ON THE ROUTER]
1. Prioritize TCP ACK packets so they always go upstream first on your connection
      2. Restrict the upload rate to 2% - 5% below the actual upload rate of the connection

      Do these two things, and enjoy a fast connection in both dire
  • GNU/Linux® ®? WTF®
Of course it is rather Windows-centric, but most of the issues apply across platforms (only a few talk about WSA functions).

    However Lame List [tangentsoft.net] contains a lot of wonderful nuggets.

I must disagree with the article, however; there are so SO few times that disabling the Nagle algorithm is the correct answer that the standard answer when someone asks about it on the networking forums is that the asker doesn't understand Nagle, and to re-enable it. Telnet is even a bastard case in that your networking performance

    • by Animats ( 122034 ) on Thursday January 19, 2006 @09:38PM (#14515105) Homepage
      I really should fix the bad interaction between the "Nagle algorithm" and "delayed ACKs". Both ideas went into TCP around the same time, and the interaction is terrible. That fixed timer for ACKs is all wrong.

      Here's the real problem, and its solution.

      The concept behind delayed ACKs is to bet, when receiving some data from the net, that the local application will send a reply very soon. So there's no need to send an ACK immediately; the ACK can be piggybacked on the next data going the other way. If that doesn't happen, after a 500ms delay, an ACK is sent anyway.

      The concept behind the Nagle algorithm is that if the sender is doing very tiny writes (like single bytes, from Telnet), there's no reason to have more than one packet outstanding on the connection. This prevents slow links from choking with huge numbers of outstanding tinygrams.

      Both are reasonable. But they interact badly in the case where an application does two or more small writes to a socket, then waits for a reply. (X-Windows is notorious for this.) When an application does that, the first write results in an immediate packet send. The second write is held up until the first is acknowledged. But because of the delayed ACK strategy, that acknowledgement is held up for 500ms. This adds 500ms of latency to the transaction, even on a LAN.

      The real problem is that 500ms unconditional delay. (Why 500ms? That was a reasonable response time for a time-sharing system of the 1980s.) As mentioned above, delaying an ACK is a bet that the local application will reply to the data just received. Some apps, like character echo in Telnet servers, do respond every time. Others, like X-Windows "clients" (really servers, but X is backwards about this), only reply some of the time.

      TCP has no strategy to decide whether it's winning or losing those bets. That's the real problem.

      The right answer is that TCP should keep track of whether delayed ACKs are "winning" or "losing". A "win" is when, before the 500ms timer runs out, the application replies. Any needed ACK is then coalesced with the next outgoing data packet. A "lose" is when the 500ms timer runs out and the delayed ACK has to be sent anyway. There should be a counter in TCP, incremented on "wins", and reset to 0 on "loses". Only when the counter exceeds some number (5 or so), should ACKs be delayed. That would eliminate the problem automatically, and the need to turn the "Nagle algorithm" on and off.

      So that's the proper fix, at the TCP internals level. But I haven't done TCP internals in years, and really don't want to get back into that. If anyone is working on TCP internals for Linux today, I can be reached at the e-mail address above. This really should be fixed, since it's been annoying people for 20 years and it's not a tough thing to fix.

      The user-level solution is to avoid write-write-read sequences on sockets. write-read-write-read is fine. write-write-write is fine. But write-write-read is a killer. So, if you can, buffer up your little writes to TCP and send them all at once. Using the standard UNIX I/O package and flushing write before each read usually works.

      John Nagle
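
      (To make that user-level advice concrete, here's a sketch using POSIX writev() to hand the kernel two small pieces in one call; the function and buffer names are illustrative:)

      #include <sys/types.h>
      #include <sys/uio.h>

      /* Coalesce a small header and body into a single syscall so they
         leave as one segment - avoiding the write-write-read pattern
         described above. */
      ssize_t send_request(int sock, const void *hdr, size_t hlen,
                           const void *body, size_t blen)
      {
          struct iovec iov[2];
          iov[0].iov_base = (void *)hdr;  iov[0].iov_len = hlen;
          iov[1].iov_base = (void *)body; iov[1].iov_len = blen;
          return writev(sock, iov, 2); /* one syscall, one segment */
      }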

      • Ah, so you are the Nagle of the algorithm? How about an extension onto TCP as a concept:

        you can tell TCP that you are willing to accept d amount of delay, with the default being the 500 ms previously used and assigned. Thus protocols like X could state that they don't need to hang waiting for an ACK, while programs that should hang waiting for ACK will continue to do so.

        This extension would only require recompiling the programs that attempt to not do the prior default action of that delay, such as recompi
  • by bani ( 467531 )
The article is all about TCP, which is great. How about an article on optimizing UDP, though?
What really bums me out about doing network services on the Linux platform is that Linux does not support doors, a la Solaris, so you can't have multiple processes collaborating on a single socket service without a scheduler burp. There was a guy who implemented doors for 2.4, but his code was never adopted into the kernel, and now it's rotting away....

    Linux is quite tragic that way. Hopefully there will be a Debian user-land on the OpenSolaris kernel soon, and then I can rock-n-roll again.
