HTTP/2 Zero-Day Exploited To Launch Largest DDoS Attacks In History (securityweek.com)

wiredmikey writes: A zero-day vulnerability named 'HTTP/2 Rapid Reset' has been exploited by malicious actors to launch the largest distributed denial-of-service (DDoS) attacks in internet history. One of the attacks seen by Cloudflare was three times larger than the record-breaking 71 million requests per second (RPS) attack the company reported in February. Specifically, the HTTP/2 Rapid Reset DDoS campaign peaked at 201 million RPS at Cloudflare, while Google observed a DDoS attack that peaked at 398 million RPS. The new attack method abuses an HTTP/2 feature called 'stream cancellation' by repeatedly sending a request and immediately canceling it.
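For illustration, here is a minimal sketch of that request/cancel pattern at the frame level, using the Python h2 library (assuming the h2 package is installed). This is an illustration, not the attackers' code: the hostname is a placeholder, and the frames are only assembled locally, never sent.

    import h2.connection
    import h2.errors

    # Client-side HTTP/2 connection state machine; nothing is transmitted.
    conn = h2.connection.H2Connection()
    conn.initiate_connection()

    # Each iteration opens a stream (HEADERS) and immediately cancels it
    # (RST_STREAM). A reset stream stops counting against the server's
    # "max concurrent streams" limit, so the loop never exceeds it.
    for stream_id in range(1, 2001, 2):  # client-initiated stream IDs are odd
        conn.send_headers(stream_id, [
            (":method", "GET"),
            (":path", "/"),
            (":scheme", "https"),
            (":authority", "victim.example"),  # placeholder target
        ], end_stream=True)
        conn.reset_stream(stream_id, error_code=h2.errors.ErrorCodes.CANCEL)

    wire = conn.data_to_send()
    print(len(wire), "bytes encode 1,000 request/cancel pairs")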
  • Either nobody was aware of the security implication or nobody cared. Both are really bad. Amateurs have no business designing Internet protocols.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Paid professionals and experts screw up too, if you know any history of kernel DoS bugs, DDoS's, CVEs, processor bugs, bridge/tower collapses, malpractice, or insurance for any of these.
      Nobody has any business designing Internet protocols, but it happens anyway. Sorry man.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Many eyes strikes again.

      • In a sense you are correct. There are simply too many eyes in the committees that form these protocol opinions; at some point someone breaks down and lets something pass just to get it through committee review so they don't have to sit through yet another week of meetings on the damn topic.

        HTTP/2 is a solution looking for a problem. They reimplement TCP on UDP even though delayed ack has been a thing for well over a decade, at the same time breaking literally years of firewall rules so that in practice…

        • by gweihir ( 88907 )

          Indeed. And that is what makes the whole bunch of designers of HTTP/2 amateurs: They tried to fix something that was not broken and made it worse. Such people are offensive in their arrogance and stupidity.

    • by Casandro ( 751346 ) on Tuesday October 10, 2023 @02:44PM (#63916025)

      RFCs are like opinions: they reflect your goals and values.
      HTTP/2 was mostly developed by the big Web oligopolies. They have different values or goals than you and I might have. For example, they like complexity, as complexity means there will be fewer implementations, meaning that their implementations will be more popular and that they can control more of the Web. For a company like Cloudflare, such a bug is heaven-sent, as it greatly increases the number of potential customers. For cloud hosters, such a bug means more sales, as auto-scaling will spin up more instances.

      In short, while HTTP/2 and HTTP/3 are terrible ideas for most people, there are companies that benefit from them.

      • Can you give some specific examples? I'm an expert in the standards and would be curious what you view as corporately-influenced features, overcomplexities, or terrible ideas.
        • Well, first of all, HTTP/2 is already highly complex. It adds features that make little sense, like having multiple streams. If you want that, you can just have multiple TCP connections... or you can use request pipelining in HTTP/1.1.
          In fact, we see one of those features being abused here.

          There also doesn't seem to be much actual performance benefit from HTTP/2, except in some contrived scenarios.

      • HTTP/2 was mostly developed by the Google children. Any attempts to put a handbrake on some of the more stupid ideas they'd dreamed up for their shiny new protocol were either ignored or shouted down. What Google wants, Google gets.
        • by gweihir ( 88907 )

          Indeed. And Google engineering, while never good, got worse and worse over the years.

      • by gweihir ( 88907 )

        So you think "malicious intent" instead of "incompetence"? Could be. Of course, with too much of that approach, the world burns.

  • Meh, I remember when Slashdot was a thing.
  • I've always disabled HTTP/2 on every public facing server. Glad I made the right decision.
  • How it works (Score:5, Informative)

    by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Tuesday October 10, 2023 @02:12PM (#63915877) Homepage

    HTTP/2 defines a "max concurrent streams" limit. The server advertises the limit but relies on the client to stay under it. In this case, the client can issue new requests without waiting on the connection's latency. The protocol has other mechanisms to give the client throttling feedback and ultimately reject requests if a client gets out of hand.

    HTTP/3 (or QUIC, rather) instead defines a "max usable stream ID" that is incremented by the server at its leisure. In this case, you no longer have unlimited requests and depend on the server advancing that max ID to avoid latency bottlenecks. It makes the feedback mechanism a little more explicit, and maintaining max throughput a little more complex.

    The "rapid reset" attack exploits HTTP/2 leaving this up to the client: the client can open and close a ton of requests in a single packet, and if the server isn't expecting it, it might begin processing each one of them, all while technically staying under the concurrency limit, because the streams are immediately cancelled.

    (I code HTTP clients/servers)
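    For the mitigation side, a rough sketch (my own illustration with made-up thresholds, not any particular server's code): patched servers generally track how often each connection cancels streams and close connections whose reset rate looks like an attack.

        import time

        class RapidResetGuard:
            """Per-connection tally of client-sent RST_STREAM frames."""

            def __init__(self, max_resets=100, window_seconds=1.0):
                self.max_resets = max_resets  # illustrative threshold
                self.window = window_seconds
                self.count = 0
                self.window_start = time.monotonic()

            def on_client_rst_stream(self):
                """Call per RST_STREAM received; True means the connection
                should be closed (e.g., by sending a GOAWAY frame)."""
                now = time.monotonic()
                if now - self.window_start > self.window:
                    self.count = 0  # start a fresh window
                    self.window_start = now
                self.count += 1
                return self.count > self.max_resets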

    • by arglebargle_xiv ( 2212710 ) on Tuesday October 10, 2023 @03:42PM (#63916247)

      Dr. Egon Spengler: There's something very important I forgot to tell you.

      Dr. Peter Venkman: What?

      Dr. Egon Spengler: Don't rapid-reset the streams.

      Dr. Peter Venkman: Why?

      Dr. Egon Spengler: It would be bad.

      Dr. Peter Venkman: I'm fuzzy on the whole good/bad thing. What do you mean, "bad"?

      Dr. Egon Spengler: Try to imagine all legitimate network traffic as you know it stopping instantaneously and every network connection on your server exploding at the speed of light.

      Dr. Raymond Stantz: Total internet traffic reversal.

      Dr. Peter Venkman: Right. That's bad. Okay. All right. Important safety tip. Thanks, Egon.

  • Sending multiple requests in bulk is something HTTP has supported since at least version 1.1. Why do people claim that this is new to HTTP/2, and why do they apparently multiplex streams for this?

    • In HTTP/2 responses can be received interleaved and out of order.

      Some HTTP/1.1 clients don't do pipelining because the in-order responses cause head-of-line blocking. The preference is often towards using multiple connections to avoid this unintentional latency dependency between responses.

      • Yes, but then again, we are talking about websites. There is no reason why the server should not be able to send data at line speed, and we are talking about small objects. If the TCP connection blocks because of retransmissions, there's nothing HTTP/2 can do about it.

        • Re: (Score:3, Informative)

          Requests have different latencies. One API call might take 1 second for the server to do something, and the UX isn't great if that blocks your 5ms image retrieval from SSD.

          Pipelining can be very appropriate for server-to-server communication if used intentionally and with an understanding of this issue, but this unpredictability is inappropriate for browsers or as a default on "general" clients.

    • by Anonymous Coward

      Why do people claim that this is new to HTTP/2, and why do they apparently multiplex streams for this?

      HTTP pipelining (with bulk requests) is not the same as HTTP/2 multiplexing. HTTP/1.1 pipelining requires that responses be sent in the same order as the requests, which means a fast response must still wait for any potentially slower response ahead of it to be sent first.
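      To make the difference concrete, here is a toy model (request names and timings are made up, reusing the 1-second-call-vs-5ms-image example from the thread above):

        # Server-side processing time per request, in milliseconds.
        processing_ms = {"slow-api-call": 1000, "image-a": 5, "image-b": 5}

        # HTTP/1.1 pipelining: processing may overlap, but responses are
        # delivered in request order, so none finishes before its predecessor.
        done = 0
        pipelined = {}
        for name, ms in processing_ms.items():
            done = max(done, ms)
            pipelined[name] = done

        # HTTP/2 multiplexing: responses interleave, so each request
        # completes on its own schedule.
        multiplexed = dict(processing_ms)

        print("pipelined:  ", pipelined)    # images blocked until 1000 ms
        print("multiplexed:", multiplexed)  # images done at 5 ms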

  • by Gibgezr ( 2025238 ) on Tuesday October 10, 2023 @04:58PM (#63916483)

    I've read several articles on this, and they are all the same: they state everything except who the attacks were targeted at. I mean, we know that Cloudflare and Amazon AWS are mitigating it, but who were the attacks actually aimed at?