HTTP 2.0 Will Be a Binary Protocol 566
earlzdotnet writes "A working copy of the HTTP 2.0 spec has been released. Unlike previous versions of the HTTP protocol, this version will be a binary format, for better or worse. However, this protocol is also completely optional: 'This document is an alternative to, but does not obsolete the HTTP/1.1 message format or protocol. HTTP's existing semantics remain unchanged.'"
Re:Binary protocol.. and what else? (Score:5, Informative)
It's nice to have a link to the draft. But couldn't we have just a little more "what's new" than "it's binary"? This is slashdot... Filled with highly technical people. At least a rundown of the proposed changes would be very helpful in a discussion. The fact that they're proposing a binary protocol doesn't really matter to anyone besides those who want to telnet to a port and read the protocol directly.
From a quick glance, multiple transfers and communication channels ("streams" in the draft's lingo) can be put through a single connection, cutting down on TCP connection negotiations.
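A rough sketch of what that binary framing looks like. The field sizes below follow the layout the HTTP/2.0 drafts settled on (24-bit length, 8-bit type, 8-bit flags, 31-bit stream identifier); the function name and values are just for illustration:

```python
import struct

def pack_frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    """Pack one binary frame: a fixed 9-byte header followed by the payload.

    Big-endian layout: 24-bit payload length, 8-bit frame type,
    8-bit flags, then 1 reserved bit + 31-bit stream identifier.
    Frames belonging to different streams can be interleaved on one
    TCP connection, which is where the multiplexing comes from.
    """
    length = len(payload)
    header = struct.pack(">BHBBI",
                         (length >> 16) & 0xFF,  # high byte of 24-bit length
                         length & 0xFFFF,        # low 16 bits of length
                         ftype,
                         flags,
                         stream_id & 0x7FFFFFFF) # reserved bit cleared
    return header + payload
```

Parsing is the mirror image: read 9 bytes, you know the type, the stream, and exactly how many payload bytes follow -- no scanning for CRLFs.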
Re:Makes sense (Score:5, Informative)
Nope, that's like saying hamburgers are a core part of cows.
You make hamburgers out of cows, you don't make cows out of hamburgers.
You make TCP out of IP, you don't make IP out of TCP.
Re:Makes sense (Score:5, Informative)
It might be bloated and slow. But it is also easily extendable and human readable.
Human readable yes, extendable no. Well, it's not extendable in any meaningful way. Even though it looks like it at a quick glance, if you read the spec you quickly realize there really is no generic structure to a message -- you cannot parse an HTTP request if you do not fully understand it. Even custom headers like the commonly used X-Foo-Whatever are impossible to parse or even simply ignore, so implementations just use unspecified de-facto parsing rules and pray to the web gods that it works.
This makes HTTP parsers very complicated to write correctly, and even more so if you want to build a library for others to extend HTTP with. This isn't a text vs. binary issue, but simply a design flaw. Hopefully HTTP 2.0 fixes this.
As they say, HTTP 1.1 isn't going anywhere -- this'll be a dual-stack web with 2.0 being used by new browsers and 1.1 still available for old browsers/people.
Re:Makes sense (Score:3, Informative)
*silence*
Re:Makes sense (Score:3, Informative)
Ditto.
The HTTP header is miniscule compared to the HTML/images on the web page. Making it binary is a Stupid Fucking Idea.
Re:Does it improve coders? (Score:4, Informative)
Whatever problems you imagine are already being suffered in the form of SPDY. HTTP/2.0 emerged from SPDY and SPDY is supported by popular clients including Chrome and Firefox which handle traffic from Google, Twitter and Facebook, all of whom are serving SPDY today.
Wireshark has been picking apart SPDY for a couple years now. Developers see decomposed HTTP traffic in their browser consoles or HTTP APIs with little awareness of SPDY, so they rarely care.
Bandwidth costs money. Those rare instances when someone has to bench-check an HTTP transaction using a raw TCP stream have a really low priority.
Re:Makes sense (Score:4, Informative)
What?
First, compression is semantically identical to the uncompressed data. Anything "leaked" by compression would be "leaked" by the uncompressed data.
Second, compression removes redundancy, making it more difficult to predict the cleartext. Which is why it's common to compress data before encryption (assuming your application is amenable; full disk encryption would be a case where compression would be awkward.)
Re:Can someone describe in human terms (Score:4, Informative)
The payloads are already often gzipped - but it's one connection per one file most of the time. If you need images, CSS, Javascript, more tiny images, then those are all separate HTTP connections and on some servers, they are serially requested over one connection. Combine it into one connection, and you can parallelize the download process into one connection, prioritize HTML over resources, and only send cookies once instead of once per file.
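Back-of-envelope arithmetic for the cookie point above. The numbers are made up for illustration (a typical page load of ~30 requests, one medium-sized session cookie):

```python
# HTTP/1.1 resends every header, including cookies, on every request
# for every image/CSS/JS file. A multiplexed connection that sends the
# cookie once avoids all that repetition. Values here are illustrative.
cookie = b"Cookie: session=0123456789abcdef0123456789abcdef\r\n"

requests_per_page = 30              # HTML + CSS + JS + images
resent_every_time = len(cookie) * requests_per_page
sent_once = len(cookie)

print(resent_every_time, "bytes resent vs", sent_once, "bytes once")
```

Multiply by every request your servers handle and "bandwidth costs money" stops being hypothetical.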
Re:Makes sense (Score:4, Informative)
The mechanism for spanning multiple lines is starting continuation lines with white space. If you read a line feed followed by a non-whitespace character, you know the current header is over and the next one is starting.
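That folding rule fits in a few lines of code. A minimal sketch (error handling omitted, function name is mine): a line starting with a space or tab extends the previous header's value, anything else starts a new header.

```python
def parse_headers(raw: bytes) -> list:
    """Parse an HTTP/1.x header block, honoring folded (continuation)
    lines: a line beginning with SP or HTAB continues the previous
    header's value. Returns a list of (name, value) pairs."""
    headers = []
    for line in raw.decode("ascii").split("\r\n"):
        if not line:          # blank line: end of the header block
            break
        if line[0] in " \t" and headers:
            # Continuation line: append to the previous header's value.
            name, value = headers[-1]
            headers[-1] = (name, value + " " + line.strip())
        else:
            name, _, value = line.partition(":")
            headers.append((name.strip(), value.strip()))
    return headers
```

Note this happily parses X-Foo-Whatever headers it has never seen, which is the crux of the disagreement upthread: the line *syntax* is generic even if the *semantics* of each header aren't.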
Re:Makes sense (Score:5, Informative)
Re: Makes sense (Score:4, Informative)
Yeah, let's hinder the 99.999% scenario for the benefit of the 0.001% one.
Rationale (Score:5, Informative)
The rationale for http-2.0 is available in the http-bis charter. [ietf.org] Quoting the spec:...
As part of the HTTP/2.0 work, the following issues are explicitly called out for consideration:
It is expected that HTTP/2.0 will:
Re:Makes sense (Score:4, Informative)
Everything about your post is wrong. Cookies are transmitted with each request. ASP.NET Webforms' viewstate is transmitted to and from each page, if enabled. HTTP 2.0 doesn't solve either of these problems, though, and the text nature of most HTTP requests has very little to do with the speed of the protocol. Changing to a binary protocol will likely shave off TENS of bytes on each request. You won't notice, even with a timer accurate to the millisecond.
Re:Makes sense (Score:5, Informative)
No, the parent is right, and this weakness has been demonstrated in recent HTTPS attacks like BEAST and CRIME.
It works like this. You visit a site that has malicious JavaScript which sends an HTTPS request to some site (like your bank). This request will include whatever known plaintext the JavaScript wants to send, *plus* any cookies you have stored for the target site, possibly including authentication cookies. If the plaintext happens to match part of that authentication cookie, then the compressed headers will be smaller than if they don't match. If the attacker can monitor this encrypted traffic and see the sizes of the packets, then they can systematically select the known plaintext to slowly learn the value of the authentication cookie.
This can be done today in about half an hour. And the attack setup is feasible - consider a public WiFi access point that requires you to keep a frame open in order to use their WiFi. This gives them both the MITM and JavaScript access needed to perform this attack.
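The length side channel is easy to demonstrate with zlib standing in for TLS-level compression (the secret and the URLs below are made up, and real CRIME works against the TLS layer, not plain zlib, but the principle is identical):

```python
import zlib

def observed_size(secret_cookie: bytes, attacker_guess: bytes) -> int:
    """What a passive network observer sees: the length of the
    compressed request. DEFLATE back-references repeated substrings,
    so a guess that duplicates the secret compresses better."""
    request = (b"GET /?q=" + attacker_guess + b" HTTP/1.1\r\n"
               b"Cookie: session=" + secret_cookie + b"\r\n")
    return len(zlib.compress(request))

secret = b"d2c941f08ab37e65"                       # unknown to the attacker
right = observed_size(secret, secret)              # guess repeats the secret
wrong = observed_size(secret, b"zyxwvutsrqponmlk") # shares nothing with it
```

A correct guess yields a smaller compressed request than a wrong one. A real attack extends the guess one character at a time, watching for the size drop, which is exactly the "systematically select the known plaintext" step described above.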
Sorry for posting as AC - slashdot logged me out and I have a meeting in 5 minutes.