
HTTP 2.0 Will Be a Binary Protocol

earlzdotnet writes "A working copy of the HTTP 2.0 spec has been released. Unlike previous versions of the HTTP protocol, this version will be a binary format, for better or worse. However, this protocol is also completely optional: 'This document is an alternative to, but does not obsolete the HTTP/1.1 message format or protocol. HTTP's existing semantics remain unchanged.'"
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Makes sense (Score:3, Interesting)

    by thetagger (1057066) on Tuesday July 09, 2013 @01:29PM (#44227569)

    HTTP is the world's most popular protocol and it's bloated and slow to parse.

  • by Animats (122034) on Tuesday July 09, 2013 @01:44PM (#44227775) Homepage

    The big change is allowing multiplexing in one stream. It's a lot like how Flash multiplexes streams.
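
    To make the multiplexing concrete: HTTP/2 interleaves many logical streams over one connection by chopping everything into frames, each tagged with a stream identifier. A minimal sketch in Python of parsing the fixed 9-byte frame header, using the field layout eventually standardized in RFC 7540 (the 2013 draft's framing differed in detail):

```python
def parse_frame_header(buf: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1).

    Layout: 24-bit payload length, 8-bit frame type, 8-bit flags,
    then 1 reserved bit plus a 31-bit stream identifier.
    """
    if len(buf) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(buf[0:3], "big")
    frame_type = buf[3]
    flags = buf[4]
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# A 5-byte DATA frame (type 0x0, no flags) on stream 1:
header = b"\x00\x00\x05" + bytes([0x00, 0x00]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (5, 0, 0, 1)
```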

  • by lgw (121541) on Tuesday July 09, 2013 @01:49PM (#44227847) Journal

    Makes it harder to troubleshoot by using telnet to send basic HTTP commands

    Since we're using a tool in the first place, it's just as easy to use a tool that understands the binary format. Back before open source toolchains had really caught on as a concept, human readable formats were a big plus, because proprietary tools could be hard to come by. Not really a concern these days, as long as the binary format is unencumbered.

  • by Anonymous Coward on Tuesday July 09, 2013 @01:53PM (#44227909)

    This is FAR from a done deal. The binary/ASCII question is being hotly debated.

  • Re:Makes sense (Score:4, Interesting)

    by Trepidity (597) <delirium-slashdot@@@hackish...org> on Tuesday July 09, 2013 @01:57PM (#44227965)

    Not particularly bloated or slow to parse, especially on modern hardware. HTTP/2.0, which is basically a lightly tweaked version of Google SPDY, doesn't even claim speedups of more than about 10%.

  • by Trepidity (597) <delirium-slashdot@@@hackish...org> on Tuesday July 09, 2013 @01:59PM (#44227979)

    It reads almost like they reimplemented all of TCP inside of HTTP, complete with stream set-up and teardown, queuing, congestion control, etc. Why not just use... TCP to manage multiple streams?

  • Re:Makes sense (Score:5, Interesting)

    by dgatwood (11270) on Tuesday July 09, 2013 @02:09PM (#44228129) Homepage Journal

    I frequently get fairly close to the raw protocol, using curl, and have even been known to manually make HTTP requests in a telnet session on occasion. That said, I'm assuming a future version of curl would simply translate the headers and stuff into the historical format for human readability, making this sort of change fairly unimportant in the grand scheme of things.
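
    The telnet workflow being described is easy to reproduce programmatically. A minimal sketch, assuming a plain HTTP server on port 80 (the hostname and helper names are illustrative):

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    """Compose the same plain-text request one would type into a telnet session."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def raw_get(host: str, path: str = "/") -> bytes:
    """Send the request over a bare TCP socket and collect the whole reply."""
    with socket.create_connection((host, 80), timeout=5) as s:
        s.sendall(build_request(host, path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

print(build_request("example.com").decode().splitlines()[0])  # GET / HTTP/1.1
```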

  • What a clusterfuck (Score:4, Interesting)

    by l0ungeb0y (442022) on Tuesday July 09, 2013 @02:13PM (#44228205) Homepage Journal

    Seems it's going binary to have EVERYTHING be a stream, with frame-based communications, different types of frames denoting different types of data, and your "web app" responsible for detecting and handling these different frames. Now I get that there's a lot of demand for something more than Web Socket, and I know that non-Adobe video streaming options such as HLS are pathetic, but this strikes me as terrible.

    Really, why recraft HTTP instead of recrafting browsers? Why not get Web Socket nailed down? Is it really that hard for Google and Apple to compete with Adobe that, instead of creating their own streaming media services, they need HTTP 2.0 to force every server to be a streaming media server?

    Adobe has been sending live streams from user devices to internet services, and doing binary data communication via RTMP, for several years. Meanwhile, HTML5 has yet to deliver on the bandied-about "Device API" (or whatever it's called this week), even though HTML5 pundits have been bashing Flash for years.

    So if Adobe is really that bad and Flash sucks that much, why are we re-inventing HTTP to do what Flash has been doing for years?
    Why can't these players like Apple and Google do this with their web browsers, or is it because none of these players really wants to work together because no one really trusts each other?

    At the end of the day, we all know it's all just one big clusterfuck of companies trying to get in on the market Adobe has with video and the only way to make this happen in a cross-platform way is to make it the new HTTP standard. So instead of a simple text based protocol, we will now be saddled with streaming services that really aren't suited to the relatively static text content that comprises the vast majority of web content.

    But who knows, maybe I'm totally wrong and we really do need every web page delivered over a binary stream in a format not too different from what we see with video.

  • Re:Port multiplier (Score:5, Interesting)

    by tepples (727027) <tepples&gmail,com> on Tuesday July 09, 2013 @02:18PM (#44228283) Homepage Journal

    With TCP, you need a separate port number for each service. For example, a Doom server runs on port 666, and an RDP server runs on port 3389. With Web Sockets, you can put all services behind one port and give each a separate path. For example, ws://example.com/doom lets a client open a Doom session, and ws://example.com/rdp lets a client open an RDP session. The advantage of using a path instead of a port number is that it can host a far larger number of services. For example, two users of a server could run game servers on ws://example.com/~WaffleMonster/doom and ws://example.com/~tepples/doom.
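
    The path-based dispatch described above can be sketched as a simple routing table; the paths and handlers here are hypothetical stand-ins for real service code:

```python
# One listening port, many services, keyed by URL path rather than port number.
HANDLERS = {
    "/doom": lambda client: f"doom session for {client}",
    "/rdp": lambda client: f"rdp session for {client}",
    "/~tepples/doom": lambda client: f"tepples' doom server for {client}",
}

def route(path: str, client: str) -> str:
    """Dispatch an incoming connection by request path."""
    handler = HANDLERS.get(path)
    if handler is None:
        return "404: no such service"
    return handler(client)

print(route("/doom", "client-1"))  # doom session for client-1
```
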
  • Re:Makes sense (Score:5, Interesting)

    by omnichad (1198475) on Tuesday July 09, 2013 @02:23PM (#44228337) Homepage

    Except cookies. And even worse - ViewState variables posted on badly coded .NET applications. Some of those are near the hundred kilobyte range.

  • Hyper TEXT (Score:2, Interesting)

    by Jeremiah Cornelius (137) on Tuesday July 09, 2013 @02:56PM (#44228771) Homepage Journal

    What part of the term "HyperTEXT" did the working group fail to understand?

    Really, outside of snooping/privacy discoveries, this is the SINGLE WORST piece of news I have seen in 15+ years on Slashdot. We could see this coming with the push for DRM in the HTTP spec. Here we watch the other shoe drop.

    "They" don't like this web. They are entrenched media and information companies, and the elite financial and political powers that rely on framing/controlling public information - while collecting rents for doing so.

    You think that your issues with security and control of your own platform are bad now? Wait till your browser is rejected by sites you need to do business with, because you won't parse HTML/HTTP 2.0. Link this with crap like Intel TPM/TXT and you are well into Orwell Telescreen territory.

    "Optional"? Five years from now, we'll all see if you can get your driver's license, pay your phone bill or shop at Amazon without this one...

  • Re:Makes sense (Score:3, Interesting)

    by Anonymous Coward on Tuesday July 09, 2013 @04:06PM (#44229831)

    You can write a simple HTTP/1.0 parser, maybe. Try implementing HTTP/1.1 Pipelining some time.

    Also, most HTTP parsers don't obey MIME rules when parsing structured headers. Regular expressions are insufficient. The vast majority of HTTP libraries don't fully support the specification, even at the 1.0 level. But most don't notice, because you never see those complex constructs, except perhaps from an attacker trying to get around a filter--where one implementation perceives X and the other Y.

    I've written an HTTP+RTSP+RTP non-blocking, re-entrant, restartable (no callbacks, which makes application composition easier) C library for streaming media. Libraries like curl just don't cut it for anything serious.

    If your idea of implementing something relies on using regular expressions for parsing, I can guarantee you that you're not properly implementing the specification. I don't know of any useful specification, save maybe SPF (which has a broken syntax that relies on NFA backtracking behavior), that doesn't involve grammar constructs beyond the capabilities of regular expressions.
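
    One reason pipelining is hard to implement correctly: responses arrive back-to-back on one connection, so the client must frame each one itself. A deliberately simplified sketch, assuming every response declares an explicit Content-Length (real parsers must also handle chunked transfer coding, HEAD responses, and more):

```python
def split_pipelined(buf: bytes) -> list[bytes]:
    """Split a byte stream holding back-to-back HTTP responses.

    Simplification: every response is assumed to declare Content-Length.
    """
    responses = []
    while buf:
        head, sep, rest = buf.partition(b"\r\n\r\n")
        length = 0
        for line in head.split(b"\r\n")[1:]:  # skip the status line
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value.decode())
        responses.append(head + sep + rest[:length])
        buf = rest[length:]
    return responses

stream = (b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi"
          b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n")
print(len(split_pipelined(stream)))  # 2
```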

  • Re:Makes sense (Score:5, Interesting)

    by Chelloveck (14643) on Tuesday July 09, 2013 @04:09PM (#44229873) Homepage

    Binary vs. text doesn't make any real difference for debugging. Ethernet frames are binary, IP is binary, TCP is binary. We cope just fine. It may be more difficult to do a quick-and-dirty "echo 'GET / HTTP/2.0' | nc localhost 80", but so what? You can still use HTTP/1.1 or even HTTP/1.0 for that, and you're going to haul out the packet analyzer for any serious debugging anyway.

    What I really don't like is that they're multiplexing data over a single TCP connection. I understand why they're doing it, but it seems like re-inventing the wheel. Congestion control is tricky to get right so I see HTTP/2.1 and HTTP/2.2 coming out hot on the heels of HTTP/2.0 as they iron out problems that have already been solved elsewhere.
