HTTP/2 Finalized 171
An anonymous reader writes: Mark Nottingham, chair of the IETF HTTP working group, has announced that the HTTP/2 specification is done. It's on its way to the RFC Editor, along with the HPACK specification, where it'll be cleaned up and published. "The new standard brings a number of benefits to one of the Web's core technologies, such as faster page loads, longer-lived connections, more items arriving sooner and server push. HTTP/2 uses the same HTTP APIs that developers are familiar with, but offers a number of new features they can adopt. One notable change is that HTTP requests will be 'cheaper' to make. ... With HTTP/2, a new multiplexing feature allows lots of requests to be delivered at the same time, so the page load isn't blocked." Here's the HTTP/2 FAQ, and we recently talked about some common criticisms of the spec.
IE once again kills innovation (Score:3, Interesting)
MS will only support this in Spartan on Windows 10, which means 7 and XP are already out of luck.
Re: (Score:3, Informative)
However, you are correct that IE 11 on Windows 7 is not going to support HTTP/2.
Re: (Score:3, Insightful)
Which will mean one of three things:
1) HTTP/2 will not be used in the wild for anything important, due to the sheer number of Win7/8 machines out there with IE versions that don't support it.
2) Users on Windows will move away from IE. Once people leave they don't typically come back, so IE will eventually become an also-ran.
3) Microsoft will fear losing too much browser market share, backpedal, and backport Spartan.
Re:IE once again kills innovation (Score:5, Insightful)
4) People enable it on their servers and those with browsers that do support it enjoy the benefits (and possibly some of the side-effects) that it brings, while everybody else will either chug along on HTTP/1.1 or even HTTP/1.0, or switch to a browser that does support HTTP/2, and whether or not older versions of IE support it remains a non-issue.
Re:IE once again kills innovation (Score:5, Insightful)
If people do "switch to a browser that does support http/2", it will be for some other reason unrelated to the protocol - getting http/2 support will be serendipitous. Very few people other than tech heads are going to care about the protocol, one way or the other... they won't even be aware of it.
Re:IE once again kills innovation (Score:5, Funny)
The masses will switch as soon as they see "Did you know Facebook could be faster --> Click Here ---"
Re: (Score:2)
The masses will switch as soon as they see "Did you know Facebook could be faster --> Click Here ---"
Somehow I doubt that'll be any more effective than those old website buttons that said "This site is optimized for Netscape Navigator - click here to download".
Re:IE once again kills innovation (Score:4, Funny)
The web is a different place than it used to be. Let me take you back to 199[345].
There were four kinds of Internet Users:
Group 1)
Has just arrived at your GeoCities page with its "optimized for Netscape" banner after following several webring links. They had only recently finished unboxing their Packard Bell and working out the relationship between the mouse and the cursor.
They were sitting in front of Windows 3.1x, feeling a mix of awe and pride in their AOL dialing skills and terror that they might somehow break this machine, having just spent nearly a month's salary on it because the kid's teacher said they should get a PC. They were not about to download anything, let alone install it. They still had the shakes from the last time they tried something like that, and continued to wonder who this Gen. Protection Fault is and what he did to their computer.
Group 2)
Were practically experts by today's standards. They maybe had a 286 from a few years back and remembered some DOS commands. This, and their command of cutting and pasting into Notepad from "View Source" in Navigator, enabled them to FTP their very own page to GeoCities, which the folks in group 1 are now viewing.
Group 3)
Has some professional or academic experience using a platform other than DOS and Netware. They are already frustrated by the lack of development the X11R2 edition of Navigator is seeing. It's fine though, because all the stuff they think is really worthwhile is still available via BBS, and someone was good enough to install Lynx and an internet gateway in case they do want to look at GeoCities. They had formed their opinion about which browser was good and proper, and nothing was going to make them change, EVER.
Group 4)
Mac users. This group was small and mostly ashamed of themselves during this period. They clung to the belief that their shitty platform was in some way superior to Microsoft's shitty platform running on a Packard Bell (it wasn't). They really did not have anything to choose from besides Netscape, no matter what the banners indicated, and they knew it.
In short, things were nothing like today; well, actually group 3 hasn't changed much. Groups 1 and 2 merged, but the fear is gone. These people will run anything now. Ask them to put their password in so they can run NoIreallyAmATrojanLookingToStealYourOnlineBankingPassword.exe and they probably will, if you promise them some extra Facebook likes on their posts or something.
Group 4) is all self-assured again. Some group 3 folks are joining them, although they still don't really mix at parties.
Re: IE once again kills innovation (Score:2)
The problem is that HTTP/2 is for web services. If you have to support older IE, and hence HTTP/1, then you can't do the same things... or you have to write two different versions of your site.
Re:IE once again kills innovation (Score:5, Interesting)
3) Microsoft will fear losing too much browser market share, backpedal, and backport Spartan.
Didn't work for DirectX, don't think it will work for this.
Re: (Score:2)
Windows XP is completely out of support, while Windows 7 is out of mainstream support (as of January 2015), so not supporting them with new features is only right.
Re: (Score:2)
Microsoft's influence over anything is slipping away. I'm finding less need for anything Microsoft in my IT world these days. Each app has become such a behemoth that it's crushing the life out of the whole experience.
The problem is that Office is such a behemoth that it is almost unusable, yet everyone requires people to use apps like Word when all they need is WordPad.
Re: (Score:2)
HTML/2's strong point is centered around Web Services communication. So this is non-browser to non-browser communication.
Re: IE once again kills innovation (Score:4, Funny)
It's hard to take your knowledge in this matter seriously, since you call it HTML/2.
Re:IE once again kills innovation (Score:4, Funny)
Re: (Score:2)
I get that response all the time. Then they tell me it needs to support version 7 of "the internet".
Re: (Score:2)
Re: (Score:2)
Webservers are going to have to support both for years.
Applications are going to have to support both for years, possibly eternity. The whole HTTP 2.0 process was driven mostly by Google, who wanted HTTP changed to reduce the load on their servers (heaven knows what sort of uproar would have resulted if Microsoft had tried this sort of thing). Unfortunately the resulting design, while it may make Google's job easier, is incredibly difficult to implement for things like embedded devices. The HTTP 2.0 WG's response when this was pointed out, repeatedly, was "le
Heaven forbid! Actual news for nerds on Slashdot! (Score:5, Funny)
Re: (Score:3)
And don't forget... (Score:3)
...to pay your $699 licensing fee you cock-smoking teabaggers.
(binary protocol)-- (Score:3)
I'm really going to miss being able to telnet to a server and troubleshoot using plain text. Feels like a lot of simple has disappeared from the internet
Re:(binary protocol)-- (Score:5, Informative)
I'm really going to miss being able to telnet to a server and troubleshoot using plain text. Feels like a lot of simple has disappeared from the internet
Yes, HTTP/2 is a multiplexing binary framing layer, but it has all the same semantics as HTTP/1.x on top.
HTTP/2 is 'just' an optimization. So if your webserver supports HTTP/1.x and HTTP/2 then you can still use telnet to check whatever you want and it should give the same result as HTTP/2.
But you also have to remember:
The IAB, the architecture board of the IETF (the people who design the Internet protocols), made this statement:
https://www.iab.org/2014/11/14... [iab.org]
"Newly designed protocols should prefer encryption to cleartext operation."
The W3C made a similar statement, and there are also drafts with the intention of moving to HTTPS by default.
So it is all moving to TLS-based protocols like HTTPS, or STARTTLS for SMTP, anyway. Those are clearly not plain-text protocols either.
So even if you want to interact with the text protocol used inside that TLS-encrypted connection, you'll need to use a tool because netcat or telnet won't cut it.
Let's look at HTTP, because this is a HTTP/2 article.
That tool could be openssl s_client, but as you can see it is kind of cumbersome:
echo -en "HEAD / HTTP/1.1\nHost: slashdot.org\nConnection: close\n\n" | openssl s_client -ign_eof -host slashdot.org -port 443 -servername slashdot.org
But I suggest you just use:
curl -I https://slashdot.org/ [slashdot.org]
The main developer of cURL works for Mozilla these days; he is one of the people working on the HTTP/2 implementation in Firefox and is writing a document explaining HTTP/2: http://daniel.haxx.se/http2/ [daniel.haxx.se]
So, as you would expect, curl supports HTTP/2:
https://github.com/http2/http2... [github.com]
Basically every browser includes "developer tools" which will also let you see the headers and everything else you are used to from HTTP/1.x.
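For what it's worth, checking whether a particular site already negotiates HTTP/2 is a one-liner too. This is just a sketch, assuming a curl build with HTTP/2 support (i.e. linked against nghttp2), with slashdot.org purely as an example host:
# ask for HTTP/2; curl falls back silently if the server can't do it
curl -sI --http2 https://slashdot.org/ | head -1
If the server agreed to HTTP/2, the status line will say so (e.g. "HTTP/2 200"); otherwise you'll just see the usual "HTTP/1.1 200 OK".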
I would rather see us all move to using encrypted protocols than keep the ability to use telnet.
Re: (Score:2)
echo -en "HEAD / HTTP/1.1\nHost: slashdot.org\nConnection: close\n\n" | openssl s_client -ign_eof -host slashdot.org -port 443 -servername slashdot.org
Thanks, Lennie. I'm self-learning the whole HTTP (simple) and TLS (not simple) protocols out of interest as I work as an ad-hoc server admin (devops). This one-liner answers a whole slew of questions that I had, and points me in the direction to learn others. Thank you!
Re: (Score:2)
I know, that is why I mentioned:
"Let's look at HTTP, because this is a HTTP/2 article."
Anyway, for those that don't know:
echo -en "MAIL FROM:\nRCPT TO:\nDATA\nSubject: test messsage\n\ntest message body\n\n.\nquit\n" | openssl s_client -host gmail-smtp-in.l.google.com. -port 25 -starttls smtp -ign_eof
Re: (Score:2)
Ahh, cool, didn't know nmap included that.
Re: (Score:2)
How do you test HTTPS currently with telnet? You don't; you use a tool (openssl s_client -connect ip:port), and then you test like you would with telnet.
With HTTP/2, you will also have to use a tool to connect... then you can do whatever you want.
(And by the way, Chrome and Firefox will only allow HTTP/2 with TLS, so even if it were plain text, you would still need to use openssl s_client to test.)
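If you want to see that negotiation happen at the TLS layer, here's a rough sketch, assuming an OpenSSL new enough to have the -alpn option in s_client, and with example.com standing in for whatever host you actually want to test:
# "h2" is the ALPN token browsers use for HTTP/2 over TLS
openssl s_client -alpn h2 -connect example.com:443 -servername example.com </dev/null 2>/dev/null | grep -i "alpn"
A line like "ALPN protocol: h2" means the server agreed to speak HTTP/2 on that connection; no such line means you're back to HTTP/1.1.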
Some ad networks are still HTTP-only (Score:2)
And by the way, Chrome and Firefox will only allow HTTP/2 with TLS
Will they support HTTP/2 opportunistic encryption [github.io] (TLS with the http scheme)? Or will webmasters have to negotiate with their ad networks and other third-party providers for support for the https scheme before switching to HTTP/2?
Great if optimizing the wrong thing is your thing (Score:5, Insightful)
Web pages being slow has nothing to do with HTTP and everything to do with bloat eating every bit of available performance, courtesy of the marketers who are in charge of the web now. A binary protocol to save a couple of bytes, in times when every web page loads at least three "analytics" scripts weighing dozens of kilobytes each. Marketing rots the brain.
Re: (Score:2)
Re: (Score:3)
Re:Great if optimizing the wrong thing is your thi (Score:5, Informative)
The main problem isn't the size of the stuff that gets loaded. What's dozens of kb these days? Even a megabyte takes a tenth of a second to transfer on my connection. The problem is latency: it takes more than 100ms for that megabyte of data to even start flowing, let alone get up to speed. That's what the multiplexing is intended to deal with.
That said, the root cause of all this is the sheer amount of unnecessary stuff on pages these days. Fancy multiplexing or not, no request can finish faster than the one that's never made.
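You can get a rough feel for how much of a page fetch is latency rather than transfer using curl's built-in timing variables; this is just an illustration, with example.com standing in for a real page:
# each variable is reported in seconds from the start of the transfer
curl -so /dev/null -w 'dns: %{time_namelookup}  connect: %{time_connect}  tls: %{time_appconnect}  first byte: %{time_starttransfer}  total: %{time_total}\n' https://example.com/
On a typical broadband line most of the total ends up in the connect/TLS/first-byte columns, not in the actual data transfer, which is exactly the part multiplexing goes after.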
Pay per bit still exists (Score:3)
What's dozens of kb these days? Even a megabyte takes a tenth of a second to transfer on my connection.
Which means you could burn through a 5 GB/mo cap in less than 9 minutes.[1]
Cellular ISPs in the United States charge a cent or more for each megabyte. Satellite and rural DSL ISPs tend to charge half a cent per megabyte.[2] This means cost-conscious users will have to either install ad blockers or just do without sites that put this "sheer amount of unnecessary stuff on pages".
[1] 5000 MB/mo * 0.1 s/MB * 1 min/60 s = 8.3 min/mo
[2] Sources: major cell carriers' web sites; Exede.com; Slashdot/a [slashdot.org]
Re: (Score:2)
"The main problem isn't the size of the stuff that gets loaded [...] the root cause of all this is the sheer amount of unnecessary stuff on pages these days"
So the main problem *is* in fact the size of that stuff, isn't it?
Oh, by the way, not everybody's connection is like yours, especially over mobile networks.
Re:Great if optimizing the wrong thing is your thi (Score:4, Interesting)
No... maybe. It depends.
Amdahl's law [wikipedia.org] is in full force here. There comes a point where increasing the bandwidth of an internet connection doesn't make pages load faster, because the page load time is dominated by the time spent setting up connections and requests (i.e. the latency). Each TCP connection needs to do a TCP handshake (one round trip), and then each HTTP request adds another round trip. Also, all new connections need to go through TCP slow start, which means the connection will be slow for a few more round trips. Keep-alive connections help a bit by keeping TCP connections alive, but 74% of HTTP connections only handle a single transaction [blogspot.co.uk], so they don't help a great deal.
Oh, by the way, not everybody's connection is like yours, especially over mobile networks.
Mobile networks (and, yes, satellite) tend to have high latency, so round-trips are even more of the problem there. Also... when people shop for internet connections, they tend to concentrate on the megabits, and not give a damn about any other quality metrics. So that's what ISPs tend to concentrate on too. You'll see them announce 5x faster speeds, XYZ megabits!!, yet they don't even monitor latency on their lines. And even if your ISP had 0ms latency, there's still the latency from them to the final server (Amdahl's law rearing its ugly head again).
Given all that, I think I'm justified in saying that the main problem with page loading times isn't the amount of data but the number of round-trips required to fetch it. Reducing the amount of data is less important than reducing the number of, or impact of, the round-trips involved. And that's the main problem that HTTP/2 is trying to address with its fancy binary multiplexing.
(Now, if your connection is a 56k modem with 2ms latency, then feel free to ignore me. HTTP/2 isn't going to help you much.)
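A crude way to feel the round-trip cost yourself: fetch the same URL twice in one curl invocation, so the second transfer reuses the TCP/TLS connection the first one already paid for (example.com is just a stand-in):
# curl prints the -w line once per transfer; the second transfer skips the handshakes
curl -s -o /dev/null -o /dev/null -w 'connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' https://example.com/ https://example.com/
The second line typically shows near-zero connect and TLS times and a much smaller total; that difference is pure round-trip overhead, and it's what HTTP/2 tries to pay once per site instead of over and over.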
Re: (Score:2)
I feel like there's something important in the observation that advertising usually funds communication/information technologies. It seems like a kind of economic failure to me. But I can't quite put my finger on exactly what it is.
Re:Great if optimizing the wrong thing is your thi (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
This protocol will enable new types of web applications, things like real-time multiplayer games that need to constantly stream lots of packets at lower latency. Things that are not done today because the current technology simply cannot provide a good enough experience. Plus it will reduce your server load, which is nice (marketing scripts and ads are served from third-party servers).
Re: (Score:2)
Speaking of Real Time Multiplayer Games ...
https://www.ingress.com/ [ingress.com]
(aka CalvinBall)
Join the Resistance
Resistance is Freedom!
Re: (Score:2)
Indeed. Someone needs to do a study of how harmful to the environment all the horrible marketing Javascript running on web pages is. It's about the only thing that pegs my CPU when my computer is in "normal" use now ( "normal" being "what normal people do with a computer", because "what programmers do with a computer" is often a little more intense.)
I'm not someone who so far has joined with the ad-blocking movement, on the proviso that websites need *some* way to keep the server running, but I'm getting
No free lunch. (Score:2)
Marketing rots the brain.
It also pays the bills.
Re:Great if optimizing the wrong thing is your thi (Score:4, Informative)
Most of that bloat you speak of is delivered via Javascript. A few weeks ago, I finally put my foot down and made my default browser Firefox with NoScript. I have a (very) small list of sites allowed to execute scripts, and most of the time I will browse a website in its broken state, since I can still see all the text and images. It even foils the news websites that use Javascript to "faux-paywall" their content behind canvases, as opposed to only sending part of the story content from the server (I'm still shaking my head as to who on the business side thought this was a good business stance since one can still read most of the articles for free, but that's another discussion entirely). But lo and behold, once I started a staunch NoScript policy, page loads completed much faster, cookie sprawl was reduced, and Firefox's memory usage stayed relatively low. I also started to learn which scripts from which servers were truly allowed for things like externally-served comment systems (disqus, etc.), and also noticed the way that some webpages end up triggering a cascading dependency of server connections due to scripts calling scripts on other servers, etc.
One of the worst instances of cascading Javascript sprawl that I've seen was a page from The Verge, with 33 domains allowed (including theverge.com) executing 133 scripts. The SBNation "eulogy for RadioShack" article had the "script counter" go over 160. Oh, and that's leaving out web fonts, which NoScript also blocks (which also reveals how often some websites use custom fonts to draw vector-based icons; you can see the Unicode codes for each character since the font isn't loaded). Vox seems to love abusing Javascript in their designs; it's most of the reason why I've abandoned reading Polygon (the other reasons being the banal editorial content, aside from a few notable articles). In comparison, Slashdot is currently asking for 21 scripts, but is running perfectly fine without any Javascript enabled (despite nagging here and there).
I've ended up moving YouTube browsing to Pale Moon, since it natively supports the HTML5 player without issues. I may end up moving all of my browsing to Pale Moon with NoScript, since it natively supports a status bar.
The whole situation with the Javascript bloat reminds me of the scene from Spaceballs where Lonestar tells Princess Vespa, "Take ONLY what you NEED to SURVIVE." We're stuck with a bunch of prima donna web designers who want to duct-tape more adverspamming and social-spying avenues onto their websites, without standing back and taking a look at how badly it's impacting the user experience, not to mention the bloat from the hundreds of extra scripts and objects loaded by the browser, as well as the tens of connections to third-party servers.
Re: (Score:2)
Those analytics scripts are usually from an analytics service provider, and so are incredibly well cached, and shared across services. If you are repeatedly downloading the 15.7KB Google Analytics JS file, you are doing something wrong.
If you'd spent any time looking into HTTP performance, you'd know there is a lot to be desired. HTTP/2 is a step in the right direction. It's not saving a couple of bytes, it's saving a shit-tonne of bytes, using connections more efficiently, and improving encryption supp
Re: (Score:2)
"for real web developers who use it properly, it is still a good thing in my opinion."
Well, real web developers basically ignore what HTTP is right now and focus on HTML, so why is this going to change with the new HTTP version?
Not really happy (Score:5, Interesting)
As the author of an open source webserver [hiawatha-webserver.org], I must say that I'm not really happy with HTTP/2. It adds a lot of extra complexity to the server side of the protocol. And all sorts of ugly and nasty things in HTTP/1 (too much work to go into that right now) have not been fixed.
What I have experienced is that SPDY (and therefore also HTTP/2) will only offer more speed if you are Google or are like Google. Multiplexing doesn't offer as much of a speed increase as some people would like you to believe. Often, the content of a website is located on multiple systems (pictures, advertisements, etc), which still requires that the browser use more than one connection, even with HTTP/2. Also, HTTP/1 already allows a browser to send multiple requests without waiting for the response of the previous request. This is called request pipelining, but is turned off by default in most browsers. What I also often see is that a browser makes a first request (often for a CGI script) and the following requests (for the images, JS, CSS, etc) are never made due to browser caching. So, to me HTTP/2 adds a lot of complexity with almost no benefits in return.
Then why do we have HTTP/2? Well, because it's good for Google. They have all the content for their websites on their own servers. Because the IETF failed to come up with an HTTP/2 proposal of its own, a commercial company (Google in this case) used that to take control. HTTP/2 is in fact a protocol by Google, for Google.
In my experience, you are far better off with smart caching. With that, you will be able to get far better speed-increase results than HTTP/2 will ever offer. Especially if you use a framework that communicates directly with the webserver about this (like I did with my PHP framework [banshee-php.org]). You will be able to get hundreds to thousands of requests per second for a CGI script instead of a few tens of requests. This is a speed increase that HTTP/2 will never offer.
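As a rough illustration of what that kind of caching buys you on the wire (a sketch only; the URL and the ETag value here are made up):
# see which validators the server hands out
curl -sI https://example.com/style.css | grep -iE 'etag|last-modified|cache-control'
# revalidate with the ETag you got back; a match returns 304 and no body
curl -sI -H 'If-None-Match: "abc123"' https://example.com/style.css | head -1
A 304 Not Modified costs a round trip but no body at all, which is the kind of saving no amount of framing cleverness can match.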
I think this is a failed chance to do it right. HTTP is, just like SMTP and FTP, one of those ancient protocols. In the last 20 years, a lot has changed. HTTP/1 worked fine for those years. But for where the internet is headed, we need something new. Something completely new, and not an HTTP/1 patch.
Re: (Score:3)
Then why do we have HTTP/2? Well, because it's good for Google....HTTP/2 is in fact a protocol by Google, for Google.
Those are interesting points, but I don't understand why Google would ever be able to hijack a standard like this when there are plenty of other big players such as Microsoft, Facebook, and Amazon to counteract their Machiavellian schemes. I could believe that HTTP/2 might benefit big organizations more than a typical little guy like me who uses shared web hosting (and not even a CDN), but is there something about HTTP/2 that uniquely favors Google over all the other big players?
Re: (Score:2)
This is an evolutionary step; it still has to work with sites designed for HTTP/1.1.
When sites are designed with HTTP/2 in mind, it will be time for HTTP/3, where more things can be changed (and HTTP/1.1 can be deprecated).
It's not perfect, but changing too many things leads to the IPv6 problem... it's good, but people don't want to break things and so don't change anything.
Re:Not really happy (Score:5, Informative)
And as for HTTP/1.1 pipelining, it simply doesn't work... 90% of sites can work with it, but others fail, and fail badly... the page takes forever to load, waiting for resources that were requested but never received. All browsers tried to enable it and all had to revert it due to the (few but important) problems found, which are impossible to solve without a huge whitelist/blacklist of servers (impossible to really implement, and a pain for all those important old internal servers).
So the two major issues that HTTP/2 wants to solve are really the TLS slow start and a working pipeline... By announcing HTTP/2, a browser knows that these things do work and are safe to use; no more guessing games and workarounds for bad servers.
Re: (Score:2)
Didn't browsers announce support for HTTP 1.1 as well? Why should expectations be significantly different w.r.t. pipelining?
Re: (Score:3)
What happened to HTTP/1.1 pipelining is fortunately not a common case: a popular webserver breaks the standard and doesn't get fixed, making the standard de facto useless. While it's not impossible for the same thing to happen with HTTP/2.0, this type of situation is more the exception than the norm.
All popular webservers today strive for a good degree of standard compliance.
But, maybe as importantly, as pointed out before, that was not the only problem with HTTP/1.1 pipelining. You also have the head of line block
Re:Not really happy (Score:5, Informative)
You might be happier if you researched why HTTP pipelining is off in most browsers and what problem it actually wants to solve.
First, HTTP/1.1 pipelining and HTTP/2.0 are not about increasing the number of requests your server can handle. They're mainly about reducing the effect of round-trip latency on page loading times, which is significant.
If a browser is fetching a simple page with 10 tiny elements (or even cached elements subject to conditional GET) but the server round trip latency is 100 ms, then it will take over 1 second to load the page.
HTTP/1.1 pipelining was a first attempt to solve this problem, by allowing the client to send multiple requests over the connection without waiting for the earlier requests to complete.
If you can pipeline N requests, the round trip latency contribution is divided by N.
However, HTTP/1.1 pipelining has two issues which led most browsers to disable it by default (it's not because they enjoy implementing features that won't be used):
- There are or were a number of broken servers which do not handle pipelining correctly.
- HTTP/1.1 pipelining is subject to head-of-line blocking: the server serves the requests in order, and a small/fast request may have to wait in line behind a large/slow request.
Instead, browsers make multiple parallel requests to each server.
However, because some servers (e.g., Apache) have problems with large numbers of parallel requests, browsers use arbitrarily low limits (e.g., 4-8 parallel requests per hostname).
HTTP/2.0 attempts to solve these shortcomings by:
a) being new, hopefully without broken servers out there
b) using multiplexing, allowing the server to serve the requests in whatever order. Thus, no head-of-line blocking.
So. HTTP/2.0 is, fundamentally, HTTP/1.1 pipelining without these shortcomings. We hope.
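If you want to watch that multiplexing on a live connection, the nghttp client from the nghttp2 project will dump the frames for you; a sketch, assuming nghttp is installed and example.com is replaced by an HTTP/2-capable site:
# -n discards the downloaded data, -v prints the frames, -a also fetches the page's linked assets
nghttp -nva https://example.com/ 2>&1 | grep -E 'SETTINGS|HEADERS|DATA'
In the output you'll see HEADERS and DATA frames for different stream IDs interleaved on the single connection, which is the "serve in whatever order" behaviour described above.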
Re: (Score:2)
- There are or were a number of broken servers which do not handle pipelining correctly.
Wrong, there are so few servers that are broken that neither Mozilla nor Google could determine what software was broken. The "unknown" broken software was possibly malware or antivirus. Today pipelining works fine. It's disabled because Google rushed SPDY out to preempt pipelining from being turned on.
b) using multiplexing, allowing the server to serve the requests in whatever order. Thus, no head-of-line blocking.
Also wrong. Using multiple connections is faster when there is packet loss, because only one connection's bandwidth is slowed down by a lost packet and the others proceed normally. Head of line blocking
Re: (Score:2)
Using multiple connections is faster...
You, sir, are an idiot. Please stop spouting horrible information. There are only a few situations where multiple connections are faster, and they primarily involve load balancing over aggregate links or load balancing across CPU cores. Multiple TCP streams are almost always a bad thing. They may seem faster, but only because other people do it; overall, it's slower.
Lots of TCP connections between two endpoints are known to exacerbate issues like bufferbloat and global synchronization. Each additional TCP c
Re: (Score:2)
Today pipelining works fine. It's disabled because Google rushed SPDY out to preempt pipelining from being turned on.
I stopped reading your comment right here.
Pipelining was introduced in HTTP/1.1, standardized in RFC 2616 which was released in 1999. We've had 16 years to get pipelining working well. 13 if you count up to when Google first started seriously experimenting with SPDY.
(Disclaimer: I work for Google, but that has nothing to do with my opinions on this issue. I thought pipelining was obviously broken long before I started working for Google.)
Re: (Score:2)
Nobody cared before, because CPUs and browser layout engines were the bottleneck, not the network.
Nonsense. With some notable exceptions, network has always been the primary bottleneck.
Re: (Score:2)
It's disabled because Google rushed SPDY out to preempt pipelining from being turned on.
The pipelining spec dates back to what ... 1997? SPDY was released in 2012.
That's a hell of a pre-preemption you have right there. Even IE 8 had pipelining support a whole 4 years before SPDY was announced. Are you telling me Microsoft disabled it by default there too because one of their biggest competitors may release something to replace it 4 years later?
Re: (Score:2)
Also, HTTP/1 already allows a browser to send multiple requests without waiting for the response of the previous request.
Incorrect. HTTP/1.1's pipelining is blocking, in that the responses must be returned in the order they were requested. It has a head-of-line blocking issue.
Re: (Score:3)
What I have experienced is that SPDY (and therefore also HTTP/2) will only offer more speed if you are Google or are like Google.
This is so worn out it's even in the FAQ [daniel.haxx.se].
Multiplexing doesn't offer as much of a speed increase as some people would like you to believe.
It's good for every high-latency connection, like all the small sites have. I run a server on a VM on a generic Xeon box behind a DSL line, four hops from a backbone, for security and privacy reasons. It'll be great for my use case - I can't really
Re: (Score:2)
Multiplexing doesn't offer as much of a speed increase as some people would like you to believe.
Limit your browser to one connection per server and tell me that with a straight face. TCP has horrible congestion control and should not be used to multiplex web requests.
Re: (Score:2)
Also, HTTP/1 already allows a browser to send multiple requests without waiting for the response of the previous request.
But it doesn't have a decent mechanism for sending responses before they're requested [github.io]. With HTTP/2, your server can say "here's the page HTML, and here's a stream for the favicon linked in the HTML headers, and here's another stream for the JavaScript". The quickest request is the one you don't have to make.
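If you're curious what that looks like on the wire, here is a sketch using the nghttp2 project's nghttp client against a server that is actually configured to push (example.com is purely a placeholder):
# PUSH_PROMISE frames are the server announcing responses you haven't asked for yet
nghttp -nv https://example.com/ 2>&1 | grep -E 'PUSH_PROMISE|promised_stream_id'
Each promise names the request the server is answering in advance, so the client can refuse it (or just ignore it) if it already has the resource cached.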
Re: (Score:3)
4 000 000 000 is 10 characters once the whitespace is stripped out. Roughly the same number in HEX is FF FF FF FF, a saving of 20%
You're kidding, right? The number 4 billion can be represented in 32 bits, or the same total space as 4 ASCII characters.
Re: (Score:2)
Let's not forget that apps often need to pass around IDs now, and are tending toward using 64-bit IDs to deal with big data. That's either 8 bytes in binary or closer to 16 bytes as ASCII, and that's not even counting the whitespace that may also need to surround it.
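A quick back-of-the-envelope check of those sizes, in plain shell (xxd is only used here to turn hex text into raw bytes):
printf '4000000000' | wc -c                   # 10 bytes as decimal ASCII text
printf '%08x' 4000000000 | wc -c              # 8 bytes as hex text
printf '%08x' 4000000000 | xxd -r -p | wc -c  # 4 bytes as a raw 32-bit binary integer
So a 32-bit value drops from 10 bytes to 4, and a 64-bit ID from up to 20 decimal digits to 8 bytes, before you even get to header compression.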
Re: (Score:2)
4 ASCII characters can be represented in 28 bits.
Broader than CGI (Score:2)
What I also often see is that a browser makes a first request (often for a CGI script)
2015.
Still thinking running sites via CGI is relevant.
I'm pretty sure that "CGI" in Aethedor's post referred not to CGI proper [wikipedia.org] but in a broader metonymic sense to any HTTP response that isn't just the verbatim contents of a file [wikipedia.org]. I get the idea that some pedants hate metonymy.
Finally (Score:2)
Re: (Score:2)
From the draft... (Score:3, Interesting)
"HTTP/2... also introduces unsolicited push of representations from servers to clients."
Seriously? Do we need yet ANOTHER way for a server to push unwanted code and malware onto our client systems? This is the greatest gift we could POSSIBLY give to the cybercriminals who want to break into our systems.
How about we think of security somewhere in this process, instead of pretending it's someone else's problem?
Re: (Score:3)
I think you misunderstand what is going on here. Server push is basically a way for sites to pre-populate a browser's cache, by sending it suggested requests for things it doesn't yet know it will need ("push promises"), and the responses to said requests (the pushes themselves). If the server pushes responses to requests that the client never actually ends up making - not even to the extent of pulling the data from its local cache - then the pushed data will never be processed.
Unless you are sitting on an
Re: (Score:3)
"HTTP/2... also introduces unsolicited push of representations from servers to clients."
Seriously? Do we need yet ANOTHER way for a server to push unwanted code and malware onto our client systems?
Yes, we do.
What you're missing here is that this is pushing content that the browser was going to request in a few hundred milliseconds anyway. Why was the browser going to request it? Because the web page included a link to it.
The only way this change could affect security is if you assume a threat model where the attacker is able to modify the web server to get it to push additional content to the browser but is somehow unable to modify the content the server is already pushing anyway. If the attacker
Re: (Score:2)
"HTTP/2... also introduces unsolicited push of representations from servers to clients."
Seriously? Do we need yet ANOTHER way for a server to push unwanted code and malware onto our client systems?
Yes, we do.
What you're missing here is that this is pushing content that the browser was going to request in a few hundred milliseconds anyway. Why was the browser going to request it? Because the web page included a link to it.
Well, that sounds to me like my ad-blocking software won't work as well, because stuff that it would have stopped my browser from downloading will be pushed to me whether I request it or not :(
Re: (Score:2)
Note that server push applies to content the browser could request _from the same server_.
Usually ad content is served from third party servers.
But ultimately yes, this can lead to wasting some of your bandwidth on content your browser would otherwise not have asked for.
Slashdot (Score:5, Interesting)
While we're discussing HTTP, what's wrong with Slashdot lately?
I keep getting timeouts, signed out, nothing but the front page, everything else loaded from a CDN (that just gives me browser security warnings because it's not coming up as slashdot.org), etc.
Are you guys under attack, or just unable to keep a site with ~100 comments per article up?
Re: (Score:2)
Re: (Score:2)
It's been this way for weeks now, off and on. I figure it must be Dice just screwing around with the site, trying to figure out how to get it working with a CDN, and maybe even with SSL! But failing, and then reverting to the normal, working, no-SSL site. Then trying again. At least a couple of times a week.
Re:Slashdot (Score:4, Funny)
Remember when getting Slashdotted meant that *another* site had problems?
Re: (Score:2)
Re: (Score:2)
Every time their videos auto-play, they Slashdot themselves.
Please run spellcheck this time (Score:5, Funny)
So you don't end up having something like HTTP_REFERER again.
Re: (Score:2)
That still bites me every now and then after all these years, and continues to introduce discussions as to at what point in backend code the typo should be fixed.
Re: (Score:2)
It's by design. Datacompresion eats consecutively ocuring and pointles leters which in turn means a faster and more eficient protocol.
Let's see if HTTP/2 is adopted faster than IPv6. (Score:3)
Existing standards that are "good enough" tend to be hard to replace.
Re: (Score:2)
IPv6 needs 100% buy-in from all participants, or else you need to run/pay for bridging services that convert between the two. HTTP/2 is backward compatible, meaning any participant will transparently fall back to HTTP 1.0/1.1 if it's not supported. Plus, there are far fewer vendors of HTTP servers/clients than there are of IPv4-based software and hardware products.
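You can see that transparent fallback from the client side with curl; a sketch, assuming a curl build with HTTP/2 support, with example.com standing in for the server under test:
# ask for HTTP/2 and watch what the TLS handshake (ALPN) actually settles on
curl -sv --http2 https://example.com/ -o /dev/null 2>&1 | grep -E '^\* ALPN|^< HTTP'
Against an HTTP/2-capable server the ALPN lines show h2 being accepted and the response comes back as HTTP/2; against anything older the very same command quietly comes back as HTTP/1.1, which is the painless fallback being described here.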
Re: (Score:2)
That isn't at all true. My laptop has both IPv6 and IPv4 addresses. When I make a request to google.com, I'm using IPv6; when reaching sites that don't have IPv6, I fall back to IPv4. As a user I don't even notice this.
Similarly, HTTP/2 has to be implemented on clients and servers before it will be functional, else both endpoints need to agree to fall back on HTTP/1.1.
There's some additional network configuration needed before IPv6 is useful, but no need to convert anything.
Re: (Score:2)
Re: (Score:2)
There are fewer parties: a few servers and a few clients, both of which are updated fairly frequently (the servers because of admins, the clients because of auto-update), are the only ones that matter. Google already supports HTTP/2 (née SPDY), so a huge percentage of internet traffic is already set up to use it as soon as browsers update (Chrome has had it as SPDY for years, Firefox has it or will soon).
The v6 slowness was always the ISP (both on the client and server) and the CPE. Now that most of the big US ISPs h
Re: (Score:2)
Existing standards that are NOT good enough, where the replacements are not backwards compatible, are hard to replace.
This is very different from IPv6: with HTTP/2, both the server and the client have an automatic fallback ability. It's more like USB 2.0 vs USB 1.0, or SSL3 vs SSL2 (security issues with these aside). HTTP/2 can be implemented by any server regardless of the client (like Apache's mod_spdy). It can be implemented by any client regardless of the server (Chrome already implements SPDY). All transparent to the
Re: (Score:2)
HTTP does not require encryption. It's a hypertext protocol.
However, it can be transported by anything else. You could have HTTP over UDP, or pigeon post if you wanted.
What you are confusing is a need for security with a completely inappropriate layer for such security. The same way that embedding IP details into FTP DATA packets is stupid and wrong.
HTTP/2 over TLS is what you want. And that doesn't give a shit what HTTP/2 does in terms of security.
Actually, infinitely more important are TLS and DNSSEC
Re: (Score:2)
HTTP/2 over TLS could have been made mandatory. But for some easy-to-guess reason they "decided" otherwise.
Because it's the wrong "they" to be making that decision. The working group for HTTP/2 should never be dictating how they feel its use should be restricted. There are plenty of other opportunities for people at the appropriate levels of the chain to make that recommendation. This is a big part of the point of a layered technology.
Re: (Score:2)
HTTP/2 over TLS could have been made mandatory.
Not if they're adhering to the OSI networking model. HTTP is an Application protocol, while encryption is a Presentation layer function. There shouldn't be any dependencies between layers.
Re: Slashdot is dead! (Score:3, Informative)
"It'll" is a contraction, not slang, you fucking nitwit.
Re: (Score:2)
If a checkbox is not checked does it still not come in on the post / get ?
That's HTML, that has nothing to do with HTTP.
Is a textarea still not a text control?
That's HTML, that has nothing to do with HTTP.
Re: (Score:2)
If a checkbox is not checked does it still not come in on the post / get ?
if not then it is still broken on arrival.
Is a textarea still not a text control?
if not then it is still broken on arrival.
I think you're confusing HTTP with HTML.
Re: (Score:2)
Re: (Score:2)
Now, let's also improve html by removing all JS and other VMism from it
Would you rather have to reload the entire comment page when you fold or unfold a subtree of the comments?
Would every web browser have to be a news reader? (Score:2)
If comment sections were to switch to NNTP, whose responsibility would it be to implement all the purpose-designed protocols for display in the context of a web document? Would every web browser have to implement mail, NNTP, torrents, etc.? The only major browser I know of that does anything remotely like that is SeaMonkey, and even then it segregated news and web views last time I checked.
Re: (Score:2)
Define "properly". I have posted to Slashdot on a Wii game console using the "Internet Channel powered by Opera" app. It was dog slow, but it was better than not working at all, which is what would happened had Slashdot used NNTP.
A wild noscript purist appears (Score:2)
Now, let's also improve html by removing all JS and other VMism from it
Comment sections and forums aren't the only application of DHTML (manipulation of the HTML DOM through script) that loss of script might break. What workarounds for lack of script would you propose for these?
Re: (Score:2)
create a suitable standard
Good luck getting it adopted. For the first several years, users of iOS couldn't even upload pictures or videos from the camera's storage through a web form using the <input type="file"> element that has been around since Netscape 2.
and implement it natively if you really have to.
When I create and publish native applications for my platform, good luck implementing the environment in which to run them on your platform.