SPDY clarifications (Score:5, Informative)
Thanks for all the kind words on SPDY; I wish the magazine authors would ask before putting their own results in the titles!
Regarding standards, we're still experimenting (sorry that protocol changes take so long!). You can't build a new protocol without measuring, and we're doing just that - measuring very carefully.
Note that we aren't ignoring the standards bodies. We have presented this information to the IETF, and we got a lot of great feedback during the process. When the protocol is ready for an RFC, we'll submit one - but it's not ready yet.
Here are the IETF presentations on SPDY:
http://www.ietf.org/proceedings/80/slides/tsvarea-0.pdf [ietf.org]
and
https://www.tools.ietf.org/agenda/80/slides/httpbis-7.pdf [ietf.org]
I've also answered a few similar questions to this here: http://hackerne.ws/item?id=2420201 [hackerne.ws]
We love help! If you're passionate about protocols and want to lend implementation help, please hop onto spdy-dev@google.com. Several independent implementations have already cropped up, and the feedback continues to be really great.
Re:SPDY clarifications (Score:4, Interesting)
Since we've got it direct from the horse's mouth -
- Why server push? Nobody seems to think it's a good idea, and it makes things more complicated for everybody involved, including proxies. What is the rationale for this feature?
- Why did you name it "SPDY" to show "how compression can help improve speed" when SSL already supports compression?
- In the performance measurements in the whitepaper, what HTTP client did you use, and what multiple-connection multiplexing method was used, if any? How were the results for HTTP obtained? For instance, the whitepaper says an HTTP client using 4 connections per domain "would take roughly 5 RTs" to fetch 20 resources, implying theoretical math. Were situations like 10 small requests finishing in the time it takes to transfer 1 large request taken into account? (i.e. in practice multiple requests can be made without increasing total page load time)
- The main supposed benefit seems to be requesting more than one resource at once. But then a request could stall the connection while being processed (e.g. a DoubleClick ad) and hold up everything after it, so you add multiplexing, priorities, canceling requests, and all that complication. Why not just send a list of resources and have the server respond back in whatever order they are available? This provides the same bandwidth and latency with superior fault handling (if the connection closes, the browser has only one resource partially transferred instead of several).
- The FAQ kind of reluctantly admits that HTTP pipelining basically has the same benefits in theory as SPDY, except when a resource takes a while and holds up the remaining ones. So what benefit would SPDY have over just fixing pipelining so that the server can respond in whatever order it chooses? The only real problems with HTTP pipelining are the fixed response order and bad implementations (e.g. IIS), correct?
Barring really good explanations it looks to me like SPDY is just very complicated and increases speed basically as a side-effect of solving other imaginary problems.
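The "roughly 5 RTs" figure the commenter questions does look like simple wave arithmetic. A minimal sketch of that assumed model (the whitepaper's actual methodology isn't stated in this thread):

```python
# Naive round-trip model (assumption: one request per connection per RTT,
# no pipelining, transfer time ignored).
def round_trips(resources: int, connections: int) -> int:
    """Each RTT, every connection fetches exactly one resource."""
    return -(-resources // connections)  # ceiling division

print(round_trips(20, 4))  # 20 resources over 4 connections -> 5 RTT "waves"
```

As the commenter points out, this model ignores that many small responses can complete within the transfer window of one large response, so real page-load time need not grow linearly with resource count.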
Re: (Score:3)
I'm one of the other people who works on SPDY.
server push: We have some debates about this internally, but it seems the market is deciding that push is important, e.g. image inlining into the HTML. Server push lets you accomplish the same thing, but with the benefit that the inlined items remain individual resources, each addressable by its own name and thus cacheable. I believe it may be particularly beneficial for people on high-RTT devices like mobile. If you look at data just about anywhere, you can see that RTT is the real
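The cacheability argument above can be made concrete with a toy byte count over repeat visits (sizes are invented for illustration, not taken from any measurement):

```python
# Toy comparison: inlining vs a separately-named (cacheable) resource.
PAGE, IMAGE = 30_000, 10_000  # bytes (assumed sizes)

def inlined_total(visits: int) -> int:
    # An inlined image is part of the HTML, so it is re-downloaded every visit.
    return visits * (PAGE + IMAGE)

def pushed_total(visits: int) -> int:
    # A pushed image has its own URL, so the client can cache it after visit 1.
    return PAGE + IMAGE + (visits - 1) * PAGE

print(inlined_total(5), pushed_total(5))  # 200000 160000
```

Both strategies cost the same on a cold cache; the gap appears only on repeat visits, which is why push's advantage over inlining is a caching argument rather than a first-load one.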
Re: (Score:1)
performance measurement: In the whitepaper, as per my recollection, Chrome was the client for all of the measurements that we did.
Since top sites have more resources than most sites (on average more than 6 per host [google.com]), and since Chrome has a low connection limit [google.com] and had blocking problems preventing parallel loads [webkit.org] (since there's no data on the metrics, there's no way to know which WebKit bugs were present), the results are far less impressive. In fact, these performance numbers are pretty much meaningless, wouldn't you agree?
HTTP's fault handling is terrible, actually. When you're sending a request and you don't receive the response, you don't know if the request was processed or not. This is particularly fun for non-idempotent transactions like, say, charging your credit card. SPDY includes mechanisms for telling the client (assuming the connection wasn't broken) that the server rejected the request
What? When do you "not receive a response" for a request and it isn't a broken connection? If the server rejec
Re: (Score:2)
performance measurement: In the whitepaper, as per my recollection, Chrome was the client for all of the measurements that we did.
Since top sites have more resources than most sites (on average more than 6 per host [google.com]), and since Chrome has a low connection limit [google.com] and had blocking problems preventing parallel loads [webkit.org] (since there's no data on the metrics, there's no way to know which WebKit bugs were present), the results are far less impressive. In fact, these performance numbers are pretty much meaningless, wouldn't you agree?
They are perfectly meaningful. If you don't like our findings, the most productive thing to do is to create an e
Re: (Score:2)
Once again thank you for taking the time to answer my questions. I wish the answers, in general, were something more substantial than just 'we're Google trust us we're smart durr', since that's essentially what you've written here, for instance:
There is external research on this topic. Feel free to look it up as we did.
I expect somebody who is pushing a new protocol based on published research to be able to at least cite their sources. Even on the web the only research identified are two powerpoint presentations by the same guy, with no review. Frankly, even just based off the f
Re: (Score:2)
Consider: the client requests a bunch of stuff, and one or more of those things happens to be dynamically generated, e.g. PHP. The server does not know how long those things will take to generate.
It does know how long it will take to grab the static files which were also requested, but those static files may not be useful to the client until it gets the dynamic one(s).
In a straight "server sends whatever file it wants first" scheme, the server has a couple of choices:
send the static files first and then the dynamic
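The tradeoff sketched above can be put in numbers. A toy timeline with assumed costs (one dynamic resource that takes 300 ms to generate, three static files at 50 ms of transfer each, one ordered connection):

```python
# Toy timeline with assumed numbers; times in milliseconds.
GEN, XFER, STATICS = 300, 50, 3  # generation time, per-file transfer, file count

def static_first() -> int:
    # Statics transfer while the dynamic response is still being generated;
    # the dynamic response transfers once generation completes.
    return max(STATICS * XFER, GEN) + XFER

def dynamic_first() -> int:
    # The connection idles until generation finishes, then everything transfers.
    return GEN + XFER + STATICS * XFER

print(static_first(), dynamic_first())  # 350 500
```

Static-first finishes sooner here, but as the comment notes, if the client can't use the static files until the dynamic response arrives, the time until anything is *useful* is dominated by the generation time either way.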
Re: (Score:2)
Regarding standards, we're still experimenting (sorry that protocol changes take so long!).
Not to diminish your work or its importance in any way, but do you not see anything wrong with implementing this in production in Chrome and on Google websites long before there is a standard? I mean specifically the fact that it makes the combination of Chrome and Google websites perform much better together than if you replace either with a competitor's product. Your browser competitors cannot compete with that, since they don't own powerful websites like you do. Your server-side competitors can't compete
Re: (Score:2)
From what I can see, it's an open draft, so I don't see why no one else can implement it. Maybe they should thank Google for doing the hard work (creating it, and putting it into a popular open source browser) instead.
Seriously, if people had thought like you when the web was new, we'd never have a tag for displaying images, because that would be unfair to all the browsers and servers that didn't implement it.
Re: (Score:2)
I don't mean that innovation is bad. I was careful to clarify that before. And I further clarified that I did not mean to diminish his work in any way.
It is still a valid concern that a new nonstandard protocol is being used to optimize a specific combination of super-popular website plus popular client software, as it puts all competitors at a disadvantage. The purpose of standards is to level the playing field.
And the purpose of innovation is to move us forward. You can - and should - still innovat
Re: (Score:2)
a specific combination of super-popular website plus popular client software
Well, you're wrong on that part, at least. Chrome is happy to speak the SPDY protocol to any web server, not just Google's (verified by someone earlier in the thread, via https://github.com/donnerjack13589/node-spdy [github.com] and chrome://net-internals/), and I'm sure Google's servers are happy to serve content over SPDY to any browser that asks for it. There is a detailed draft of the protocol, and there's open source code out there implementing it (for example in Chromium).
I see this as no different from, say, some b
Re: (Score:2)
Yes, it is enabled in production for Chrome for any site that advertises SPDY.
Google definitely does advertise SPDY compatibility, thus Chrome may speak SPDY when talking to Google pages.
Re: (Score:3)
The one thing I appreciate is that you're not selling this as "Chrome makes the web faster" the way Microsoft did back in the '90s, creating their own extensions and trying to sell everybody on how much better IE5 with IIS was than Netscape with Apache.
You have added it to Chrome and to Google sites. Some may notice a speed difference, some may not. Meanwhile the protocol, such as it is, is free to use and implement without anyone having to reverse engineer it. Which is a pretty decent earnest money down to c
Re: (Score:1)
What kind words? Most of the comments here seem negative.
Re: (Score:2)
In the real world, packet loss rates are typically 1-2%, and RTTs average 50-100 ms in the U.S. The reasons that SPDY does better as packet loss rates increase are several: SPDY sends ~40% fewer packets than HTTP, which means fewer packets affected by loss.
But the packets are bigger. If packets are lost due to noise, increasing the size of a packet increases the probability of having an error within it. 10 dollars says you "tested" this in a simulation by fixing the probability of losing a packet, instea
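The two loss models being argued about here can be compared directly. A sketch with invented numbers (100 packets of 1200 B for HTTP vs 40% fewer, proportionally larger packets for SPDY, same total payload):

```python
# Compare expected lost packets under the two loss models in the argument.
http_n, http_size = 100, 1200            # assumed packet count and size
spdy_n = 60                              # ~40% fewer packets
spdy_size = http_n * http_size // spdy_n # 2000 B each, same total bytes

def lost_fixed_packet_rate(n, p=0.01):
    """Model 1: every packet dropped with the same probability (congestion-like)."""
    return n * p

def lost_per_byte_noise(n, size, b=1e-5):
    """Model 2: each byte corrupted independently; a packet is lost if any
    of its bytes is corrupted (noise-like)."""
    return n * (1 - (1 - b) ** size)

print(lost_fixed_packet_rate(http_n), lost_fixed_packet_rate(spdy_n))          # 1.0 vs 0.6
print(lost_per_byte_noise(http_n, http_size), lost_per_byte_noise(spdy_n, spdy_size))  # ~1.19 vs ~1.19
```

Under a fixed per-packet drop rate, fewer packets means proportionally fewer losses; under per-byte noise, the advantage largely evaporates, which is exactly the commenter's objection. Which model fits reality depends on the link: congestion loss on wired networks is closer to per-packet, radio noise closer to per-byte.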
Re: (Score:2)
Q: Doesn't HTTP pipelining already solve the latency problem?
A: No. While pipelining does allow multiple requests to be sent in parallel over a single TCP stream, it is still but a single stream. Any delay in processing anything in the stream (either a long request at the head of the line or packet loss) will delay the entire stream.
This does not make sense. You're still using TCP, which is a reliable transport protocol, which means packet loss is dealt with at the TCP level,
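The reply has a point worth separating out: there are two different head-of-line problems. A toy in-order-delivery sketch (assumed timings) shows the one SPDY does *not* fix, because TCP releases bytes to the application strictly in order:

```python
# Toy: one lost TCP segment stalls delivery of *all* later segments, and
# therefore all multiplexed streams sharing the connection (assumed timings).
def delivery_times(n_segments, lost, rto, interval=10):
    """Segment i arrives at i*interval ms, except `lost`, whose retransmitted
    copy arrives rto ms later. TCP delivers to the app strictly in order."""
    arrive = [i * interval for i in range(n_segments)]
    arrive[lost] += rto  # retransmit arrives late
    out, ready = [], 0
    for i in range(n_segments):
        ready = max(ready, arrive[i])  # segment i can't be delivered before it
        out.append(ready)              # arrives, nor before all earlier ones
    return out

times = delivery_times(n_segments=8, lost=2, rto=200)
print(times)  # segments 2..7 all stall until the retransmit at 220 ms
```

So the reply is right that loss-induced stalls hit SPDY's single TCP connection just as hard. What SPDY's multiplexing avoids is the *application-level* stall, where one slow response blocks every later response in a fixed-order pipelined HTTP stream even when no packets are lost.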