HTTP Intermediary Layer From Google Could Dramatically Speed Up the Web
grmoc writes "As part of the 'Let's make the web faster' initiative, we (a few engineers — including me! — at Google, and hopefully people all across the community soon!) are experimenting with alternative protocols to help reduce the latency of Web pages. One of these experiments is SPDY (pronounced 'SPeeDY'), an application-layer protocol (essentially a shim between HTTP and the bits on the wire) for transporting content over the web, designed specifically for minimal latency. In addition to a rough specification for the protocol, we have hacked SPDY into the Google Chrome browser (because it's what we're familiar with) and a simple server testbed. Using these hacked up bits, we compared the performance of many of the top 25 and top 300 websites over both HTTP and SPDY, and have observed those pages load, on average, about twice as fast using SPDY. That's not bad! We hope to engage the open source community to contribute ideas, feedback, code (we've open sourced the protocol, etc!), and test results."
Oh that's wonderful (Score:5, Funny)
Now we can see Uncle Goatse twice as fast.
Is he your biological uncle? (Score:2)
Or simply an older man who likes to fondle you?
Re:Is he your biological uncle? (Score:4, Funny)
oldermanwholikestofondleyou.cx
To follow the goatse.cx standard, I believe it should be http://oldermanwholikestofondleyour.co.ck
It's only $250 to register a .co.ck address!
Re: (Score:2)
Do you have a link for your uncle's web page?
Re:Oh that's wonderful (Score:4, Interesting)
Re: (Score:3, Funny)
Re:Oh that's wonderful (Score:5, Funny)
Just over 2 hours.
Re:Oh that's wonderful (Score:5, Funny)
I want my old Internet back.
ME TOO!
Re:Oh that's wonderful (Score:5, Interesting)
Slashdot should track where moderators spend their mod points. Those who spend them all on the first five posts should be disqualified from moderating.
Re: (Score:3, Interesting)
Or make it so you can only mod one comment per story.
Re:Oh that's wonderful (Score:5, Insightful)
Re: (Score:3, Informative)
Slashdot moderation is like the average fuel consumption on your car's trip computer. If you reset it while rolling down a hill you'll get insane MPG figures, but after that it'll fix itself up in the long run and evaluate to a correct value.
Before you click! (Score:4, Funny)
Re: (Score:2, Interesting)
In the future, the content will be loaded before you click!
Sounds like those "dialup accelerators" from back in the '90s ... the ones that would silently spider every link on the page you're currently viewing in order to build a predictive cache.
Re:Before you click! (Score:5, Interesting)
Which of course led to quite amusing results when some failure of a web developer made an app that performed actions from GET requests. I've heard anecdotes of entire databases being deleted by a web accelerator in these cases.
From RFC 2616:
"In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered 'safe'."
Re:Before you click! (Score:4, Informative)
Yes. thedailywtf.com has such stories. I specifically remember one where the delete button for database entries was a GET link on the list page. So Google's little spider went there and crawled the entire list, requesting every single delete-link address on the page. I think the page was not even linked from anywhere; the crawler got there by reading the referrer addresses from when the developers came to Google from a link on that site.
And if I remember correctly, it of course was a production database with no backups. The only one, in fact. Must have been fun. :)
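To make the failure mode concrete, here is a minimal sketch (in Python, with placeholder URLs) of the kind of naive prefetcher/crawler being described: it blindly GETs every link it finds, which is exactly how a "delete" GET link wipes a table.

    # Naive prefetcher of the kind described above: fetch a page, then blindly
    # GET every link on it. Run against an app whose "delete" buttons are GET
    # links and this loop *is* the database-wiping crawler. URLs are
    # placeholders; don't point this at anything you care about.
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href" and v)

    base = "http://example.org/admin/list"
    page = urllib.request.urlopen(base).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(page)
    for href in collector.links:
        urllib.request.urlopen(urljoin(base, href))  # includes /delete?id=... links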
Re:Before you click! (Score:5, Funny)
>>>Sounds like those "dialup accelerators" from back in the '90s ...
Hey, I still use one of those, you insensitive clod! It's called Netscape Web Accelerator, and it does more than just prefetch requests - it also compresses all text and images to about 10% of their original size. How else would I watch 90210 streaming videos over my phone line?
Why, I can almost see what looks like a bikini. Man, Kelly is hot... ;-)
Re: (Score:3, Funny)
But seriously...
the accelerator (compression) is really useful, and I couldn't imagine using dialup without it. It makes those slow 28k or 50k hotel connections look as fast as my home DSL hookup. (Except for the blurry images of course.)
Re: (Score:3, Funny)
It's not the same, really...
SPDY could do prefetching (in which case it'd be server push instead of a new pull), but mainly it lets a lot of requests share the same connection, and it compresses the HTTP headers.
That's essentially where almost all of the current performance advantage comes from (for today).
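A toy illustration of what that buys you - this is not the actual SPDY framing or compression scheme, just a sketch of the idea that many requests can share one connection as tagged, header-compressed frames:

    # Toy sketch of SPDY-style multiplexing: several request "streams" share
    # one connection, each frame tagged with a stream ID, headers compressed.
    # Illustrative only -- the real SPDY wire format lives in the draft spec.
    import struct
    import zlib

    def make_frame(stream_id, headers):
        """Serialize one request as [stream_id][length][compressed headers]."""
        raw = "\r\n".join("%s: %s" % kv for kv in headers.items()).encode()
        payload = zlib.compress(raw)
        return struct.pack("!II", stream_id, len(payload)) + payload

    # Three requests interleaved on a single byte stream: no extra TCP
    # handshakes, no extra slow-start ramps.
    connection = b"".join(
        make_frame(sid, {"method": "GET", "url": path, "host": "example.org"})
        for sid, path in enumerate(["/", "/style.css", "/logo.png"], start=1)
    )
    print("one connection, %d bytes, three multiplexed requests" % len(connection))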
Re: (Score:3, Informative)
Yup. They're pretty big (cookies can be HUGE!)
Take a look here:
http://sites.google.com/a/chromium.org/dev/spdy/spdy-whitepaper
"Header compression resulted in an ~88% reduction in the size of request headers and an ~85% reduction in the size of response headers. On the lower-bandwidth DSL link, in which the upload link is only 375 Kbps, request header compression in particular, led to significant page load time improvements for certain sites (i.e. those that issued large number of resource requests). We foun
Re: (Score:2)
In the future, the content will be loaded before you click!
Wouldn't you have to have some thiotimoline [wikipedia.org] and water in your mouse for that to work? Thiotimoline ain't cheap, you know.
Add-in not needed (Score:4, Informative)
and faster still.. (Score:4, Insightful)
Remove Flash, Java applets, and ads:
20X faster!
Re:and faster still.. (Score:4, Funny)
You could also remove images, CSS, Javascript, and text, imagine the time savings!
Re:and faster still.. (Score:5, Funny)
Remove the content too. It's all meaningless stuff like this post.
Re:and faster still.. (Score:5, Insightful)
Ye are joking, but ye are correct. Take this Slashdot page. I used to be able to participate in discussion forums with nothing more than a 1200 baud (1.2 kbit/s) modem. If I tried that today, even with all images turned off, it would take 45 minutes to load this page, mainly due to the enormous CSS files.
It would be nice if websites made at least *some* attempt to make their files smaller, and therefore load faster.
Re: (Score:3, Funny)
So save the CSS to your HD and put a filter in an extension/proxy/etc to replace the CSS URL with your local file. Wait, isn't that what the cache is for? Hmm...
Re: (Score:3, Funny)
I heard of a program called DeCSS. Maybe that's what it does!
Re:and faster still.. (Score:5, Interesting)
Cache headers are set to one week, so unless you're clearing your cache every page load, it amounts to nothing.
If anything the scripts are bigger, but again, cached. Besides, AJAX comments were a huge improvement for those of us on dialup - no more loading the whole page every time you did anything.
CSS and JS, when used correctly, make things faster for users, even (and sometimes especially) for those of us on slow connections.
Re: (Score:3, Insightful)
Shhh...If people start thinking /. discussions work, half the people here won't have anything to complain about and will have to go back to spending the day working.
Re: (Score:3, Interesting)
I think Flash should be made illegal. Yesterday I visited a website made 100% in Flash. I had to wait for it to load, and then none of the links worked. Many Flash sites' links don't work in Firefox; I have no idea why. I suspect incompetent developers.
I sent a furious email to the company saying I was going to choose one of their competitors just because of the lousy website. I got a reply from their CEO basically saying "go ahead, we don't give a fuck".
Flash is like cake icing. A little bit tastes and
Re: (Score:3, Insightful)
Slashdot could use the help (Score:2, Funny)
If only the Google engineers could do something about Slashdot's atrociously slow Javascript. Like maybe they could remove the sleep() statements.
What, just because the original poster pulls a "look at me, I did something cool, therefore I must be cool!" doesn't mean I have to go along with it.
slashdot (Score:3, Interesting)
If only the Google engineers could do something about Slashdot's atrociously slow Javascript.
I've noticed a discernible difference in /. load time in favor of Google Chrome vs. FF 3.x on Mac OS X at home. And that's just the Chrome dev channel release. I was pleasantly surprised.
Re:Slashdot could use the help (Score:4, Insightful)
They need to start by practicing what they preach...
http://code.google.com/speed/articles/caching.html [google.com]
http://code.google.com/speed/articles/prefetching.html [google.com]
http://code.google.com/speed/articles/optimizing-html.html [google.com]
They turn on caching for everything but then spit out junk like
http://v9.lscache4.c.youtube.com/generate_204?ip=0.0.0.0&sparams=id%2Cexpire%2Cip%2Cipbits%2Citag%2Calgorithm%2Cburst%2Cfactor&fexp=903900%2C903206&algorithm=throttle-factor&itag=34&ipbits=0&burst=40&sver=3&expire=1258081200&key=yt1&signature=8214C5787766320D138B1764BF009CF62A596FF9.D86886CFF40DB7F847246D653E9D3AA5B1D18610&factor=1.25&id=ccbfe79256f2b5b6 [youtube.com]
Most cache programs just straight-up ignore this because of the '?' in there; it ends up being treated as a query even though it points at static data.
Then never mind the load-balancing bits they put in there with 'v9.lscache4.c.'. So even IF you get your cache to keep the data, you may end up hitting a totally different server and pulling the same piece of data again, just served from another host. There have been a few hacks to 'rewrite' the headers and the names to make it stick, but those are just hacks, and while they work they seem fragile.
The real issue is at the HTTP layer and how servers are pointed at from inside the 'code'. Instead of some sort of indirection that would make it simple for the client to say 'these 20 servers have the same bit of data', caches must assume that the data is different from every server.
Compression and Javascript speedups are all well and good, but there is a different, more fundamental problem of re-fetching data that has already been retrieved, since local network access is almost always faster than going back out to the internet. In a single-user environment this is not too big a deal, but in a 10+ user environment it is a MUCH bigger one.
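Going back to the '?' point: the behavior can be sketched in a few lines. This is a simplified model of the default heuristic many classic proxy caches used (the hypothetical function below is illustration, not any product's actual code):

    # Simplified model of why that YouTube URL never gets cached: many older
    # proxy caches treated any URL with a query string as dynamic by default.
    from urllib.parse import urlsplit

    def old_school_cacheable(url):
        parts = urlsplit(url)
        if parts.query:                # '?' present -> assumed dynamic, skipped
            return False
        if "cgi-bin" in parts.path:    # classic dynamic-content marker
            return False
        return True

    print(old_school_cacheable("http://example.org/logo.png"))  # True
    print(old_school_cacheable(
        "http://v9.lscache4.c.youtube.com/generate_204?ip=0.0.0.0"))  # False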
Even the page that talks about optimization has issues
http://code.google.com/speed/articles/ [google.com]
Twelve CR/LFs right at the top of the page that aren't rendered anywhere. They should look at themselves first.
Re: (Score:3, Insightful)
How is this different from Web servers that serve up gzipped pages?
Well, for one, gzipping output doesn't have any effect on latency.
Re: (Score:3, Interesting)
Re: (Score:3, Funny)
The technical term for that is a Speedup Loop [thedailywtf.com]. All good software developers use them... for certain values of 'good'.
Suspicious.... (Score:3, Interesting)
From the link
We downloaded 25 of the "top 100" websites over simulated home network connections, with 1% packet loss. We ran the downloads 10 times for each site, and calculated the average page load time for each site, and across all sites. The results show a speedup over HTTP of 27% - 60% in page load time over plain TCP (without SSL), and 39% - 55% over SSL.
1. Look at top 100 websites.
2. Choose the 25 which give you good numbers and ignore the rest.
3. PROFIT!
How about telling Analytics to take a hike? (Score:5, Insightful)
And all other "add this piece of Javascript to your Web page and make it more awesomer!"
Yes, yes, they're useful. And you can't fathom a future without them. But in the meantime I'm watching my status bar say, "completed 4 of 5 items", then change to "completed 11 of 27 items", to "completed 18 of 57 items", to "completed... oh screw this, you're downloading the whole Internet, just sit back, relax and watch the blinkenlights".
Remember when a 768kbps DSL line was whizzo fast? Because all it had to download was some simple HTML, maybe some gifs?
I want my old Internet back. And a pony.
Re:How about telling Analytics to take a hike? (Score:5, Funny)
And all other "add this piece of Javascript to your Web page and make it more awesomer!"
Yes, yes, they're useful. And you can't fathom a future without them. But in the meantime I'm watching my status bar say, "completed 4 of 5 items", then change to "completed 11 of 27 items", to "completed 18 of 57 items", to "completed... oh screw this, you're downloading the whole Internet, just sit back, relax and watch the blinkenlights".
Remember when a 768kbps DSL line was whizzo fast? Because all it had to download was some simple HTML, maybe some gifs?
I want my old Internet back. And a pony.
That's why smart web developers put those scripts at the end of the body.
Re: (Score:3, Insightful)
It's also why smart users filter them outright with something like AdBlock - anything that I see in the browser history that looks like a tracking/stats domain or URL gets blocked on sight. Come to think of it, I could probably clean it up and publish it as an AdBlock filter list if anyone's interested; there's only a few dozen entries on there at the moment, but I'm sure that would grow pretty quickly if it was used by a more general and less paranoid userbase.
Re:How about telling Analytics to take a hike? (Score:4, Interesting)
It's also why smart users filter them outright with something like AdBlock - anything that I see in the browser history that looks like a tracking/stats domain or URL gets blocked on sight. Come to think of it, I could probably clean it up and publish it as an AdBlock filter list if anyone's interested; there's only a few dozen entries on there at the moment, but I'm sure that would grow pretty quickly if it was used by a more general and less paranoid userbase.
What's paranoid about insisting that a company bring a proposal, make me an offer, and sign a contract if they want to derive monetary value from my personal data? Instead, they feel my data is free for the taking and this entitlement mentality is the main reason why I make an effort to block all forms of tracking. I never gave consent to anyone to track anything I do, so why should I honor an agreement in which I did not participate? The "goodness" or "evil-ness" of their intentions doesn't even have to be a consideration. Sorry but referring to that as "paranoid" is either an attempt to demagogue it, or evidence that someone else's attempt to demagogue it was successful on you.
Are some people quite paranoid? Sure. Does that mean you should throw out all common sense, pretend there are only paranoid reasons to disallow tracking, and ignore all reasonable concerns? No. Sure, someone who paints with a broad brush might notice that your actions (blocking trackers) superficially resemble some actions taken by paranoid people. Allowing that to affect your decision-making only empowers those who are superficial and quick to assume, because you are kowtowing to them. This is what insecure people do. If the paranoid successfully tarnish the appearance of an otherwise reasonable action because we care too much about what others may think, it can only increase the damage caused by paranoia.
Re: (Score:3, Insightful)
It's not "free for the taking". It's "free in exchange for free content on the web".
(Note, I'm not arguing against ad blockers or the like... just as I 30-second-skip through the vast, vast, vast majority of commercials on my TiVos, and fast-forwarded through them on my VCR before that.)
Re: (Score:3, Interesting)
Because the costs of doing so would outweigh the benefits, leading to no one agreeing to the use of their data, no ad revenue, and ultimately no professional web sites (except those that charge a fee to view). This situtation is termed a "market failure", in this case because of high transaction costs. Therefore, society standardizes the agreeme
Re: (Score:3, Interesting)
Instead, they feel my data is free for the taking and this entitlement mentality is the main reason why I make an effort to block all forms of tracking.
What about your sense of entitlement to get their content under your conditions?
Re: (Score:3, Informative)
AdSense is embedded where the ads are going to be, Google Maps scripts are embedded where the map is going to be, etc.
This doesn't have to be the case, unless you're still coding per 1997 standards. Even with CSS 1, you can put those DIVs last in the code and still place them wherever you want them to be.
It's what I do with the Google ads (text only ads, FWIW) on one of my personal sites - so the content loads first, and then the ads show up.
Re: (Score:2)
You forgot to yell at the kids to get off your internet.
Re: (Score:2)
I want my old Internet back. And a pony.
If Slashdot does OMG Ponies again will that satisfy your wants and needs?
Re: (Score:2)
Jeebus. I remember when my 1200 baud modem felt whizzo fast compared to my old 300 baud modem.
And, yes, I can already see the "get off of my lawn" posts below you, and I'm dating myself. :-P
Cheers
Re: (Score:3, Insightful)
I want my old Internet back. And a pony.
LOL. I'd suggest disabling javascript and calling it a day.
Alternatively, use a text-based browser. If the webpage has any content worth reading, then a simple lynx -dump in 99% of cases will give you what you want, with the added bonus of re-formatting those mile-wide lines into something readable.
On the other hand, I suspect most people don't want the "old internet". What was once communicated on usenet or email in a few simple lines, for example, now increasingl
Solving the wrong problem (Score:5, Interesting)
The problem isn't pushing the bits across the wire. Major sites that load slowly today (like Slashdot) typically do so because they have advertising code that blocks page display until the ad loads. The ad servers are the bottleneck. Look at the lower left of the Mozilla window and watch the "Waiting for ..." messages.
Even if you're blocking ad images, there's still the delay while successive "document.write" operations take place.
Then there are the sites that load massive amounts of canned CSS and Javascript. (Remember how CSS was supposed to make web pages shorter and faster to load? NOT.)
Then there are the sites that load a skeletal page which then makes multiple requests for XML for the actual content.
Loading the base page just isn't the problem.
Comment removed (Score:5, Insightful)
Re: (Score:2)
Yes. Quite often completely non-functional, because the site requires Javascript to do anything.
Usually this is followed by an assertion that the site's developer is a clueless knob--which may be true, but doesn't help at all. This is the Web we deserve, I suppose: 6 megabit cable connections and dual-core 2.5 gigahertz processors that can't render a forum page for Pokemon addicts in under 8 seconds.
Re: (Score:3, Funny)
So if Google sped up the non-ad web, they would have more room for their ads?
SNEAKY!!
Re:Solving the wrong problem (Score:5, Funny)
I think you mean SNKY
Re: (Score:2)
But can't you see how SPeeDY will solve ALL these? (Score:2)
?
No. Neither can I. It will let them *push* adverts at you in parallel though... *before you asked for them*
Google wanting more efficient advert distribution... No, never...
Re: (Score:2, Insightful)
What, you think after the first load that CSS file isn't cached in any way? Inline styles slow down every load; CSS only the first. CSS was supposed to make styling elements not completely braindead. You want to change the link colors from red to blue? With inline styles, enjoy your grepping. You're bound to forget some of 'em, too.
Bitching about ad loading times and huge JS libraries? Sure, go ahead.
Re: (Score:2)
So... when you try to load slashdot, the requests that fetch the content don't get rolling until the request that fetches the ad finishes... and SPDY allows all of the requests to be processed concurrently so the content doesn't have to wait for the ad...
How is that solving the wrong problem again?
Re: (Score:2)
As the OP said, they're solving the wrong problem. It's not a transport issue, it's design issues. And many websites are rife with horrible design [worsethanfailure.com].
Re:Solving the wrong problem (Score:4, Insightful)
CSS can make things shorter and faster, if sites just remember to link to it as a static file.
You can't cache something that changes, and anything, like CSS and Javascript, that gets caught up in the on-the-fly generation of dynamic, uncacheable text despite actually being static is just going to clog up the tubes.
In fact, thanks to Slashdot's no-edits-allowed policy, each comment itself is a static, unchangeable snippet of text. Why not cache those?
Sending only the stuff that changes is usually a good optimization no matter what you're doing.
CSS and Javascript themselves aren't bad. Failing to link them externally, and thus make them cacheable, however, is.
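A minimal sketch of the "link it externally and cache it" advice, using Python's stock http.server; the file layout and the one-week max-age are arbitrary choices for illustration:

    # Serve CSS/JS as static files with an explicit long freshness lifetime
    # so browsers stop re-fetching them on every page view.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class CachingHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Static assets: let clients reuse them for a week without asking.
            if self.path.endswith((".css", ".js")):
                self.send_header("Cache-Control", "public, max-age=604800")
            SimpleHTTPRequestHandler.end_headers(self)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CachingHandler).serve_forever()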
Cloud gaming (Score:2)
Re: (Score:2)
For basic casual Flash stuff, there will almost certainly be Flash support (since Adobe seems to at least be promising to get off their ass about reasonably supporting non-Wintel platforms). In the longer term, Google's work on making Javascript really fast will, when combined with SVG or WebGL, allow Flash-level games to be produced with stock web technologies.
Application Layer... (Score:3, Interesting)
Doesn't that mean that both the client and the server have to be running this new protocol to see the benefits? Essentially, either one or the other is still going to be using HTTP if you don't set it up on both, and it's only as fast as the slowest piece.
While a great initiative, it could be a while before it actually takes off. Getting the rest of the world running on a new protocol will take some time, and there will no doubt be some kinks to work out.
But if anyone could do it, it'd be Google.
Re: (Score:2)
A plugin gets it into something like Firefox. Then, as long as there is a way for a web server like Apache to accept both kinds of requests (HTTP or SPDY), it shouldn't be that hard: you aren't storing your web pages (static or dynamic) in a different format, so it shouldn't be much work to add the [Apache] module once it is written.
Re: (Score:3, Informative)
Yes, it means that both sides have to speak the protocol.
That is why we want to engage the community to start to look at this work!
All the parentheses in the summary... (Score:2)
Am I the only one imagining a ventriloquist controlling a snarky dummy that counters all the points in the summary with dubious half-truths?
Cool.... but it's not http (Score:5, Insightful)
So which ports are you planning to use for it?
Re:Cool.... but it's not http (Score:5, Informative)
Right now the plan is to use port 443. We may as well make the web a safer place while we make it faster.
The plan for indicating that a client/server speaks SPDY is still somewhat up in the air... What we have planned right now is:
UPGRADE (ye olde HTTP UPGRADE),
and putting some string into the SSL handshake that allows both sides to advertise which protocols they speak. If both speak SPDY, then it can be used.
The latter is nice because you don't have the additional latency of an extra round trip (and that latency can be large!)
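For the curious, the UPGRADE path looks roughly like this on the wire (the "SPDY/1" token is a placeholder - the real negotiation string hasn't been settled). Note this is the variant that costs the extra round trip:

    # Rough sketch of "ye olde HTTP UPGRADE" (RFC 2616 section 14.42): the
    # client asks to switch protocols in-band; a willing server answers
    # "101 Switching Protocols", otherwise both sides fall back to plain HTTP
    # on the same connection.
    import socket

    request = (
        b"GET / HTTP/1.1\r\n"
        b"Host: example.org\r\n"
        b"Connection: Upgrade\r\n"
        b"Upgrade: SPDY/1\r\n"
        b"\r\n"
    )
    sock = socket.create_connection(("example.org", 80))
    sock.sendall(request)
    reply = sock.recv(4096)
    # A SPDY-capable server would reply "HTTP/1.1 101 Switching Protocols".
    print(reply.split(b"\r\n", 1)[0])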
Not a terribly new concept. (Score:5, Informative)
Re: (Score:3, Insightful)
It may be noble in goal, but AOL's implementation makes things hell on sysadmins trying to load-balance AOL users' connections. In a given session, even a page load, I can expect connections from n number of (published) AOL proxies, *and* the user's home broadband IP. It's not possible to correlate them at layer 3, so nasty layer-7 checks get used instead, and AOL users wind up getting shoved into non-redundant systems.
Yeah, right... but WHY?!? (Score:2, Insightful)
I mean, reinventing the wheel, well, why not; this one is old, and let's say we have done all we could with HTTP...
But why, WHY would you call it by a stupid name like SPDY?!? It's not even an acronym (or is it?).
It sounds bad, and it'll be years (a decade?) before it is well supported... but why not. Wake me when it's ready for production.
I guess they're starting to get bored at Google if they're trying to rewrite HTTP.
Re: (Score:3, Interesting)
If, though, your business model largely depends on creating webapp UIs that are good enough to compete with native local UIs, HTTP's latency and other issues are going to strike you as a fairly serious problem(particularly if the future is very likely going to involve a lot more clients connecti
While we're at it ... (Score:5, Interesting)
While we're at it, let's also make processing web pages faster.
We have a semantic language (HTML) and a language that describes how to present that (CSS), right? This is good, let's keep it that way.
But things aren't as good as they could be. On the semantic side, we have many elements in the language that don't really convey any semantic information, and a lot of semantics there isn't an element for. On the presentation side, well, suffice it to say that there are a _lot_ of things that cannot be done, and others that can be done, but only with ugly kludges. Meanwhile, processing and rendering HTML and CSS takes a lot of resources.
Here is my proposal:
- For the semantics, let's introduce an extensible language. Imagine it as a sort of programming language, where the standard library has elements for common things like paragraphs, hyperlinks, headings, etc. and there are additional libraries which add more specialized elements, e.g. there could be a library for web fora (or blogs, if you prefer), a library for screenshot galleries, etc.
- For the presentation, let's introduce something that actually supports the features of the presentation medium. For example, for presentation on desktop operating systems, you would have support for things like buttons and checkboxes, fonts, drawing primitives, and events like keypresses and mouse clicks. Again, this should be a modular system, where you can, for example, have a library to implement the look of your website, which you can then re-use in all your pages.
- Introduce a standard for the distribution of the various modules, to facilitate re-use (no having to download a huge library on every page load).
- It could be beneficial to define both a textual, human readable form and a binary form that can be efficiently parsed by computers. Combined with a mapping between the two, you can have the best of both worlds: efficient processing by machine, and readable by humans.
- There needn't actually be separate languages for semantics, presentation and scripting; it can all be done in a single language, thus simplifying things
I'd be working on this if my job didn't take so much time and energy, but, as it is, I'm just throwing these ideas out here.
Re:While we're at it ... (Score:4, Insightful)
We have a semantic language (HTML) and a language that describes how to present that (CSS), right? This is good, let's keep it that way.
But things aren't as good as they could be. On the semantic side, we have many elements in the language that don't really convey any semantic information, and a lot of semantics there isn't an element for. On the presentation side, well, suffice it to say that there are a _lot_ of things that cannot be done, and others that can be done, but only with ugly kludges. Meanwhile, processing and rendering HTML and CSS takes a lot of resources.
The problem is that worrying about semantic vs presentation is something that almost no one gives a s**t about, because it is an artificial division that makes sense for computer science reasons, not human reasons. I don't sit down to make a web page and completely divorce the content vs the layout; the layout gives context and can be just as important as the content itself in terms of a human brain grasping an attempt at communication.
I know I shouldn't use tables for presentation but I just don't care. They are so simple and easy to visualize in my head, and using them has never caused a noticeable slowdown in my app, caused maintenance headaches, cost me any money, etc. The only downside is listening to architecture astronauts whine about how incorrect it is while they all sit around and circle-jerk about how their pages pass this-or-that validation test.
In oh so many ways writing a web app is like stepping back into computer GUI v1.0; so much must be manually re-implemented in a different way for every app. Heck, you can't even reliably get the dimensions of an element or the currently computed styles on an element. Lest you think this is mostly IE-vs-everyone else, no browser can define a content region that automatically scrolls its contents within a defined percentage of the parent element's content region; you've gotta emit javascript to dynamically calculate the size. This is double-stupid because browsers already perform this sort of layout logic for things like a textarea that has content that exceeds its bounds. And guess what? This is one of the #1 reasons people want to use overflow:auto. Don't waste screen real-estate showing scrollbars if they aren't necessary, but don't force me to hard-code height and width because then I can't scale to the user's screen resolution.
This kind of crap is so frustrating and wastes MILLIONS upon MILLIONS of man-hours year after year, yet we can't even get the major browser vendors to agree to HTMLv5 and what little bits (though very useful) it brings to the table. So please spare me the semantic vs presentation argument. If just a few people gave a s**t and stopped stroking their own egos on these bulls**t committees and actually tried to solve the problems that developers and designers deal with every day then they wouldn't have to worry about forcing everyone to adopt their standard (IPv6), the desire to adopt it would come naturally.
A novel idea (Score:4, Interesting)
How about we don't use HTTP/HTML for things they were not designed or ever intended to do? You know, that "right tool for the right job" thing.
What's the right tool for today's web apps? (Score:3, Insightful)
"right tool for the right job"
Fair enough.
What's the right tool to deliver to your users rich applications which are
I don't know of any tool other than HTTP/HTML. I can imagine something with ssh and X forwarding, but Windows boxes don't come with X preinstalled.
How about downsides... (Score:3, Interesting)
It's not all rosy, as the short documentation page explains. While they are trying to maximize throughput and minimize latency, they are hurting other areas. Two obvious downsides I see:
1. The server would now have to hold the connection open to the client throughout the client's session, and also keep the associated resources in memory. While this may not be a problem for Google and their seemingly limitless processing power, Joe Webmaster will see his web server load average increase significantly. HTTP servers usually give you control over this with the keep-alive time and max connections/children settings. If the server is now required to keep connections open, it would spell more hardware for many or most websites;
2. Requiring compression seems silly to me. It would increase the processing power required on the web server (see above) and also on the client - think underpowered portable devices. It needs to stay optional: if the client and server both support and prefer compression, they should use it; if not, let them be. Keep in mind also that images, video, and other multimedia are already compressed, so compressing these again would increase server/client load _and_ payload size.
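Point 2 amounts to content negotiation that HTTP already supports; here is a sketch of the sane policy (hypothetical helper, illustration only):

    # Gzip text only when the client advertises support, and never recompress
    # media that is already compressed.
    import gzip

    ALREADY_COMPRESSED = ("image/jpeg", "image/png", "video/", "application/zip")

    def maybe_compress(body, content_type, accept_encoding):
        if "gzip" not in accept_encoding:
            return body, None              # client never asked for it
        if content_type.startswith(ALREADY_COMPRESSED):
            return body, None              # would only burn CPU and grow payload
        return gzip.compress(body), "gzip"

    body, encoding = maybe_compress(
        b"<html>...</html>" * 100, "text/html", "gzip,deflate")
    print(encoding, len(body))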
Re: (Score:3, Informative)
As a server implementor, I can tell you that I'd rather have 1 heavily used connection than 20 (and that is a LOW estimate for the number of connections many sites open!). Server efficiency was one of my goals for the protocol, in fact!
When we're talking about requiring compression, we're talking about compression over the headers only.
In any case, as someone who operates servers... I can't tell you how many times I've been angry at having to turn off compression for *EVERYONE* because some browser advert
HTTP-NG Revisited (ten years later!) (Score:5, Informative)
The good news is that SPDY seems to build on the SMUX ( http://www.w3.org/TR/WD-mux [w3.org] ) and MUX protocols that were designed as part of the HTTP-NG effort, so at least we're not reinventing the wheel. Now we have to decide what color to paint it.
Next up: immediate support in Firefox, WebKit, and Apache -- and deafening silence from IE and IIS.
Would appreciate it if instead... (Score:3, Insightful)
SPDY (Score:3, Insightful)
Cache control 4tw. A lot of the user perception problems SPDY is trying to solve can be solved by utilizing already-existing protocol features and the farms of cache servers at ISPs for your active content.
The latency difference between a user going all the way to your server to grab your content vs. going to the ISP's cache server can be huge when you consider a separate connection for each part of the page. Couple that with the decreased response time (checking a cache file and responding with a 304 is a lot easier on your server than pulling your content out of a database, formatting it, and sending the entire page) and you get a huge difference in end-user perception. It also frees resources on your web server faster, because you are sending 20-30 bytes instead of x KB. The faster your server can get rid of that connection, the better.
Doing this reduces the load on your server (especially connection utilization) and your bandwidth utilization, speeds up the download of your page (since it avoids the need to leave the ISP for your content), and generally makes you a better network citizen.
Of course this requires developers that understand the protocol.
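For reference, the conditional-GET dance looks like this from a client's point of view (example.org is a placeholder, and it assumes the server emits an ETag):

    # Re-request a resource with the validator from the first response; if the
    # entity is unchanged, the server sends "304 Not Modified" with no body --
    # a few dozen bytes instead of the whole stylesheet.
    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/style.css")
    first = conn.getresponse()
    etag = first.getheader("ETag")
    first.read()  # drain the body so the connection can be reused

    conn.request("GET", "/style.css", headers={"If-None-Match": etag or "*"})
    second = conn.getresponse()
    print(second.status, len(second.read()))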
What I want to know is: will ISP cache servers have this implemented?
Re: (Score:3, Insightful)
Absolutely; the 304 response won't work anymore under this new proposal. And 304 already saves a lot, as most external references are static.
There is only one exception: advertisements. One can only assume that Google wants this to effectively push advertisements on the user.
fst wb prtcl (Score:4, Funny)
If they really wanted a faster web, they would have minimized the protocol name. Taking out vowels isn't enough.
The protocol should be renamed to just 's'.
That's three fewer bytes per request.
I can haz google internship?
Re: (Score:2)
Eh, not at all. Akamai is a distribution/anycast provider. They're about the infrastructure to support large-scale websites and/or content providers with very high SLA targets, not about speeding up individual requests.
Re:Akamai? (Score:5, Informative)
Re: (Score:3, Informative)
Re: (Score:3, Funny)
You youngsters and your fancy text based web browsers. In my day, we used gopher, and we LIKED it!
Re:Just turn off image loading (Score:5, Funny)
Here's an onion to hang on your belt, grandpa.
Now, on a more serious note: isn't Gopher a faster protocol than HTTP? Could we just use it to transport HTML, pictures, etc.?
Re:Just turn off image loading (Score:5, Informative)
Gopher is not installed by default, kiddie...
Gopher is installed by default on most builds of Firefox. Try this in your address bar: gopher://gopher.floodgap.com/1/world [floodgap.com]
Re: (Score:3, Funny)
Port 80? That newfangled HTTP thing? Gopher predates HTTP by a fair number of years. You can try that fancy pants modern trick now but back in the day, that would have got you nothing.
Of course, Gopher is newer than Telnet. And Telnet is newer than BBSs. And BBSs are newer than dialing in to the university mainframe over a 300 baud acoustic-coupled modem connected to a teletype, which is where I cut my teeth, sonny boy.
Re: (Score:3, Informative)
>>>acoustic-coupled modem
Which was the result of the Bell Telephone monopoly. They refused to let other non-Bell devices connect to their lines, which forced users to buy *only* Bell products. Man I hate monopolies. I despise them like Teddy Roosevelt despised them.
Fortunately somebody came up with the idea of the acoustic modem, which connected *indirectly* via sound. Very primitive, but they worked, and they didn't break Bell's rules, and more importantly, they opened up the mark
Re:Just turn off image loading (Score:4, Informative)
>>>Gopher predates HTTP by a fair number of years.
Not correct. Gopher and HTTP were both released in the summer of 1991, so they have virtually the same birthdate. However, Gopher was available on the IBM PC that same year while HTTP was still confined to Unix systems, which is why people misremember Gopher as being first. (HTTP came to the IBM PC, Macs, and Amigas in 1993.)
Re: (Score:2)
Speaking seriously, once the main page of HTML is downloaded you pretty much know already where everything goes.
Just stub it out with "loading" boxes in spots where you don't have all the content. Especially if parameters like width= and height= already fix how big the final image is going to be.
When something finishes loading, just update the layout.
Re:Just turn off image loading (Score:5, Informative)
Someone already invented this.
It's called Opera browser
Re: (Score:3, Informative)
Mostly in that it handles tables and frames.
http://www.jikos.cz/~mikulas/links/ [jikos.cz]
Re: (Score:3, Informative)
Re: (Score:3, Informative)
# To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.
The problem with that is that now everything is encrypted. If it has multiple channels, let one be plaintext for insecure items, and one ciphered for encrypted ones.
We've had ideas along these lines-- specifically, we need to work on caching! One proposal that we had was that we'd send cryptographic hashes on the secure channel, then send the static data in the clear on a non-encrypted channel.
Alternatively, the data could be signed, and no communication would be necessary on the secure channel.
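A sketch of how that hash-splitting idea could work (all names below are hypothetical; the proposal above doesn't specify any of this):

    # The cheap-to-cache bytes travel in the clear, and only a digest crosses
    # the encrypted connection. The client accepts the plaintext copy only if
    # the digests match.
    import hashlib

    def digest(data):
        return hashlib.sha256(data).hexdigest()

    # Server side: resource on the clear channel, its hash on the TLS channel.
    resource = b"body { margin: 0; } /* static, cacheable stylesheet */"
    trusted_hash = digest(resource)        # delivered over SSL

    # Client side: fetched (possibly from an ISP cache) over plain HTTP.
    fetched = resource                     # stand-in for the clear-channel copy
    if digest(fetched) == trusted_hash:
        print("integrity verified; safe to use the cached plaintext copy")
    else:
        print("mismatch: cache served tampered or stale data, refetch securely")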
In any case, there is a lot of work to do on this, and we by no means have the answers right now. We just want to make the experiment public, and get as many people involved as
Re: (Score:3, Interesting)
Oh, also... the measured in-lab 2X speedup was without any server push. Who knows, maybe the HELLO message will eventually include a flag that says the server shouldn't push anything to the client. We're already talking about how to rate-limit anything speculative like this (so that client-requested content is almost never held up by content that is speculatively pushed).