
Google's SPDY Could Be Incorporated Into Next-Gen HTTP

MojoKid writes "Google's efforts to improve Internet efficiency through the development of the SPDY (pronounced 'speedy') protocol got a major boost today when the chairman of the HTTP Working Group (HTTPbis), Mark Nottingham, called for it to be included in the HTTP 2.0 standard. SPDY is a protocol that's already used to a certain degree online; formal incorporation into the next-generation standard would improve its chances of being generally adopted. SPDY's goal is to reduce web page load times through the use of header compression, packet prioritization, and multiplexing (combining multiple requests into a single connection). By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies."
  • by gearloos ( 816828 ) on Tuesday January 24, 2012 @11:45PM (#38815363)
    It's a secret plot by Apple to fix the lag in iOS 5 Safari by getting Google to find a way to speed up web page loads to cover it up!
    • by Anonymous Coward on Tuesday January 24, 2012 @11:53PM (#38815399)

      Look dude, if you're going to spread a conspiracy properly, you need to spread your ideas out over at least 3 excessively long paragraphs of raging insanity. You could have easily structured it:

      -Secret plot by Apple. They're evil, man.

      -The secret plot happens to be cuz iOS is evil like them.
      -iOS version 5 is also evil.

      -They've drugged Google with evil for more evil.

    • by scdeimos ( 632778 ) on Wednesday January 25, 2012 @12:56AM (#38815707)
      Hate to rain on your parade, but didn't you realize that Apple will wait three years then have a media conference introducing iSPDY as their own invention?
      • Re: (Score:3, Funny)

        by phonewebcam ( 446772 )

        And Microsoft will merely wrap some bloatware around it and call it their own, like they do with their "search engine" [blogspot.com].

      • by YA_Python_dev ( 885173 ) on Wednesday January 25, 2012 @04:33AM (#38816515) Journal

        I realize you guys are just kidding, but there's a very important and overlooked part of the SPDY protocol. Hopefully TPTB won't understand its implications before it's too late to stop SPDY adoption.

        You see, the way I read the spec and the way it's currently implemented, SPDY requires every single connection to be encrypted. It's not optional.

        Imagine that, a world where MITM attacks suddenly become much much harder, where your ISP doesn't inject ads in your search results, where your mobile provider cannot "help" you by screwing up your HTTP connections with a transparent proxy, where the British government cannot censor a Wikipedia page, where even small sites can be encrypted because web hosts save bandwidth money by offering this option to everyone.

        Imagine a world where net neutrality becomes much harder to break because all big protocols are encrypted all (or at least most) of the time and the deep packet inspection shit that's used much more widely than people think just doesn't work anymore.

        SSH, Freenet, Skype, BitTorrent and other P2P protocols are already there. This is the chance to do it for HTTP.

        Disclaimer: I speak only for myself and not anyone else. IANARE.

        • by Pieroxy ( 222434 )

          I did not know that about SPDY, thanks for the info.

          That's one reason to adopt it widely and quickly IMHO.

        • by makomk ( 752139 ) on Wednesday January 25, 2012 @05:23AM (#38816655) Journal

          Imagine a world where every time you set up a website you have to fork over money to a certificate provider in order for people to be able to access your sites, where the prices for certificates are sky-high once again because the CAs know that they've got everyone over a barrel, where governments use their influence to get their own CAs into every browser and go right on ahead MITMing everything in sight by painting anyone that refuses as a supporter of child porn...

          • Do you really, honestly, truly believe that the US Government needs its own CAs and can't simply ask Verisign/Symantec to hand over a valid certificate for a domain they want?
          • by Anonymous Coward on Wednesday January 25, 2012 @09:02AM (#38817649)

            My god. We might have to self-sign our own certificates.

  • by Anonymous Coward on Tuesday January 24, 2012 @11:55PM (#38815409)

    Microsoft has announced their implementation, DirectSPEW.

    It's going to kill SPDY when it debuts in a few months with Windows Phone 7.1.

    Hi bonch

  • by the_other_chewey ( 1119125 ) on Tuesday January 24, 2012 @11:56PM (#38815413)
    I realise that SPDY is about reducing the latency of HTTP connection handshakes -
    but wouldn't using the already existing and even implemented HTTP 1.1 standards
    for pipelining (requesting multiple resources in one request) and keep-alive (keeping
    a once-established connection open for reuse) mostly remove the handshake latency
    bottleneck?
    • by laffer1 ( 701823 ) <luke@nospAM.foolishgames.com> on Wednesday January 25, 2012 @12:19AM (#38815529) Homepage Journal

      When pipelining first came out, there were many buggy implementations. As a result, many browsers and web servers disabled the feature. Maybe it's time to turn it on for everything.

      • by zbobet2012 ( 1025836 ) on Wednesday January 25, 2012 @12:23AM (#38815545)
        To reiterate my reply below: no, pipelining offers very little gain vs. true "multiplexing", and it represents a security risk.
        • by rtb61 ( 674572 )

          Even better to combine multiplexing with better ISP proxy routines, where regular checking is done of the mirrored version versus the original source. So with enforced net neutrality the ads can be delivered from the local mirror based upon correct alignment of ad to content, without theft of content via altered or added ads.

          Mirrored ads will substantively reduce traffic across ISP boundaries. Of course the source would want to validate the accuracy of the mirror/proxy and also for the capitalist wo

    • by zbobet2012 ( 1025836 ) on Wednesday January 25, 2012 @12:20AM (#38815533)
      Browsers and servers almost all use persistent connections these days and have since at least the early 2000s. SPDY doesn't claim to do anything with this (the summary above is incorrect). SPDY does, however, implement several features of "pipelining" but in a more elegant manner. There are a host of issues with pipelining on the server side (it is a security risk, a description of why is here [f5.com]). SPDY effectively implements pipelining but without the associated security risks. It also implements more advanced features that allow the server to push data to the client without the client requesting it.
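      A toy sketch of the push idea (illustrative only; the "frames" below are made-up tuples, not the real SPDY wire format, which uses server-initiated even-numbered streams):

        # Toy model of server push: the server sends a resource the client
        # never asked for, on its own stream, alongside the requested page.
        def resource(path):
            # hypothetical stand-in for reading the resource off disk
            return ("contents of %s" % path).encode()

        def handle_request(stream_id, path):
            frames = [(stream_id, "reply", resource(path))]
            if path == "/index.html":
                # the server knows the page needs its stylesheet,
                # so it pushes it before the client asks
                frames.append((stream_id + 1, "push", resource("/style.css")))
            return frames

        for frame in handle_request(1, "/index.html"):
            print(frame)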
    • by grmoc ( 57943 ) on Wednesday January 25, 2012 @01:36AM (#38815867)

      As one of the creators of SPDY..

      No, HTTP suffers from head-of-line blocking. There is no way around that.
      HTTP also doesn't have header compression (which matters quite a bit on low-bandwidth pipes) nor prioritization to ensure that your HTML, JavaScript and CSS load before the lolkittens :)
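      A rough feel for the header-compression win (a sketch only; the header block below is made up, and SPDY actually runs deflate with a dictionary pre-seeded with common header names, which does even better on repeated requests):

        import zlib

        # A typical request header block; a browser re-sends something like
        # this (cookies and all) for every image, script and stylesheet.
        headers = (
            b"GET /static/lolkitten.jpg HTTP/1.1\r\n"
            b"Host: example.com\r\n"
            b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.7\r\n"
            b"Accept: image/png,image/*;q=0.8,*/*;q=0.5\r\n"
            b"Accept-Encoding: gzip, deflate\r\n"
            b"Cookie: session=abcdef0123456789; tracking=0123456789abcdef\r\n"
            b"\r\n"
        )

        print(len(headers), "bytes raw")
        print(len(zlib.compress(headers)), "bytes deflated")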

      • by symbolset ( 646467 ) * on Wednesday January 25, 2012 @02:52AM (#38816173) Journal
        Now that's a comment you don't see every day. Glad you're thinking of the lolkittens. They're precious.
      • by FireFury03 ( 653718 ) <`gro.kusuxen' `ta' `todhsals'> on Wednesday January 25, 2012 @03:17AM (#38816269) Homepage

        No, HTTP suffers from head-of-line blocking.

        Head-of-line blocking is a feature of TCP, and my (very cursory) understanding is that SPDY still uses TCP, so how is this not still a problem?

        A technologically better solution would probably be to use SCTP instead of TCP, but that unfortunately suffers from the fact that SCTP is only 12 years old, so Microsoft have stated that they have no intention of ever supporting it. However, MS's lack of support doesn't prevent the possibility of using SCTP in environments where it is available.

        • by Anonymous Coward on Wednesday January 25, 2012 @04:07AM (#38816435)

          Yes, the layer-4 protocol still requires in-order delivery. But lost packets are not the sort of problem SPDY is trying to solve. SCTP might also be a good idea, but you'd still want SPDY on top of it instead of HTTP because it's faster even assuming perfect, in-order packet arrival.

          But SPDY adds encapsulation within the TCP stream so data frames from multiple SPDY streams can be interleaved. It's like opening a bunch of separate TCP streams (like browsers do now) but with a lot less overhead. It's also less complicated to implement than HTTP pipelining because each request/response gets its own SPDY stream so the underlying HTTP server can treat them as fully independent connections.

          With HTTP, even with pipelining, the server must respond to requests in the order they are received. So if you request something slow -- say a DB query -- and then a bunch of static images, HTTP will have to wait for the slow request, send that response, and then send the images. Using SPDY each request creates a new SPDY stream, and the server can send back responses in each stream without respect for the order in which they arrived. So in the same slow-DB + static images scenario SPDY would be able to immediately start sending back images and then send back the DB response when it was ready. SPDY could even start sending images, and when it sees that the DB response is half-ready and has a higher priority, interrupt the image transfer, send the first half of the DB response, complete the image response, and then complete the DB response.
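          A toy illustration of that interleaving (hypothetical frames, not real SPDY framing):

            # Each response is chopped into (stream_id, chunk) frames and the
            # server writes whichever frames are ready, in any order, over one
            # TCP connection. The client reassembles per stream, so the slow
            # DB query never blocks the images.
            from collections import defaultdict

            frames = [
                (3, b"<jpeg part 1>"),    # static image, ready immediately
                (5, b"<png>"),            # another image
                (3, b"<jpeg part 2>"),
                (1, b"<db rows>"),        # slow database query, arrives last
            ]

            responses = defaultdict(bytes)
            for stream_id, chunk in frames:
                responses[stream_id] += chunk

            for stream_id in sorted(responses):
                print(stream_id, responses[stream_id])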

          • by keeboo ( 724305 )

            But SPDY adds encapsulation within the TCP stream so data frames from multiple SPDY streams can be interleaved. It's like opening a bunch of separate TCP streams (like browsers do now) but with a lot less overhead. It's also less complicated to implement than HTTP pipelining because each request/response gets its own SPDY stream so the underlying HTTP server can treat them as fully independent connections.

            Cute. Still, in the end, the future HTTP daemon will end up starting a new process for each of those streams, and each request will have a negligible extra cost to the client and the network, but will still cost the server in terms of disk I/O, processing and memory consumption.

            Looks like SPDY's multiplexing - alone - brings more interesting DoS possibilities.
            Google may not have problems with their huge structure and resources in general, but what about the rest of the world?

            It seems to me that SPDY

            • It seems to me like it would be a big win for SSL sites.

              Consider a browser visiting an SSL site. First it has to open an SSL connection and download the page. Then it has a few choices.

              1: download all the items sequentially using the same connection (obviously slow and bad)
              2: open multiple connections (wasting time and processing power on multiple SSL handshakes)
              3: use pipelining and hope it doesn't hit an item that is slow to retrieve and blocks everything else.

              With SPDY it can just open one SSL connection and do everything over it without having requests block each other.

        • Wikipedia says it is available in the kernel; it references http://www.bluestop.org/SctpDrv/ [bluestop.org], but this doesn't say anything about the stock kernel.
      • As one of the creators of SPDY..

        No, HTTP suffers from head-of-line blocking. There is no way around that.
        HTTP also doesn't have header compression (which matters quite a bit on low-bandwidth pipes) nor prioritization to ensure that your HTML, JavaScript and CSS load before the lolkittens :)

        Please tell me you're joking.

        SPDY is layered on top of TCP... **TCP** suffers from head-of-line blocking and therefore so does SPDY.

        By collapsing everything into a single connection you induce *more* latency on low speed, high latency connections because the chance of idling the link when waiting for an ACK/response with one TCP session or waiting for the next request increases vs multiple concurrent TCP sessions.

        You don't need to invent a new protocol for header compression if you intend to always use TLS an

        • **TCP** suffers from head-of-line blocking and therefore so does SPDY

          Head-of-line blocking may happen at different levels.

          In the case of pipelined HTTP, if the first request in the pipeline is slow, it will block all the others, because answers have to be delivered in order.

          A smarter protocol could take a "tagged" approach, where each response is tagged, and thus can be associated with the correct request even if delivered out of order. IIRC, IMAP uses an approach like this.
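          A minimal sketch of the tagging idea (the tags and commands below are just illustrative):

            # Each request carries a client-chosen tag; responses echo the tag,
            # so they can arrive in any order and still be matched up.
            pending = {
                "a1": "FETCH slow-report",
                "a2": "FETCH logo.png",
            }

            # The server happens to finish the second request first.
            wire_responses = [
                ("a2", b"<logo bytes>"),
                ("a1", b"<report bytes>"),
            ]

            for tag, body in wire_responses:
                print("request %r satisfied: %r" % (pending.pop(tag), body))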

          • by grmoc ( 57943 )

            SPDY goes further-- not only are requests and responses on separate streams asynchronous, but the parts of the different responses can be interleaved. The only head-of-line blocking here is based on the TCP buffering or the maximum frame size for SPDY (we typically use 4k frames).

    • Not quite. Pipelining requires responses to be delivered in the same order as the requests. This is fine if all the responses are available immediately (e.g. static CSS and images), but for dynamic content such as PHP a delay generating the content will not only delay that request but also all following requests.

      One main advantage of SPDY is HTTP header compression, which should reduce upstream bandwidth to about a quarter of what it currently is for web browsing - and while bandwidth isn't that important a

  • Real issue is... (Score:5, Insightful)

    by blahplusplus ( 757119 ) on Tuesday January 24, 2012 @11:58PM (#38815419)

    ... embedded links that activate scripts, contact other ad servers, etc. There is so much junk embedded in modern web pages that most users have no clue how much identifiable information is being harvested from their client.

    • Re:Real issue is... (Score:4, Interesting)

      by jeti ( 105266 ) on Wednesday January 25, 2012 @01:26AM (#38815821)

      Too true. Installing the Firefox NoScript extension has been an eye-opener for me.

    • Most "IT security professionals" don't know how inept they are either. It's a human trait: we all have blind spots and the biggest one is the breadth of our incompetence.
  • by Anonymous Coward on Wednesday January 25, 2012 @12:02AM (#38815439)

    Maybe Google could remove all the useless Google AdSense ads first, because the main reason pages load so slowly is the million ad calls to Google?

    • Ads are OK. They're the reason why most things in life (i.e., the Internet) are free.
      • Re: (Score:3, Insightful)

        by Anonymous Coward

        Yes, because before ads, there was no culture whatsoever and people didn't communicate. What was the point anyway? It's not like you could monetize it somehow...

        Protip: in the real world, content precedes advertisements... strangely, even on the Internet... I know you're probably too young to remember, but there _was_ a time when the Internet had more content than ads. Waaaaay back.

        Also, most things on the Internet are free, because most things in the Universe are free anyway and because... if you charged m

        • by Pieroxy ( 222434 )

          If you continue mixing everything in one big meta problem, you'll fail to see the big picture.

          Fact: People need to eat. They also usually need a roof over their head.
          Consequence: People need money.

          As a result of this, most people are not willing to work for free, because it is usually their work that makes ends meet.

          Look at Wikipedia. Free of ads, but they come whining every other year for money. All in all, they do put ads on their site, but it's ads for themselves.

          And before ads were invented the web

      • by keeboo ( 724305 )

        Ads are OK. They're the reason why most things in life (i.e., the Internet) are free.

        Ouch... I think you should broaden your interests in life.

    • You have to dig that deep for a dig at Google? That's pretty lame. Google can't do anything about how novice blogs include their ads.
  • by zbobet2012 ( 1025836 ) on Wednesday January 25, 2012 @12:14AM (#38815491)

    By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies

    HTTP 1.1, which is supported by everything newer than IE5 (at least), utilizes persistent connections. You can verify this yourself with Wireshark in seconds. SPDY's optimizations largely revolve around "pipelining", but without some of the issues that it causes.
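    You can also see it without Wireshark; Python's standard http.client, for instance, reuses the same socket across requests on one connection object (a sketch, with example.com as a stand-in host and assuming the server honours keep-alive):

      import http.client

      conn = http.client.HTTPConnection("example.com")

      for path in ("/", "/index.html"):
          conn.request("GET", path)
          resp = conn.getresponse()
          resp.read()                    # drain the body before reusing the socket
          print(path, resp.status, "on fd", conn.sock.fileno())

      conn.close()
      # If the server honours keep-alive, the file descriptor stays the same:
      # both requests go over one TCP connection.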

    • by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Wednesday January 25, 2012 @01:51AM (#38815919)

      By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies

      ... SPDY's optimizations largely revolve around "pipelining", but without some of the issues that it causes.

      No no no... You're using the wrong definition of "pipelining". First you must realize the Internet is a Series of Tubes. The Series part is important because TCP is a Serial protocol. If any Tube in the Series cracks and data is lost, endpoints start spewing packets all over the place in alarm! SPDY's optimizations revolve around "pipelining" -- That is: Lining the Pipes to prevent such events from happening in the first place.

      HTTP1.1 is OLD! The pipes built around that time are OLD too. The HTTP1.1 "pipelining" is starting to wear through after 17 years... Connecting the Tubes is expensive too; If a Header is compressed there's less chance of part being lost in a data leak when it flows through the tubes.

      There is an underground movement to get rid of the whole ridiculous idea of Tubes. I mean, Why would you take something as permeable as a NET and build a Series of Tubes out of it?! OF COURSE YOU NEED PIPELINING if you want it to be efficient!

      However, what if you didn't need a "pipe" what if you could get your information from the Sea of Data by Casting about the Web and sorting out the results on your end? You could simply keep trying until you were satisfied with the data you had. Even better if relevant data could be naturally organized -- swarm together -- in a sort of BitTorrent so you could get the data in the Net with less Casting... One could even take the center out -- Decentralize it -- to help prevent conflicts about which data came from what side. I mean: Who cares if someone wants to DownLoad a car so much that it's undrivable from all the weight? It doesn't make your car any less usable! Besides, the naysayers are all Hypocrites: They have to participate in the things they say are wrong just to even see into what we're doing -- You have to Peer to Peer!

      Don't even get me started on Cloud Computing! Seriously... It's VaporWare!

  • by Ostracus ( 1354233 ) on Wednesday January 25, 2012 @12:14AM (#38815493) Journal

    What about alternatives [geant.net] like UDT and SCTP?

  • by grcumb ( 781340 ) on Wednesday January 25, 2012 @12:15AM (#38815495) Homepage Journal

    "[T]he SPDY (pronounced 'speedy') protocol ....

    No WAY am I pronouncing it 'speedy'. I'm a callin' it 'spidey'. That way, I can build wearable network monitors which vibrate at high frequencies when the web server gets bogged down.

    And then.... I'll be able to interrupt my boss in mid-sentence and say, "Hang on, my spidey sensors are tingling..."

  • We haven't done that yet? Wasn't that a late nineties thing? We're still on a 10 and 20 year old protocol!? Why isn't Slashdot using HTML 1.1? Tables not good enough? As someone who still catches up on the IEEE from time to time, this is actually surprising. No wonder lynx hasn't needed much upgrading for connections beyond bug fixes.

    That means all advantages have been the physical pathways (that includes wireless) and TCP! Wow! Based on the fact psychics made it to the front page of slashdot witho

    • by Hadlock ( 143607 )

      We have other protocols, like FTP for example, that handle things besides web pages. HTTP is a pretty wide-open protocol and allows all sorts of things to be jammed into it, which is why it's worked so well in the past.
       
      Also, as they say, "if it ain't broke, don't fix it".

      • by Pieroxy ( 222434 )

        We have other protocols, like FTP for example, that handle things besides web pages. HTTP is a pretty wide-open protocol and allows all sorts of things to be jammed into it, which is why it's worked so well in the past.

        Also, as they say, "if it ain't broke, don't fix it".

        With all the NAT going around, FTP is becoming less and less usable.

        The biggest pro for HTTP is its universal support, not its openness.

        As far as your last sentence is concerned, we'd still be fighting for a little fire if we'd followed it all along...

    • Ah, I see... So, if it's not broke, take it apart and re-engineer it until it is. I'm filing this under my subtle methods for maintaining job security.
      • HTTP's inception predates the scale at which the Internet is used today, and like IPv4, the failure to anticipate the shifts in use and data access within the Internet makes it far less efficient than it could otherwise be. SPDY, along with many of the streaming protocols, identifies more with modern Internet practices than the "get this page now" technique of HTTP.

        I'm on the second tier of my ISP's access rate, and even though many pages should load in the theoretical second, they don't due to modern styli

  • by q.kontinuum ( 676242 ) on Wednesday January 25, 2012 @12:38AM (#38815623)

    Several mobile phone companies and some browsers nowadays offer special proxies to speed up the browsing experience on mobile phones and to reduce data usage for customers by serving prerendered or otherwise optimized/reduced pages. This might severely reduce Google's ability to collect user data from these users on the visited web pages (unless the user is logged in to Google+ or the like with his browser, which might be unlikely given that there are usually separate apps for social networks).

    Is this now a step to reduce the need for these proxies in order to protect their own business?

  • of something... i cant remember it right now, because, well, my text reading program has been optimized for ARM NEON, but my smart phone doesn't have NEON, or at least, this version of the driver i'm using doesn't work with NEON, and i tried to port it, but its like i can't find a good way to work around the specialized vectorization code inside of the Eigen2 library because it doesnt work on certain platforms. but if it did work, my text editor would be like 75% faster than some luser running it on a singl

    • by Surt ( 22457 )

      I think the internet has proven sufficiently slow that it is now officially time to go ahead and get on top of optimization. It's premature by about negative one decade at this point.

    • Have you considered using a hammer to pound that nail? It sounds like the limp fish you're using isn't getting it done.
  • At least it's not SPYW... thank God.

  • by Anonymous Coward

    Says it all. The connection setup isn't the problem, it's being bounced to 10 different sites other than the one you wanted to visit, half of which are overloaded at any given time.

    Fix that instead, Google, and there's no need to mess with the standards anyway.

    • by Misagon ( 1135 )

      In most cases when a web page takes a long time to load, it is not the HTTP connection to the web page's home server itself that is too slow, but ... that the page is waiting for responses from various ad servers, counters, social networking sites, etc. before it can fully render.

      In my experience, the biggest slowdowns on the sites I visit have often been caused by connections to Google Analytics.

      Thanks, Google, for making the web faster!

  • by Lisandro ( 799651 ) on Wednesday January 25, 2012 @02:00AM (#38815965)

    SPDY's goal is to reduce web page load times through the use of header compression, packet prioritization, and multiplexing (combining multiple requests into a single connection).

    I'd like to see SCTP [wikipedia.org] getting some love, which sadly enough seems unlikely if it hasn't happened so far. It's a very simple protocol mixing the good parts of both TCP and UDP, plus it supports multiplexing and prioritization right off the bat.

    • I'd like to see SCTP [wikipedia.org] getting some love, which sadly enough seems unlikely if it hasn't happened so far. It's a very simple protocol mixing the good parts of both TCP and UDP, plus it supports multiplexing and prioritization right off the bat.

      Unfortunately, whilst it's a very good protocol, it isn't supported by Windows, and Microsoft is on record as saying they have no intention of ever implementing it. I guess this is no surprise - the protocol is very new (only 12 years old) and not in common use. Microsoft traditionally wait until technologies have been in common use for a good 10-15 years before bothering to produce a half-arsed broken implementation of them (see standards like C99 for details).

      That said, it would be nice to see SCTP being u

    • by rdebath ( 884132 ) on Wednesday January 25, 2012 @03:37AM (#38816355)

      That's a really poor way of describing SCTP. Firstly, the relationship between TCP and UDP is such that TCP could be built entirely on top of UDP; the only reason it isn't physically is so that the port numbers for UDP and TCP are distinct. On the other side, the best description of UDP is actually "bare IP packets with port numbers".
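      That "bare IP packets with port numbers" description is nearly literal; the whole UDP header is four 16-bit fields (a sketch, with made-up port and payload values):

        import struct

        # Source port, destination port, length, checksum - that's all of UDP.
        src_port, dst_port, payload = 54321, 53, b"example dns query"
        length = 8 + len(payload)              # the header itself is 8 bytes
        header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # 0 = no checksum
        print(len(header), "byte header:", header.hex())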

      SCTP is not that; it would probably be most accurate to describe it as a protocol with multiple TCP streams in both directions within one connection. Because it's within a single connection a 'stream' can be very small (i.e. a little message) and still be efficient, and because there are multiple streams, messages don't have to wait for each other; though they can if you want. It is probably simpler than TCP, but only because TCP has had so much bolted on.

      But you are absolutely correct: this would be a very good protocol for throwing a load of tiny requests at a web server and getting the results back as soon as they're ready. BUT, mixing it with SSL would not be very simple; I guess you'd have to do what OpenVPN does.

      • by dkf ( 304284 )

        That's a really poor way of describing SCTP. Firstly, the relationship between TCP and UDP is such that TCP could be built entirely on top of UDP; the only reason it isn't physically is so that the port numbers for UDP and TCP are distinct. On the other side, the best description of UDP is actually "bare IP packets with port numbers".

        It also adds packet content checksums, so you're much less likely to get bad data delivered. It's less important than it used to be (due to improvements in physical network quality) but even so, it's a huge help since it lets you assume that the data is at least uncorrupted by the transfer process itself.

  • by haffy ( 66129 ) on Wednesday January 25, 2012 @02:00AM (#38815969) Homepage

    SPDY is a great example of someone thinking only of their own application.

    By increasing the initial window size from 3 to 10 they add to the bufferbloat effect (at the microscopic level) and increase jitter from a tolerable 38 ms to an intolerable 126 ms on a 1 Mbit/s ADSL line. This level of jitter severely affects VoIP sound quality. And for this calculation I have assumed that the web browser only uses one TCP connection to load the page; if it uses two TCP connections the jitter may double.
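    Rough arithmetic behind those figures (a sketch assuming 1500-byte packets; the extra few ms come from ADSL framing overhead):

      mss_bits = 1500 * 8               # one full-size packet

      for initcwnd in (3, 10):
          burst_ms = initcwnd * mss_bits / 1000000.0 * 1000
          print("initial window of %2d packets -> %3.0f ms burst on a 1 Mbit/s link"
                % (initcwnd, burst_ms))
      # 3 packets  -> ~36 ms (~38 ms with overhead)
      # 10 packets -> ~120 ms (~126 ms with overhead); a VoIP packet queued
      # behind that burst sees the whole thing as jitter.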

    But hey! What does any application developer care about other applications? They are only concerned about getting their own application sped up.

    When you improve the performance of your application, you should think about how it degrades the performance of other applications. If someone recommended increasing the O/S priority level of the web browser to the maximum, so all your other applications slowed down to a halt while the web browser was running, you would probably object. The increased initial window size is a comparable recommendation, but at a network buffering level, so very few people understand its negative side effects.

    We all want faster loading web pages, but we also want other applications to respond faster, and we also want perfect VoIP sound quality without the walkie talkie effect caused by high latency or jitter.

    • by mnot ( 71203 )

      Actually, Jim Gettys has called for the adoption of SPDY (alongside the wider deployment of HTTP pipelining) to help mitigate bufferbloat.

    • By increasing the initial window size from 3 to 10 they add to the bufferbloat effect (at the microscopic level) and increase jitter from a tolerable 38 ms to an intolerable 126 ms on a 1 Mbit/s ADSL line.

      I can't really see how increasing the window size would increase jitter. Network bandwidth aside, the throughput of a TCP connection is a function of latency and window size. Increasing the window size simply increases the throughput on high-latency networks.

      You're still limited to MTU-size packets (probably 1500 octets), and if you're using a priority queuing discipline this gives you a network jitter of about 12 ms on a 1 Mbps connection: (1500*8)/1000000 = 0.012 s, since the priority queue will insert the small high-priority RTP packets between the large low-priority packets.

      When you improve the performance of your application, you should think about how it degrades the performance of other applications. If someone recommended increasing the O/S priority level of the web browser to the maximum, so all your other applications slowed down to a halt while the web browser was running, you would probably object. The increased initial window size is a comparable recommendation, but at a network buffering level, so very few people understand its negative side effects.

      Increasing the window size is not comparable to increasing the priority of the traffic. I would agree with you if the application were setting the ToS flags(*) in an abusive way, but the window size just affects the maximum throughput for a connection at a given latency. Given that latency isn't generally very controllable, this can't even be regarded as an effective method of intentionally throttling throughput (shaping the traffic on the router would be more sensible here).

      (* routers out on the internet won't generally pay attention to ToS flags, so setting them abusively wouldn't normally give you any advantage anyway. However, routers at the ends of low bandwidth links, such as ADSL, should be paying attention to ToS flags in order to prioritise latency-sensitive protocols. If you're not doing this and just relying on FIFO queuing then you're pretty much screwed already for VoIP unless you're using the connection for nothing else).
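      The same 12 ms figure in code, next to what a plain FIFO can do to you (a sketch; the 10-packet backlog is an arbitrary illustration):

        link_bps = 1000000                     # 1 Mbit/s
        mtu_bits = 1500 * 8

        per_packet_ms = mtu_bits * 1000.0 / link_bps
        print("priority queue: VoIP waits at most one packet = %.0f ms" % per_packet_ms)

        backlog = 10                           # packets already queued (illustrative)
        print("plain FIFO: VoIP can wait %d packets = %.0f ms" % (backlog, backlog * per_packet_ms))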

    • by Lennie ( 16154 )

      SPDY does not increase the initial window; Google, Microsoft and many others increased that on their servers to get the content to you faster.

      Within Google that's a completely different department, I'm sure.

      SPDY uses fewer connections, which is a good thing for reducing bufferbloat.

      Someone above already mentioned that Jim Gettys said SPDY will reduce the effect of bufferbloat.

  • Do you know any methods to improve tethering? I use my 3G phone as a WiFi hotspot. It seems that when the connection is idle for a while, it takes some time to kick it up to full speed. Sometimes when I try to post a message to some forum, I have to load some other page in the background to wake up the link.

    Does the TCP slow-start make things worse here, would the connection run better if I encapsulate it to some single tunnel, should I send some constant keep-alive data, can I force an Android phone to s

  • by Skuto ( 171945 ) on Wednesday January 25, 2012 @05:17AM (#38816631) Homepage

    Go to about:config and switch network.http.spdy.enabled.

    Mozilla has been quite critical of some Google technologies (Dart, Native Client, ...) that it saw as not truly open and as closing down the Internet into the GoogleWeb. SPDY got implemented, though. So I guess it's a keeper and might see wider adoption.

    • by makomk ( 752139 )

      From what I recall from reading the bug report about adding it, SPDY was just on the borderline of being actually implementable by other browsers - undocumented and requiring some hairy low-level changes to their SSL implementation, but simple enough to be doable. There's probably a reason it's not enabled by default though. Also, I think bits of it may have been effectively reverse-engineered from the behaviour of the Google services using it.

  • Although (Score:3, Funny)

    by sakura the mc ( 795726 ) on Wednesday January 25, 2012 @05:56AM (#38816777)

    The speed benefits provided by this new protocol will rapidly be negated by the ability to cram more shit into each connection.

  • by mangobrain ( 877223 ) on Wednesday January 25, 2012 @05:58AM (#38816795) Homepage

    By choosing TLS as the underlying transport, inspecting traffic for debugging or monitoring purposes becomes nigh on impossible. It will only be possible to capture traffic at the application level, after decryption and decapsulation (which, depending on the nature of any given bug, may be too late), or if it originates from servers you own (i.e. the server's private key is available to you). SPDY proxies will become dumb pipes, unable to perform any sort of content caching, unless users are willing to accept that the proxy may perform MITM on their traffic. For such MITM to be feasible, users need to trust a CA certificate associated with the proxy, and rely on the proxy performing upstream certificate checks on their behalf, because the browser no longer has a single end-to-end TLS session with the origin server. In corporate and other "managed" environments, people often find this acceptable in moderation*, but I would be worried if this became the norm - it creates a mind-set whereby it's acceptable to breach users' privacy, and the more proxy vendors have incentive to implement the necessary (proprietary) code, the more scope there is for them to get it wrong, completely undermining security.

    Not to mention that introducing a mux/demux layer in between the network traffic and the individual requests/responses greatly increases the complexity needed to implement a proxy compared to plain-old HTTP.

    Losing the functionality of caching proxies would seem counter to Google's goal of speeding up the Web. Losing the ability to monitor and filter network traffic will greatly diminish the ability of schools, employers, public hotspot providers etc. to enforce acceptable usage policies, to the extent that some - especially employers - may simply resort to blocking web access outright.

    IMHO, the behaviour of SPDY proxies needs to be tightly specified, if they are going to exist. Standardise MITM behaviour, so that users and admins are aware of the pros and cons. Make it mandatory that end users are warned when MITM is taking place. Perhaps introduce new extensions, such as the ability for the proxy to establish TLS with the browser using its own certificate, but transmit a copy of any origin server certificates corresponding to proxied requests, so that the browser isn't entirely at the proxy's mercy WRT certificate verification. Perhaps introduce the ability to multiplex requests to different domains on the same connection, something a browser can't do when talking directly to distinct origin servers.

    Note that similar concerns apply to reverse proxies, used by website providers themselves to speed up their own infrastructure. It may seem desirable for both front-end caching and back-end servers to use SPDY, but establishing TLS sessions between two halves of the same infrastructure, over the LAN, will be detrimental to resource usage.

    * Bear in mind that currently, MITM is only necessary for HTTPS sites, which means that there are vast swathes of in-the-clear HTTP traffic for which caching doesn't introduce any inherent security concerns. By making *everything* use TLS, MITM becomes a consideration if you want to perform *any* caching/filtering at all, not just caching/filtering of secure sites. If there is no longer any distinction between secure and insecure sites, how does even a responsible admin know where to draw the line?

  • I realize there is probably a really good reason why they are not using the built-in compression tools available. What are they again?
