Internet Explorer Implements HTTP/2 Support

jones_supa writes: As part of the Windows 10 Technical Preview, Internet Explorer will introduce HTTP/2 support, along with performance improvements to the Chakra JavaScript engine and a top-level domain parsing algorithm based on publicsuffix.org. HTTP/2 is a new standard from the Internet Engineering Task Force. Unlike HTTP/1.1, the new standard communicates metadata in binary format to significantly reduce parsing complexity. While binary is usually more efficient than text, the real performance gains are expected to come from multiplexing, where multiple requests can share the same TCP connection. With this, one stalled request won't block other requests from being honored. Header compression is another important performance improvement in HTTP/2.
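
For a rough sense of why the binary framing is cheaper to parse, here is a sketch of the fixed 9-octet HTTP/2 frame header from RFC 7540 (a TypeScript/Node illustration, not IE's implementation):

```typescript
// Every HTTP/2 frame starts with a fixed 9-octet header (RFC 7540):
// 24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream ID.
// Fixed offsets mean no line scanning and no whitespace handling.
function parseFrameHeader(buf: Buffer) {
  const length = buf.readUIntBE(0, 3);               // 24-bit payload length
  const type = buf.readUInt8(3);                     // 0x0 = DATA, 0x1 = HEADERS, ...
  const flags = buf.readUInt8(4);                    // type-specific flags
  const streamId = buf.readUInt32BE(5) & 0x7fffffff; // top bit is reserved
  return { length, type, flags, streamId };
}

// A HEADERS frame carrying 42 bytes of payload on stream 1:
const frame = Buffer.from([0x00, 0x00, 0x2a, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01]);
console.log(parseFrameHeader(frame)); // { length: 42, type: 1, flags: 4, streamId: 1 }
```
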
  • When did the slash get added, and why? Anyway, it is just a cleaner, modern version of SPDY; it should be trivial to support for most browsers, assuming it is actually final.

  • by GbrDead ( 702506 ) on Friday October 03, 2014 @08:54AM (#48054877)

    Slowly, web services are becoming a bad reimplementation* of CORBA. Once again, why did we jump on their bandwagon?

    * Hm, maybe the correct word is "restandardization"?

    • I've always been a fan of IIOP. You can use IIOP even if you don't want to re-introduce some of the more hangover-inducing parts of the full CORBA stack (Java's remote interfaces use IIOP, IIRC).

      Some people complain that a binary protocol is somehow not "open" but I've seen enough "open" XML uber-nested gibberish in my time to question that assertion...

    • by Warbothong ( 905464 ) on Friday October 03, 2014 @09:47AM (#48055299) Homepage

      Slowly, web services are becoming a bad reimplementation* of CORBA. Once again, why did we jump on their bandwagon?

      As far as I understand it, SOAP is a reimplementation of CORBA, whereas HTTP is a REST protocol.

      Specifically, HTTP doesn't try to keep disparate systems synchronised; it is stateless and has no notion of "distributed objects". Every request contains all of the information necessary to generate a response, for example in HTTP Auth the credentials are included in every request.

      Of course, people keep trying to re-introduce state into the protocol, e.g. for performance ("modified since") or to support stateful programs (cookies). These aren't necessary, though; for example, we can replace cookies (protocol-level state) with serialised delimited continuations (content-level state) http://en.wikipedia.org/wiki/C... [wikipedia.org]
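
A toy sketch of the content-level-state idea (not the linked paper's actual continuation machinery; the names and scheme are illustrative): the server serialises the session state into a signed token and embeds it in every link it generates, instead of setting a cookie.

```typescript
// Toy content-level state: state travels in the page/URL, not in a cookie.
// The HMAC makes the token hard to forge; SECRET stays on the server.
import { createHmac } from "node:crypto";

const SECRET = "server-side-secret"; // illustrative

interface CartState { items: string[] }

function encodeState(state: CartState): string {
  const body = Buffer.from(JSON.stringify(state)).toString("base64url");
  const sig = createHmac("sha256", SECRET).update(body).digest("base64url");
  return `${body}.${sig}`;
}

function decodeState(token: string): CartState {
  const [body, sig] = token.split(".");
  const expected = createHmac("sha256", SECRET).update(body).digest("base64url");
  if (sig !== expected) throw new Error("forged or corrupted state token");
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

// Each response embeds the next state in the content it generates, e.g.
// <a href="/add?item=book&state=...">Add to cart</a>
const token = encodeState({ items: ["book"] });
console.log(decodeState(token)); // { items: [ 'book' ] }
```

The signature is what makes such a token "difficultly forgeable", which connects to the session-fixation concern raised in the reply below.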

      • by tepples ( 727027 )
        The PDF linked from that Wikipedia article recommends cookies at the bottom of page 2: "To lessen that problem [of inconsistent state], it is adviseable to use difficultly forgeable URL or cookies." And it may cause vulnerability to a session fixation attack [wikipedia.org] if the user shares a continuation encoded in a URL. Consider the "add to cart" action in an online shopping application. One of the inputs is "which cart are you holding", as you don't want a shopper to see or modify another shopper's cart. If a shopper
    • by Anonymous Coward

      There are two different requirements out there. The most common one, exposing an API in a way that can be consumed by as many clients as possible, is generally better served by REST. It's simpler; anything that can do standard HTTP requests and supports the primary components (status codes, verbs, content types, headers, body) will be able to handle it (see the sketch after this comment).

      The other is "these things I normally do within my own code, I want to be able to do them remotely" (i.e.: complex operations, transactions, queuing/transparently r
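
A minimal sketch of the REST-style consumption described above (the endpoint is hypothetical; any client that speaks plain HTTP can do this):

```typescript
// Status code, verb, headers, and body are the whole contract.
async function getWidget(id: string) {
  const res = await fetch(`https://api.example.com/widgets/${id}`, {
    method: "GET",
    headers: { Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`); // outcome via status code
  return res.json();                                  // representation via body
}
```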

  • by Isca ( 550291 ) on Friday October 03, 2014 @08:58AM (#48054901)
    Chrome has plenty of innovations, but it easily becomes a resource hog and bogs down the system. IE 10 keeps chugging along. Microsoft isn't quite the Microsoft of the past. These improvements should be felt the most in the mobile space, where they clearly have the best browser. Their only problem? It might all be too late if they can never get out from under the shadow of their reputation.
    • Everything that makes us impressed with IE ends up with the "it is latest-Windows-only" argument.

    • by zennling ( 950572 ) on Friday October 03, 2014 @09:40AM (#48055221)
      Re the resource usage issue - isn't IE's low(ish) resource usage only due to the fact that a lot of what it needs to render a page is actually in the OS, and thus already loaded before it needs it?
      • by Anonymous Coward
        That is what is traditionally told, but I'm not sure if anyone has done a recent analysis of precisely which shared objects are already in memory when IE starts.
    • by Cenan ( 1892902 ) on Friday October 03, 2014 @09:43AM (#48055247)

      What the mobile (or smartphone) boom should have shown every nerd on the face of the planet: nobody outside of /. gives a shit about "reputation" when picking up a new phone or tablet. If Microsoft manages to launch a smartphone that is affordable (i.e. not priced above an iPhone) and manages to make Windows-not-metro-for-fucks-sake-please-dear-god-please-stop-reminding-us usable on a touch device and the desktop at the same time, all that bad nerd press from the last 15-20 years will mean diddly squat for their sales figures.

      The non-nerdy friends and colleagues I have all pretty much agree on what is important in a new phone: camera (especially camera performance in dim light), app store inventory (games mostly), fb app, twitter app, instagram app. Who made the device is of very little concern.

      Now, with a one-OS-to-rule-all-platforms approach, they might even be able to add some of that Apple just-works magic to their portfolio, which is not to be scoffed at.

      And I agree, MS is not old MS anymore. They've been forced to try to keep up rather than use the old buy-and-extinguish strategy, at least in the mobile and touch device market, and I think it's been good for them.

      • My girlfriend has a Windows phone (bought in spite of Windows, not because of it - it's the phone with the best camera currently available), so I've spent a bit of time playing with it. I'm really impressed with a lot of what Microsoft has done in terms of usability, but there are lots of obvious omissions in terms of basic functionality. For example, no easy way of syncing calendars or contacts (this is also true on Android if you don't drink the Google Koolaid, but at least there are third-party sync ad
        • Not entirely sure what you see is missing, but contacts and calendar stuff is synced via your Microsoft account automatically, and when you open the Office app there is a "phone" option which opens documents on your phone (including the folder which you copy stuff to over USB).

          • Not entirely sure what you see is missing, but contacts and calendar stuff is synced via your Microsoft account automatically

            So you need to store your calendar and contacts with Microsoft to sync? No thank you. iOS doesn't require that you share things with Apple, and on Android the default is to share with Google, but there are other options.

            , and when you open the Office app there is a "phone" option which opens documents on your phone (including the folder which you copy stuff to over USB).

            Didn't seem to work. Do you have to copy the files to a specific location?

        • For example, no easy way of syncing calendars or contacts

          Can you clarify? My WP maintains three synced calendars - one for my Exchange work account, one for my personal Google account, and one for Facebook.

          Some things are just really odd omissions. You can plug the phone into a computer and copy PDFs and Word documents to it. It comes with applications that are designed for reading PDFs (Adobe Reader) and editing Word documents (MS Word), but there is no way for these applications to open the things that you've just copied into the phone's documents folder.

          It's there in 8.1, just not enabled out of the box. Install the "Files" app from the Store (it's an official app, but for some mysterious reason not preinstalled) - then you can just navigate the folders and open files from them with whatever app subscribed to opening that type of file.

          • Can you clarify? My WP maintains three synced calendars - one for my Exchange work account, one for my personal Google account, and one for Facebook.

            Three proprietary protocols for syncing with three entities. No support for CardDAV/CalDAV so that you (or your corporate IT folks) can run your own server, nor for local sync with a PC so that you can sync without any clouds involved.

            It's there in 8.1, just not enabled out of the box. Install the "Files" app from the Store (it's an official app, but for some mysterious reason not preinstalled) - then you can just navigate the folders and open files from them with whatever app subscribed to opening that type of file.

            Thanks!

            • I get Outlook and Facebook. Does it really speak some proprietary protocol to sync with Google calendars? What do they use?

        • by spiralx ( 97066 )

          I have the same phone, and it sounds like it hasn't been updated to WP 8.1, which I think solves these issues. Or at least I don't have them on my phone.

          • I don't think she's seen an automatic update; what's the procedure for updating?
            • by spiralx ( 97066 )

              Go to Settings and Phone Update is about halfway down - you can check for any missing updates to the OS there. I think you need to be on Wi-Fi or plugged into the PC for it to work, though.

              Also below that in Settings the About page should be able to tell you what versions you have for the OS, firmware etc. I'm currently on OS v8.10.12393.890 :)

    • by jzilla ( 256016 )
      If you have not worked with web front ends and had to deal with the torture that is IE 6, 7, 8, and to a lesser extent 9, then it might be easy to forget and forgive.
      But even if IE 10 is an acceptable browser, it just proves that Microsoft will produce a decent product only as a last resort. When all the chips are down and they are losing market share in a steady flow, then and only then does the prospect of not making a steaming pile of dogshit become feasible. I'm not impressed.
      • But even if IE 10 is an acceptable browser, it just proves that Microsoft will produce a decent product only as a last resort. When all the chips are down and they are losing market share in a steady flow, then and only then does the prospect of not making a steaming pile of dogshit become feasible.

        Yes, that seems to be true. I have always suspected that that was also the motivation in developing the NT6 foundation (which greatly improved security, stability and performance of Windows). Mac was getting more popular and Linux was getting rather good on the desktop.

    • Those $100 full x86 Windows tablets that HP is coming out with might do a lot to bring Windows around in the mobile space. Pair that with their phones, which I've only heard good things about, and I think they stand a pretty good chance of taking back a large part of the market. The only downside that people complain most about with Windows Phone and things like the Surface RT was that there weren't enough apps, and that they were too expensive. Create a $100 tablet that runs full Windows and you get rid of the
    • by Bengie ( 1121981 )
      Chrome can be a resource hog, but it's the only browser that doesn't periodically freeze up when attempting to load web pages. And "resource hog" is a relative term when desktops tend to have 16GB+ of memory and quad-core 3 GHz CPUs that sit idle 24/7. I would rather have a browser that is wasteful with my over-abundance of resources and runs smoothly than a browser that is fickle about using resources but has jarring interruptions.
      • Chrome freezes up for me all the time. More so than FF and IE.

      • And "resources hog" is a relative term when desktops tend to now have 16GB+ memory and quad core 3ghz CPUs that are idle 24/7.

        Where do you find reasonably priced desktops with 16+GB of RAM? From what I've seen, 8GB seems to be the norm for desktops.

        • by Bengie ( 1121981 )
          When limiting myself to premium name brands, the price difference between 8GB and 16GB is about $65. 8GB is what I put in my firewall because it was dirt cheap, even though it doesn't even break 100MB of usage, and it was like $20 more than 4GB at the time. 8GB is quite low end.
    • Chrome has plenty of innovations, but it easily becomes a resource hog and bogs down the system. IE 10 keeps chugging along. Microsoft isn't quite the Microsoft of the past. These improvements should be felt the most in the mobile space, where they clearly have the best browser. Their only problem? It might all be too late if they can never get out from under the shadow of their reputation.

      Yea, but until they release Android and iOS versions, it's still going to be a niche consumer product.

    • > Microsoft isn't quite the Microsoft of the past.
      Because it hasn't quite the market share of the past.

    • The main reason I avoid IE is the user interface. It's behind the times with respect to end-user usability and developer tools.

    • by zlogic ( 892404 )

      I've tried using MS-only products for about a year before surrendering and switching back to Google.
      Bing, Outlook.com, Windows 8.1/Windows Phone 8, Office 2013, all that sort of stuff. It did not work out, and the biggest complaints about IE are:
      1) Website compatibility. For some reason IE 11 chooses legacy mode for many modern sites like endomondo.com etc., which disables features and breaks stuff. Additionally, MS tried to force developers to stop using IE versions when determining supported features, which

  • Seems like the efficiency you gain parsing the binary header would be lost with the need to first decompress it ;-)
    • by xdor ( 1218206 )

      Exactly. But this is from the company who thought zipping Excel files was a good idea (XLSX). You spend more time waiting for the file to decompress than actually loading it into memory.

      Obfuscation of headers into binary is going to put a lot of AJAX code out of business.

      • Presumably this will coexist with HTTP/1.1, but yes, I can see a lot of JavaScript rewriting in my future.

      • by BaronAaron ( 658646 ) on Friday October 03, 2014 @10:00AM (#48055415)

        This won't affect AJAX. HTTP is abstracted away from the JavaScript engine by the browser. I imagine there might be some additional HTTP header parameters to play with while making AJAX calls, but that's about it. All the benefits from HTTP/2 will happen behind the scenes as far as AJAX is concerned.
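
To illustrate the parent's point, a sketch assuming a browser context (where the HTTP version is negotiated during connection setup via ALPN): the AJAX code is identical whether the connection underneath is HTTP/1.1 or HTTP/2.

```typescript
// Nothing here names an HTTP version; the browser decides that per connection.
async function loadData(): Promise<void> {
  const res = await fetch("https://example.com/api/data", {
    headers: { Accept: "application/json" },
  });
  console.log(res.status, await res.json());
}
```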

      • Exactly. But this is from the company who thought zipping Excel files was a good idea (XLSX). You spend more time waiting for the file to decompress than actually loading it into memory.

        It was (decompression is way, way, way faster than I/O to disk), but the two are completely unrelated.

        Binary =/= compressed. They overlap, but they are not equivalent. Binary just generally means that it's not specifically ASCII-encoded, which generally means that it can skip the step of parsing and lexing the ASCII into binary -- which makes it run faster (not slower).
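
A rough illustration of that lexing-vs-fixed-offset distinction (not HTTP/2's actual wire format):

```typescript
// Text: scan for the field, match digits, convert to a number.
const text = "Content-Length: 1234\r\n";
const m = /Content-Length:\s*(\d+)/.exec(text);
const lenFromText = Number(m![1]);

// Binary: one fixed-width read at a known offset.
const bin = Buffer.alloc(4);
bin.writeUInt32BE(1234, 0);
const lenFromBinary = bin.readUInt32BE(0);

console.log(lenFromText === lenFromBinary); // true
```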

      • by Jeremi ( 14640 )

        You spend more time waiting for the file to decompress than actual loading into memory.

        If you had a really slow CPU and a really fast hard drive, that might be the case. Most computers these days have a really fast CPU compared to their hard drive, though.

        • by Bengie ( 1121981 )
          Yes, CPUs are so fast compared to hard drives that not only does ZFS default to using compression for storage, but they are working on leaving the data compressed in memory. The cool benefit is that the data block logic for moving between the caching layers is much simpler, because there are no compression/decompression steps anymore. All compression and decompression happens in one spot in the code.
      • It's not zipping Excel files, it's zipping the Office Open XML - which compresses it a lot, since XML repeats the same strings over and over again in tags.

        And it's not about load speed. It's about the size of those files when you, say, attach one to an email.
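
A quick way to check that claim, using Node's built-in zlib (DEFLATE, the same compression family used inside .xlsx containers):

```typescript
import { deflateSync } from "node:zlib";

// Repetitive tags, as in a typical spreadsheet's XML.
const row = "<row><cell>42</cell><cell>hello</cell></row>";
const xml = "<sheet>" + row.repeat(1000) + "</sheet>";

const compressed = deflateSync(Buffer.from(xml));
console.log(xml.length, "->", compressed.length);
// Input this repetitive typically shrinks by well over 90%.
```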

    • The theory is that by saving network time receiving the smaller compressed header, the total time is still less.

      Of course, this assumes that you're on a slow enough network that the compression savings are worth it. Since typically latency is a bigger problem than throughput, I don't see compression as being terribly important.

      Similarly with the binary protocol: Parsing speed isn't the real problem. I'd rather have a plaintext protocol that I can test with PuTTY than save a few cycles parsing.

      • Plain text is great when you're just transferring text. The problem is HTTP has been used for transferring a lot more than just text for a long time. Images, file downloads, video, etc. With HTTP/1.1, browsers have different parsing code paths depending on whether it's a binary file or plaintext HTML. There are also special cases for handling whitespace and things like that. It makes developing and testing a browser more complex than it should be.

      • Would you also prefer that TCP, UDP, Ethernet, and IPsec used plaintext? What about TLS?

        Sometimes it just doesn't make sense to use plaintext. There's no case where plaintext would be useful where you couldn't simply use a tool (like Wireshark or Fiddler) to convert the binary into a readable form.

    • Being binary doesn't mean it needs decompression. It may actually mean that you skip the step of lexing ASCII into binary.

    • I just did a quick test and found that decompression runs at about 800 Mbps. Therefore, if the network connection is slower than 800 Mbps, it makes sense to transfer compressed data over the network, then decompress it.

      The fact that binary data doesn't need the same parsing as ASCII does is kind of an unrelated issue.
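
A sketch of the kind of quick test the parent describes (illustrative, not a rigorous benchmark; the number depends entirely on CPU and input):

```typescript
import { deflateSync, inflateSync } from "node:zlib";

const input = Buffer.from("some moderately repetitive payload ".repeat(100_000));
const compressed = deflateSync(input);

const start = process.hrtime.bigint();
const out = inflateSync(compressed);
const seconds = Number(process.hrtime.bigint() - start) / 1e9;

const mbps = (out.length * 8) / seconds / 1e6;
console.log(`decompressed ${out.length} bytes at ~${mbps.toFixed(0)} Mbps`);
```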

      • That doesn't mean anything. Were you running on a 3 GHz Ivy Bridge server, or a little PIC IoT device? How much CPU time did it take? How much did your application need? Did the net result of header decompression, along with the easier parsing of the binary header, take more or fewer CPU resources than the older uncompressed ASCII header? Etc., etc., etc.
        • That was at 1.2 GHz. It's likely that most devices used in the next 15 years will run at 1.2 GHz or faster.

          > Did the net result of header decompression along with the easier parsing of the binary header take more or fewer CPU resources then the older uncompressed, ASCII header?

          You keep conflating two completely separate things, but faster + faster = faster. Compressed is faster than uncompressed. Note that's the end of a sentence.

          Also, and completely separately, binary is faster than ascii. Both are i

          • Compressed is faster on the wire, but takes more CPU time to decompress. If I have a 100GbE network connection coming into a server, my network bandwidth might outpace my compute abilities.
            • Yes, if you're using a full 100GbE link to serve empty files, nothing but headers, and you have an Atom CPU, that CPU will probably be the bottleneck. You'd want to upgrade that CPU.

              On the other hand, if you're serving files where the body of the response is much larger than the headers, you'd have only 100 Mbps of headers, and your Atom CPU could keep up.

              You're not under the impression that you are the first person to think about the potential tradeoff, are you? People much smarter than either of us calcu

        • by Bengie ( 1121981 )
          When coupled with a CDN hosting most of your multimedia (non-compressible data), nearly all of your remaining bandwidth is compressible.
      • by amorsen ( 7485 )

        This would make sense if HTTP requests were typically bandwidth-limited. Almost none of them are; most are way too short and never actually get TCP going at line rate. HTTP is most often latency-bound, not bandwidth-bound, and the compression is meant to help with latency (reducing the number of request packets), not bandwidth.

    • by amorsen ( 7485 )

      It is actually surprisingly complicated.

      It turns out that a typical HTTP/1.1 request requires multiple TCP packets to get all the headers across. With TCP slow start, this takes a long time, because only a few packets can be in flight per round trip in the beginning. Obviously this gets even worse if you browse to a different continent, with 100ms+ latency.

      HTTP/2 manages to fit most requests into one packet, assuming a reasonable MTU. To do this requires both a binary protocol encoding and header compression (rough arithmetic below).
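
Back-of-the-envelope version of that argument; the byte counts are illustrative assumptions, not measurements:

```typescript
const mss = 1460;         // typical TCP payload per packet (bytes)
const rawHeaders = 4000;  // a cookie-heavy HTTP/1.1 request header block
const compressed = 900;   // the same headers after HPACK-style compression

const packets = (bytes: number) => Math.ceil(bytes / mss);
console.log(packets(rawHeaders)); // 3 packets -> extra round trips under slow start
console.log(packets(compressed)); // 1 packet  -> the request fits in one segment
```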

  • Control (Score:2, Funny)

    by xdor ( 1218206 )

    the real performance gains are expected to come from multiplexing, where multiple requests can share the same TCP connection

    Now we can report your activities to the NSA at the same time as the request: all right from your own computer! (pay no attention to those extra binary headers, they're there for your safety!)

    • Re:Control (Score:4, Insightful)

      by CajunArson ( 465943 ) on Friday October 03, 2014 @09:46AM (#48055283) Journal

      Spoken like somebody who really doesn't understand TCP/IP but likes to say NSA for cheap mod points.

      • by xdor ( 1218206 )

        I should probably take offense at this a bit, since I did a bit of multicast programming back in the day, but hey, just because I know how to use implementations of UDP and TCP over IP doesn't mean I understand the underlying layers. So I'm sure this must be in your wheelhouse.

        And while I can see the advantage of sending more traffic over an already-open socket, in the web world isn't this just another name for a single-threaded browser?

        I must concede the NSA doesn't need home-sourced traffic capture when

        • by Bengie ( 1121981 )
          HTTP/1.1 already supported pipelining many requests over the same connection, but you had to wait for the prior request to finish before the next one could start processing. This means stalling while waiting. HTTP/2's multiplexing allows for async requests, meaning you do not need to wait for a request to finish to issue the next one, and responses are capable of returning out of order. This means fewer open TCP connections to get the same performance, which means fewer states for firewalls an
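
A toy model of the difference (simulated timings, no real network): pipelined responses return strictly in request order, so one slow response stalls everything behind it, while multiplexed streams complete independently.

```typescript
// Simulate a response that takes `ms` milliseconds.
const request = (name: string, ms: number) =>
  new Promise<string>((resolve) => setTimeout(() => resolve(name), ms));

async function pipelined() {
  // HTTP/1.1 pipelining: responses must come back in request order.
  console.log(await request("slow.html", 300));
  console.log(await request("fast.css", 10)); // waited behind slow.html
  console.log(await request("fast.js", 10));
}

async function multiplexed() {
  // HTTP/2: all three share one connection; fast ones finish first.
  const streams = [
    request("slow.html", 300),
    request("fast.css", 10),
    request("fast.js", 10),
  ];
  streams.forEach((p) => p.then((name) => console.log(name)));
  await Promise.all(streams);
}
```
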
  • Parsing out the binary format of HTTP frames will most likely open up a whole new class of client vulnerabilities as malicious servers feed them bad data. Yay.

  • So does this put the pressure on to adapt JS to match up? Otherwise, it seems to me that the single-thread deal with JS will partly hinder the multiplexing goodness. Fast fast on the network, still slow slow in the browser. I guess that has always been the case even with multiple requests at a time, but it always seemed to me that was a kind of accepted, almost excused thing re JS. Like yeah, "you don't want to make too many requests at once anyway"...
