Mozilla Restricts All New Firefox Features To HTTPS Only (bleepingcomputer.com) 243

An anonymous reader shares a report: In a groundbreaking statement earlier this week, Mozilla announced that all web-based features that ship with Firefox in the future must be served over a secure HTTPS connection (a "secure context"). "Effective immediately, all new features that are web-exposed are to be restricted to secure contexts," said Anne van Kesteren, a Mozilla engineer and author of several open web standards. This means that if Firefox adds support for a new standard or feature from tomorrow on, and that standard or feature carries out communications between the browser and an external server, those communications must travel over HTTPS or the standard/feature will not work in Firefox. The decision does not affect existing standards and features, but Mozilla hopes those "will be considered on a case-by-case basis" and will slowly move to secure contexts (HTTPS) exclusively in the future.
  • "Anne van Kesteren, a Mozilla nanny"

    FTFY.
  • by fishscene ( 3662081 ) on Wednesday January 17, 2018 @02:23PM (#55947445)
...and this might be the one thing that gets me off the Firefox bandwagon, as it is an incredibly backwards move. TONS of stuff does NOT need HTTPS and does not need the overhead HTTPS incurs in both processing time and certificate management. Also, do I really need HTTPS for stuff on my trusted LAN? No? So now I have to jump through hoops to enable developer mode? Just... what are they thinking? What is the recommended fork of Firefox these days? Pale Moon?
    • by QuietLagoon ( 813062 ) on Wednesday January 17, 2018 @02:42PM (#55947633)

      ...Just... what are they thinking?...

      Who knows if they are even thinking at all. The crowd that currently appears to be in charge at Mozilla seems to have a really strange perception of what the Firefox users want, and a strange perception of security. Yesterday I tried to log into the Mozilla site, but I was not allowed to because I would not let Mozilla persistently store tracking data on my PC. I allowed session cookies, but that wasn't good enough for them. Apparently they wanted access to offline web content storage.

    • by Eravnrekaree ( 467752 ) on Wednesday January 17, 2018 @03:01PM (#55947827)

The LAN issue is an interesting one, maybe Firefox should make an exception for the private IP address ranges. That would be reasonable. On the other hand, I am all for HTTPS for everything else, even eventually dropping non-SSL support altogether.
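For reference, the "standard set" of private ranges the reply below alludes to is RFC 1918, plus loopback and link-local. A minimal sketch of what such an exception check might look like; the function name is hypothetical:

```typescript
// Hypothetical check for the standard private IPv4 ranges (RFC 1918),
// plus loopback (RFC 1122) and link-local (RFC 3927).
function isPrivateIPv4(address: string): boolean {
  const octets = address.split(".").map(Number);
  if (octets.length !== 4 || octets.some((o) => Number.isNaN(o) || o < 0 || o > 255)) {
    return false;
  }
  const [a, b] = octets;
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    a === 127 ||                         // 127.0.0.0/8 loopback
    (a === 169 && b === 254)             // 169.254.0.0/16 link-local
  );
}

console.log(isPrivateIPv4("192.168.1.10")); // true
console.log(isPrivateIPv4("123.123.45.6")); // false: public, APNIC-allocated
```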

      • Re: (Score:3, Insightful)

        by Obfuscant ( 592200 )

The LAN issue is an interesting one, maybe Firefox should make an exception for the private IP address ranges.

        You do realize, I hope, that "private IP address ranges" are in the eye of the beholder. Yes, there is a standard set, but if I want to treat 123.123.0.0/16 as "private" there is nothing you can do to stop me.

        On the other hand, I am all for HTTPS for everything else

        Then you are free to run all your websites using HTTPS only. I run several websites, and not a single one of them needs HTTPS for anything. One of those is for one of those awful universities that gets grant money to do research and then keeps the data secret -- by publishing it on an open website for

        • by tepples ( 727027 ) <tepples.gmail@com> on Wednesday January 17, 2018 @03:25PM (#55948043) Homepage Journal

          I run several websites, and not a single one of them needs HTTPS for anything.

          How do you assure visitors of the several websites you run that the markup, stylesheets, images, fonts, and possibly scripts on your site have not been modified in transit by an intercepting proxy between your server and the viewer's machine? Comcast, for example, has been shown to inject advertisement scripts into HTML documents delivered through cleartext HTTP.

          OMG, a MITM might substitute fake data! How awful!

          Thus you answer your own question. It is awful.

          • Which is the greater danger, allowing web access in the clear (note that this does not preclude allowing secured access as well) or creating a single point of failure called "Let's Encrypt" such that if it does fail then suddenly the entire world has to start paying money for certificates or finds their sites no longer work properly?

            • by tlhIngan ( 30335 )

              Which is the greater danger, allowing web access in the clear (note that this does not preclude allowing secured access as well) or creating a single point of failure called "Let's Encrypt" such that if it does fail then suddenly the entire world has to start paying money for certificates or finds their sites no longer work properly?

              Not only that, but with Let's Encrypt issuing out certificates so sites can phish, it seems like a good way to avoid all the Paypal and other phishing is to block the Let's Encr

              • by tepples ( 727027 )

                with Let's Encrypt issuing out certificates so sites can phish, it seems like a good way to avoid all the Paypal and other phishing is to block the Let's Encrypt certificate. (they issued like 14,000 phishing certificates)

                Why not go a step further to block the domain registrars that issue out domains so sites can phish?

          • "How do you assure visitors of the several websites you run that the markup, stylesheets, images, fonts, and possibly scripts on your site have not been modified in transit by an intercepting proxy between your server and the viewer's machine?"

Considering all users have been trained to click through all these useless security prompts, add website exceptions, and trust any certificates thrown at them, I would be surprised - shocked even - if an invalid certificate made a user so much as pause as they rabidly

        • "private IP address ranges" are in the eye of the beholder.

Somewhat true. I mean, if you don't want to be able to connect to parts of China, you can use 123.123.0.0/16, but that range is defined as public - and registered under APNIC.

        • if I want to treat 123.123.0.0/16 as "private" there is nothing you can do to stop me

          And when your routing table has a hiccup, there's nothing to stop your "private" request being sent to Chinese servers.
          123.112.0.0 - 123.127.255.255 is owned by China Unicom

          • And when your routing table has a hiccup,

            Gee, yeah, if I misconfigure my network it won't do what I want it to do. I'm shocked to learn that. Shocked.

            I know that block is owned by someone else. That's the point.

• The overhead for SSL is not the encryption. Not on a modern CPU, it isn't. Any overhead is due to the extra communication steps to set up the connection. But HTTP/1.1 will do a single handshake and reuse the connection.

      "On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers will help to dispel that." - Ada

      • On our production frontend machines, ... Adam Langley, Google

So, if you have a huge compute infrastructure like Google does, SSL isn't much of a problem. Isn't it wonderful that all the websites in the world are run on massively parallel redundant servers the way Google's are?

        • by tepples ( 727027 )

          So, if you have a huge compute infrastructure like Google does, SSL isn't much of a problem.

          Modern server CPUs contain AES instructions that make TLS bulk encryption efficient. If the computation cost of TLS were a practical problem, you'd be seeing the problem on your client whenever you browse Slashdot, SoylentNews, YouTube, or any other HTTPS site. Any website that's more than a collection of static documents has data storage, application logic, and presentation layers on the server side, and these probably use significantly more CPU time than TLS does.
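A rough way to check the bulk-encryption claim on your own hardware; a Node.js sketch where the 256 MiB figure is arbitrary, and on CPUs with AES instructions the result is typically far beyond what any web server needs:

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// Rough AES-128-GCM throughput estimate: encrypt 256 MiB and time it.
const key = randomBytes(16);
const iv = randomBytes(12);
const chunk = randomBytes(1024 * 1024); // 1 MiB of random input
const cipher = createCipheriv("aes-128-gcm", key, iv);

const start = process.hrtime.bigint();
for (let i = 0; i < 256; i++) {
  cipher.update(chunk);
}
cipher.final();

const seconds = Number(process.hrtime.bigint() - start) / 1e9;
console.log(`~${(256 / seconds).toFixed(0)} MiB/s AES-128-GCM`);
```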

          • by dryeo ( 100693 )

Well, Slashdot broke on my dial-up connection when it switched to HTTPS (pages hardly ever fully loaded), and a lot of pages suddenly needed reloading. You depend on the cache a lot more on a 26.4 kbps connection.
Then there is the issue of small-timers who want to serve a web page from home, using an old computer and a dynamic hostname. Seems like another move to make sure that only large companies can serve content on the internet.

• The web browser caches resources delivered through HTTPS the same way as resources delivered through cleartext HTTP. The only thing you lose is being able to cache on an intermediate proxy, but that is only relevant if you're splitting one dial-up connection among multiple clients.

              Then there is the issue of small timers who want to serve a web page from home, using an old computer and dynamic hostname.

              File a support ticket with your dynamic DNS provider to request addition to the Public Suffix List [publicsuffix.org]. If a dynamic DNS provider is on the Public Suffix List, Let's Encrypt issues 20 certificates per customer per week instead of 20 per pr [letsencrypt.org]

      • by msauve ( 701917 )
And how many full-time staff does Google employ to handle DNS and certificate management?
• SSL/TLS adds little CPU overhead when your system has hardware-accelerated encryption engines to offload the encryption from the CPU.
The overhead then becomes a DMA transfer and a kernel context switch.
Or if you're like Twitter (I think; it could have been some other big company), you write your own network stack to include the hardware encryption and avoid multiple kernel calls.

        • "We have deployed TLS at a large scale using both hardware and software load balancers. We have found that modern software-based TLS implementations running on commodity CPUs are fast enough to handle heavy HTTPS traffic load without needing to resort to dedicated cryptographic hardware."
          - Doug Beaver, Facebook
    • by dremon ( 735466 )
HTTPS is not being enforced for browsing normal web sites, only for new browser features (like WebRTC, for example). Just read the article before complaining.
    • by AHuxley ( 892839 )
Re: Just... what are they thinking?
Man in the middle. It stops the collection of a user's plain-text communications along the internet.
The data flows from a user's browser to the site or service the user expected, without being collected by some 3rd party, the ISP.
• And for non-Internet-facing internal websites? The ones that have no need of encryption whatsoever? Remember, this is for web standards going forward. So this isn't an immediate problem, but new web-based features are going to get caught in this. For example, if there's a new standard for, say, WebAR (Augmented Reality) and I want to make a webpage where my kids press buttons and different objects appear on their screens. The webpage MUST run over HTTPS. So I'd have to allow both my server and tablet acces
        • by AHuxley ( 892839 )
And for non-Internet-facing internal websites?
If a non-Internet-facing internal website was created, the skilled staff can also suggest a browser to use on their supported network.
    • by roca ( 43122 )

      Among other reasons for TLS, anything accessible over the Internet via non-TLS HTTP can be hijacked for DDoS attacks via the "Great Cannon": https://en.wikipedia.org/wiki/... [wikipedia.org]

  • by williamyf ( 227051 ) on Wednesday January 17, 2018 @02:37PM (#55947591)

If the standard calls for a feature to work on both HTTP and HTTPS, and you implement only HTTPS, then it is not a standards-compliant implementation...

Come on, Mozilla Foundation! Those heavy-handed tactics could work when your market share was about 50%, but not anymore...

    JM2C, YMMV

• If the standard calls for a feature to work on both HTTP and HTTPS, and you implement only HTTPS, then it is not a standards-compliant implementation...

      Nor does an implementation comply if the browser implements it over cleartext HTTP but the standard specifies that it shall not work over cleartext HTTP. A growing number of web standards specify such, citing things like the W3C Candidate Recommendation "Secure Contexts" [w3.org].

      Those heavy-handed tactics could work when your market share was about 50%, but not anymore...

      That'd be a good comeback if plurality browser Chrome weren't also doing it [chromium.org].
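For what it's worth, the "Secure Contexts" definition both vendors point to is exposed to page script as window.isSecureContext, so a site can feature-detect rather than guess. The spec also allows localhost to count as potentially trustworthy, though, as the next comment notes, the browsers differed on that at the time. A minimal sketch, where enableNewFeature is a hypothetical placeholder:

```typescript
// Placeholder for whatever secure-context-only feature the page wants to use.
declare function enableNewFeature(): void;

// window.isSecureContext reflects the spec's definition: https:// pages count
// as secure contexts; plain http:// over the network does not.
if (window.isSecureContext) {
  enableNewFeature();
} else {
  console.warn("Not a secure context; new web platform features may be unavailable.");
}
```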

      • by MobyDisk ( 75490 )

Chrome says it is applying this to things like geolocation and encrypted media. Firefox says it applies to CSS color properties. Chrome explicitly exempted localhost from these rules; Firefox didn't.

        • by roca ( 43122 )

          Firefox hasn't applied the new approach to anything yet. Neither has Chrome. Chrome will probably follow Firefox's lead here.

          Note that Anne's guidelines explicitly make an exception to allow a feature to work in insecure contexts if another major browser (Chrome) is already doing so. Mozilla isn't going to do anything suicidal like stop features from working in Firefox when they work in Chrome.

  • If everything is HTTPS will that stop nosy ISPs and even nosier government agencies (or anyone else for that matter) from snooping? So far as I know, it won't.
    • by AHuxley ( 892839 )
Mil and security services have the keys, so nothing stops them from collecting it all over any generation of tech.
Police who get ISP logs will be the interesting change.
ISPs will have to get some new skills if they want to keep looking over a user's communications.
Ads will have to change and become part of a site in some way.
    • by roca ( 43122 )

      It makes snooping much more expensive and it makes passive undetectable snooping impossible. To snoop, they have to install software on the user's computer, or the target server, or else get a CA to generate a certificate they can use to MITM the connection. All of these things are expensive to do at scale, and detectable. In the latter case, the bad certificate can be recorded and constitutes proof of the CA's misbehavior; if a rogue CA is found to have misissued a certificate, there are consequences, as

  • by RightwingNutjob ( 1302813 ) on Wednesday January 17, 2018 @02:55PM (#55947773)
Last month bitcoin was the new fad. These Silicon Valley types must have been drinking too much Raw Water(TM) and picked up some brain parasites.

    Very little needs to be encrypted or authenticated. Not everything that needs to be encrypted when going through the open internet needs to be encrypted or authenticated when happening on a closed LAN. Encryption isn't for free. SSL certificate management isn't for free. When stepping away from the half of web browser use that happens on the open internet and into the other half that happens on closed networks, it is wasted effort for no benefit.
    • Very little needs to be encrypted or authenticated.

      Then always use encryption so you don't have to think about whether you "need" it or not.

      SSL certificate management isn't for free.

      Let's Encrypt helps out here. It's not a huge pain in the ass anymore and doesn't cost users money.

      The problem I see here is my router and cable modem web interfaces don't support https. I know as I just tested them. These are fairly new devices too.

      • Let's Encrypt can go fuck itself. If the functionality of your system depends on yet another third party, then it isn't free.
        • If the functionality of your system depends on yet another third party, then it isn't free.

          DNS registries and registrars are third parties. What makes a CA any different from DNS in this respect?

• On your own private LAN, you don't need either. You can make it all work with packets over port 80, and you can serve out webpages with nothing fancier than an Ethernet chip and a PIC16.
            • by tepples ( 727027 )

              How many "new Firefox features" is a site on a server with such limited resources going to use?

• Probably very few. But it will already show up with an insecure-site warning. And who knows... maybe plain old HTML will be next on the chopping block.
      • Then always use encryption so you don't have to think about whether you "need" it or not.

        I've already thought about it. For the websites I run, it isn't needed. It isn't worth my time managing certificates for them.

        It's not a huge pain in the ass anymore

        So it is still a pain in the ass, just not a huge one. See above.

        The problem I see here is my router and cable modem web interfaces don't support https.

        I connected to the embedded web server in my HP printer for the first time just last night. It did HTTP just fine. Then it demanded to switch to HTTPS because I was going to enter a password. The first thing Firefox did was bitch about the certificate and make me go through the "add exception" process, after puking up t

• Same thing happens to me at work all the time. Some internal website gets served out of a machine that wasn't made to play with our internal CA quite right, and I have to hack FF to display it because HSTS is set by the server but the wrong certificate is being served out. The best use of time and resources (your taxes at work; we're on a US government contract) is not to have a $100/hr IT compliance officer waste his time configuring a server that's going to be used for a week and then wiped again.
  • Since the article at bleepingcomputer makes no sense, I went to Mozilla's site. It isn't much better. It says:

    Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR. In contrast, a new CSS color keyword would likely not be restricted to secure contexts.

    What is "observable from a web page or server?" I get that they are trying to prevent information leakage, but this statement is overbroad. I call B.S. on it.

    Mozilla programmers will not waste their time checking if HTTPS is enabled before supporting a new CSS property, or a new SVG feature. That would be a moronic waste of developer time. Heck, I bet they couldn't even implement that if they

    • by roca ( 43122 )

      Mozilla developers like Anne know more about browser development than you do.

      In Gecko, restricting new DOM APIs to secure contexts is simply a matter of adding an attribute to the WebIDL:
      https://github.com/mozilla/gec... [github.com]

      Probably something similar will be added to the CSS property list.

      There is also a single method you can call on the internal interface of a 'window' object to determine if you're in a secure context.
      https://dxr.mozilla.org/mozill... [mozilla.org]

      Selective disabling of new features is already standard prac
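From page script, the effect of a [SecureContext] WebIDL attribute is that the API is simply not exposed on insecure origins, so ordinary feature detection keeps working. A small sketch; ServiceWorkerContainer is an existing example of a secure-context-gated interface, and the worker path is a placeholder:

```typescript
// APIs marked [SecureContext] in WebIDL are absent, not broken, on insecure pages.
if ("serviceWorker" in navigator) {
  // Secure context: the gated interface exists and can be used.
  navigator.serviceWorker.register("/sw.js").catch(console.error);
} else if (!window.isSecureContext) {
  console.log("serviceWorker is hidden because this page is not a secure context.");
}
```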

• So I have to put servers' IPMI on the internet to maybe use Let's Encrypt (with maybe auto-renew), or just keep them offline and manually update certs all the time on each on

    • by tepples ( 727027 )

      If you don't want to expose your server to the Internet, you can use Let's Encrypt with an ACME client that supports the DNS challenge instead of the HTTP challenge.
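The DNS challenge never touches the server at all: the ACME client proves control of the domain by publishing a TXT record whose value is a hash of the challenge token and the account key thumbprint (RFC 8555). A sketch of the computation; the token and thumbprint values are placeholders taken from the RFC examples:

```typescript
import { createHash } from "node:crypto";

// DNS-01 (RFC 8555): publish base64url(SHA-256(token "." key-thumbprint))
// at _acme-challenge.<domain> as a TXT record. Both inputs are placeholders
// that a real ACME client derives from the challenge and the account key.
const token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA";
const accountKeyThumbprint = "NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs";

const txtValue = createHash("sha256")
  .update(`${token}.${accountKeyThumbprint}`)
  .digest("base64url");

console.log(`_acme-challenge.example.com. 300 IN TXT "${txtValue}"`);
```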
