Chrome To Force Domains Ending With Dev and Foo To HTTPS Via Preloaded HSTS (ttias.be) 220

Developer Mattias Geniar writes (condensed and edited for clarity): One of the next versions of Chrome is going to force all domains ending with .dev and .foo to be redirected to HTTPS via preloaded HTTP Strict Transport Security (HSTS). This very interesting commit just landed in Chromium:
Preload HSTS for the .dev gTLD:


This adds the following lines to Chromium's preload list:
{ "name": "dev", "include_subdomains": true, "mode": "force-https" },
{ "name": "foo", "include_subdomains": true, "mode": "force-https" },

These entries force every domain under the .dev and .foo gTLDs to HTTPS.

What should we [developers] do? With .dev being an official gTLD, we're most likely better off changing our preferred local development suffix from .dev to something else. There's an excellent proposal to add the .localhost domain as a new standard, which would be more appropriate here. It would mean we no longer have site.dev, but site.localhost. And everything at *.localhost would automatically translate to 127.0.0.1, without /etc/hosts or dnsmasq workarounds.
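For a sense of the workaround being replaced: /etc/hosts has no wildcard support, so each development name has to be listed one by one (the hostnames below are placeholders):

    # /etc/hosts -- one entry per name; *.dev wildcards are not possible here
    127.0.0.1    site.dev
    127.0.0.1    api.site.dev
    127.0.0.1    othersite.dev

A *.localhost standard would make all of this bookkeeping unnecessary.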

  • Maybe...? (Score:5, Insightful)

    by cayenne8 ( 626475 ) on Monday September 18, 2017 @04:04PM (#55221423) Homepage Journal
    Maybe use a browser other than Chrome??
    • Re:Maybe...? (Score:5, Interesting)

      by Z00L00K ( 682162 ) on Monday September 18, 2017 @04:09PM (#55221457) Homepage Journal

      All this striving to force users onto https has gone over the top. It's better to be nice about it.

      Many sites don't need https since there's not much to protect in the communication when people just look at memes and pictures of cats.

      Keep https available for cases where users want the extra security. Assuming that users are stupid makes the users stupid.

      • Many sites don't need https since there's not much to protect in the communication when people just look at memes and pictures of cats.

        You're making the common error of believing that the purpose of TLS is to protect the secrecy of the content stream, but that's only one half of it, and in most cases the less important half. The other goal is to ensure the integrity of the content stream, not because your cat pictures are important but because browsers are too big and too complex to secure effectively. TLS ensures that no one can inject anything malicious (or even anything annoying) into your stream of cat pictures. Of course, the site you

      • Re:Maybe...? (Score:4, Informative)

        by fuzzyf ( 1129635 ) on Monday September 18, 2017 @06:07PM (#55222171)
        HTTP means anyone can inject BeEF hooks into your browser while you're sitting at the local Starbucks, no matter how unimportant your content might be.

        http://beefproject.com/ [beefproject.com]
        It's so easy you can do it with a phone using an app like dSploit.

        HTTP used to be OK; today every script kiddie has access to tools that will pwn any browser with a MITM attack.
        Also, any HTTP security header you might add from your site is useless without HTTPS.
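        For context, HSTS itself is just a response header that only counts when served over TLS; a minimal nginx sketch (the max-age value is illustrative):

          # nginx: browsers ignore HSTS sent over plain HTTP, so serve it from the TLS vhost
          add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;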
      • Re:Maybe...? (Score:4, Interesting)

        by Luthair ( 847766 ) on Monday September 18, 2017 @08:44PM (#55222909)

        I think you missed a key point - Google bought the .dev TLD, and its intended usage is only for their own projects. So what they're doing here is asserting that all of their dev domains will be encrypted.

        The issue here is that ICANN shouldn't have been dumb enough to grant a TLD that has been widely used internally, but unfortunately they have a financial incentive to hawk as many TLDs as possible.

      • by Altrag ( 195300 )

        Most houses don't get robbed much so there's no point in buying locks, correct?

        Assuming that users are stupid makes the users stupid.

        It only appears that way if you don't pay attention. It's not because it makes a previously not-stupid user magically become stupid, but because widening the audience to allow for stupider users brings down the average. If you want to appeal to a larger market, it's the way to go. If you want to stick to communities that shun non-technical people out of hand, well... there's plenty of those around the internet also.

        Or to put in te

      • Many sites don't need https since there's not much to protect in the communication when people just look at memes and pictures of cats.

        What someone finds offensive about someone else's browsing habits is not for the content producer to decide.

    • by GuB-42 ( 2483988 )

      Maybe use a browser other than Chrome??

      Firefox follows the same path (forcing https). In fact it tends to follow Chrome's every move...
      AFAIK Safari does it too.
      I'm not sure about IE/Edge and all the small players (Opera, ...).

    • FYI, the HSTS preload list is used by all major browsers (Chrome, Firefox, IE, Edge, Safari, Opera, etc.). This is a good thing, of course; online security shouldn't be enforced conditionally depending on which browser you're using.

      The linked article got it wrong. This isn't about Chrome adding TLDs to the HSTS list, it's about the TLDs' owner (which also happens to be Google) adding them to the global HSTS list.

  • Switch to .test (Score:5, Insightful)

    by Anonymous Coward on Monday September 18, 2017 @04:11PM (#55221469)

    .test is an IETF standard for this purpose. .dev never was. Google owns .dev, and they own Chrome, so they are perfectly welcome to do this. We could argue as to whether a browser that enforces per-domain protocols is truly adhering to browser standards (and the larger ramifications if every browser coder started doing the same), but accept that you have zero right to use .dev as your personal fiefdom and move on to something that will remain easier for you to maintain.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      dev != test != prod

      • Re:Switch to .test (Score:4, Insightful)

        by grub ( 11606 ) <slashdot@grub.net> on Monday September 18, 2017 @04:46PM (#55221675) Homepage Journal
        NEEDS MOAR AGILE!!!11````
      FFS. Do you seriously deploy to the .prod TLD, which is also owned by Google? You should write a book called "DNS Worst Practices". This stuff is spelled out quite clearly in RFCs.

        Use dev.test, test.test, etc. for your 2LDs. So myservice.dev.test, etc.

        Better yet, just allocate domains for internal use on top of the one you certainly already own (e.g. dev.mydumbbusiness.com) so you can have myhost.dev.mydumbbusiness.com, etc. Or register a TLD specifically for internal domains. In any case, you just manage th

  • Please see RFC6761 (Score:5, Informative)

    by mysidia ( 191772 ) on Monday September 18, 2017 @04:15PM (#55221501)

    .invalid and .localhost are already reserved for private usage.

    • by Luthair ( 847766 )
      The article points out that .localhost only maps to 127.0.0.1 on Chrome & Safari, so if it's an internal test server that doesn't help.
    • NOPE (Score:5, Informative)

      by cfalcon ( 779563 ) on Monday September 18, 2017 @10:31PM (#55223289)

      Modded +5, Informative, but both of its statements are inaccurate. .localhost is reserved for 127.0.0.1 and nothing else. .invalid is reserved for NO use; it should never resolve.

      https://tools.ietf.org/html/rf... [ietf.org]

      Localhost:
      Name resolution APIs and libraries SHOULD recognize localhost names as special and SHOULD always return the IP loopback address for address queries and negative responses for all other query types. Name resolution APIs SHOULD NOT send queries for localhost names to their configured caching DNS server(s).

      Invalid:
      Name resolution APIs and libraries SHOULD recognize "invalid" names as special and SHOULD always return immediate negative responses. Name resolution APIs SHOULD NOT send queries for "invalid" names to their configured caching DNS server(s).

      Neither of these is meant for use on a local intranet. .localhost is meant to resolve to loopback, and .invalid is meant to never resolve but instead give NXDOMAIN.

      Maybe there are domains reserved for private usage, but it ain't these two.
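      If you want to check what your own resolver actually does with these names (behavior varies by OS and resolver; per the RFC you'd expect loopback for .localhost and NXDOMAIN for .invalid):

        getent hosts foo.localhost   # compliant resolvers return 127.0.0.1 / ::1
        dig foo.invalid +short       # compliant resolvers return nothing (NXDOMAIN)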

  • .localhost TLD? (Score:4, Informative)

    by TheRealMindChild ( 743925 ) on Monday September 18, 2017 @04:17PM (#55221511) Homepage Journal
    And everything at *.localhost would automatically translate to 127.0.0.1, without /etc/hosts or dnsmasq workarounds

    C'mon, we aren't talking about some crazy complicated configuration here. dnsmasq: add "address=/localhost/127.0.0.1" to your config file. Boom. Done.
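    Spelled out, assuming a stock dnsmasq install (the config path varies by distro):

      # /etc/dnsmasq.conf -- answer every *.localhost query with loopback
      address=/localhost/127.0.0.1

      # after restarting dnsmasq, verify:
      #   dig @127.0.0.1 site.localhost +short   ->   127.0.0.1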
    • by grub ( 11606 )
      I showed my 11 year old daughter your sig. She smiled and said "That's awesome! I'm going to get my engineers to make a combustible lemon that burns your house down!"
  • by Anonymous Coward on Monday September 18, 2017 @04:37PM (#55221633)

    that Google has. They already broke the "--ignore-certificate-errors" flag, which was driven by their hate. I often have to change my clock for testing, and Google made the decision that I should not be allowed to use the web. We use Let's Encrypt certs that are also pretty hatefully limited to 90 days, so they waste so much of our time having to maintain them; you can't move your clock that far forward or backward before Google decides you shouldn't be able to work.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      They also decided to break --disable-web-security which they had previously supported for years. They even broke it in Content Shell which is used only for headless testing, which makes no sense unless they just don't want us to use Chrome for development.

    • > also pretty hatefully limited to 90 days so they waste so much of our time having to maintain them

      It's a one-line cron entry; if that takes too much time, maybe you should hire somebody to do your job.
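      For example, with the certbot client (schedule and flags are illustrative; renew is a no-op until certs approach expiry):

        # crontab: try renewal twice a day, quietly
        17 3,15 * * * certbot renew --quiet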

  • by viperidaenz ( 2515578 ) on Monday September 18, 2017 @04:49PM (#55221695)

    How about: Don't use a gTLD for your local DNS?

    Also, why are you doing web development without HTTPS unless you're planning on never using it? It's not like certificates cost anything. There's also nothing stopping you from loading your own CA cert and signing your own certificates, too.
    Browsers behave differently based on the protocol. Building against one set of rules and deploying against another is just asking for problems.
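    As a sketch of that do-it-yourself CA route with openssl (names and lifetimes are placeholders; note modern Chrome validates the subjectAltName, not the CN):

      # 1) create a local root CA (key + self-signed certificate)
      openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Local Dev CA" -keyout ca.key -out ca.crt -days 365
      # 2) create a key and CSR for the dev site
      openssl req -newkey rsa:2048 -nodes -subj "/CN=site.test" -keyout site.key -out site.csr
      # 3) sign the CSR with the CA, adding the SAN browsers require (bash process substitution)
      openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 90 -extfile <(printf "subjectAltName=DNS:site.test") -out site.crt
      # 4) import ca.crt into the OS/browser trust store on each machine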

    • Also, why are you doing web development without HTTPS

      I am developing software that runs on a PC on a home LAN, and I've never seen anyone get HTTPS working with multicast DNS and DNS-SD.

      • You know your comment is moot if you quote the entire sentence, right?

        why are you doing web development without HTTPS unless you're planning on never using it?

        If you're using multicast dns, why are you using .dev instead of .local, as is part of the mDNS RFC?
        https://tools.ietf.org/html/rf... [ietf.org]

        If you're not using the Google-sponsored .dev gTLD, this doesn't impact you at all.
        They bought the rights to control who's allowed a .dev domain. Just like you need to abide by certain rules if you want to use .aero or .lawyer, etc. Perhaps a condition of using .dev is to only host HTTPS web servers? I haven'

      • .... your other option is to install your own CA on the LAN PCs so you can issue your own trusted certificates for .local domains.
        Then you've got no problem with HTTPS using mDNS.

        Public CAs don't issue certificates for local domains, for good reason.

    • >> How about: Don't use a gTLD for your local DNS?

      Lest someone take that advice the wrong way, let's be very clear.

      You DO NOT want to use fake/bogus TLDs in the internal network of an enterprise. It creates serious pain points, not the least of which is that you can't get public SSL certificates against your internal names. That means you have to push your private CA cert into a bunch of applications and it's a huge PITA.

      Examples: On Windows you can distribute your Enterprise CA cert via Group Polic

      • Big companies love having their own CA, though; it lets them decrypt and snoop on HTTPS traffic and re-sign it without browser security warnings.

    • Comment removed based on user account deletion
  • I've been using ".local" for years. I'd have no problem with ".localhost".

    • According to RFC 6762, this has been a bad idea for years, because .local is an official special-use multicast DNS domain name and should not be used like that or it'll break your Zeroconf (should that find its way into your network, and it will) six ways from Sunday.
      • Yeah, I know -- but old habits die hard, and I don't use anything zeroconf. This has been on my "to fix" list for a while now, though.

  • Start creating sites that don't break as soon as you start using TLS 1.2.

  • Seriously, are people really using .dev URLs to point to local resources where there could be a name collision with a real TLD? So you have a bunch of links to [].dev that people have stored. And then they switch networks where .dev resolves correctly, and they start erroneously sending data to third parties. And we don't all see why that is an awful problem? /. is starting to sound like it's the new hangout for Equifax CSO candidates.
    • Seriously, are people really using .dev URLs to point to local resources where there could be a name collision with a real TLD? So you have a bunch of links to [].dev that people have stored. And then they switch networks where .dev resolves correctly, and they start erroneously sending data to third parties. And we don't all see why that is an awful problem?

      Myself and others saw why it was a bad idea many years ago. Unfortunately, all ICANN saw was dollar signs when they opened the floodgates at the expense of the network.

      • That's a very fair point. The expansion of TLDs was a huge problem. But we also have *reserved* TLDs that can safely be used for testing. Any other TLD may get sold in the future. So, as has been pointed out in other places, if you just use either a reserved TLD or a domain that you own for your servers, you are future-proof against further TLD expansion. The fact that people are pretending that the new TLDs don't exist and haven't reconfigured their servers is quite shocking given how long ago this happ
    • Comment removed based on user account deletion
      • Using .example, .invalid, and .localhost complies with the RFCs. Using .dev (when it's an in-use TLD) most certainly does *not* comply with the RFC. But, even if it did, it would be foolish. If I have a bunch of URLs like myserver.dev that work on my corporate network and then I leave the office, I could accidentally load those URLs and leak data to a third-party. So even if the RFCs allowed this it would be foolish. The issue here is that people aren't using the TLDs in the RFCs. They are using the w
  • We're at the point now where using https is so easy that there's very little reason not to use it. The biggest stumbling blocks had always been obtaining the certificate and vhost/IP limitations on certificates. But those are now taken care of with Let's Encrypt and recent changes to how certificates are handled.

    Given the current technical and political climate, HTTPS should be the default for *everything*, barring very special circumstances.
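    To illustrate how little friction is left (assuming the certbot client with its nginx plugin; the domain is a placeholder):

      # obtains a certificate, rewrites the nginx vhost, and schedules renewal
      sudo certbot --nginx -d www.example.com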

    • These are dev environments we're talking about. They are often only accessible on local networks, sometimes even only on localhost. Deploying https in a dev environment isn't really necessary for most development.

      People are getting upset because you wouldn't be able to obtain a cert signed by any reputable authority for a domain you didn't actually own. After this change, you'd have to deploy https on those gTLDs, which would mean going through self-signed cert shenanigans. As others have mentioned, peop

      • It may not be necessary, but just because they're development systems doesn't mean security should be ignored.

        For one thing, you may run into problems in production if it applies security rules that you didn't check for in dev.
        For another thing, if your development efforts touch sensitive client data, then it *still* needs to be protected even if it's internal only. It's bad enough if your company is breached by an attack. It's even worse if your client data is threatened in the process.

        Proper security need

  • by brantondaveperson ( 1023687 ) on Monday September 18, 2017 @06:48PM (#55222357) Homepage

    But it's a real pain for anything that you ship with a web interface and expect to work unmodified for a long period of time.

    Sure, that's a niche use-case, I get that, but not everything that's accessed by a web browser is something easily updated, and why should it be? If I build some device that's intended to be put on my local network, and give it a web interface - like, say, a home router - will I be required to implement HTTPS on the device, and have it ship with a cert? A cert that expires after a relatively short period of time?

    I happen to have an old computer lying around the house, and it can't run anything more modern than Chrome from about eight years ago. This browser is able to access anything on the web other than newer HTTPS sites, because it doesn't understand their certificates. By building these mechanisms of trust, and then constantly changing them (for instance, the change from Common Name to Subject Alternative Name - and whatever it is that old Chrome hates about modern certs), we are locking ourselves out of notions of backwards compatibility, and increasing the rate at which we have to throw away our devices, because we can't afford to release OS updates for old hardware, and can't afford to release browser updates for old OSes.

    I get that we're talking about security here, and trust, but I personally see a high cost. Plain HTTP is great. HTTPS is a moving target, and seems like it will remain so.

    • I get that we're talking about security here, and trust, but I personally see a high cost. Plain HTTP is great. HTTPS is a moving target, and seems like it will remain so.

      Web security is a moving target, and will remain so, and that applies to plain HTTP as much as to HTTPS. Your computer with eight-year-old Chrome is a security breach waiting to happen. You could browse some site with malware that compromises your browser, compromises the machine, then attacks everything else in your home network that is accessible from that machine.

      Using unpatched (and unpatchable!) software is just a bad idea. If HTTPS changes force you to keep things closer to current, that's a featur

      • I know, and I totally understand all that. But on the other hand, my handy home router isn't likely to be patched anytime soon, and the web interface on the product I happen to be working on is likely to be used for ten years or more, without necessarily being updated. The system may not be internet connected, but it will need to be configured by a laptop, running a browser.

        If Google takes this HTTPS-only approach all the way, as some people suggest they will, what shall I do? I can't put a 10-year cert

        • HTTPS everywhere has the side effect of locking us all into an upgrade cycle that I thought slashdotters, in general, were against.

          Not security-focused slashdotters.

        • by Altrag ( 195300 )

          Get something like stunnel and wrap/unwrap the security in-stream. Problem solved. Or install an older version of the browser and then you don't even have the problem in the first place.
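          Something like this stunnel.conf would do the wrapping (addresses are placeholders): the old browser talks plain HTTP to stunnel on localhost, and stunnel talks modern TLS upstream.

            ; stunnel in client mode: accept local plaintext, connect out over TLS
            [wrap-https]
            client = yes
            accept = 127.0.0.1:8080
            connect = example.com:443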

          The world needs to be secure by default -- not just the web but email and cars and IoT pacemakers and basically everything else. Devices are too connected and hackers willing to abuse those devices too prevalent for us to continue leaving things insecure in order to avoid the odd edge case here and there.

          In your case, sur

    • Google wants you to constantly update your pages. Static pages that are finished, that sit around unchanged for years, are not attractive to Google. They want churn, they want you doing the latest greatest, they want you spending money. Because that's good for them.
    • by dyfet ( 154716 )

      Related to this is Ethernet-connected "devices" and specialized embedded hardware which may not even have spare processing power for HTTPS, and have no need for it since they are never remotely reachable. A perfect example might be Arduino web servers on a local LAN. Pure HTTP is a nice simple protocol that is easy to implement even on very low-end devices; HTTPS is not. There are different ways to solve this, but banishing HTTP to .localhost/127.0.0.1 is NOT one of them. I think even a new HTTP header en

  • It seems like once a week I'm given a new reason to be happy that I don't use Chrome.

  • by watermark ( 913726 ) on Monday September 18, 2017 @08:36PM (#55222877)

    This gives me an idea: gTLD-wide HSTS should be done for some other gTLDs as well. I'm thinking of *.bank and the like. It just forces any user of that gTLD to be at least somewhat security-conscious, and adds some good public reputation to those select gTLDs. A private company that owns a gTLD could use this to increase the value of its gTLD, because it will have a reputation of being more secure.
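    In the preload-list format quoted in the summary, that would presumably be a single hypothetical entry per gTLD, e.g.:

      { "name": "bank", "include_subdomains": true, "mode": "force-https" },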
