
Facebook, Mozilla, and Cloudflare Announce New TLS Delegated Credentials Standard (zdnet.com) 25

Facebook, Mozilla, and Cloudflare announced today a new technical specification called TLS Delegated Credentials, currently undergoing standardization at the Internet Engineering Task Force (IETF). From a report: The new standard will work as an extension to TLS, a cryptographic protocol that underpins the more widely-known HTTPS protocol, used for loading websites inside browsers via an encrypted connection. The TLS Delegated Credentials extension was specifically developed for large website setups, such as Facebook, or for websites using content delivery networks (CDNs), such as Cloudflare. For example, a big website like Facebook has thousands of servers spread all over the world. In order to support HTTPS traffic on all of them, Facebook has to place a copy of its TLS certificate private key on each one. This is a dangerous setup. If an attacker hacks one server and steals the TLS private key, the attacker can impersonate Facebook servers and intercept user traffic until the stolen certificate expires. The same thing is also true of CDN services like Cloudflare. Anyone hosting an HTTPS website on Cloudflare's infrastructure must upload their TLS private key to Cloudflare's service, which then distributes it to thousands of servers across the world. The TLS Delegated Credentials extension allows site owners to create short-lived TLS private keys (called delegated credentials) that they can deploy to these multi-server setups instead of the real TLS private key.
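To make that last step concrete, here is a minimal sketch of the delegation idea in Python, assuming an ECDSA certificate key and the pyca/cryptography package. The file name, message layout, and 24-hour window are illustrative only; this is not the draft's actual wire format.

import struct
import time

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Long-term private key matching the site's real TLS certificate (path is illustrative;
# in practice this key stays inside a central signing service, not on edge servers).
with open("leaf_private_key.pem", "rb") as f:
    leaf_key = serialization.load_pem_private_key(f.read(), password=None)

# 1. Mint a fresh short-lived key pair for the edge servers.
delegated_key = ec.generate_private_key(ec.SECP256R1())

# 2. Bind its public key to a validity window by signing both with the long-term key.
#    (The real draft also binds the end-entity certificate and the TLS signature
#    algorithm; this sketch only shows the basic delegation step.)
expiry = int(time.time()) + 24 * 3600  # e.g. valid for 24 hours
delegated_spki = delegated_key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
signature = leaf_key.sign(struct.pack("!Q", expiry) + delegated_spki,
                          ec.ECDSA(hashes.SHA256()))

# 3. Ship (delegated_key, expiry, signature) to the edge servers; the long-term
#    private key itself never has to leave the signing service.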
This discussion has been archived. No new comments can be posted.


  • by BringsApples ( 3418089 ) on Friday November 01, 2019 @11:10AM (#59369700)

    For example, a big website like Facebook has thousands of servers spread all over the world. In order to support HTTPS traffic on all of them, Facebook has to place a copy of its TLS certificate private key on each one. This is a dangerous setup. If an attacker hacks one server and steals the TLS private key, the attacker can impersonate Facebook servers and intercept user traffic until the stolen certificate expires.

    I guess I don't understand what's dangerous about this? What would the would-be hacker do with a fake FB server, that facebook isn't already doing with real FB servers?

  • by slack_justyb ( 862874 ) on Friday November 01, 2019 @11:30AM (#59369742)

    I read this and immediately thought, "what exactly does this add?" Then I went to read the actual IETF paper on it and saw:

    o There is no change needed to certificate validation at the PKI layer.

    o X.509 semantics are very rich. This can cause unintended consequences if a service owner creates a proxy certificate where the properties differ from the leaf certificate. For this reason, delegated credentials have very restricted semantics that should not conflict with X.509 semantics.

    o Proxy certificates rely on the certificate path building process to establish a binding between the proxy certificate and the server certificate. Since the certificate path building process is not cryptographically protected, it is possible that a proxy certificate could be bound to another certificate with the same public key, with different X.509 parameters. Delegated credentials, which rely on a cryptographic binding between the entire certificate and the delegated credential, cannot.

    o Each delegated credential is bound to a specific signature algorithm that may be used to sign the TLS handshake ([RFC8446] section 4.2.3). This prevents them from being used with other, perhaps unintended signature algorithms.

    So that makes a lot more sense as to why they're going this route.
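    For a rough picture of what those restrictions amount to on the verifying side, here is a sketch in Python (pyca/cryptography assumed). The function, field names, and message layout are illustrative rather than the draft's encoding; the point is that the signature covers the entire end-entity certificate, the delegated public key, and a validity window, under one fixed signature algorithm.

import struct
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_delegated_credential(leaf_cert_der, leaf_public_key,
                                delegated_spki, expiry, signature):
    """Accept the short-lived key only if it is inside its validity window and the
    signature covers the whole end-entity certificate plus the delegated key."""
    if time.time() > expiry:
        return False  # expired; the draft caps validity at 7 days
    # Including the entire certificate in the signed message is the binding the
    # draft describes: the credential cannot be re-attached to a different
    # certificate that merely happens to share the same public key.
    message = leaf_cert_der + struct.pack("!Q", expiry) + delegated_spki
    try:
        # The fixed signature algorithm here mirrors the draft's point that each
        # credential is pinned to one specific TLS signature scheme.
        leaf_public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False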

  • by Micah NC ( 5616634 ) on Friday November 01, 2019 @11:48AM (#59369798)
    If an attacker gets your private key, seems like you are cooked.

    If it was a continent vs. the world ... that'll just show up later in the article.
    • by Errol backfiring ( 1280012 ) on Friday November 01, 2019 @11:58AM (#59369822) Journal

      The point is that they use the same key for a lot of servers. That is a choice. They could also use different certificates for different domains. But that would mean more overhead. I suspect companies that want to track you want to use as many domains as possible, to stay a step ahead of domain blacklisting.

      I am not sure I like this proposal. It encourages bad actors to use too many domains and be too hard to block.

      • by raymorris ( 2726007 ) on Friday November 01, 2019 @12:35PM (#59369936) Journal

        The proposal makes it easier to use FEWER names, not more.

        Stealing a private key for a cert allows an attacker to impersonate the site. The server keeps the private key secret so that hackers can't impersonate the site running on that server. That makes perfect sense for errolbackfires.com.

        Facebook.com doesn't run on a gigantic server the size of a small city; it runs on thousands and thousands of servers. If it worked the way TLS was originally designed, so each of those thousands of servers had a secret key, those servers would be Facebook000001.com, Facebook000002.com, Facebook000003.com, etc - thousands of servers with thousands of private keys would need thousands of certs with thousands of different names.

        Similarly, CapitalOne.com isn't one server. It's a thousand servers. If each were going to have their own secret and their own cert, customers would need to deal with capitalo.com, capitaltwo.com, capitalthree.com, capitalfour.com, etc.

        That would be silly, so what Facebook has been doing instead is having thousands of copies of the same secret key. All the many, many servers have a copy of the secret. Well if you're trying to keep something SECRET, having 10,000 copies of it spread around the world doesn't help keep it safe from anyone getting it. So the way TLS works today, you either have many, many domains, or you put your secret key at risk, so hackers could impersonate you.

          The proposal is that the main capitalone.com secret can sign a subsidiary cert for capitalone.com that is only valid for 24 hours. Each of the thousand capitalone.com servers gets a unique daily cert that is only valid until midnight. That way if the secret that is available on the public web servers gets stolen, it is only valid for a few hours.
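          A small sketch of that daily rotation, in Python; the helper names are hypothetical and the midnight-UTC cutoff is just the parent comment's example:

from datetime import datetime, timedelta, timezone

def next_midnight_utc():
    """Expiry for today's batch of delegated credentials."""
    now = datetime.now(timezone.utc)
    return (now + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)

expiry = next_midnight_utc()
# credential = mint_delegated_credential(leaf_key, expiry)  # hypothetical helper
# push_to_edge_servers(credential)                          # hypothetical helper
print("today's delegated credential is valid until", expiry.isoformat())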
         

        • ...that would be silly...

          Why is it silly for each server to have its own name? www1-eastus.facebook.com, www74-eu-irl.facebook.com, etc. That is how lots of sites work. I know using the same name has some fun capabilities with finding the closest local server by using BGP's capability to count hops, but maybe they should then issue a redirect to the real page instead.

          • Let's put aside for a moment why each server should ALSO have a unique administrative name.

            Suppose you're a capitalone.com customer. Would you enter your credentials on capitalfourthousandninetysix.com?

            Or equivalently, fourthousandninetysix.capitalone.com?
            Theoretically you could, but that's ripe for phishing and breaks your password manager. It ALSO means that they can't scale servers up and down, or remove servers for any reason, without cutting people off in the middle of a transaction. That would kinda suck to be in the middle of a transaction when they take fourthousandninetysix.capitalone.com offline, leaving you not knowing if the transaction went through or not.

            You want to communicate with the SERVICE at capitalone.com without worrying about how it's physically implemented. You want to talk to capitalone.com (the bank), not 757464936394.capitalone.com (a specific physical server).

            Having said that, for administrative purposes they should have private unique IDs, not publicly used, so that an admin can take a specific physical server offline, or otherwise speak meaningfully about a certain physical server.

            Note also that for a customer to look up their balance on capitalone.com, that requires that all capitalone.com servers will give the same answer - they are all talking to the same back-end database. When you do a transaction on capitalone.com, it needs to be recorded for capitalone.com - not just for 74936483.capitalone.com. So for this to work right they all need to talk to the same database, probably using the same code. We call this a "true cluster". If instead they each have *similar* databases which are synced up once per day, you could get conflicting answers depending on which server you hit. That's bad, if they all claim to be capitalone.com/get-balance.gh

            If instead they are merely similar, such as CentOS mirrors which update periodically, the names should reflect that - east.mirror.centos.com and west.mirror.centos.com. That avoids the situation of having two different, conflicting contents for centos.com/security-hash.txt

            If you have a smaller web site with two or three servers, mirrors can make sense. Also if you have a static site. When you're dealing with mirrors as opposed to clusters, different names indicate that the answers you get may be different on each system. When a transaction displays or changes site.com itself, as opposed to a mirror of what site.com had last night, the naming should reflect site.com

        • by WaffleMonster ( 969671 ) on Friday November 01, 2019 @02:17PM (#59370350)

          The proposal makes it easier to use FEWER names, not more.

          After reading TFA and the draft I still don't understand the underlying logic.

          Facebook.com doesn't run on a gigantic server the size of a small city; it runs on thousands and thousands of servers. If it worked the way TLS was originally designed, so each of those thousands of servers had a secret key, those servers would be Facebook000001.com, Facebook000002.com, Facebook000003.com, etc - thousands of servers with thousands of private keys would need thousands of certs with thousands of different names.

          There is no requirement to have different names. You could have a thousand certs all signed by a CA and a thousand different private keys for the same name. If any individual key is compromised you could simply revoke it.

          That would be silly, so what Facebook has been doing instead is having thousands of copies of the same secret key. All the many, many servers have a copy of the secret. Well if you're trying to keep something SECRET, having 10,000 copies of it spread around the world doesn't help keep it safe from anyone getting it. So the way TLS works today, you either have many, many domains, or you put your secret key at risk, so hackers could impersonate you.

              The proposal is that the main capitalone.com secret can sign a subsidiary cert for capitalone.com that is only valid for 24 hours. Each of the thousand capitalone.com gets a unique daily cert that is only valid until midnight. That way if the secret that is available on the public web servers gets stolen, it is only valid for a few hours.

          I don't really understand the benefit of any of this.

          Assume for a second I have a website with a thousand servers all with a different private key.

          One of the servers is compromised and its private key stolen.

          Result is an attacker can now impersonate my entire site at will wherever they have access to the data path or can influence naming or routing to get access.

          As an operator there are two possible realities:

          1. An attacker is using my key and I don't know about it.
          2. An attacker is using my key and I find out about it.

          With scenario #1 the 1-week validity cap constitutes no impediment to an attacker. I compromised your server once and I'll just keep stealing new keys from you at will unless #2 occurs.

          With scenario #2 the attacker is left with a short lived key that cannot be directly revoked. So either the site lives with remaining validity period or they actively revoke their public key and reissue certs to all of their servers.

          So #1 I'm equally fucked either way. This scheme doesn't change effective security.

          #2 lack of temp key revocation means either I revoke my entire site and reissue certs or I sit on my thumb for days/hours until expiration.

          The only way this scheme can possibly help is if revocation is completely broken.

          I would much rather see drafts focusing on fixing revocation, such as better distribution schemes or additional EKUs to express explicit revocation validation and sourcing requirements for sites.

          • > If any individual key is compromised you could simply revoke it. ...
            > I would much rather see drafts focusing on fixing revocation

            It seems you're aware that revocation doesn't work that well. Hopefully that'll be fixed, but so far attempts to fix it haven't been all that successful.

            > So either the site lives with remaining validity period or they actively revoke their public key and reissue certs to all of their servers.

            And that remaining validity period is anywhere from a negative number to a few hours. As you may know, revocation takes about 24 hours after you find out about it. Delegated credentials expire within a few hours even if you don't find out about it.
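            A back-of-the-envelope comparison of the two exposure windows being discussed; the numbers are assumptions for illustration, not measurements:

# Illustrative numbers only, not measurements.
detection_delay_h = 72   # assumed time before anyone even notices the theft
revocation_lag_h = 24    # rough propagation time quoted in the parent comment
dc_lifetime_h = 24       # e.g. daily delegated credentials (the draft caps validity at 7 days)

classic_exposure_h = detection_delay_h + revocation_lag_h  # usable until noticed AND revoked
delegated_exposure_h = dc_lifetime_h                       # expires on its own, noticed or not

print(f"classic cert: ~{classic_exposure_h}h of exposure, "
      f"delegated credential: at most {delegated_exposure_h}h")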

          • by Anonymous Coward on Friday November 01, 2019 @11:13PM (#59371774)
            Revocation is broken. Not all devices can hold all of the revoked certs in memory. There are something like 70 CA certs that need to be stored to validate a cert. There are millions of revoked certs and they can only be removed after they expire. Historically, there have been events where millions of certs had to be revoked in a single go. The list is so large that some clients can't even download all of it.

            This is a fundamental issue. The workaround is certs with very short lifetimes.
        • by kdayn ( 874107 ) on Saturday November 02, 2019 @07:30AM (#59372330)
          Why not just use SCEP? You have to update the certificate anyway and SCEP can do it already.
          • by raymorris ( 2726007 ) on Saturday November 02, 2019 @09:13AM (#59372472) Journal

            SCEP would allow 10,000 servers to each request six certs per day from a CA. I think they are trying to avoid hitting up the CA 60,000 times a day, buying 22 million certificates each year.
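            Spelling out that arithmetic, with the numbers taken straight from the comment above:

servers = 10_000
certs_per_server_per_day = 6
ca_requests_per_day = servers * certs_per_server_per_day  # 60,000 CA requests a day
certs_per_year = ca_requests_per_day * 365                # 21,900,000 certificates a year
print(ca_requests_per_day, certs_per_year)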

            Also, SCEP doesn't have a good way to strongly authenticate the request. The best SCEP can do is issue a new cert based on the fact that they have an old cert. That defeats the purpose of short-term certs - a hacker who steals a 6-hour cert can keep renewing it.

            From my understanding, what they want is more along the lines of RFC 5280 Name Constraints, but done right. Name constraints are hardly ever used because there are a number of problems with them.

  • by fahrbot-bot ( 874524 ) on Friday November 01, 2019 @02:55PM (#59370522)

    ... the attacker can impersonate Facebook servers and intercept user traffic ...

    ... and here I thought that was a feature for sites like Facebook.

  • by fjooos ( 5979368 ) on Thursday November 14, 2019 @09:03AM (#59413358)
    Guys, I recently found out that for anonymous access to Internet resources, checking access to a resource from around the world and bypassing online locks, there are proxy servers!! Yes, I live in a cave. So I’m thinking about getting some kind of one. I want to have access to American sites. Can I get them https://buy.fineproxy.org/eng/... [fineproxy.org] here? are they good?
