
Facebook, Mozilla, and Cloudflare Announce New TLS Delegated Credentials Standard (zdnet.com) 25

Facebook, Mozilla, and Cloudflare announced today a new technical specification called TLS Delegated Credentials, currently undergoing standardization at the Internet Engineering Task Force (IETF). From a report: The new standard will work as an extension to TLS, a cryptographic protocol that underpins the more widely known HTTPS protocol, used for loading websites inside browsers via an encrypted connection. The TLS Delegated Credentials extension was specifically developed for large website setups, such as Facebook, or for websites using content delivery networks (CDNs), such as Cloudflare. For example, a big website like Facebook has thousands of servers spread all over the world. In order to support HTTPS traffic on all of them, Facebook has to place a copy of its TLS certificate private key on each one. This is a dangerous setup. If an attacker hacks one server and steals the TLS private key, the attacker can impersonate Facebook servers and intercept user traffic until the stolen certificate expires. The same risk applies to CDN services like Cloudflare. Anyone hosting an HTTPS website on Cloudflare's infrastructure must upload their TLS private key to Cloudflare's service, which then distributes it to thousands of servers across the world. The TLS Delegated Credentials extension allows site owners to create short-lived TLS private keys (called delegated credentials) that they can deploy to these multi-server setups, instead of the real TLS private key.
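The delegation scheme the summary describes can be sketched roughly as follows. This is a toy illustration only, not the draft's wire format: it uses an HMAC keyed with the site's long-term secret as a stand-in for the real public-key signature, and all field names are invented.

```python
import hashlib
import hmac
import secrets
import time

# Stand-in for the site's long-term certificate key; in the real scheme this
# stays off the edge servers and only signs delegations.
LONG_TERM_KEY = secrets.token_bytes(32)

def issue_delegated_credential(edge_public_key: bytes, lifetime_s: int = 24 * 3600) -> dict:
    """Bind a short-lived edge key to the long-term key, with an expiry."""
    expiry = int(time.time()) + lifetime_s
    payload = edge_public_key + expiry.to_bytes(8, "big")
    binding = hmac.new(LONG_TERM_KEY, payload, hashlib.sha256).digest()
    return {"edge_public_key": edge_public_key, "expiry": expiry, "binding": binding}

def verify_delegated_credential(cred: dict) -> bool:
    """Check the binding and that the credential has not expired."""
    payload = cred["edge_public_key"] + cred["expiry"].to_bytes(8, "big")
    expected = hmac.new(LONG_TERM_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, cred["binding"]) and time.time() < cred["expiry"]

cred = issue_delegated_credential(secrets.token_bytes(32))
assert verify_delegated_credential(cred)
```

The point of the design is visible even in the toy: each edge server only ever holds the short-lived `edge_public_key` pair, so a compromise is bounded by `expiry` rather than by the long-term certificate's lifetime.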
  • For example, a big website like Facebook has thousands of servers spread all over the world. In order to support HTTPS traffic on all, Facebook has to place a copy of its TLS certificate private key on each one. This is a dangerous setup. If an attacker hacks one server and steals the TLS private key, the attacker can impersonate Facebook servers and intercept user traffic until the stolen certificate expires.

    I guess I don't understand what's dangerous about this? What would the would-be hacker do with a fake FB server, that facebook isn't already doing with real FB servers?

    • by BeerFartMoron ( 624900 ) on Friday November 01, 2019 @11:24AM (#59369730)
      It's dangerous because the would-be hacker might have morals or ethics, and could damage Facebook's global reputation as a "We Will Do Anything For Cash" company.
    • I guess I don't understand what's dangerous about this? What would the would-be hacker do with a fake FB server, that facebook isn't already doing with real FB servers?

      Tell you the truth?

    • What would the would-be hacker do with a fake FB server, that facebook isn't already doing with real FB servers?

      heh.

    • Sell user data directly for money? Oh wait, prove Facebook isn't already doing this, lol. (Maybe it's just a private/secret agreement when you buy the data from Facebook? Enough money to them and I'm sure they'd budge.)
    • In a nutshell: the issue is loss of trust due to a hacker being able to spoof traffic. I.e., sending off posts as if you were someone else. People trust that a facebook user is the one sending their messages... no matter how stupid the content may be. Think of RMS having his website changed to say he was stepping down from GNU and the impact that had. Most people don't have personal websites, only social media accounts. Spoofed posts on Facebook can definitely get a targeted victim fired from a job, blow u
  • by slack_justyb ( 862874 ) on Friday November 01, 2019 @11:30AM (#59369742)

    I read this and immediately thought, "what exactly does this add?" Then I went to read the actual IETF draft on it and saw:

    o There is no change needed to certificate validation at the PKI layer.

    o X.509 semantics are very rich. This can cause unintended consequences if a service owner creates a proxy certificate where the properties differ from the leaf certificate. For this reason, delegated credentials have very restricted semantics that should not conflict with X.509 semantics.

    o Proxy certificates rely on the certificate path building process to establish a binding between the proxy certificate and the server certificate. Since the certificate path building process is not cryptographically protected, it is possible that a proxy certificate could be bound to another certificate with the same public key, with different X.509 parameters. Delegated credentials, which rely on a cryptographic binding between the entire certificate and the delegated credential, cannot.

    o Each delegated credential is bound to a specific signature algorithm that may be used to sign the TLS handshake ([RFC8446] section 4.2.3). This prevents them from being used with other, perhaps unintended signature algorithms.

    So that makes a lot more sense as to why they're going this route.
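The validation order those bullet points imply can be sketched like this. It is a rough illustration with hypothetical field names; a SHA-256 hash over a shared stand-in secret replaces the real signature check by the leaf certificate's public key, and the 7-day cap matches the limit in the draft.

```python
import hashlib
from dataclasses import dataclass

MAX_VALIDITY_S = 7 * 24 * 3600  # the draft caps delegated credentials at 7 days

@dataclass
class DelegatedCredential:
    public_key: bytes
    sig_algorithm: str   # e.g. "ed25519"; each credential is bound to exactly one
    valid_from: int
    valid_until: int
    binding: bytes       # signature by the leaf certificate's key (hash stand-in here)

def binding_input(cert_der: bytes, cred: DelegatedCredential) -> bytes:
    # The binding covers the ENTIRE certificate, not just its public key,
    # which is what rules out the proxy-certificate rebinding attack above.
    return b"|".join([cert_der, cred.public_key,
                      cred.sig_algorithm.encode(),
                      str(cred.valid_until).encode()])

def validate(cert_der: bytes, cert_secret: bytes, cred: DelegatedCredential,
             now: int, negotiated_alg: str) -> bool:
    if cred.valid_until - cred.valid_from > MAX_VALIDITY_S:
        return False                 # too long-lived
    if not (cred.valid_from <= now < cred.valid_until):
        return False                 # expired or not yet valid
    if negotiated_alg != cred.sig_algorithm:
        return False                 # bound to a single signature algorithm
    expected = hashlib.sha256(cert_secret + binding_input(cert_der, cred)).digest()
    return expected == cred.binding  # hash stand-in for a real signature check
```

Note how no step touches certificate-path building or the PKI layer, which is the first bullet's point: the delegation lives entirely inside the TLS handshake.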

  • If an attacker gets your private key, seems like you are cooked.

    If it was a continent vs. the world ... that'll just show up later in the article.
    • The point is that they use the same key for a lot of servers. That is a choice. They could also use different certificates for different domains. But that would mean more overhead. I suspect companies that want to track you want to use as many domains as possible, to stay a step ahead of domain blacklisting.

      I am not sure I like this proposal. It encourages bad actors to use too many domains and be too hard to block.

      • by raymorris ( 2726007 ) on Friday November 01, 2019 @12:35PM (#59369936) Journal

        The proposal makes it easier to use FEWER names, not more.

        Stealing a private key for a cert allows an attacker to impersonate the site. The server keeps the private key secret so that hackers can't impersonate the site running on that server. That makes perfect sense for errolbackfires.com.

        Facebook.com doesn't run on a gigantic server the size of a small city; it runs on thousands and thousands of servers. If it worked the way TLS was originally designed, so each of those thousands of servers had a secret key, those servers would be Facebook000001.com, Facebook000002.com, Facebook000003.com, etc - thousands of servers with thousands of private keys would need thousands of certs with thousands of different names.

        Similarly, CapitalOne.com isn't one server. It's a thousand servers. If each were going to have their own secret and their own cert, customers would need to deal with capitalone.com, capitaltwo.com, capitalthree.com, capitalfour.com, etc.

        That would be silly, so what Facebook has been doing instead is having thousands of copies of the same secret key. All the many, many servers have a copy of the secret. Well if you're trying to keep something SECRET, having 10,000 copies of it spread around the world doesn't help keep it safe from anyone getting it. So the way TLS works today, you either have many, many domains, or you put your secret key at risk, so hackers could impersonate you.

        The proposal is that the main capitalone.com secret can sign a subsidiary cert for capitalone.com that is only valid for 24 hours. Each of the thousand capitalone.com servers gets a unique daily cert that is only valid until midnight. That way if the secret that is available on the public web servers gets stolen, it is only valid for a few hours.
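The rotation the parent describes ("valid until midnight") bounds the exposure window like this. A sketch only, with invented names; times are UTC.

```python
from datetime import datetime, timedelta, timezone

def todays_credential_expiry(now: datetime) -> datetime:
    """A daily credential minted any time today expires at the next midnight."""
    return (now + timedelta(days=1)).replace(hour=0, minute=0,
                                             second=0, microsecond=0)

def exposure_if_stolen(stolen_at: datetime) -> timedelta:
    """Worst case an attacker can impersonate the site: until the daily credential dies."""
    return todays_credential_expiry(stolen_at) - stolen_at

# A key stolen at 21:30 UTC is only useful for 2.5 hours, versus the months
# or years a stolen long-term certificate key would remain valid.
t = datetime(2019, 11, 1, 21, 30, tzinfo=timezone.utc)
print(exposure_if_stolen(t))  # 2:30:00
```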
         

        • ...that would be silly...

          Why is it silly for each server to have its own name? www1-eastus.facebook.com, www74-eu-irl.facebook.com, etc. That is how lots of sites work. I know using the same name has some fun capabilities with finding the closest local server by using BGP's capability to count hops, but maybe they should then issue a redirect to the real page instead.

          • Let's put aside for a moment why each server should ALSO have a unique administrative name.

            Suppose you're a capitalone.com customer. Would you enter your credentials on capitalfourthousandninetysix.com?

            Or equivalently, fourthousandninetysix.capitalone.com?
            Theoretically you could, but that's ripe for phishing and breaks your password manager. It ALSO means that they can't scale servers up and down, or remove servers for any reason, without cutting people off in the middle of a transaction. That would kind

        • The proposal makes it easier to use FEWER names, not more.

          After reading TFA and the draft I still don't understand the underlying logic.

          Facebook.com doesn't run on a gigantic server the size of a small city; it runs on thousands and thousands of servers. If it worked the way TLS was originally designed, so each of those thousands of servers had a secret key, those servers would be Facebook000001.com, Facebook000002.com, Facebook000003.com, etc - thousands of servers with thousands of private keys would need thousands of certs with thousands of different names.

          There is no requirement to have different names. You could have a thousand certs all signed by a CA and a thousand different private keys for the same name. If any individual key is compromised you could simply revoke it.

          That would be silly, so what Facebook has been doing instead is having thousands of copies of the same secret key. All the many, many servers have a copy of the secret. Well if you're trying to keep something SECRET, having 10,000 copies of it spread around the world doesn't help keep it safe from anyone getting it. So the way TLS works today, you either have many, many domains, or you put your secret key at risk, so hackers could impersonate you.

          The proposal is that the main capitalone.com secret can sign a subsidiary cert for capitalone.com that is only valid for 24 hours. Each of the thousand capitalone.com gets a unique daily cert that is only valid until midnight. That way if the secret that is available on the public web servers gets stolen, it is only valid for a few hours.

          I don't really understand the benefit of any of this.

          Assume for a second I have a website with a thousand servers all with a different private key.

          One of the servers is compromised and its private key stolen.

          Res

          • > If any individual key is compromised you could simply revoke it. ...
            > I would much rather see drafts focusing on fixing revocation

            It seems you're aware that revocation doesn't work that well. Hopefully that'll be fixed, but so far attempts to fix it haven't been all that successful.

            > So either the site lives with remaining validity period or they actively revoke their public key and reissue certs to all of their servers.

            And that remaining validity period is anywhere from a negative number to a f

        • by kdayn ( 874107 )
          Why not just use SCEP? You have to update the certificate anyway and SCEP can do it already.
          • SCEP would allow 10,000 servers to each request six certs per day from a CA. I think they are trying to avoid hitting up the CA 60,000 times a day, buying 22 million certificates each year.

            Also, SCEP doesn't have a good way to strongly authenticate the request. The best SCEP can do is issue a new cert based on the fact that they have an old cert. That defeats the purpose of short-term certs - a hacker who steals a 6-hour cert can keep renewing it.

            From my understanding, what they want is more along the l
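The grandparent's back-of-the-envelope CA-load numbers check out; a quick worked version of the arithmetic:

```python
servers, certs_per_day = 10_000, 6

daily = servers * certs_per_day   # 60,000 CA requests per day
yearly = daily * 365              # 21,900,000, i.e. roughly 22 million certs a year

print(daily, yearly)  # 60000 21900000
```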

  • Comment removed based on user account deletion
    • If you give Cloudflare enough money you can set up "Keyless SSL" with them:

      https://www.cloudflare.com/lea... [cloudflare.com]

      Almost nobody cares enough about security to do this and little guys don't get to play. They will mint their own key with your domain in the general case.

  • ... the attacker can impersonate Facebook servers and intercept user traffic ...

    ... and here I thought that was a feature for sites like Facebook.
