
Most Alarming: IETF Draft Proposes "Trusted Proxy" In HTTP/2.0

Lauren Weinstein writes "You'd think that with so many concerns these days about whether the likes of AT&T, Verizon, and other telecom companies can be trusted not to turn our data over to third parties whom we haven't authorized, that a plan to formalize a mechanism for ISP and other 'man-in-the-middle' snooping would be laughed off the Net. But apparently the authors of IETF (Internet Engineering Task Force) Internet-Draft 'Explicit Trusted Proxy in HTTP/2.0' (14 Feb 2014) haven't gotten the message. What they propose for the new HTTP/2.0 protocol is nothing short of officially sanctioned snooping."


  • Well for one... (Score:5, Insightful)

    by Junta ( 36770 ) on Sunday February 23, 2014 @10:51AM (#46316039)

    Pretty much anyone can submit an IETF Internet-Draft if they really want. The existence of a draft does not guarantee a ratified RFC will exist someday.

    For another, it could be much worse. At least here there is explicit wording about seeking consent from the user and allowing opt-out even in the 'captive' case, as well as notifying the actual webserver of this intermediary, and requiring that the intermediary use a particular keyUsage field, meaning that some trusted CA has explicitly approved it (of course, the CA model is pretty horribly ill-suited for internet-scale security, but better than nothing). Remember how Nokia confessed they silently and without consent had their mobile browser hijack and proxy https traffic without explicitly telling the user or server? While something like this being formalized wouldn't prevent such a trick, it would be very hard to defend a secretive approach in the face of this sort of standard being in the wild.

    Keep in mind that in a large number of cases in mobile, the carriers are handing people the device including the browser they'll be using. A carrier could do what Nokia admits to in many cases without the user being the wiser and claim the secretive aspect is just a side effect today. If there was a standard clearly laying out that a carrier or mobile manufacturer should behave a certain way, that defense would go away.

    I would always elect the 'opt out' myself, but I'd prefer anything seeking to proxy secure traffic be steered toward doing things on the up and up rather than pretending no one will do it and leaving the door open for ambiguous intentions.
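
    To make the keyUsage point above concrete, one way a client could check it is sketched below. This is only an illustration (Python with the third-party cryptography package; "proxy_cert.pem" is a placeholder path, and the specific usage value the draft would define is not quoted in this thread) of how to list the Extended Key Usage OIDs a CA granted on a proxy's certificate.

      from cryptography import x509
      from cryptography.x509.oid import ExtensionOID

      # Load the certificate the would-be "trusted proxy" presents.
      with open("proxy_cert.pem", "rb") as f:
          cert = x509.load_pem_x509_certificate(f.read())

      # List the Extended Key Usage OIDs the issuing CA granted; a client
      # would look here for whatever proxy-specific usage the standard defines.
      eku = cert.extensions.get_extension_for_oid(
          ExtensionOID.EXTENDED_KEY_USAGE
      ).value
      print([oid.dotted_string for oid in eku])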

    • Remember how Nokia confessed they silently and without consent had their mobile browser hijack and proxy https traffic without explicitly telling the user or server? While something like this being formalized wouldn't prevent such a trick, it would be very hard to defend a secretive approach in the face of this sort of standard being in the wild.

      Except the consumer of this I-D would be anyone with a browser, which is potentially billions of people. The only question that matters, in my opinion, is whether you can explain the concept of a "trusted proxy of untrustworthy content" to an average person (e.g. the cookie baking oracle) ... if not, you are essentially asking the user to provide an answer to a question they don't understand. A stupid and pointless question, I might add.

      If there was a standard clearly laying out that a carrier or mobile manufacturer should behave a certain way, that defense would go away.

      Providing legal cover for illegitimate behavior is, I suspect, the whole point. See the use

  • by cardpuncher ( 713057 ) on Sunday February 23, 2014 @10:53AM (#46316051)
    But as I read it, the issue seems to arise from the fact that HTTP2 will permit TLS to be used with both http: and https: URLs. If it is used for http: URLs, then existing proxy and caching mechanisms will simply break. I think this is a proposal for "trusted proxies" to be permitted where an http: URL is in use and TLS is also employed; I don't think it's proposed that this should apply to https: URLs.

    In other words, it doesn't make things any worse than the current situation (where http: URLs are retrieved in plain text all the time) and does let the user choose between some protection against interception and potentially better performance. And it doesn't appear to change the situation for https: at all.

    Or that's how it appears to me.
    • But as I read it, the issue seems to arise from the fact that HTTP2 will permit TLS to be used with both http: and https: URLs. If it is used for http: URLs, then existing proxy and caching mechanisms will simply break. I think this is a

      My understanding is that TLS for http: URLs via "HTTP2" is accomplished via *untrusted* opportunistic encryption. Nothing breaks if you're operating a proxy supporting HTTP2. The proxy would simply terminate the encryption from the client and set up a separate, equally useless "encrypted" channel to the server. The proxy would act as a man in the middle.
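
      To make the "equally useless" point concrete, here is a minimal sketch (standard-library Python; example.com is just a stand-in host) of opportunistic TLS with validation switched off, which is roughly the trust level described above: the traffic is encrypted, but the client never learns who terminated the connection.

        import socket
        import ssl

        # Encrypt, but don't authenticate: any middlebox can terminate this
        # session and re-encrypt toward the real server without the client noticing.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False        # must be cleared before verify_mode
        ctx.verify_mode = ssl.CERT_NONE   # no certificate validation at all

        with socket.create_connection(("example.com", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
                print(tls.version(), tls.cipher())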

      proposal for "trused proxies" to be permitted where an http: URL is in use and TLS is also employed, I don't think it's proposed that this should apply to https: URLs.

      Basically what they are proposing is to provide a "trusted proxy" for completely untrustworthy http transactions. How is this not an oxymoron? What is the security value? Va

  • The current solution (Score:2, Informative)

    by Anonymous Coward

    If you want to do this now, you're typically in one of two situations:

    You need to proxy the traffic for all users of a company, in order to filter NSFW content and to scan for viruses and other malware. In this case you add your own CA to all company computers. Then you MITM all SSL connections. This doesn't work for certain applications which use built-in lists of acceptable CAs, but mostly the users will be none the wiser.
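
    A rough sketch of what the client side of that first setup sees (standard-library Python; "corporate_root_ca.pem" and example.com are placeholders): once the company root CA is in the trust store, the proxy's forged certificate validates cleanly, and only the issuer field gives the game away.

      import socket
      import ssl

      # A client whose trust store includes the corporate root CA will happily
      # validate the proxy's forged certificate for any site it visits.
      ctx = ssl.create_default_context()
      ctx.load_verify_locations(cafile="corporate_root_ca.pem")

      with socket.create_connection(("example.com", 443)) as sock:
          with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
              # Behind an intercepting proxy this shows the corporate CA, not
              # the CA that actually signed example.com's real certificate.
              print(tls.getpeercert()["issuer"])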

    The other situation is that you want a reverse proxy in front of your hosting infras

  • by MobyDisk ( 75490 ) on Sunday February 23, 2014 @10:56AM (#46316069) Homepage

    My employer uses a MITM HTTPS proxy. The IT department pushed down a trusted corporate certificate, and most people don't even know their HTTPS connections aren't secure any more. The real problem is when some application, other than a browser, needs internet access and it fails. This includes things like web installers that download the app during installation, automatic update systems, secure file transfer software, or things that call home to confirm a license key. On occasion a developer curses some installer for not working, then we inspect the install.log file and find something about a certificate failure.

    IT departments forget that HTTPS is used for more than just browsing the web.
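
    For what it's worth, the failure typically looks like the sketch below (Python with the requests library; the URL and bundle path are placeholders): the tool ships or pins its own CA list, so the proxy's forged certificate is rejected, and pointing the tool at the corporate CA bundle is the usual workaround.

      import requests

      try:
          # Fails behind an intercepting proxy when the forged certificate
          # doesn't chain to anything in the library's bundled CA list.
          requests.get("https://updates.example.com/latest")
      except requests.exceptions.SSLError as err:
          print("certificate failure:", err)
          # The workaround IT usually suggests: trust the corporate CA bundle.
          requests.get("https://updates.example.com/latest",
                       verify="/etc/ssl/corporate_ca_bundle.pem")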

    • Same at my company, but I take issue with "people don't even know their HTTPS connections aren't secure any more". Corporate machines are "rooted" in the first place; they generally install whatever new software the employer wants during each reboot or login. Probably half the cycles on my work computer are wasted on Symantec spyware. So, you can't lose the privacy you never had.
    • by smash ( 1351 )

      IT departments forget that HTTPS is used for more than just browsing the web.

      No, not necessarily. Some IT departments are just more paranoid than others about letting unfiltered HTTPS go through the firewall, due to the new generation of malware which typically does C&C over HTTPS to thousands of randomly generated, non-blacklisted URLs.

      You have a choice: you MITM/inspect HTTPS, you allow only whitelisted HTTPS connections (which is not really practical due to the ever-changing whitelist),

    • Those applications are broken. If they fail to respect the OS proxy and CA settings they are the ones at fault.

      In a corp environment nothing should be calling home, ever; that is what they made license servers for.
      Updates should be gotten from an update server, ya know, something that IT approves.
      Installers calling home, again, should never happen.
      Post SOX/HIPAA there is no secure file transfer; your IT dept has a legal requirement to look at and record things coming in and out the door.

      • by drolli ( 522659 )

        That depends on what the purpose of the application is. There are purposes for which you may prefer an application failing instead of accepting another certificate. If the application promised end-to-end safety, with a very specific *certified* configuration terminating on the target (I imagine software updates for the development of embedded systems in cars), then failure by default is the right behaviour until somebody signs off on a sheet of paper saying that he/she/the company takes responsibility to the end customer (

        • Anything like that is not running on a GP computer, as the hardware is very much part of the solution. Nor should it be connected to the general corp LAN. An environmental system would be a good example: it goes in a DMZ, gets holes punched through to exactly what it needs, and cannot access anything else.

          • by drolli ( 522659 )

            The development systems for embedded software *are* running on GP computers. Simulink Embedded Coder etc. require Windows PCs. And yes, these developers don't transfer everything by floppy/sneakernet.

      • by MobyDisk ( 75490 )

        Post SOX/HIPAA there is no secure file transfer; your IT dept has a legal requirement to look at and record things coming in and out the door.

        Ironically, HIPAA requires that they NOT record things coming in and out of the door. Yay for regulation!

      • by MobyDisk ( 75490 )

        In a corp environment nothing should be calling home, ever; that is what they made license servers for.
        Updates should be gotten from an update server, ya know, something that IT approves.
        Installers calling home, again, should never happen.

        Thinking like that is why everyone hates IT departments. You are saying that applications should be designed to support the IT department's way of doing things. In reality, lots and lots of apps call home and perform their own licensing. There's nothing wrong with that, except that it interferes with the IT department's "vision" of perfect control.

        Post SOX/HIPAA there is no secure file transfer; your IT dept has a legal requirement to look at and record things coming in and out the door.

        Actually, HIPAA states the exact opposite. That is why our company has specific file transfer rules in place to prevent snooping. If our IT department intercept

    • As a website operator, I want to know if my content is being MITMd en route to the user. I know about the SSL fingerprint trick that lets a really technical user discover proxying, but I want to automate this process server-side, and stick up a big banner to say "Your employer is snooping on this connection, please log in from a trusted machine" (and then I'll prevent the user from logging in).

      • As a website operator, I want to know if my content is being MITMd en route to the user. I know about the SSL fingerprint trick that lets a really technical user discover proxying, but I want to automate this process server-side, and stick up a big banner to say "Your employer is snooping on this connection, please log in from a trusted machine" (and then I'll prevent the user from logging in).

        What you just wrote makes about as much sense as: "My Internet is currently down so I'm sending a nasty e-mail to my ISP demanding they fix the problem."

        • Why? If the connection is being MITMd, then both sides need to be able to figure this out.
          There was a long discussion on this (regrettably rejected by the browser vendor) to allow the SSL fingerprint to be obtained in JS. That would make it reasonably easy for the site operator to verify that the SSL cert hadn't been tampered with. (Of course, a really evil proxy can scan for the JS, but that game of whack-a-mole is usually easier for the good guys to win, at least sometimes).
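
          The manual version of that fingerprint check is easy to sketch in a few lines of standard-library Python (example.com as a stand-in): fetch whatever certificate this client is actually being shown, hash it, and compare against a fingerprint the site operator publishes out-of-band.

            import hashlib
            import ssl

            # Certificate as presented to *this* client (PEM), converted to DER
            # and hashed; a mismatch with the published value suggests a MITM.
            pem = ssl.get_server_certificate(("example.com", 443))
            der = ssl.PEM_cert_to_DER_cert(pem)
            print("sha256:", hashlib.sha256(der).hexdigest())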

          • Why? If the connection is being MITMd, then both sides need to be able to figure this out.

            You answer your own question in the next paragraph.

            You have a compromised communication channel and you are making decisions based on the content of data communicated over that channel. It's broken, so let's use it anyway and hope for the best.

            There was a long discussion on this (regrettably rejected by the browser vendor) to allow the SSL fingerprint to be obtained in JS. That would make it reasonably easy for the site operator to verify that the SSL cert hadn't been tampered with. (Of course, a really evil proxy can scan for the JS, but that game of whack-a-mole is usually easier for the good guys to win, at least sometimes).

            If you want servers to validate clients, use client certificates or TLS-SRP to log on to a site. All MITM countermeasures need to be cryptographically bound to the session encryption or they are useless. "Whack-a-mole" scenarios do not prove security, and security without mea

    • The IT department didn't forget. The higher-ups never knew, never bothered to find out, and were never interested in the answer anyway.

  • by SuricouRaven ( 1897204 ) on Sunday February 23, 2014 @10:57AM (#46316081)

    It's already quite easy to add a * certificate to a browser to allow a proxy to intercept SSL. This is a standard practice in many LANs to allow the web filter to work on SSL pages - otherwise it'd be impossible to perform more than the most basic DNS/IP filtering on HTTPS sites, which would let a *lot* of undesired content through - google images alone would be quite the pornucopia.

    All this proposal does is formalise the mechanism that people are already widely using. The end user still needs to explicitly authorise the proxy, no different than adding a * certificate today - and that's something so common, Windows lets you do it via group policy. The author's big fear seems to be that ISPs could start blocking everything unless the user authorises their proxy - and they could do that already, just by blocking everything unless the user authorises their * certificate!

    And either way, they won't. For reasons of simple practicality. Sure, they could make the proxy authorisation process easy by giving out a little 'config for dummies' executable. Easily done. Now repeat the same for the user's family with their three mobile phones (one Android, one iOS, one BlackBerry), two games consoles, IP-connected streaming TV, the kid's PSP and DS (or successor products), the tablet and the internet-connected burglar alarm. All of which will be using HTTP of some form to communicate with servers somewhere, and half of them over HTTPS, with the proportion shooting *way* up if HTTP/2.0 catches on.

    • by x0ra ( 1249540 )
      The NSA, GCHQ and other acronym agencies are already spying on everybody, so let's just formalize that even more. HTTPS MITM is wrong at its very core, formalized or not.
    • It's already quite easy to add a * certificate to a browser to allow a proxy to intercept SSL. This is a standard practice in many LANs to allow the web filter to work on SSL pages - otherwise it'd be impossible to perform more than the most basic DNS/IP filtering on HTTPS sites, which would let a *lot* of undesired content through - google images alone would be quite the pornucopia.

      So, if I understand you correctly, this proposal does nothing which it is not already possible to do, and should therefore be discarded...

      • Sort of. Nothing that isn't possible to do right now. But the MITM-via-trusted-cert isn't the tidiest approach. It's an administrative headache - every OS has its own method for adding a trusted cert, and some don't permit it at all, and it doesn't allow clients to validate the server's certificate if the proxy doesn't accept it. I'm not sure quite what this proposal is, but it appears to be something to build proper support for trusted-cert interception in from the beginning, so it won't be such an inconv

    • by dbIII ( 701233 )

      otherwise it'd be impossible to perform more than the most basic DNS/IP filtering on HTTPS sites, which would let a *lot* of undesired content through

      So? That's not really enough of an excuse to record what the users have flagged as purely private conversations by using https in the first place. IMHO it's just as immoral as installing cameras in motel bedrooms and bathrooms to make sure the guests don't get up to any private acts that the motel's terms of service forbid.

      If I was in possession of one of t

  • From the *actual* draft:

    This document describes two alternative methods for an user-agent to
    automatically discover and for an user to provide consent for a
    Trusted Proxy to be securely involved when he or she is requesting an
    HTTP URI resource over HTTP2 with TLS. The consent is supposed to be
    per network access. The draft also describes the

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      This is also from the *actual* draft [ietf.org]:

      7. Privacy Considerations

      Notice how it's empty? The author(s) plainly don't give two hoots about user privacy.

    • by dbIII ( 701233 )

      From what I can discern, her primary concern is that ISPs will force all of their users to consent to them acting as a trusted proxy or refuse to serve them

      From what we've seen with the SSL proxies in workplaces that is a very legitimate concern.

  • by turkeyfish ( 950384 ) on Sunday February 23, 2014 @11:14AM (#46316167)

    What is going to happen to all those secure credit card transactions that are the life-blood of internet commerce, when third parties figure out how to decrypt packets en route by infiltrating the procedures of ISPs and altering them to "achieve efficiencies"?

    You would think capitalists have a lot to lose if this proposal goes forward.

    • by smash ( 1351 )
      You mean playing man-in-the-middle with your HTTPS? It's already been going on for years.
      • if I didn't install the OS and I'm inside a corp LAN, I assume the worthless little 'lock' icon doesn't mean shit anymore.

        I would use my own laptop and my own purchased and installed VPN.

        these days, if you are in corp LAN, you have to assume you are being logged and traffic sniffed. this isn't 10 yrs ago when it was new and hot to do this; I would assume any company bigger than 10 people has this 'proxy' shit going on (mitm ssl).

        and about 10 yrs ago, I had an interview at bluecoat when I was informed by a

    • What is going to happen to all those secure credit card transactions that are the life-blood of internet commerce, when third parties figure out how to decrypt packets en route by infiltrating the procedures of ISPs and altering them to "achieve efficiencies"?

      You would think capitalists have a lot to lose if this proposal goes forward.

      No kidding. Every day brings more and more proof that the bad guys are smarter (or at least way more motivated) than the good guys.

    • Valid point.

      Originally, SSL/TLS and HTTPS were developed and deployed to provide protection for this small amount of sensitive data.

      Now, for various reasons, we have HTTPS protecting pages that contain a lot of "rich" content that doesn't actually need this protection. This has the side effect of creating a lot of extra, uncacheable content. I can understand why ISPs would want a way to handle that.

      So, is there a way to securely protect the sensitive stuff while leaving the rest unencrypted? Perhaps the non-sens

  • by redshirt ( 95023 ) on Sunday February 23, 2014 @11:41AM (#46316301)
    Is that Section 7, "Privacy Considerations," has no content.
    • Should we trust anything coming out of the US Department of Commerce?

    • by Alsee ( 515537 )

      Actually there is an extensive section on Privacy Considerations, but it has been deemed classified under U.S. National Security.

      -

  • Call me old school, but transparent interception of https does not increase my feeling of safety. It breaks the net and any security I might imagine in a transaction. This technology will make it really easy for anyone to do what, for example, Microsoft does to Skype connections (which is why Skype isn't allowed in my company). It provides for any number of decryption points to be created between you and your bank or whatever. The doc suggests that it can be used for both anonymization and deep inspection, pos

  • The author who says that this is 'most alarming' is missing one key thing: sometimes people use computers that belong to someone else.

    Any company that needs its employees to be able to use the internet, but also wants to be able to detect any employee who is sending documents outside of the company, would love to use this, and has every permission to install it on its own computers. They could then have the employees' computers trust the SSL proxy, and it could easily detect a

    • The author who says that this is 'most alarming' is missing one key thing: sometimes people use computers that belong to someone else.

      Any company that needs its employees to be able to use the internet, but also wants to be able to detect any employee who is sending documents outside of the company, would love to use this, and has every permission to install it on its own computers.

      Alternately, they put in a transparent HTTPS proxy, sign a trust certificate for the proxy, and install the cert on all the corporate computers. Attempts to access port 443 from interior computers which do not already have the cert installed are redirected to a download page for the cert, with a one-time "opt in", making the proposal totally unnecessary for this use case.

  • by LostMyBeaver ( 1226054 ) on Sunday February 23, 2014 @12:16PM (#46316569)
    While the article justifiably blows the whistle on what could be an abuse of power, the premise of the article is BS at best. It suggests that the tech could be used to maliciously snoop on people without their knowledge. The spec says nothing of the sort. It allows a user to make use of a proxy. In the case of a TLS-only HTTP 2.0, this is needed. Without it, people like myself would have to set up VPNs for management of infrastructure. I can instead make a web-based authenticated proxy server which would permit me to manage servers and networks in a secure VPN environment where end-to-end access is not possible.

    Additional benefits of the tech include outgoing load balancing for traffic, which adds additional security.

    How about protecting users' privacy by using this tech? If HTTP/2.0 is any good for security, deep packet inspection will not be possible, and as a result all endpoint security would have to exist at the endpoint. Porn filters for kids? Anti-virus for corporations? Popup blockers?

    How about letting the user make use of technology like antivirus on their own local machine to improve their experience? How many people on Slashdot use popup blockers which work as proxies on the same machine?

    This tech adds to their security end-to-end instead. After all, it allows a user to explicitly define a man-in-the-middle to explicitly trust applications and appliances in the middle to improve their experience.

    What about technology like Opera Mini, which cuts phone bills drastically or improves performance by reducing page size in the middle?

    Could the tech be used maliciously? To a limited extent... Yes. But it is far more secure than not having such a standard and still using these features. By standardizing a means to explicitly define trusted proxy servers, it mitigates the threat of having to use untrusted ones.

    Where does it become a problem? It'll be an issue when you buy a phone/device from a vendor who has pre-installed a trusted proxy on your behalf. It can also be an issue if the company you work for pushes out a trusted proxy via group policy that now is able to decrypt more than what it should.

    I haven't read the spec entirely, but I would hope that banks and enterprises will be able to flag traffic as "do not proxy" explicitly so that endpoints will know to not trust proxies with that information.

    Oh... And as for tracking as the writer suggests... While we can't snoop the content, tools like WCCP, NetFlow, NBAR (all Cisco flavors) as well as transparent firewalls and more can already log all URLs and usage patterns without needing to decrypt.

    So... May I be so kind as to simply say "This person is full of shit" and move on from there?
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      This tech adds to their security end-to-end instead. After all, it allows a user to explicitly define a man-in-the-middle to explicitly trust applications and appliances in the middle to improve their experience.

      I think you need to re-examine your use of the words "security" and "end-to-end".

      This does precisely the opposite of what you said, to achieve the aim you stated.

      "This tech reduces their security end-to-end, to improve their experience" is what it does. I admit, it has the potential to improve their

  • It seems to me this is just an attempt to standardize what people are already doing with fakey hackish methods involving bogus certs etc.

  • One of the benefits of using HTTPS currently is that it avoids broken proxies. There are all sorts of implementations that claim to support HTTP 1.1, but don't support 100 Continue, content negotiation, or other important features you might need to use. If you use HTTPS, it currently avoids all the breakage (unless the destination server itself is actually broken). Besides the security issues inherent in this model, you have to worry about all the cases in which somebody installed some broken proxy that doe
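
    As a rough way to probe for that kind of breakage (a sketch in standard-library Python; httpbin.org is just a convenient public test endpoint), send a request carrying "Expect: 100-continue" over plain HTTP and see whether a sane final status comes back. Note that http.client sends the body without waiting for the interim 100 response, so this only exercises how intermediaries handle the header itself.

      import http.client

      body = b"x" * 1024
      conn = http.client.HTTPConnection("httpbin.org")
      conn.putrequest("POST", "/post")
      conn.putheader("Expect", "100-continue")   # the header some proxies mishandle
      conn.putheader("Content-Length", str(len(body)))
      conn.endheaders()
      conn.send(body)
      print(conn.getresponse().status)           # expect 200 on a clean path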

  • This is the same question as what to do with "HTTP" (not HTTPS) requests when transported over HTTP2 (which is supposed to be all TLS) and SPDY (which is already all TLS, and which HTTP2 is based on). Usually it's framed in the context of "do we need to authenticate and verify TLS certificates when the user didn't originally request HTTPS?"

    Some people are of the opinion that "TLS is TLS, and if you can't 100% trust it, there's no point." And I can see the logic in that. Obviously that should always be the c

    • My apologies, the second to last paragraph should read "in order to use SPDY or HTTP2 even for "HTTP" requests"...

      The extra "HTTPS" is nonsensical in this context and should not be there.

    • Also, I could point out that requiring validation of TLS certificates for SPDY/HTTP2 prevents actual shared hosting from opportunistically encrypting all the zillions of sites they host, which would be trivial right now (chances are they DO have a certificate installed... in the ISP's name... but not for every site they host). While this wouldn't allow real trusted "HTTPS" connections, it would allow for a LOT of sites to suddenly be using encryption routinely without either the site owners or the end users
  • This is a laughably bad idea.

    This will be abused the instant it hits code. The temptation is too great. This will sink the adoption of HTTP 2.0, and 1.1 will live on for far longer.

    With all of the news around man-in-the-middle attacks, I just can't believe this will be a feature.

    This needs to be amended. I can see trusted chains, where you would trust a chain from end to end, with each node in the chain being able to cache. But just the proxy?

