Chrome Chromium Security The Internet

FTP Resources Will Be Marked Not Secure in Chrome Starting Later This Year (google.com) 152

Google engineer Mike West writes: As part of our ongoing effort to accurately communicate the transport security status of a given page, we're planning to label resources delivered over the FTP protocol as "Not secure", beginning in Chrome 63 (sometime around December, 2017). We didn't include FTP in our original plan, but unfortunately its security properties are actually marginally worse than HTTP (delivered in plaintext without the potential of an HSTS-like upgrade). Given that FTP's usage is hovering around 0.0026% of top-level navigations over the last month, and the real risk to users presented by non-secure transport, labeling it as such seems appropriate. We'd encourage developers to follow the example of the linux kernel archives by migrating public-facing downloads (especially executables!) from FTP to HTTPS.
  • by Alioth ( 221270 ) <no@spam> on Thursday September 14, 2017 @09:15AM (#55194823) Journal

    ...FTP just needs to die. The two-port requirement and, worse still, people who don't get it insisting on 'active' FTP are a pain in the backside for firewall admins (we had one vendor insist that passive mode was 'insecure' and active mode somehow 'secure', but after some browbeating and the threat of the wire brush of enlightenment they accepted that they should use this newfangled "sftp", which didn't have any of the drawbacks of FTP, passive or active).

    FTP's day was done over ten years ago.

    • by RightwingNutjob ( 1302813 ) on Thursday September 14, 2017 @09:26AM (#55194905)
      See, it's IT-monkeys like you that make for most of the trouble in technical work. Yes, FTP isn't secure by itself, but it's simple. And in many contexts I can think of, simple and unlikely to break because someone forgot to update his certificate beats encrypted but way more fragile by a mile.
      • Yes, FTP isn't secure by itself, but it's simple.

        Spoken like someone who has never looked at the FTP protocol or the code in a client or server. HTTP is far simpler to implement than FTP and, unlike FTP, is also trivial to add TLS support to, easier to scale up with CDNs, and so on. FTP hasn't been the right tool for any job for well over a decade.
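
        To make "far simpler" concrete, here is a minimal sketch of a complete (if naive) HTTP/1.1 fetch over a single socket, using example.com as a placeholder host; FTP needs a command channel, a reply parser, and a second data connection before it can do the same:

        ```python
        # Naive HTTP GET: one TCP connection, one request, one response.
        import socket

        def http_get(host: str, path: str = "/") -> bytes:
            with socket.create_connection((host, 80)) as sock:
                request = (
                    f"GET {path} HTTP/1.1\r\n"
                    f"Host: {host}\r\n"
                    "Connection: close\r\n"
                    "\r\n"
                )
                sock.sendall(request.encode("ascii"))
                chunks = []
                while data := sock.recv(4096):
                    chunks.append(data)
            return b"".join(chunks)  # raw headers + body; a real client parses these

        print(http_get("example.com")[:120])
        ```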

        • HTTP is far simpler to implement than FTP

          Not for uploads. You'll need a server-side script for those.

          • Uploads through the POST method require a server-side script. But I thought a server could handle the PUT method by itself.

            • By default, no. You need DAV enabled. On Apache that's mod_dav, and on nginx it's ngx_http_dav_module.

          • by flink ( 18449 )

            Or you can use WebDAV. It's about 5 lines of httpd configuration and you can tie it in to use whatever auth module you are already using.
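
            For illustration, a hedged sketch of the client side: once mod_dav / ngx_http_dav_module (or the WebDAV setup above) accepts PUT, an upload is a single request with no server-side script. The URL and credentials below are placeholders, not a real endpoint:

            ```python
            # HTTP PUT upload with only the standard library; the server must allow PUT.
            import base64
            import urllib.request

            def dav_put(url: str, filename: str, user: str, password: str) -> int:
                with open(filename, "rb") as f:
                    body = f.read()
                req = urllib.request.Request(url, data=body, method="PUT")
                token = base64.b64encode(f"{user}:{password}".encode()).decode()
                req.add_header("Authorization", f"Basic {token}")
                req.add_header("Content-Type", "application/octet-stream")
                with urllib.request.urlopen(req) as resp:
                    return resp.status  # typically 201 Created or 204 No Content

            # dav_put("https://files.example.com/dav/report.pdf", "report.pdf", "me", "secret")
            ```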

        • by tlhIngan ( 30335 ) <slashdot.worf@net> on Thursday September 14, 2017 @12:30PM (#55196589)

          Spoken like someone who has never looked at the FTP protocol or the code in a client or server. HTTP is far simpler to implement than FTP and, unlike FTP, is also trivial to add TLS support to, easier to scale up with CDNs, and so on. FTP hasn't been the right tool for any job for well over a decade.

          1) FTP uploads are easier to support than HTTP uploads. HTTP uploads require CGI scripts to handle, and if configured wrongly, can lead to security issues (see the FCC website w.r.t. its comment system).

          2) FTP supports TLS - it's called FTPS (not to be confused with SFTP: the former uses FTP and initiates a TLS session, the latter uses SSH). Modern FTP clients and servers support STARTTLS as a command to initiate TLS, and they do it before the USER/PASS commands so the connection is encrypted from the get-go. Note that you need to use passive mode while doing this, as most NAT gateways spy on FTP sessions to set up dynamic mappings, and TLS doesn't allow them to. (A sketch follows at the end of this comment.)

          3) HTTP doesn't allow for easy downloading of multiple files other than picking and saving them one at a time. Sure, browser extensions may try to simplify this, but in general you can't pick a list of files and transfer it. Triply so if you want to upload multiple files: either the web page and script have to implement support, or you're uploading files one at a time. Clever JavaScript can help with that, but now you're relying on client-side and server-side scripts, and not all websites that support uploads support multiple file transfers.

          Granted, it's time for a modern upgrade to FTP that gets rid of the multiple-port requirement, but HTTP is not a complete replacement for FTP. FTPS still has all the issues of FTP. SFTP is a lot better, but support is generally lacking across the board, including for traversing strict firewalls.
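
          The sketch promised under point 2, using Python's stdlib ftplib against a placeholder host; explicit FTPS (RFC 4217) upgrades the control channel before USER/PASS, and PROT P encrypts the data channel:

          ```python
          import ftplib

          ftps = ftplib.FTP_TLS("ftp.example.com")  # placeholder host
          ftps.auth()                     # AUTH TLS: upgrade the control channel first
          ftps.login("user", "password")  # USER/PASS now travel encrypted
          ftps.prot_p()                   # PROT P: encrypt the data channel too
          ftps.set_pasv(True)             # passive mode; NAT can't snoop a TLS'd channel
          ftps.retrlines("LIST")
          ftps.quit()
          ```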

          • Modern FTP clients and servers support STARTTLS as a command to initiate TLS

            Unless the ISP intercepts the STARTTLS command sent by the client and turns it into a garbage command that produces a 502 (command not implemented) response, fooling the client into thinking the server doesn't support TLS. This has happened [eff.org], Ars Technica has reported on it [arstechnica.com], and there's even a proof of concept in PyPI [python.org]. What's FTP's counterpart to HSTS?
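
            FTP has no HSTS analogue, so the nearest substitute is client-side policy: refuse to continue if the TLS upgrade fails, rather than silently falling back to plaintext. A hedged sketch with stdlib ftplib and a placeholder host:

            ```python
            import ftplib

            def connect_tls_or_die(host: str) -> ftplib.FTP_TLS:
                ftps = ftplib.FTP_TLS(host)
                try:
                    ftps.auth()  # a stripped/garbled AUTH TLS draws a 5xx reply
                except ftplib.error_perm:
                    ftps.close()
                    raise RuntimeError("TLS refused (server or middlebox); aborting login")
                return ftps
            ```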

          • by SQLGuru ( 980662 )

            No kidding. I just used FileZilla to download 250GB of weather data samples from the NOAA site (ftp://ftp.ncdc.noaa.gov), spread over close to 100,000 files. I just dragged the folders I wanted over to my drive and all of the files got queued up. Left it downloading overnight and came back to a completed transfer. (The same job is scriptable; see the sketch below.)

            I don't see any sort of HTTP-based solution being able to do that.

            It's public data (and very likely not going to be MITM'ed), so unencrypted FTP was the perfect solution for this.
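
            The sketch mentioned above: the same kind of bulk queueing in a few lines of stdlib Python, assuming anonymous login and a flat remote directory (the paths are illustrative):

            ```python
            import ftplib
            import os

            def mirror_dir(host: str, remote_dir: str, local_dir: str) -> None:
                os.makedirs(local_dir, exist_ok=True)
                ftp = ftplib.FTP(host)
                ftp.login()              # anonymous login for public data
                ftp.cwd(remote_dir)
                for name in ftp.nlst():  # one listing, then one RETR per file
                    with open(os.path.join(local_dir, name), "wb") as f:
                        ftp.retrbinary(f"RETR {name}", f.write)
                ftp.quit()

            # mirror_dir("ftp.ncdc.noaa.gov", "/pub/data/noaa", "./noaa")
            ```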

          • 1) FTP uploads are easier to support than HTTP uploads. HTTP uploads require CGI scripts to handle, and if configured wrongly, can lead to security issues (see the FCC website w.r.t. its comment system).

            Nope, HTTP supports several verbs including PUT and POST. PUT doesn't require any scripting and can be configured in most servers to allow uploads based on the current authentication (which can be a client-side SSL certificate, a password, or a couple of other things).

            2) FTP supports TLS - it's called FTPS (not to be confused with SFTP: the former uses FTP and initiates a TLS session, the latter uses SSH). Modern FTP clients and servers support STARTTLS as a command to initiate TLS, and they do it before the USER/PASS commands so the connection is encrypted from the get-go. Note that you need to use passive mode while doing this, as most NAT gateways spy on FTP sessions to set up dynamic mappings, and TLS doesn't allow them to.

            Needing to support passive mode isn't too much of a problem, because active mode is pretty broken with a lot of NATs anyway. Unfortunately, the STARTTLS mode is trivial to attack with downgrade attacks, and most FTP clients don't complain...

        • by Sigma 7 ( 266129 )

          HTTP is far simpler to implement than FTP

          HTTP Compared to FTP:

          • While HTTP has the specification in one document, you still need to read more than one if you want modern browsers to work as expected. FTP works fine with just one specification (and if an FTP client craps out, that's usually because it doesn't know how to operate).
          • FTP makes it easier to determine which state the server/client is in. HTTP plops headers into one block that's sent and hopes the other side doesn't get confused. (A minor technical...
        • Actually, no: when somebody explains that there are use cases with different considerations, you can't just disagree and have any chance of being right. Disagreeing just shows you don't comprehend the words he used. And they were simple words. But you thought you had a magical brain that can see other people's use cases better than they can see them, even if all you know is that it involved FTP! Durrrrrrrrrrrr

      • Yes, FTP isn't secure by itself, but it's simple.

        Go to bed, you're drunk. The only thing "simple" about FTP is the minds that came up with a system that requires multiple ports with connections established in different directions, which is the bane of the modern internet's NAT'd routing.

        Just because it's plain text and in English doesn't make it "simple".

        • a system that requires multiple ports with connections established in different directions

          A data connection established with PORT does go in the opposite direction of the control connection. But I thought PASV, which runs both connections in the same direction and cooperates better with NAT, had become more common. The exception is so-called "FXP" transfers from one server to another, where the client opens control connections to two servers and sends PASV to one and PORT to the other in order not to have to bounce a file off a residential last mile.
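
          The difference is visible in one line of stdlib ftplib; a sketch with a placeholder host showing the PASV/PORT toggle:

          ```python
          import ftplib

          ftp = ftplib.FTP("ftp.example.com")  # placeholder host
          ftp.login()                          # anonymous
          ftp.set_pasv(True)   # PASV: client opens the data connection outbound too
          ftp.retrlines("LIST")
          ftp.set_pasv(False)  # PORT: server connects back in to the client...
          ftp.retrlines("LIST")  # ...which usually hangs behind NAT without an FTP ALG
          ftp.quit()
          ```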

          • But I thought PASV, which runs both connections in the same direction and cooperates better with NAT, had become more common.

            It has, and brought its own problems with it. But you highlighted my point well. Active, passive, one port, two ports, server-to-server FXP: all of this is apparently "simple", when in reality it's a damn complicated protocol with a lot of intricacies, one that has been extended and hacked about many times over the years.

            • by tepples ( 727027 )

              There still exists a subset of FTP that's simple to support if the server operator doesn't need FXP functionality, namely PASV-only. Or is FXP too important in practice to consider it expendable?

    • Someone got a promotion and had to make some kind of menial change to impress corporate to keep that promotion. Hopefully they got a window view.
    • A million times this.

      FTP (and Telnet) are antique protocols that cannot be made adequately secure and have long since had more secure alternatives available.

      They just need to go away now.

      • by green1 ( 322787 )

        If you're giving a file away for free to everyone, how secure do you need the transport protocol to be?

        • As secure as possible. You don't want MITM attackers modifying the file, and you don't want your server to be compromised.

          • It's a better use of just about anything I can think of to encrypt the file or make a secure hash of it or whatever ONCE, and transmit it in the clear with no computational or administrative overhead (you updated your certs on time... right?) than it is to store it with nothing and encrypt/decrypt it each and every time it's accessed. It may not matter if you think it's someone else's job to do that legwork, or if designing for performance and limited computing power is something you think only dinosaurs care about, but...
            • I generally agree with you. I actually laughed when I read your comment because I can't tell you the number of times people have berated me here for making very similar arguments.

              Maximum security is not the right answer for all circumstances. However, as much security as you can reasonably implement is a good idea. Multilayered security is always desirable.

              Case in point: in my home network, almost everybody behind my firewall still talks using encrypted channels. Why not, when the machinery supports this...

            • It's a better use of just about anything I can think of to encrypt the file or make a secure hash of it or whatever ONCE, and transmit it in the clear

              If you transmit the hash in the clear, a man in the middle can alter the hash in transit.

              • True. However:

                There are many use cases where a security signature or certificate needs to be transmitted once and the data transmitted orders of magnitude more times. In which case, the computational overhead of establishing an encrypted channel needs to be accepted infrequently and for a relatively small payload instead of every time for a large payload.

                There are many use cases where the key distribution mechanism isn't over the internet at all.

                There are use cases where you really honestly don't care if the cat video you're getting to kill time is authentic or not.
                • by tepples ( 727027 )

                  There are use cases where you really honestly don't care if the cat video you're getting to kill time is authentic or not.

                  But you do care whether the operating system UI presented after the conclusion of the cat video is authentic and not a phishing attempt.

                    • That's a failure on the part of the browser/website/OS designer if it's possible to spoof a UI image convincingly in a video. You can just as easily spoof a UI image over an encrypted channel as you can over a cleartext channel.
                    • You can just as easily spoof a UI image over an encrypted channel as you can over a cleartext channel.

                      With an encrypted channel, the browser has a fully qualified domain name with which to associate all images transmitted over that channel. With an unencrypted channel, the browser can't tell what domain has injected the images.

                    • Except we aren't talking about performing secure transactions inside a browser window. We're talking about transferring files with FTP.
                    • by tepples ( 727027 )

                      The MITM could insert malicious code into files transferred through FTP. Even if the file looks like a video, several video containers such as WMV have contained functionality to download DRM code to obtain a license needed to decrypt the video in the WMV file. This functionality can be and has been used for dropping trojans [stackexchange.com].

                    • You're running away with your examples. Streaming video (as in the raw bits and bytes that turn into pixels on the screen) does not need to be encrypted unless it's video of something you don't want to send in the clear. If you're executing any of those bits and bytes, that's on you for having a fundamentally insecure media player.
                    • by tepples ( 727027 )

                      Or unless it's a video whose publisher doesn't want it sent in the clear to be viewed by other people who haven't paid for it.

                • by Strider- ( 39683 )

                  There are many use cases where a security signature or certificate needs to be transmitted once and the data transmitted orders of magnitude more times. In which case, the computational overhead of establishing an encrypted channel needs to be accepted infrequently and for a relatively small payload instead of every time for a large payload.

                  A prime example of this is software updates: from Microsoft, from Apple, from whomever. Everyone should be keeping their systems as up to date as possible. That said, the downloads for these updates are huge, so leaving them in the clear allows transparent caching infrastructure to work properly. What about a MITM attack on the binaries? Well, you can resolve that by making the control link from update service to update client HTTPS, and including sha256 checksums in the system to verify the cleartext downloads.
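
                  A hedged sketch of that scheme: the checksum arrives over the HTTPS control link, the large payload over cacheable cleartext HTTP, and the client verifies before use (URLs and the hash are placeholders):

                  ```python
                  import hashlib
                  import urllib.request

                  def verify_download(cleartext_url: str, expected_sha256: str) -> bytes:
                      with urllib.request.urlopen(cleartext_url) as resp:
                          payload = resp.read()
                      if hashlib.sha256(payload).hexdigest() != expected_sha256:
                          raise ValueError("checksum mismatch: payload tampered with in transit")
                      return payload

                  # hash obtained over HTTPS, payload over plain (cacheable) HTTP:
                  # blob = verify_download("http://cache.example.com/update.bin", "ab12...")
                  ```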

      • by DogDude ( 805747 )
        FTP (and Telnet) are antique protocols that cannot be made adequately secure and have long since had more secure alternatives available.

        You're right! The only thing is that the Internet was never designed to be "secure", and likely never will be. FTP works just fine for moving files around. Not everybody needs security on the Internet. If you want to pretend that you can slap some software on top of an inherently insecure network to try to make it "secure", then go right ahead. I'll happily continue...
        • Ahh, the old "if it can't be perfectly secure, then there's no point" argument! Personally, I'll take "as secure as possible" over "not secured at all".

        • by tepples ( 727027 )

          FTP works just fine for moving files around.

          But what works fine for ensuring that the file the receiver receives is identical to the file the sender sent?

    • by eth1 ( 94901 )

      ...FTP just needs to die. The two-port requirement and, worse still, people who don't get it insisting on 'active' FTP are a pain in the backside for firewall admins (we had one vendor insist that passive mode was 'insecure' and active mode somehow 'secure', but after some browbeating and the threat of the wire brush of enlightenment they accepted that they should use this newfangled "sftp", which didn't have any of the drawbacks of FTP, passive or active).

      FTP's day was done over ten years ago.

      What? As someone else who administers firewalls, I have to ask: what the hell kind of ancient packet filter are you using?

      Any modern stateful firewall from almost the last two decades has been able to inspect unencrypted FTP control channels and dynamically open the appropriate data-channel ports. Both active and passive usually work just fine. As a security person, I hate FTP for its lack of encryption and cleartext password transmission, but technically, it's one of the easiest.

      SFTP will often end up with...

  • Don't throw the baby out with the bathwater; you can use FTP over TLS (it goes by many names). I don't see any reason to kill off FTP (well, the unencrypted version needs to go); it works perfectly well for what it is intended to do.
  • So how about FTPS (Score:5, Interesting)

    by guruevi ( 827432 ) on Thursday September 14, 2017 @09:16AM (#55194835)

    FTP can be done using TLS and there is also SSH-FTP. FTPS is no more or less secure than HTTPS.

    Have you ever downloaded large files over HTTP? It's not built for it; you practically need a download manager because the browsers will just choke or won't be able to continue unfinished downloads, and there are hacks that make it work, but many configurations aren't set up right to continue partial downloads.

    • I've noticed a lot of HTTPS or torrent download options for large files.

    • FTP can be done using TLS and there is also SSH-FTP. FTPS is no more or less secure than HTTPS.

      FTP over TLS is a pain, because you still end up needing multiple connections and the protocol is a nightmare. It's also implemented in a variety of different and incompatible ways by different people and so there's basically no good way of supporting it. SFTP is an entirely different protocol that shares some initials with FTP but is otherwise unrelated.

      Have you ever downloaded large files over HTTP? It's not built for it; you practically need a download manager because the browsers will just choke or won't be able to continue unfinished downloads, and there are hacks that make it work, but many configurations aren't set up right to continue partial downloads.

      I don't think I've downloaded anything over about 50GB via HTTP, but I've had no problems doing so. The protocol supports range requests, and both...
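
      For what it's worth, resuming over HTTP is a single header; a sketch with stdlib urllib and a placeholder URL/filename:

      ```python
      import os
      import urllib.request

      def resume_download(url: str, filename: str) -> None:
          have = os.path.getsize(filename) if os.path.exists(filename) else 0
          req = urllib.request.Request(url)
          if have:
              req.add_header("Range", f"bytes={have}-")  # ask only for the rest
          with urllib.request.urlopen(req) as resp, open(filename, "ab") as f:
              if have and resp.status != 206:            # 206 Partial Content
                  raise RuntimeError("server ignored the Range header")
              while chunk := resp.read(1 << 16):
                  f.write(chunk)
      ```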

      • by guruevi ( 827432 )

        FTPS is relatively standard; I have never had an issue with it, although you are right that there are various shoddy closed-source FTP servers. The only "problem" is that most use self-signed certificates.

        I'm currently trying to download 300GB via HTTP on a server that limits each user to a single connection of 128kbps (yay, academia). I know it supports range requests, but as I said, they are particularly flaky on many non-Apache servers (looking at you, IIS) and many admins misconfigure them, especially on nginx...

    • I download large files over HTTP using wget. You can even configure wget to ignore robots.txt.

      • you practically need a download manager because the browsers will just choke

        I download large files over HTTP using wget.

        True, GNU Wget offers more robust resume support than a web browser. But it illustrates guruevi's point because it's a download manager, not a browser.

    • FTPS is WORSE. The problem with FTP in general is with stateful firewalls and NAT/PAT.

      FTP is a ridiculous protocol. For FTP to work at all these days, firewalls actually need to go out of their way to snoop on the control channel and watch for the data channel IP & port, and then use that to pre-populate the state table with an entry for the data session (see the sketch below).

      This breaks down on a lot of firewalls if you change the control port, or if you use FTPS, where the control session is encrypted and the firewall can no longer see which data ports to open.
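
      The sketch mentioned above, to make the snooping concrete: the firewall helper has to fish the data-channel endpoint out of the cleartext 227 reply, which is exactly what TLS hides from it.

      ```python
      import re

      def parse_pasv(reply: str):
          """Extract host/port from '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)'."""
          m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
          if not m:
              raise ValueError("not a PASV reply")
          h1, h2, h3, h4, p1, p2 = map(int, m.groups())
          return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

      print(parse_pasv("227 Entering Passive Mode (192,0,2,10,19,136)"))
      # -> ('192.0.2.10', 5000)
      ```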

      • by guruevi ( 827432 )

        Some (most) people would say NAT is the ridiculous thing. The protocol isn't broken; NAT is.

    • FTP can be done using TLS and there is also SSH-FTP. FTPS is no more or less secure than HTTPS.

      True, but that doesn't fix the rest of the things that make FTP suck.

      Have you ever downloaded large files over HTTP?

      Who said to use HTTP? That's not the only (or even a good) alternative.

    • Comment removed based on user account deletion
      • by tepples ( 727027 )

        The explicit FTPS method allows a client to revert to an unencrypted control channel after authentication, but only at the client's discretion. The FTPS server can't tell the client to issue a CCC (Clear Command Channel) command. So if your site requires it, you have to tell your users to configure their FTPS client that way. (Sketched below.)

        And that's a good thing. Otherwise, neither side can be sure that the file that is received is identical to the file that was sent.
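
        ftplib exposes this client-side choice directly; a sketch with a placeholder host (CCC left commented out because, as noted, dropping back to cleartext is rarely a good idea):

        ```python
        import ftplib

        ftps = ftplib.FTP_TLS("ftp.example.com")  # placeholder host
        ftps.login("user", "password")  # AUTH TLS precedes USER/PASS
        ftps.prot_p()                   # keep the data channel encrypted
        # ftps.ccc()  # Clear Command Channel: client's choice only; server can't force it
        ftps.retrlines("LIST")
        ftps.quit()
        ```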

      • Comment removed based on user account deletion
  • by Anonymous Coward

    GOPHER!

  • by Anonymous Coward

    At least it's not providing a false sense of security, unlike several other protocols (those with an 's' on the end) that allow any fraudulent certificate authority to issue certificates for any domain they feel like.

    • Exactly. SSL seems to be one money-grubbing corporate shill back-slapping another. Its web of trust requires trusting entities that nobody in their right mind could or should trust.

      As far as FTP goes, there is no analog to something like proftpd, which has all kinds of cool options (like upload/download ratios, advanced per-user chroot support, encryption, or RADIUS accounting support). You aren't going to find a WebDAV service that supports all that. It's not a valid replacement for FTP.
  • by Anonymous Coward

    Make file transfers great again. Trimp 2018!

  • by Anonymous Coward

    This is typical of Google: always trying to gain CONTROL over our lives. Fortunately, I never use Chrome, and I am at the point where I am thinking of creating my own browser that enables the user to have FULL control.

    Imagine...
    Being able to block popups on one site but enable them on another.
    Blocking some sites but not others.
    Enabling cookies at one site, but not another.
    False-seeding cookies.

    etc. etc.

    If somebody else comes up with one of these (and no, not just another addon), then I will try it...

    • Now that would be a plugin! One where people can choose the (ad) cookies to share with the other users of the plugin, basically rendering any and all data collected absolutely worthless because nobody can ever know anymore who used what ad cookie to visit a page.

      • Now that would be a plugin! One where people can choose the (ad) cookies to share with the other users of the plugin, basically rendering any and all data collected absolutely worthless because nobody can ever know anymore who used what ad cookie to visit a page.

        Interesting idea. It's not so certain that it would be a clear win for users, though. Inability to target ads means that ads are worth less, all else being equal. This leads advertisers to try to make their ads worth more by making them attention-grabbing (bigger, brighter, blinkier, self-playing video, etc.), or to simply pay less for ad space. If they pay less for ad space, then site owners are incented to increase the amount of ad space on their sites, or else to stop depending on advertising revenue and...

        • The benefit of more obnoxious ads is more people blocking them.

          And yes, people start doing that. "Normal" people. Not geeks. The same people that dutifully close 20 popups and error messages every time they launch a browser. The same people that do the same every time they start their machines. The same people that have a 5 by 4 inch browser window because the rest is occupied by "free" search bars.

          Can you imagine just how much you have to piss people like THAT off to make them install something? And they...

          • So... what's your strategy for funding the web, since you clearly want to remove all advertising from it?
            • ISPs get money from their customers to connect them. Hosters get money from their customers to host content. What else do you need?

              • ISPs get money from their customers to connect them. Hosters get money from their customers to host content. What else do you need?

                How do content producers get money to produce content? To take one specific example, who would pay for Slashdot's hosting?

                • Most likely the users. At least if the content is worth it.

                  • Most likely the users. At least if the content is worth it.

                    Which means that 90% of the content on the web will disappear, and the remainder will all be paywalled and inaccessible to many people. There's good reason to dislike advertising, but there are also extremely good reasons that it has been the primary mechanism for funding broadly-distributed content for centuries. No other approach has proven to work remotely as well.

                    But, actually, none of that will happen. Instead, we'll just have an arms race between adblockers and adblocker-blockers, and the better-funded...

                      • Then the login credentials to said ad networks will be shared. If they are tied to Facebook accounts, fake FB accounts will be generated by the dozen per nanosecond to fill the need. Build a better mousetrap and a better mouse will evolve.

                      In the end, I can't say that seeing 90% of the gunk that clogs the internet vanish would be a bad thing. I do remember the internet before its commercialization. It was smaller. But the average IQ was at least double its current level.

  • Browsers should have stopped supporting FTP at least 10 years ago. We're never going to force the dinosaurs to upgrade until we stop enabling them! FTP has a proud place in the history of the internet, but its time has long since passed, and it is time to retire the protocol, forever.

    • Because it has its uses, especially with capable clients like ncftp. mget * is much easier and more reliable than building up a list of switches for wget.

  • One day, someone will explain to me why a completely public piece of information, distributed freely to everyone everywhere, needs to be delivered securely.

    In a world where FedEx drops packages at my front door, and leaves them there when I'm away.
    In a world where the only thing stopping my speeding car from hitting an oncoming speeding car is a line of yellow paint.
    In a world where my front door is locked with a deadbolt, right next to a glass window.

    Can anyone say "overboard"? I don't need to encrypt so...

    • by green1 ( 322787 )

      This has been the big thing with security all along. As long as you are fine with HTTP, why wouldn't you be fine with FTP?

      Not everything needs to be secure.

      Next time I drive by a billboard on the side of the highway it better be encrypted so I can't read the ad without some security certificates on my end!

      That said, browsers have always been positively HORRIBLE FTP clients, so if people decide to use FTP clients instead of browsers to use FTP sites, it's not really a huge loss.

    • One day, someone will explain to me why a completely public piece of information, distributed freely to everyone everywhere, needs to be delivered securely.

      Because, while the information is public, the fact that you requested it should be private. Even seemingly innocuous requests can reveal personal information which can be used against you, regardless of whether you are doing anything "wrong". Limiting how much strangers know about you is a powerful first line of defense: a form of social camouflage.

      Because you need to be able to trust that the information is from the expected source and hasn't been tampered with in transit. This is particularly true when the...

    • Answer: third parties. Do you want anyone who can see the traffic to be able to insert themselves into the traffic flow and alter what you're receiving? Because that's what can happen with unencrypted traffic. And while it might not be easy to do that at the level of, say, the backbone routers, it's really easy for someone to hack the WiFi router at a coffee shop and install a transparent proxy to hijack downloads and replace them with malware. In fact, if you make regular use of public WiFi access, the hijacking...

    • Obligatory car analogy [xkcd.com].
