No More FTP At Debian (debian.org)

New submitter Gary Perkins writes: It looks like anonymous FTP is officially on its way out. While many public repositories have deprecated it in favor of HTTP, I was rather surprised to see Debian completely drop it from their public site. In a blog post, the team cited FTP's lack of support for caching or acceleration, and its declining usage, as some of the reasons for the decision.
  • by Bluecobra ( 906623 ) on Thursday May 04, 2017 @03:06PM (#54356189)

    Thank goodness, FTP needs to die in a fire. Everyone should be using SCP/SFTP nowadays anyways.

    • by serviscope_minor ( 664417 ) on Thursday May 04, 2017 @03:07PM (#54356201) Journal

      Or https/http, for simply fetching files.

    • by Anonymous Coward on Thursday May 04, 2017 @04:14PM (#54356727)

      While good on paper, what you propose is a lot more complicated. SCP and SFTP are subsystems of SSH, which do not offer the degree of fine-grained control and capability that most decent FTP servers do. Rate-limiting is one such example (i.e. rate-limiting only SCP/SFTP but not SSH). Network administrators love to think of the situation simply ("yay, I can remove annoyances relating to TCP ports 20 and 21 for FTP's active and passive modes, now just pass TCP port 22 and my job is done!"), but those of us in systems who have to actually try to implement the fine-grained controls for this with SSH/SCP/SFTP are driven absolutely mad, because even servers like OpenSSH do not provide that granularity.

      In short: SCP/SFTP are nowhere close to drop-in replacements for FTP.

    • How much adoption has SCP/SFTP seen? I haven't seen it anywhere.

      Combine it w/ IPv6, and it should be ideal!

    • Everyone should be using SCP/SFTP nowadays anyways.

      FTP has file download resume (REGET), which is quite useful for big archives over a weak connection.

      • And http has range requests.

        I have a utility that requests the end of a zip, displays the contents, and lets you select files. The compressed data for the selected files is downloaded, the central directory is rewritten, and you save bytes.

        Tell me how useless that is, but this was 20 years ago, and range requests were already a thing back then.
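        (For illustration, a minimal sketch of an HTTP range request with Python's standard urllib, the mechanism both the resume point and the zip-listing trick above rely on. The URL and file names are hypothetical; any server that answers 206 Partial Content will do, and a server that ignores Range simply replies 200 with the whole file.)

        import os
        import urllib.request

        url = "https://deb.example.org/pool/some-large-archive.zip"   # hypothetical mirror URL
        local = "some-large-archive.zip.partial"

        # Resume: ask only for the bytes we don't have yet (the HTTP analogue of FTP's REGET).
        start = os.path.getsize(local) if os.path.exists(local) else 0
        req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % start})
        with urllib.request.urlopen(req) as resp, open(local, "ab") as out:
            out.write(resp.read())          # expect status 206 and a Content-Range header

        # Tail: a suffix range like "bytes=-65536" fetches just the last 64 KiB,
        # which for a zip contains the central directory needed to list its contents.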

  • by fred6666 ( 4718031 ) on Thursday May 04, 2017 @03:07PM (#54356203)

    Every time I used FTP in my sources.list, it was slower to connect. The whole apt-get update process could therefore be twice as long on FTP, compared to HTTP. Even though I guess once connected, the file transfer protocol should be more efficient.

    • by buchner.johannes ( 1139593 ) on Thursday May 04, 2017 @03:17PM (#54356289) Homepage Journal

      Even though I guess once connected, the file transfer protocol should be more efficient.

      There are huge differences between FTP servers in terms of their delivery.
      But today's Apache delivers static files extremely fast, by telling the kernel to move file data onto the network card, so the data are never actually moved to the application. That's fast, and you can still layer proxying, cache-freshness and other HTTP tricks on top of this.

      • by Anonymous Coward on Thursday May 04, 2017 @04:51PM (#54356985)

        > But today's Apache delivers static files extremely fast, by telling the kernel to move file data onto the network card, so the data are never actually moved to the application.

        Hogwash. Even FreeBSD's accf_http.ko and accf_data.ko modules do not work this way. You cannot bypass "the application" layer -- that's httpd. Possibly you meant moved *by* the application? If so: yes that's true. I'll explain in detail:

        I believe what you're trying to speak about is the sendfile(2) syscall. This does not "move file data onto the network card" (that's misleading on your part) -- all it does is allow the kernel itself to transfer data between two file descriptors, rather than httpd itself doing it. The theory is that this takes less time than the kernel having to copy to userland (httpd), then userland (httpd) doing a read/write, which sends it back up to the kernel, rinse, lather, repeat for every buffer (i.e. it saves a read and a write call per buffer). Also, FreeBSD's sendfile(2) syscall is zero-copy within kernel space, meaning there are no intermediary buffers used to store copies of the data being transferred (I'm unsure about Linux in this regard).

        sendfile(2) has a very precarious history of not working reliably with things like NFS and ZFS, or, when it does work, incurring a major performance hit in kernel-land. This is why Apache has the EnableSendfile global directive, which defaults to off for a very good reason. The same goes for mmap(2) (EnableMMAP, which defaults to on; if you're serving from NFS, you need to disable it either for a specific filesystem path using Location, or globally). sendfile(2) is also known to have other problems, such as incorrect TCP checksum offloading calculation on Linux when using IPv6, and it tends to have a transfer size limit of 2 GBytes (a signed 32-bit number minus approximately 4096, due to page alignment), even on 64-bit systems (which explains why there's sendfile64(2) on Linux). Refer to the Apache 2.4 documentation if you think I'm bluffing. You'll find *IX FTP servers (ex. proftpd) often allow disabling of sendfile as well, for the same reasons. In early 2005 there was even a serious security hole in sendfile(2) on FreeBSD 4.x/5.x (see FreeBSD-SA-05:02.sendfile).

        It's up to your systems administrator to decide if sendfile(2) is safe for use by whatever program might be using it. In general, it's best to default to not using it unless you know factually the underlying filesystems and applications work reliably with it. I've read a few anecdotal blogs talking about how sendfile(2) on OS X (based on FreeBSD, but these days deviates quite severely) is quite broken as well.

        Your other descriptions of use of things like proxying aren't really relevant from a performance perspective (re: "more efficient") -- if anything all this does is waste more resources -- but your point about proper use of HTTP caching directives is relevant as long as the client honours such (ex. ETags, If-Modified-Since, etc.).
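        (Editorial sketch of what the sendfile(2) path looks like from userland, using Python's os.sendfile wrapper over the same syscall; the file path and port are made up and error handling is omitted. The point is only that the serving process never read()s the file data into its own buffers.)

        import os
        import socket

        def send_whole_file(path, port=8099):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", port))
            srv.listen(1)
            conn, _ = srv.accept()
            with open(path, "rb") as f:
                size = os.fstat(f.fileno()).st_size
                offset = 0
                while offset < size:
                    # The kernel copies file -> socket directly; this process only learns
                    # how many bytes went out (cf. the size limits noted above, which is
                    # why this loops instead of assuming a single call suffices).
                    sent = os.sendfile(conn.fileno(), f.fileno(), offset, size - offset)
                    if sent == 0:
                        break
                    offset += sent
            conn.close()
            srv.close()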

        • Possibly you meant moved *by* the application?

          Are you just being obtuse? It seems perfectly clear that buchner.johannes was referring to the elimination of a copy-to-userland copy-back-to-the-kernel roundtrip.

          As you say, that's just what happens. [freebsd.org]

        • by epine ( 68316 )

          sendfile(2) has a very precarious history of not working reliably with things like NFS and ZFS, or, when it does work, incurring a major performance hit in kernel-land.

          Wow. +5 Anonymous Coward

          Just one question.

          Algernon, is that you?

    • by gmack ( 197796 )
      There really shouldn't be an advantage once the actual transfer gets going. In both cases they can copy directly from the disk cache to the network and push packets out as fast as the connection will allow. The downside of FTP is its convoluted protocol: it's a throwback to the days when everyone had a public IP, and it makes a lot of connections assuming it can just connect in both directions without restriction. In modern times, firewalls have to maintain a lot of code just to keep FTP functional. FTP *
      • by dunkelfalke ( 91624 ) on Thursday May 04, 2017 @05:59PM (#54357433)

        FTP convoluted? Seriously? I have recently written an FTP client; it was by far the easiest protocol I have ever implemented. Hell, XMODEM is more complicated.

        • by Anonymous Coward

          Have you written an FTP server, by any chance? A client can get away with doing very little, but a halfway-reliable FTP server needs to account for a lot of weirdness in the protocol.

          Of protocols that are still commonly used today, HTTP 1.1 and POP3 are both simpler by far if you're implementing them over raw TCP, and if you're willing to piggyback on SSH you'll find that SFTP is a far more straightforward approach to doing exactly the same thing that FTP does.
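          (For scale, a complete anonymous download session with Python's standard ftplib takes only a handful of lines; the client side really is the easy half. The host and paths here are hypothetical.)

          from ftplib import FTP

          with FTP("ftp.example.org") as ftp:        # hypothetical mirror
              ftp.login()                            # anonymous login
              ftp.cwd("/debian")
              ftp.retrlines("LIST")                  # print a directory listing
              with open("README", "wb") as out:
                  ftp.retrbinary("RETR README", out.write)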

  • Just put some memory in the server, and the files will surely be cached by the OS in the buffer/cache area.

    • by Anonymous Coward

      Hint: the "cache" mentioned in the article refers to several worldwide content-delivery networks. That's a fleet of 10Gbps servers located closer to the data consumers, giving you much better throughput *and* latency, and some DDoS resistance for free (and a lot more than "some" for $$$).

      Why the hell is the parent scored "3"? At least tag it "funny"...

    • by ls671 ( 1122017 )

      Or use an FTP reverse proxy like this one for CDN-type use cases:

      http://www.delegate.org/delega... [delegate.org]

      The linked page shows a real example of a DeleGate configuration on ftp://ftp2.delegate.org [delegate.org] running as a caching FTP reverse proxy. It expires the cache after 1 second because it is just a backup server for ftp://ftp.delegate.org [delegate.org]; that is, it returns cached data only when the target server is down.

  • by faedle ( 114018 ) on Thursday May 04, 2017 @03:09PM (#54356221) Homepage Journal

    uucp now deprecated by ftp.

    • by Anonymous Coward
      I actually worked at a place back in 2007 that still had servers in the wild where UUCP was the only option to transport files. I don't miss the slight line noise on the crappy phone lines running to customer sites corrupting my files 2/3 of the way through!
  • I was pretty excited by the title - thought maybe there would be a wholesale move to HTTPS, given that it's 2017 and all.

    Signed packages are great, but everything should be working towards being pro-privacy and MitM-resistant by this point. Leaking metadata is so 2014.

    • Compatibility, for one. If you want to support downloads from very old systems, then that HTTPS server has to accept insecure ciphers anyway, and use one IP address per hostname (old clients don't support SNI).

      If there's one place where you'd want to allow old systems to connect, it's for downloading system updates.

    • by GuB-42 ( 2483988 )

      The "lack of caching and acceleration" may be one of the reasons to stay with HTTP.

      HTTPS proxy support is very limited by design (because a proxy is a man-in-the-middle). And a caching HTTP proxy is really great for public repositories.
      Also, HTTPS is already supported by the client; it is just that most servers are HTTP-only.

  • ...and nothing of value was lost.

  • by Anonymous Coward

    FTP has been obsolescent ever since NAT became widespread. HTTP passes through NAT with ease since only one TCP connection is established by the client to the server. The FTP way of using two separate connections for commands and data, and making the server connect back to the client, was always problematic. Passive mode FTP, in which the client establishes both connections, was always a lousy kludge to fix a fundamental incompatibility with NAT.

    • FTP was an established enough protocol that most NATs added specific support for it.

      • by tlhIngan ( 30335 )

        FTP was an established enough protocol that most NATs added specific support for it.

        Which promptly broke as soon as you enabled TLS and encrypted the control channel. (Not just FTPS on its dedicated port, but also explicit TLS running over regular FTP.)
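        (Sketch of that situation with Python's ftplib: once the control channel is encrypted, the NAT's FTP helper can no longer see or rewrite PORT/PASV, so passive mode is the only mode with a chance of working. The host name is hypothetical.)

        from ftplib import FTP_TLS

        ftps = FTP_TLS("ftps.example.org")   # hypothetical server offering explicit TLS (AUTH TLS)
        ftps.login()                         # secures the control channel before logging in
        ftps.prot_p()                        # encrypt the data channel as well
        ftps.set_pasv(True)                  # passive is ftplib's default; stated here for emphasis
        ftps.retrlines("LIST")
        ftps.quit()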

    • Today I learnt that FTP has a problem with NAT. No, really, you just taught me that. I've been using FTP for decades (literally), both behind NAT and not behind NAT, and I've never noticed a difference.

      Someone may have done some kludge somewhere at some point, but I never noticed. Why should other users?

      • The thing you may have noticed is that FTP sends your password over the line unencrypted. This is due to the problems with NAT. An encrypted variant of FTP exists, but it doesn't work over NAT at all. Therefore almost no servers support it.

        Unencrypted FTP works fine over NAT. It does require a specific ALG (application layer gateway) on the firewall, but that's been mainstream for decades now.
        • I've never seen an FTP server support the security extensions (though plenty of clients do), even before NAT became widely used. I have, however, used SFTP quite a lot. So even for the security-minded, working alternatives exist. I reiterate: why should users care?

          • SFTP is not FTP; it is file transfer over SSH. So security stands as a reason why users should care and stop using FTP.
            • I know what it is, which is why I said I've never seen an FTP server support security, immediately followed by saying that I've used SFTP a lot.
              The point is, for security there are alternatives.

  • Since I live in Silicon Valley, the fastest way to download a Linux distro during the dial-up days was to download from Australian FTP servers. They and my dial-up UNIX provider had a direct connection to MAE-West [wikipedia.org], which was about five miles from where I lived at the time. It often took a week of overnight downloading to get each CD of a five-CD distro.
  • Until I can list only *xz files, I would like to see (s)ftp continue.
  • Farewell FTP (Score:5, Interesting)

    by duke_cheetah2003 ( 862933 ) on Thursday May 04, 2017 @04:53PM (#54357005) Homepage

    FTP is now going the way of gopher, telnet and the other antiquated protocols from the internet's early days.

    FTP was a neat tool in its day, with lots of anonymous-enabled repositories of free software (and sometimes not-so-free). Gone are the days of hijacking a server with lots of disk to make it a file dump via FTP.

    As more repositories close down, I wonder how they will be replaced. I have not seen much in the way of clearinghouses for free software in web-page format yet. Sure, a lot of Linux distros are hosted on websites, but you rarely find indexes as easily as you could with FTP.

    I'll miss the days of using somewhat questionable 'ftp search' websites that tried to scrape as much info as they could from anonymous-enabled FTP servers around the globe.

    You'll be missed, good ol' FTP.

    • Re:Farewell FTP (Score:5, Insightful)

      by eneville ( 745111 ) on Thursday May 04, 2017 @05:49PM (#54357367) Homepage

      Sure, a lot of Linux distros are hosted on websites, but you rarely find indexes as easily as you could with FTP.

      I'll miss the days of using somewhat questionable 'ftp search' websites that tried to scrape as much info as they could from anonymous-enabled FTP servers around the globe.

      You'll be missed, good ol' FTP.

      Yes, I think the real problem was just how to embed adverts into the listing output. If that problem could be solved then people would welcome FTP back with open arms.

    • FTP for the purpose of downloading is now rare. FTP for uploading files to web hosts is still ubiquitous, more so than SFTP.
