No More FTP At Debian (debian.org) 75
New submitter Gary Perkins writes: It looks like anonymous FTP is officially on its way out. While many public repositories have deprecated it in favor of HTTP, I was rather surprised to see Debian completely drop it from their public site. In a blog post, the team cited FTP's lack of support for caching and acceleration, along with declining usage, as some of the reasons for the decision.
Network admins rejoice! (Score:3, Informative)
Thank goodness, FTP needs to die in a fire. Everyone should be using SCP/SFTP nowadays anyways.
Re:Network admins rejoice! (Score:5, Insightful)
Or https/http, for simply fetching files.
Re:Network admins rejoice! (Score:5, Funny)
Correct, FTP was never intended to transport files.
Re: (Score:2)
In your rush to be pedantic, you missed the point. I said fetching, not generally transporting.
scp and sftp allow you to also send files.
Re: (Score:2)
FTP was never intended to transport files.
Exactly. Everyone knows you use email to move files around. I've just this minute received a file from virusbucket.ru that I'm about to click on.
Re: Network admins rejoice! (Score:1)
Re: (Score:1)
Raw http/https? Are you some sort of savage? Convert it to base64, store it in an XML file and handle the download with a custom es6 app. Bonus points for each line of code installed from a package manager.
Re:Network admins rejoice! (Score:4, Insightful)
While good on paper, what you propose is a lot more complicated. SCP and SFTP are subsystems of SSH, which lack the degree of fine-grained control and capability that most decent FTP servers offer. Rate-limiting is one such example (i.e. rate-limiting only SCP/SFTP but not SSH itself). Network administrators love to think of the situation simply ("yay, I can remove the annoyances relating to TCP ports 20 and 21 for FTP's active and passive modes, now I just pass TCP port 22 and my job is done!"), but those of us in systems who actually have to implement the fine-grained controls with SSH/SCP/SFTP are driven absolutely mad, because even servers like OpenSSH do not provide that granularity.
In short: SCP/SFTP are nowhere close to drop-in replacements for FTP.
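To illustrate the granularity that stock OpenSSH does offer: about the finest control is a Match block pinning a group to the built-in SFTP server. A sketch of an sshd_config fragment (the group name and chroot path are made-up examples); note there is no directive anywhere in here to rate-limit the SFTP subsystem separately from SSH:

```
# sshd_config fragment -- restrict the hypothetical "sftponly" group to
# chrooted SFTP. This is roughly as fine-grained as stock OpenSSH gets.
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```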
Re: (Score:2)
How much adoption have SCP/SFTP seen? I haven't seen it anywhere.
Combine it w/ IPv6, and it should be ideal!
Re: (Score:2)
Everyone should be using SCP/SFTP nowadays anyways.
FTP has file download resume (the REST command, exposed as REGET in many clients), which is quite useful for big archives over a weak connection.
Re: (Score:2)
And http has range requests.
I have a utility that requests the end of a zip, displays the contents, and lets you select files. The compressed data is downloaded, the central directory rewritten, and you saved bytes.
Tell me how useless it is, but this was 20 years ago and range requests were a thing back then.
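The zip trick described above can be sketched end to end. A self-contained Python sketch (local server and made-up file names, so it runs anywhere): fetch only the tail of a zip with a suffix Range request, then walk the central directory records to list the file names without downloading the archive.

```python
import http.server
import io
import struct
import threading
import urllib.request
import zipfile

# Build a small zip to serve (contents are made-up examples).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("readme.txt", "hello")
    z.writestr("data/big.bin", b"\x00" * 10000)
ZIP = buf.getvalue()

class RangeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        rng = self.headers.get("Range", "")
        if rng.startswith("bytes=-"):            # suffix range: last N bytes
            n = int(rng[len("bytes="):])
            body = ZIP[n:]                       # n is negative here
            self.send_response(206)
            self.send_header("Content-Range",
                             f"bytes {len(ZIP)-len(body)}-{len(ZIP)-1}/{len(ZIP)}")
        else:
            body = ZIP
            self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *a):                   # silence request logging
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Client side: fetch only the last 1 KB, then scan for central directory
# records (signature PK\x01\x02; filename length lives at offset 28,
# the name itself at offset 46 of each record).
req = urllib.request.Request(
    f"http://127.0.0.1:{srv.server_port}/a.zip",
    headers={"Range": "bytes=-1024"})
tail = urllib.request.urlopen(req).read()

names, pos = [], 0
while (pos := tail.find(b"PK\x01\x02", pos)) != -1:
    name_len = struct.unpack_from("<H", tail, pos + 28)[0]
    names.append(tail[pos + 46:pos + 46 + name_len].decode())
    pos += 46 + name_len

print(names)  # → ['readme.txt', 'data/big.bin']
srv.shutdown()
```

The archive here is ~10 KB but only 1 KB crosses the wire, which is the whole point of the trick.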
HTTP is faster to connect (Score:5, Interesting)
Every time I used FTP in my sources.list, it was slower to connect. The whole apt-get update process could therefore be twice as long on FTP, compared to HTTP. Even though I guess once connected, the file transfer protocol should be more efficient.
Re:HTTP is faster to connect (Score:5, Informative)
Even though I guess once connected, the file transfer protocol should be more efficient.
There are huge differences between FTP servers in terms of their delivery.
But today's Apache delivers static files extremely fast by telling the kernel to move a file's data onto the network card, so the data are never actually moved to the application. That's fast, and you can still play proxying, cache-freshness and other HTTP tricks on top of this.
Re:HTTP is faster to connect (Score:5, Informative)
> But today's Apache delivers static files extremely fast, by telling the kernel to move a file's data onto the network card, so the data are never actually moved to the application.
Hogwash. Even FreeBSD's accf_http.ko and accf_data.ko modules do not work this way. You cannot bypass "the application" layer -- that's httpd. Possibly you meant moved *by* the application? If so: yes that's true. I'll explain in detail:
I believe what you're trying to speak about is the sendfile(2) syscall. This does not "move file data onto the network card" (that's misleading on your part) -- all it does is allow the kernel itself to transfer data between two file descriptors, rather than httpd doing it. The theory is that this takes less time than the kernel having to copy to userland (httpd), then userland (httpd) doing a read/write, which sends it back up to the kernel, rinse, lather, repeat for every buffer (i.e. it saves a read and a write call per buffer). Also, FreeBSD's sendfile(2) syscall is zero-copy within kernel space, meaning there are no intermediary buffers used to store copies of the data being transferred (I'm unsure about Linux in this regard).
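A rough illustration of the path just described, in Python on Linux (os.sendfile wraps the syscall; the loopback TCP pair here is just a stand-in for a client connection). The point is the single kernel-side transfer with no userland read/write loop over the file data:

```python
import os
import socket
import tempfile

payload = b"x" * 8192  # small enough to fit the socket send buffer

with tempfile.TemporaryFile() as f:
    f.write(payload)
    f.flush()

    # Loopback TCP pair standing in for a client connection.
    lst = socket.socket()
    lst.bind(("127.0.0.1", 0))
    lst.listen(1)
    cli = socket.create_connection(lst.getsockname())
    srv, _ = lst.accept()

    # One syscall: the kernel moves file bytes to the socket; this process
    # never read()s the payload into its own buffers.
    sent = os.sendfile(srv.fileno(), f.fileno(), 0, len(payload))
    srv.close()

    received = b""
    while chunk := cli.recv(65536):
        received += chunk
    cli.close()
    lst.close()

assert sent == len(payload)
assert received == payload
```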
sendfile(2) has a very precarious history of not working reliably with things like NFS and ZFS, or, when it does work, of incurring a major performance hit in kernel-land. This is why Apache has the EnableSendfile global directive, which defaults to off for a very good reason. The same goes for mmap(2) (EnableMmap, which defaults to on; if you're serving from NFS, you need to disable it either for a specific filesystem path using Location, or globally). sendfile(2) is also known to have other problems, such as incorrect TCP checksum offloading calculation on Linux when using IPv6, and it tends to have a transfer size limit of 2GBytes (a signed 32-bit number minus approximately 4096, due to page alignment) even on 64-bit systems, which explains why there's sendfile64(2) on Linux. Refer to the Apache 2.4 documentation if you think I'm bluffing. You'll find *IX FTP servers (ex. proftpd) often allow disabling of sendfile as well, for the same reasons. In early 2005 there was even a serious security hole in sendfile(2) on FreeBSD 4.x/5.x (see FreeBSD-SA-05:02.sendfile).
It's up to your systems administrator to decide if sendfile(2) is safe for use by whatever program might be using it. In general, it's best to default to not using it unless you know factually the underlying filesystems and applications work reliably with it. I've read a few anecdotal blogs talking about how sendfile(2) on OS X (based on FreeBSD, but these days deviates quite severely) is quite broken as well.
Your other descriptions of use of things like proxying aren't really relevant from a performance perspective (re: "more efficient") -- if anything all this does is waste more resources -- but your point about proper use of HTTP caching directives is relevant as long as the client honours such (ex. ETags, If-Modified-Since, etc.).
Re: (Score:2)
Possibly you meant moved *by* the application?
Are you just being obtuse? It seems perfectly clear that buchner.johannes was referring to the elimination of a copy-to-userland copy-back-to-the-kernel roundtrip.
As you say, that's just what happens. [freebsd.org]
Re: (Score:2)
Wow. +5 Anonymous Coward
Just one question.
Algernon, is that you?
Re: (Score:2)
Re:HTTP is faster to connect (Score:4, Interesting)
FTP convoluted? Seriously? I have recently written an FTP client, it was by far the easiest protocol I have ever implemented. Hell, XMODEM is more complicated.
Re: (Score:2)
Re: (Score:1)
Have you written an FTP server, by any chance? A client can get away with doing very little, but a halfway-reliable FTP server needs to account for a lot of weirdness in the protocol.
Of protocols that are still commonly used today, HTTP 1.1 and POP3 are both simpler by far if you're implementing them over raw TCP, and if you're willing to piggyback on SSH you'll find that SFTP is a far more straightforward approach to doing exactly the same thing that FTP does.
FTP caching (Score:1)
Just put enough memory in the server and the files will be cached by the OS in the buffer/cache area.
Re: (Score:2)
Hint: the "cache" mentioned in the article means several world-wide content-delivery networks. That's a fleet of 10Gbps servers located closer to the data consumers, giving you much better throughput *and* latency, plus some DDoS resistance for free (and a lot more than "some" for $$$).
Why the hell is the parent scored "3"? At least tag it "funny"...
Re: (Score:2)
Just use a reverse-proxy and force caching duh!
Re: (Score:2)
Or use a ftp reverse proxy like this one for cdn type use cases:
http://www.delegate.org/delega... [delegate.org]
The following is a real example of a DeleGate configuration on ftp://ftp2.delegate.org [delegate.org], running as a caching FTP reverse proxy. It expires the cache after 1 second because it is just a backup server for ftp://ftp.delegate.org [delegate.org]; that is, it returns cached data only when the target server is down.
NEWS FLASH!@! (Score:4, Funny)
uucp now deprecated by ftp.
Re: (Score:1)
Privacy, Authentication? (Score:2)
I was pretty excited by the title - thought maybe there would be a wholesale move to HTTPS, given that it's 2017 and all.
Signed packages are great, but everything should be working towards being pro-privacy and MitM-resistant by this point. Leaking metadata is so 2014.
Re: (Score:3)
Compatibility, for one. If you want to support downloads from very old systems, then that HTTPS has to allow insecure ciphers anyway, and to use one IP address per hostname (old clients don't support SNI).
If there's one place where you'd want to allow old systems to connect, it's for downloading system updates.
Re: (Score:3)
The "lack of caching and acceleration" may be one of the reasons to stay with HTTP.
HTTPS proxy support is very limited by design (because a proxy is a man-in-the-middle). And a caching HTTP proxy is really great for public repositories.
Also, HTTPS is supported by the client; it's just that most servers are HTTP-only.
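For what it's worth, pointing apt at a caching HTTP proxy is a one-line config fragment (the file path and proxy hostname below are made-up examples; apt-cacher-ng conventionally listens on port 3142):

```
# /etc/apt/apt.conf.d/01proxy (example path)
# Route apt's HTTP fetches through a local caching proxy such as
# apt-cacher-ng or squid; the hostname here is hypothetical.
Acquire::http::Proxy "http://apt-cache.example.lan:3142/";
```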
As Aristotle said... (Score:2)
...and nothing of value was lost.
NAT killed the FTP star (Score:2, Insightful)
FTP has been obsolescent ever since NAT became widespread. HTTP passes through NAT with ease since only one TCP connection is established by the client to the server. The FTP way of using two separate connections for commands and data, and making the server connect back to the client, was always problematic. Passive mode FTP, in which the client establishes both connections, was always a lousy kludge to fix a fundamental incompatibility with NAT.
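The kludge is visible right in the protocol: the server's 227 reply to PASV embeds the data-connection IP and port as six decimal numbers, which is exactly the payload a NAT gateway has to find and rewrite inside the control channel. A small Python sketch of parsing that reply (the address is a made-up example):

```python
import re

def parse_pasv(reply: str) -> tuple[str, int]:
    """Extract (host, port) from a '227 Entering Passive Mode' reply.

    The reply encodes the endpoint as (h1,h2,h3,h4,p1,p2) where the
    port is p1 * 256 + p2.
    """
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (192,168,1,10,19,136)"))
# → ('192.168.1.10', 5000)
```

When the control channel is encrypted (AUTH TLS), the NAT gateway can no longer see or rewrite this reply, which is the breakage the sibling comment describes.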
Re: (Score:2)
FTP was an established enough protocol that most NATs added specific support for it.
Re: (Score:2)
Which promptly broke when you enabled TLS and encrypted the control channel. (Not just FTPS, but this is running on regular FTP).
Re: (Score:3)
Today I learnt that FTP has a problem with NAT. No, really, you just taught me that. I've been using FTP for decades (literally), both behind NAT and not behind NAT, and I've never noticed a difference.
Someone may have implemented some kludge somewhere at some point, but I never noticed. Why should other users?
Re: (Score:2)
Unencrypted FTP works fine over NAT. It does require a specific ALG (application layer gateway) on the firewall, but that's been mainstream for decades now.
Re: (Score:2)
I've never seen an FTP server support the security extensions (though plenty of clients do), even before NAT became widely used. I have, however, used SFTP quite a lot. So even for the security-minded, alternatives that work exist. I reiterate: why should users care?
Re: NAT killed the FTP star (Score:1)
Re: (Score:2)
I know what it is, which is why I said I've never seen an FTP server support security, immediately followed by saying I've used SFTP a lot.
Point is, for security there are alternatives.
*sniff* (Score:2)
Re: (Score:3)
You forgot to mention the part about how you would jack off to photos of Mae West while you waited for the download, and you would pass out from exertion before reaching climax, because your sexual prowess is so bad that you can't even perform sexually for yourself.
I was never into ASCII porn.
Hand cramps, presumably.
Pipe clamps worked better.
Re: (Score:3, Informative)
wget
Re: (Score:2)
how else would you download Firefox without IE/Edge?
With another computer, of course.
Re:Debian S released (Score:4, Informative)
How else would you download Firefox without IE/Edge:
In Powershell:
Invoke-WebRequest -OutFile Firefox.exe "https://download.mozilla.org/?product=firefox-53.0-SSL&os=win64&lang=en-US"
But I would rather use Edge than Powershell.
Re: (Score:2)
how else would you download Firefox without IE/Edge?
I'm trying to come up with a scenario that would require someone to never start IE/Edge even if just for downloading Firefox. Maybe you can help me with that.
*xz (Score:1)
Farewell FTP (Score:5, Interesting)
Along with many other antiquated protocols, FTP is now going the way of gopher, telnet and other such early protocols the internet used.
FTP was a neat tool in its day, with lots of anonymous-enabled repositories of free software (and sometimes not-so-free). Gone are the days of hijacking a server with lots of disk to make it a file dump via FTP.
As more repositories close down, I wonder how they will be replaced. I have not seen much in the way of clearinghouses for free software in web-page format yet. Sure, a lot of Linux distros are hosted on websites, but you rarely find indexes as easily browsable as FTP's.
I'll miss the days of using somewhat questionable 'ftp search' websites that tried to scrape as much info as they could from anonymous-enabled FTP servers around the globe.
You'll be missed, good ol' FTP.
Re:Farewell FTP (Score:5, Insightful)
Sure, a lot of Linux distros are hosted on websites, but you rarely find indexes as easily browsable as FTP's.
I'll miss the days of using somewhat questionable 'ftp search' websites that tried to scrape as much info as they could from anonymous-enabled FTP servers around the globe.
You'll be missed, good ol' FTP.
Yes, I think the real problem was just how to embed adverts into the listing output. If that problem could be solved then people would welcome FTP back with open arms.
Re: (Score:2)
FTP for the purpose of downloading is now rare. FTP for uploading files to web hosts is still ubiquitous, more so than SFTP.
Re: (Score:2)
Windows 98 with the bundled IE5 was really good for this. An explorer.exe window had basically three modes of operation: web browser, file manager, and FTP client, the last looking about identical to browsing local files.
So, you've got a D:\crap window open, a D:\foo window perhaps, and an Internet Explorer window you use for browsing some file archive website or whatever else. You can use the Internet Explorer window to browse an FTP site you've found, or turn the D:\foo window into an FTP client by hitting al