The Internet | Networking | Technology

FTP Is 40 Years Old

An anonymous reader writes "FTP celebrates its 40th birthday tomorrow. Originally launched as the RFC 114 specification, which was published on 16 April 1971, FTP is arguably even more important today than when it was born. Frank Kenney, vice president of global strategy for US managed file transfer company Ipswitch, said that the protocol we know as FTP today is 'a far cry from when Abhay Bhushan, a student at MIT, wrote the original specifications for FTP.' According to Kenney, the standard has grown from 'a simple protocol to copy files over a TCP-based network [to] a sophisticated, integrated model that provides control, visibility, compliance and security in a variety of environments, including the cloud.'"
This discussion has been archived. No new comments can be posted.

  • Oh please (Score:5, Interesting)

    by Anonymous Coward on Friday April 15, 2011 @07:26PM (#35835692)

    FTP is a hideous protocol. The client connects to the server with one TCP connection, and then when a file (or directory listing) is requested, the server opens up another TCP connection back to the client. This is a nightmare for firewalls. There is also passive mode where the client initiates the second connection to the server, but it is only slightly less hideous.
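
    A minimal sketch of the two modes with Python's standard ftplib (hostname hypothetical):

        from ftplib import FTP

        ftp = FTP('ftp.example.org')   # control connection on port 21
        ftp.login()                    # anonymous login

        # Active mode: the SERVER connects back to a port we listen on --
        # the firewall-hostile behaviour described above.
        ftp.set_pasv(False)
        ftp.retrlines('LIST')          # listing arrives over the data connection

        # Passive mode: WE open the second connection, to a port the server
        # advertises in its 227 reply. Only slightly less hideous.
        ftp.set_pasv(True)
        ftp.retrlines('LIST')

        ftp.quit()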

    As awful as HTTP is, it is infinitely better than FTP. Sadly, HTTP is mostly one way, but these days, for anything that isn't being broadcast to the public (the web), you are better off using ssh/scp.

    Let FTP die already.

    • mod parent up (Score:2, Insightful)

      by Uksi ( 68751 )

      Unless TFA is talking about SFTP (which it isn't), there is no reason to laud FTP, other than that it was a straightforward protocol and it served us well back in the day.

      • Re:mod parent up (Score:4, Informative)

        by jd ( 1658 ) <imipak AT yahoo DOT com> on Friday April 15, 2011 @08:58PM (#35836256) Homepage Journal

        I agree (though if you are going to consider sftp, please also consider ftps), but it has been surprisingly durable. Rivals, historically, have included fsp, scp, rsync, uucp, WAIS, gopher and ftpmail. Some, like WAIS and gopher, also provided a far superior interface to the traditional FTP client.

        Of these, scp and rsync are the only ones still in use today, and I don't know of any anonymous FTP sites that provide scp, though I think kernel.org provides rsync.

        About the only significant change to FTP since it began was that people used to use archie to find programs. (Archie, for those too young to remember, was a search engine specifically for anonymous FTP sites. You gave it a regexp, it gave you every site that had files that matched and the full directory path of those files. Because it was specialized, there was no risk of clutter. Equally, there was no chance it would survive into the era of web crawlers and generalized search engines.

        • Re: (Score:2, Funny)

          by Anonymous Coward

          I agree (though if you are going to consider sftp, please also consider ftps), but it has been surprisingly durable. Rivals, historically, have included fsp, scp, rsync, uucp, WAIS, gopher and ftpmail. Some, like WAIS and gopher, also provided a far superior interface to the traditional FTP client.

          Of these, scp and rsync are the only ones still in use today, and I don't know of any anonymous FTP sites that provide scp, though I think kernel.org provides rsync.

          About the only significant change to FTP since it began was that people used to use archie to find programs. (Archie, for those too young to remember, was a search engine specifically for anonymous FTP sites. You gave it a regexp, it gave you every site that had files that matched and the full directory path of those files. Because it was specialized, there was no risk of clutter. Equally, there was no chance it would survive into the era of web crawlers and generalized search engines.

          ) . . . . . . whew!

        • ftps wraps up all the issues with firewalling FTP and makes them 1,000 times worse by adding encryption. You still have two connections, you still have pseudo-random ports that are such a joy to firewall and load-balancer admins everywhere, and you pile on the joy of encrypting the port number, so you have to terminate the SSL session at each intervening firewall. Pure, unadulterated joy if you sell CPUs for routers. Take a 10-foot pole and gingerly push FTPS away... It is a really bad idea. secsh
    • Re:Oh please (Score:4, Informative)

      by BagOBones ( 574735 ) on Friday April 15, 2011 @07:42PM (#35835800)

      FTP is evil for simple firewalls, but most advanced firewalls can rewrite the control commands or read them to open the right ports.
      SFTP is something totally different, but since it uses a tunnel it isn't that bad for firewalls.
      FTPS is a nightmare! It has the random-port problems of FTP but also encrypts the commands, so there is no way for the firewall to figure out what ports will be used.

      • Huh? I use FTPS all the time with FileZilla Server and connect with a variety of FTPS clients, and I never have issues. This is through a variety of firewalls: expensive Ciscos and SonicWalls, and cheapo Netgears and Linksys boxes.

        Both FTP and FTPS require passive mode to work properly and a passive range forwarded. That's it. Once configured correctly on the server side, there's nothing else to do.
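
        For reference, the client end of such an FTPS session is just a few lines; a minimal sketch with Python's standard ftplib (hostname and credentials hypothetical):

            from ftplib import FTP_TLS

            ftps = FTP_TLS('ftp.example.org')
            ftps.login('user', 'password')   # ftplib issues AUTH TLS before sending credentials
            ftps.prot_p()                    # encrypt the data channel as well
            ftps.set_pasv(True)              # passive mode, per the advice above
            ftps.retrlines('LIST')
            ftps.quit()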

        Honestly, it scares me that vanilla FTP is so widely used and the de facto way to transfer files for so many services.

        • by ibbie ( 647332 )

          Honestly, it scares me that vanilla FTP is so widely used and the de facto way to transfer files for so many services. It's completely unencrypted.

          Er, not everything needs to be encrypted. Having it as an option is great, but for non-sensitive data (e.g., source code that I'm already making available to the world) I'll take the protocol with the lower overhead.

          • by Goaway ( 82658 )

            Your password is usually pretty sensitive data.

          • Yes, many things can be left unencrypted; your password is not one of them. Using FTP for anything other than anonymous FTP is irresponsible and stupid.
            • by Leebert ( 1694 ) *

              Yes, many things can be left unencrypted; your password is not one of them. Using FTP for anything other than anonymous FTP is irresponsible and stupid.

              Eh, there are ways to do it that are reasonable in terms of risk. For example, one-time passwords.

              I wouldn't recommend it, but to dismiss it outright for that reason isn't correct.

              (Disclaimer: As a former firewall administrator in a scientific computing facility, I hate FTP.)

          • "Having it as an option is great, but for non-sensitive data (e.g., source code that I'm already making available to the world) I'll take the protocol with the lower overhead."

            I would do that too.

            But to me, the lower overhead doesn't come from FTP but from SCP. It's just like Telnet.

            More and more, the expensive part of the equation is the labour hours; since FTP is more or less the same as SCP (or SSH the same as Telnet), I prefer knowing and using one tool better than two.

    • Re: (Score:2, Insightful)

      by DarkOx ( 621550 )

      Defense in depth and all, but really, firewalls suck and break the Internet. It's not FTP that is broken, it's the systems that need firewalling. That said, there is no operating system in common use, Linux included, that should not be behind a firewall, at the very least a local software-based one.

      The control channel being on a separate socket from the data channel allows FTP to do things like FXP, where a client can broker a transfer between two servers without needing to participate in it.

      • by smash ( 1351 )

        The control channel being on a separate socket from the data channel allows FTP to do things like FXP, where a client can broker a transfer between two servers without needing to participate in it.

        Which is a hideously broken idea that is abused by spammers, etc., the world over.

    • Exactly; this is an interesting analysis that proves the point: http://mywiki.wooledge.org/FtpMustDie [wooledge.org]
    • Active mode FTP is hideous where NAT is involved, because it requires the server to initiate an active connection to the client.

      Sadly, passive mode is horrible because it uses ephemeral ports on both ends, so you have no way to easily allow ftp and nothing else.

      This leaves you in the situation of absolutely requiring an ftp proxy, because you only allow active mode on site, but passive mode is needed to get off site...

      FTP is a nightmare. It has only remained because A) no command line HTTP file transfer clients ever sprang up, and B) The OpenSSH folks didn't allow you to choose unencrypted data connections for "anonymous" and non sensitive data. Either of the two would blow FTP out of the water so fast it would make your head spin. FTP is just that horrible.

      • Re:Oh please (Score:5, Informative)

        by Mad Merlin ( 837387 ) on Friday April 15, 2011 @08:55PM (#35836244) Homepage

        no command line HTTP file transfer clients ever sprang up

        Let me introduce you to wget [gnu.org] and curl [curl.haxx.se].

      • by hawguy ( 1600213 )

        FTP is a nightmare. It has only remained because A) no command line HTTP file transfer clients ever sprang up, and B) The OpenSSH folks didn't allow you to choose unencrypted data connections for "anonymous" and non sensitive data. Either of the two would blow FTP out of the water so fast it would make your head spin. FTP is just that horrible.

        Why does OpenSSH need an unencrypted option (for sFTP?) to make it popular? What advantage is there to having unencrypted file transfers?

        I don't think I've ever owned a PC that couldn't encrypt/decrypt at the speed of my WAN connection, and my fastest LAN transfer uses less than 15% of my CPU.

    • Re: (Score:3, Insightful)

      by ukemike ( 956477 )
      Lots of people grousing about how awful FTP is. I bet not one of you will ever write a piece of software that is still hugely popular and under active development 40 years later.
      • Re: (Score:3, Insightful)

        by lenroc ( 632180 )

        Lots of people grousing about how awful FTP is. I bet not one of you will ever write a piece of software that is still hugely popular and under active development 40 years later.

        Except, FTP isn't a piece of software. It's a protocol. As far as I can tell from a cursory search, no FTP daemon written 40 years ago is still in wide use.

      • Lots of people grousing about how awful FTP is. I bet not one of you will ever write a piece of software that is still hugely popular and under active development 40 years later.

        By "piece of software" I think you mean "communications protocol", and I take exception to your statement.

        Sincerely,
        Arthur C. W. Aldis

        P.S. Yes, I am 200 years old, and I know how to use a web browser.

    • Except it was designed back before people had firewalls, or had even thought about firewalls. It's easy to criticize something in hindsight (why'd they make the first wheel out of stone? that's stupid).

    • by Spykk ( 823586 )
      Don't forget that the format of the directory listing isn't defined in the spec, so it varies from server to server. Try pointing a GUI-based FTP client at an IBM 4694 controller sometime...
    • by mcrbids ( 148650 )

      Sadly HTTP is mostly one way

      WTF are you talking about? HTTP requires an inbound connection from the client to the server, and the session is always controlled by the client, but the data can flow as strongly as desired in either direction. I routinely use an HTTPS-based file transfer tool that I wrote some 10 years ago to transfer files of any size, well into the GBs from the client to the server. (it's called POST and isn't particularly tough to do)

      Simple. Secure. Reliable. Why does SFTP have to be such a pain in the neck to do right?
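
      The poster's actual tool isn't shown, but the core of such an upload is only a few lines; a sketch with Python's standard library (endpoint and filename hypothetical):

          import urllib.request

          # POST the raw bytes of a file to an HTTPS endpoint; the
          # server-side handler that stores them is assumed to exist
          with open('backup.tar.gz', 'rb') as f:
              req = urllib.request.Request(
                  'https://files.example.org/upload',   # hypothetical URL
                  data=f.read(),
                  headers={'Content-Type': 'application/octet-stream'},
                  method='POST')

          with urllib.request.urlopen(req) as resp:
              print(resp.status, resp.reason)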

      • by hawguy ( 1600213 )

        Sadly HTTP is mostly one way

        WTF are you talking about? HTTP requires an inbound connection from the client to the server, and the session is always controlled by the client, but the data can flow as strongly as desired in either direction. I routinely use an HTTPS-based file transfer tool that I wrote some 10 years ago to transfer files of any size, well into the GBs from the client to the server. (it's called POST and isn't particularly tough to do)

        I think that's the problem - you use a tool that you wrote. Whenever that tool is available pretty much out of the box for nearly every operating system out there, then HTTP will replace FTP for file transfers.

        • Sadly HTTP is mostly one way

          WTF are you talking about? HTTP requires an inbound connection from the client to the server, and the session is always controlled by the client, but the data can flow as strongly as desired in either direction. I routinely use an HTTPS-based file transfer tool that I wrote some 10 years ago to transfer files of any size, well into the GBs from the client to the server. (it's called POST and isn't particularly tough to do)

          I think that's the problem - you use a tool that you wrote. Whenever that tool is available pretty much out of the box for nearly every operating system out there, then HTTP will replace FTP for file transfers.

          I know a lot of architectural practices that operate FTP servers so they can access files remotely. They could use something like WebDAV instead. I think it is just momentum that ties them to old protocols.

    • by Alioth ( 221270 )

      I have to agree. For non-authenticated transfers, HTTP and rsync are better; for authenticated transfers, sftp/scp is better.

      Fortunately at work virtually all the companies we deal with now use scp/sftp instead of ftp.

  • The cloud? (Score:5, Insightful)

    by socsoc ( 1116769 ) on Friday April 15, 2011 @07:28PM (#35835700)
    Do we really need to bring buzzwords like the cloud into this? It's a file transfer protocol, aptly named, for transferring data to another system. It could be in a cloud or in a cave for all I care, as long as it has port 21 open.
    • by mikkelm ( 1000451 ) on Friday April 15, 2011 @07:42PM (#35835802)

      Of course we do. It's imperative in today's business environment to deploy file transfer protocols based on integrated models that work in the cloud with compliance. Just imagine what FTP was like before it had compliance in the cloud. I don't get how anyone got anything done.

    • Do we really need to bring buzzwords like the cloud into this?

      Well let's see.. the guy who said it is...

      vice president of global strategy

      Ah well there you are. He's a VP. So yes, I believe he really does need to bring buzzwords into everything. He's probably contractually obligated to.

  • I'd instead say "and in Internet years, that's about 400 years, and it shows. Retire the poor thing already!" It's a royal pain for firewalls and it sends text in the clear. Move into the 21st century and use scp...

  • by suso ( 153703 ) * on Friday April 15, 2011 @07:33PM (#35835742) Journal

    Now die!

    • It is dead. FTP was once the majority of all bandwidth used on the Internet. It was overtaken by http... in 1995! [ai4fun.com]
      • by jd ( 1658 )

        But almost nobody sends files via http. Way too primitive. FTP is still king there, followed by torrent.

        • Rapidshare, Megaupload, Oron, upload.me, DepositFiles, and many others would disagree with you. FTP is not king of the file transfer world. How many times recently have you used it, compared to downloading an update via an HTTP connection from some update server somewhere?

          I have used FTP exclusively for connecting to a web server and putting files on there, and I do that maybe once every two months. Now, at an average of 100 hits per day, I wonder... how much of my website gets transferred via FTP compared to HTTP?

    • by Eil ( 82413 )

      Don't mind me, just karma-whoring: http://mywiki.wooledge.org/FtpMustDie [wooledge.org]

      • Loved your page, most of it is bang on, but your first point, about ASCII being a bad default, is bogus. When FTP was invented, you had IBMs (EBCDIC) talking to DEC PDPs (ASCII) or Control Data machines (some 6-bit vendor charset); character sets were vendor-specific. So getting a correct ASCII conversion was the only way to get any data across a link. FTP said: convert your random line format and charset to ASCII, and the other guy will do the same on his end. Sending a binary from
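
        For illustration, the two transfer types as they look from Python's standard ftplib today (host and filenames hypothetical):

            from ftplib import FTP

            ftp = FTP('ftp.example.org')
            ftp.login()

            # retrlines sends TYPE A first: ASCII mode, with line endings
            # translated in transit -- the cross-vendor conversion above
            ftp.retrlines('RETR README.TXT', print)

            # retrbinary sends TYPE I ("image"): bytes arrive untouched
            with open('archive.tar', 'wb') as f:
                ftp.retrbinary('RETR archive.tar', f.write)

            ftp.quit()
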
  • by dkleinsc ( 563838 ) on Friday April 15, 2011 @07:34PM (#35835750) Homepage

    When software gets to be around 40 years of age, wrinkles that were once minor become more and more apparent, what was once new and exciting isn't so much anymore, and it gets rather set in its ways and resistant to change. Decisions made in its youth often become a cause of later regret, and there's a certain amount of jealousy of those who are now doing the same job it once did, but in a snazzier way.

    But at the same time, it's likely to be far more established and dependable than its younger counterparts. You can count on it getting the job done, one way or another. It won't be flashy, but it will work.

    • by McGiraf ( 196030 )

      "But at the same time, it's likely to be far more established and dependable than its younger counterparts. You can count on it getting the job done, one way or another. It won't be flashy, but it will work.
      "

      That's what she said.

    • by Goaway ( 82658 )

      But at the same time, it's likely to be far more established and dependable than its younger counterparts

      But I thought we were talking about FTP?

    • by Inda ( 580031 )
      Amen to that.

      First thing I did on the wife's Android phone was install an FTP client/server. Why not?
  • Biased much? (Score:5, Insightful)

    by BitHive ( 578094 ) on Friday April 15, 2011 @07:35PM (#35835754) Homepage

    Asking the vice president of global strategy of a company built around its FTP client to comment on the relevance of FTP is a bit like asking an Adobe marketing executive about the importance of Flash, no?

    • It's biased, yes. But that doesn't necessarily mean "wrong" or "without value."

      If somebody can build a business around FTP, I think that's a testament to its relevancy right there. And who better to comment on it than somebody who deals with it and clients who use it every day?

      I wouldn't ask the guy if his product is the best on the market, but as a comment on the underlying protocols... why not?

      • Anyone can build a business around any concept, regardless of value or worth. Success isn't necessarily a testament to the value of the product or its constituent elements. This is a good example of that.

    • I don't know. Recently, the Blackberry CEO was invited to comment on the importance of security for the BBC, but he just stood up and left. Go figure!
  • by Rantastic ( 583764 ) on Friday April 15, 2011 @07:46PM (#35835840) Journal

    According to Kenney, the standard has grown from 'a simple protocol to copy files over a TCP-based network [to] a sophisticated, integrated model that provides control, visibility, compliance and security in a variety of environments, including the cloud.

    Actually, FTP predates TCP by 10 years and 679 RFCs. Hint: TCP is defined in RFC 793.

  • by nomadic ( 141991 ) <nomadicworld.gmail@com> on Friday April 15, 2011 @07:57PM (#35835904) Homepage
    Kermit is the way to go.
  • IHFTP

  • Not everything needs to be secure; every OS has an FTP client built-in, and FTP works with minimal overhead. It's just one tool to do a job.
    • by arth1 ( 260657 )

      every OS has an FTP client built-in

      Um, no. Not anymore. The ftp server went several years ago, and the ftp client has started disappearing too.

      tftp is still alive, but that's a different type of beast.
      ftp, on the other hand, is now on life support and will presumably and hopefully die before uucp does.

  • RIP (Score:4, Funny)

    by goodmanj ( 234846 ) on Friday April 15, 2011 @08:55PM (#35836242)

    FTP died in 1993, murdered by httpd and the Mosaic browser. I watched it die. I shed no tears.

  • by drfreak ( 303147 ) <dtarsky@gma[ ]com ['il.' in gap]> on Friday April 15, 2011 @09:20PM (#35836372)

    but I need to say SFTP is the only option in today's world of HIPAA and net neutrality. FTP-SSL, still, is just another layer over the already ubiquitous FTP.

    Yes, SFTP is yet another wrapper (around SSH rather than FTP), but it is much more secure than FTP over SSL. SSL offers only limited encryption options; SFTP, on the other hand, can authenticate with public keys and negotiate encryption keys of 1024 bits or higher.
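
    For comparison, a minimal SFTP sketch using the third-party paramiko library (host and credentials hypothetical):

        import paramiko

        transport = paramiko.Transport(('files.example.org', 22))
        transport.connect(username='user', password='secret')

        # SFTP runs as a subsystem of SSH: key exchange, ciphers and
        # integrity checking all come from the SSH transport itself
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put('report.pdf', '/upload/report.pdf')
        sftp.close()
        transport.close()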

  • The wrong FTP (Score:2, Informative)

    by Anonymous Coward

    The FTP we know today originated in RFC 765, published June 1980, and was designed to work over TCP. RFC 114 defines a completely different protocol for file transfer that has nothing to do with FTP.

  • The only nonsucky thing about FTP is that you can use FXP [wikipedia.org] to transfer files between two remote servers without piping them through your client. For example, suppose you have FTP logins on two servers and each has a nice, fast Internet connection. You are on dialup and need to copy database backups from one server to the other. You can use FTP to tell the first to upload to the second's IP address, and tell the second to receive a file from the first's IP address. Nothing but the status messages go through your poor local modem.

    You can do the same by ssh'ing to the first server and scp'ing a file from it to the second, but that requires generating keypairs and copying the public keys around. If you're nitpicky about having separate keypairs on each SSH client machine (and you really should be!), and you have 20 hosts, then you'd have to copy 19 public keys to each machine. With FTP+FXP, you need an FTP login on each of the hosts. That's especially nice if the sending server is a public repository where you don't have anything but anonymous FTP access.

    This isn't exactly a killer feature for most people, but it's kind of slick if you ever actually need it.
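
    A rough sketch of that dance with Python's standard ftplib (hosts, credentials and filename hypothetical; both servers must permit site-to-site transfers, which, as the replies below note, many now refuse):

        from ftplib import FTP

        src = FTP('ftp.example.org')      # server holding the file
        src.login()                       # anonymous is fine on the source
        dst = FTP('ftp.example.net')      # server that should receive it
        dst.login('user', 'password')

        resp = src.sendcmd('PASV')        # source opens a data listener;
        # the reply looks like: 227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)
        hostport = resp[resp.index('(') + 1:resp.index(')')]
        dst.sendcmd('PORT ' + hostport)   # point the destination at it

        dst.sendcmd('STOR backup.sql')    # destination connects and waits...
        src.sendcmd('RETR backup.sql')    # ...and the source sends straight to it
        src.voidresp()                    # wait for "226 Transfer complete"
        dst.voidresp()
        src.quit(); dst.quit()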

    • by Jaime2 ( 824950 )
      Unfortunately, the same feature makes it possible to cause an FTP server to mount an attack on any server on your behalf.
      • by smash ( 1351 )

        Or send fairly difficult to trace spam, etc.

        It's a theoretical advantage with a workaround, for that one time in a thousand that you need the functionality, at the cost of a commonly abused security issue.

    • by vsync64 ( 155958 )

      You can do the same by ssh'ing to the first server and scp'ing a file from it to the second, but that requires generating keypairs and copying the public keys around.

      Check out SSH Agent Forwarding some time.

    • but that requires generating keypairs and copying the public keys around. If you're nitpicky about having separate keypairs on each SSH client machine (and you really should be!), and you have 20 hosts, then you'd have to copy 19 public keys to each machine.

      This is nonsensical.

      A) You should have ONE ssh key, which is password-protected.
      B) You start ssh-agent, ssh-add your key, and use agent forwarding (enabled by default). You can now jump around between any and all SSH/SFTP servers freely.

      You only feel

    • I dunno man, I kind of like being able to restart aborted transfers, too. I wish the W3C would tack that onto HTTP.
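
      FTP's restart is the REST command, and a resume is a few lines with Python's ftplib (host and filename hypothetical); HTTP's Range header, for what it's worth, offers much the same on servers that support it:

          import os
          from ftplib import FTP

          local = 'big.iso'
          offset = os.path.getsize(local) if os.path.exists(local) else 0

          ftp = FTP('ftp.example.org')
          ftp.login()

          # rest= makes ftplib send "REST <offset>" so the server
          # resumes the transfer from that byte onward
          with open(local, 'ab') as f:
              ftp.retrbinary('RETR big.iso', f.write, rest=offset)

          ftp.quit()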

  • by hawguy ( 1600213 ) on Friday April 15, 2011 @09:29PM (#35836416)

    As recently as 5 years ago, I set up an FTP server for use by a number of financial firms to send orders into a specialized stock trading system.

    $100M worth of orders were FTPed into that system using PGP encrypted text files (with public key fingerprints verified via telephone to make sure that all of the keys were valid). IP filtering was used to give a small additional layer of security.

    This system was set up in a short period of time (3 weeks from inception, including writing the file spec and setting up the servers), and FTP was the one thing that all parties could count on having (client operating systems included Windows, various flavors of Unix, IBM VM, and I think one customer had Tandem NonStop). Pushing files via HTTP PUT is possible, but it's a lot easier to script an FTP file transfer.
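
    The kind of push script in question is only a few lines with Python's ftplib; a sketch (host, credentials and filename hypothetical; the PGP encryption would happen before this step):

        from ftplib import FTP

        # upload one pre-encrypted order file -- trivial to run from cron
        ftp = FTP('orders.example.org')
        ftp.login('firm42', 'secret')
        with open('orders.txt.pgp', 'rb') as f:
            ftp.storbinary('STOR orders.txt.pgp', f)
        ftp.quit()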

  • After 40 years, the protocol is known well enough that developers can make it work on just about any system that needs file transfer. It's not exactly the fastest method, but I can transfer media from my PC to my PS3. I wouldn't be surprised if a few of the file transfer software packages for media players use some implementation of FTP.
  • Slow news day?

  • With ftp I can download whole folders with boatloads of files and more folders, and the rest of my network stays up just fine. Transfer speeds are top notch. It lets other traffic through. BitTorrent? I might put it on when I crash for the night.

    ftp through a nice private tunnel.. hasn't failed me yet..

  • FTP ... provides ... security ...

    I viewed the conversation on this topic mostly to see the revulsion at that series of words. There isn't enough. I would be pissed to see that statement anywhere, and would probably mention something about fact-checking. It's on the front page of Slashdot. There's no way timothy didn't look at it, recognize that it is a bald-faced lie that everyone here would see through, and endorse it anyway. What the hell?

    FTP hasn't evolved. It's been replaced. As others have pointed out, there's https for the masses, a

    • by epyT-R ( 613989 )

      https only works one way and doesn't offer authentication of any kind, really... it's just http wrapped in SSL.
      scp/sftp is beyond the average GUI user.
      torrent requires that .torrent files be sent to the user before he can download... it distributes load, yes, but it's not meant to let someone grab some files from a host somewhere.

      ftps is as secure as any other secure socket connection, assuming it was implemented well. Badly implemented SSL is not restricted to ftps.

  • by ftexperts ( 2042636 ) on Friday April 15, 2011 @10:26PM (#35836638) Homepage

    Here's a little more background on the various generations of the FTP protocol.

    First Generation (1971-1980)

    The original specification for FTP (RFC 114) was published in 1971 by Abhay Bhushan of MIT. This standard introduced many concepts and conventions that survive to this day, including: ASCII vs. "binary" transfers; username authentication (passwords were "elaborate" and "not suggested" at this stage); "Retrieve", "Store", "Append", "Delete" and "Rename" commands; partial and resumable file transfer; a protocol "designed to be extendable"; two separate channels, one for "control information" and the other for "data"; and unresolved character translation and blocking-factor issues.

    Second Generation (1980-1997)

    The second generation of FTP (RFC 765) was rolled out in 1980 by Jon Postel of ISI. This standard retired RFC 114 and introduced more concepts and conventions that survive to this day, including: a formal architecture for separate client/server functions and two separate channels; site-to-site transfers; passive (a.k.a. "firewall friendly") transfer mode; and the 3-digits-followed-by-text command-response convention. RFC 765 was in turn replaced by RFC 959 (which formalized directory navigation) in 1985.
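
    You can still watch that three-digit convention on the wire with ftplib's debug mode (host hypothetical):

        from ftplib import FTP

        ftp = FTP('ftp.example.org')
        ftp.set_debuglevel(1)   # echo every command and its numbered reply
        ftp.login()             # e.g. "331 Please specify the password."
        ftp.quit()              # e.g. "221 Goodbye."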

    Third Generation (1997-current)

    The third and current generation of FTP was a reaction to two technologies that RFC 959 did not address: SSL/TLS and IPv6.

    Most FTP software now conforms to RFC 2228 for FTPS. Oddly enough, there are still a LOT of file transfer packages that don't have IPv6 or EPSV support. The RFCs behind IPv6 and EPSV support are pretty well baked, so if you're still dealing with a vendor without them, consider that a big red flag.

    Also keep an eye on draft-ietf-ftpext2-hash and draft-peterson-streamlined-ftp-command-extensions - that's where the action is in FTP today.

  • sendfile forever!

  • by loufoque ( 1400831 ) on Saturday April 16, 2011 @05:48AM (#35838538)

    That just made me rofl
