
OpenSSH 5.4 Released

Posted by timothy
from the but-it's-secret dept.
HipToday writes "As posted on the OpenBSD Journal, OpenSSH 5.4 has been released: 'Some highlights of this release are the disabling of protocol 1 by default, certificate authentication, a new "netcat mode," many changes on the sftp front (both client and server) and a collection of assorted bugfixes. The new release can already be found on a large number of mirrors and of course on www.openssh.com.'"
  • SFTP improvements (Score:4, Informative)

    by Ponga (934481) on Wednesday March 10, 2010 @03:51PM (#31430758)
    FTFA:

    * Many improvements to the sftp(1) client, many of which were implemented by Carlos Silva through the Google Summer of Code program:...

    ... - Add recursive transfer support for get/put and on the commandline
    (Alas!!)

    A whole host of other improvements and bugfixes; give it a read if SSH is pertinent to your environment.
    • Re: (Score:3, Funny)

      by ig88b (1401217)
      I'm confused. You're excitedly sad about the sftp improvements?
    • Re: (Score:2, Funny)

      by Torrance (1599681)

      - Implement tab-completion of commands, local and remote filenames

      Well thank frak.

    • by Hatta (162192)

      Why sftp when you can scp? scp -r has worked fine for recursive transfers, and Bash has been tab completing remote filenames for a while now.
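
      For comparison, the two recursive approaches look roughly like this (host and paths are placeholders; `put -r` assumes the new OpenSSH 5.4 sftp client):

      ```shell
      # Recursive copy with scp, as the parent suggests:
      scp -r localdir user@host:remotedir

      # The same transfer with the new recursive sftp support:
      echo 'put -r localdir remotedir' | sftp user@host
      ```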

      • Re: (Score:3, Interesting)

        by Sancho (17056)

        Doesn't that tab completion only work if your key is either not protected by a passphrase or cached by ssh-agent? Unfortunately, the policy where I work is that you cannot cache credentials like that, and they must be protected by a passphrase. The new features are actually good for me!

    • by beav007 (746004)
      Sounds great, but I think I'll wait for the Debian-approved version.
    • by mzs (595629)

      This script has served me well over the years. There hasn't been a unix-alike where it has failed me in a very long time now. It also makes the target directory hierarchy for you automatically if needed.

      $ cat bin/stjput
      #!/bin/sh
      # e.g. copy all non-hidden files and dirs from your home dir using protocol 2
      # $ cd && stjput '-24 remuser@host' . *

      IFS='
      '

      case $# in
      [012])
      echo 'Usage: stjput sshopts remdir file|dir [file|dir ...]' >&2

      • by mzs (595629)

        Hmm, that's not all of it; I continue:

        # learn how many octets are in remdir
        # wc is annoying since it was buggy on BSD and sometimes returns number of 'characters'
        foo $n

        # tar does not support --, so need to make sure all file/dir args start with /
        # or . (not starting with - is not good enough, some versions of tar treat @
        # as special for example.
        m=''
        for i in "$@"
        do
        m="$m"x
        done

        foo() {
        case "$c" in
        "$m")

        • by mzs (595629)

          There's just a little bit more:
          cat /dev/null`" && mkdir -p "$n" && cd "$n" && bunzip2 -c - | tar xvf -'\'''

          • by mzs (595629)

            I have NO idea how to get this to post on Slashdot; here is the guts of the last line (in some heavy quoting):

            IFS=""; n="`dd bs='$n' count=1 2>/dev/null`" && mkdir -p "$n" && cd "$n" && bunzip2 -c - | tar xvf -

            The idea is that the name of the destination dir is sent over stdin to the target host. This way I do not need to deal with all the arcane quoting. The command line itself is fixed save for the integer of the length of the dir name, so no complicated quoting is needed there.
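
            The trick can be sketched locally without ssh (directory name and payload here are illustrative). The receiver's command line contains only the integer length, so no quoting of the name is ever needed; the name itself travels over stdin ahead of the tar stream:

            ```shell
            cd "$(mktemp -d)"
            mkdir src && echo hi > src/f
            dest='dir with spaces'          # destination dir name; spaces are fine
            n=${#dest}                      # byte length of the name (ASCII assumed here)

            # Sender: name bytes first, then the tar stream.
            # Receiver: read exactly $n bytes as the name, then untar the rest.
            { printf '%s' "$dest"; tar cf - -C src .; } | {
              name=$(dd bs="$n" count=1 2>/dev/null) &&
              mkdir -p "$name" && cd "$name" && tar xf -
            }
            ```

            Over ssh, the receiver half would be the fixed remote command, with only $n interpolated into it.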

      • by mzs (595629)

        I can't believe how badly slash code munged the script. Here is a link:

        http://home.fnal.gov/~mzs/tips/unix/ssh/stjput [fnal.gov]

        • by klui (457783)

          # e.g. copy all non-hidden files and dirs from your home dir using protocol 2

          Couldn't you use tar (ask it to filter non-hidden files/directories) then pipe to ssh? I'm actually curious if there is an obscure reason why a script is necessary.
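
            Presumably the straightforward version is just this pipe (remote names are placeholders); the parent's script grew out of portability corner cases around exactly this pattern:

            ```shell
            cd "$(mktemp -d)"
            mkdir -p src dest && echo hello > src/file.txt

            # Over ssh it would be something like:
            #   tar cf - -C src . | ssh user@host 'mkdir -p dest && cd dest && tar xf -'
            # Local demonstration of the same pipe, without the ssh hop:
            tar cf - -C src . | ( cd dest && tar xf - )
            ```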

          • by mzs (595629)

            That's exactly what the script does, but over the years it got more useful. It used to use cpio at first, so I would pipe find output into it. But then I ran into a machine that did not have cpio, so I changed it to tar. Then I ran into trouble once where a file started with -, hence the checks for that. At some point I started using more OSX machines and then I routinely ran into paths with spaces and got tired of double escaping that for ssh, so the workaround. Then the switch to cksum instead of wc becaus

  • by Anonymous Coward

    I'm interested to see how the certificates and netcat features get used in the real world with SSH. I regenerated all of my SSH keys because private keys now default to AES-128 encryption and the public exponent has changed to 65537.

    johnny stoops.
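
    For anyone curious, the new certificate authentication is driven by ssh-keygen acting as a CA. A minimal sketch (key names and principal are made up; run in a scratch directory):

    ```shell
    cd "$(mktemp -d)"

    # Create a CA keypair and an ordinary user keypair
    ssh-keygen -q -t rsa -f ca_key -N ''
    ssh-keygen -q -t rsa -f user_key -N ''

    # Sign the user's public key with the CA; this writes user_key-cert.pub
    ssh-keygen -s ca_key -I alice@example -n alice user_key.pub

    # Inspect the resulting certificate
    ssh-keygen -L -f user_key-cert.pub
    ```

    The server side would then point sshd_config's TrustedUserCAKeys at the CA public key instead of collecting individual authorized_keys entries.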

    • by Morth (322218)

      ssh proxy nc host port
      has been working fine for quite a while, but I guess getting rid of the netcat dependency is a good thing.
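
      The "netcat mode" in this release appears to be exactly that: ssh grew a -W host:port option, so the gateway no longer needs nc installed at all. Something like (host names are placeholders):

      ```
      Host internal.example
          ProxyCommand ssh -W %h:%p gateway.example
      ```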

      • by mzs (595629)

        I've been using this in my ssh_config for a while:

        ProxyCommand /usr/bin/ssh -24 -o PermitLocalCommand=no -qaxT gateway exec /usr/bin/nc %h %p

        I find that -qaxT are really key to getting everything to work right and that's not documented well. You can of course forward X11 and what not, the trick is to not get the gateway involved, it just passes it on to the host and that sshd handles it. You don't need the pty on the gateway either, etc for the other options. That with ControlMaster and screen has really be

        • by Morth (322218)

          Both -a and -x are default though, and -T is also default if you give a command to execute, so only -q will actually do something there.

          It is quite common to turn on agent and X11 forwarding in ssh_config though, and then there is a point to those options (and I guess they don't hurt).

          • by mzs (595629)

            Yes, I did a bad job of explaining, for example I have such entries:

            Host host.gateway
            ForwardX11 yes
            ForwardX11Trusted yes
            TCPKeepAlive yes
            GSSAPIAuthentication yes
            GSSAPIDelegateCredentials yes
            HostName host.example
            ProxyCommand /usr/bin/ssh -24 -o PermitLocalCommand=no -qaxT gateway.example exec /usr/bin/nc %h %p

            In my "Host *" section earlier I have various items I usually like enabled (I have A LOT of hosts I ssh to, many not behind a gateway), such as agent and X11 forwarding. So before it dawned on me that I

          • by Sancho (17056)

            It is quite common to turn on agent and X11 forwarding in ssh_config though, and then there is a point to those options (and I guess they don't hurt).

            Agent forwarding should be selectively enabled only for hosts that you trust completely. A root user on the remote host can use your credentials for as long as you are connected.

  • by klui (457783) on Wednesday March 10, 2010 @04:06PM (#31430938)
    The read-only feature of sftp makes it almost a replacement for anonymous ftp. Too bad it appears to be a global setting.
    • by Sancho (17056) on Wednesday March 10, 2010 @04:28PM (#31431224) Homepage

      Could you not do this with a combination of Match User and ForceCommand directives? Something like:

      Match User anonymous
              ForceCommand sftp-server -R
              ChrootDirectory /home/anonymous
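
      A fuller sshd_config sketch along those lines (paths are assumptions; note that ChrootDirectory requires the target and its parents to be root-owned, and the in-process internal-sftp avoids having to populate the chroot with binaries):

      ```
      Subsystem sftp internal-sftp

      Match User anonymous
          ChrootDirectory /home/anonymous
          ForceCommand internal-sftp -R
          AllowTcpForwarding no
          X11Forwarding no
      ```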

      • Re: (Score:3, Insightful)

        by klui (457783)
        I think I've just seen another incantation of ssh black magic (the other being command= in authorized_keys). Thanks for the insight.
      • by mzs (595629)

        Have they fixed the bug with ChrootDirectory on Mac OS X? On that system / is group writeable and that fails some sanity check. I do not permit any admin users to ssh in though so it should not really be a problem in practice. (To admin you need to ssh in as yourself, then /usr/bin/login -p admin, from there sudo.) I used to have a dylib I would preload but at some point it stopped working so I would compile my own versions.

        Also it seemed a while back that I would be able to use sftp on even if sftp was dis

        • by Sancho (17056)

          Have they fixed the bug with ChrootDirectory on Mac OS X? On that system / is group writeable and that fails some sanity check.

          Don't really know, as I haven't had a need to do much advanced configuration on OS X sshd. Sounds like a strange bug, though.

          Also it seemed a while back that I would be able to use sftp on even if sftp was disabled on the server.

          Is there really a point to disabling sftp? If you have the filesystem-level permissions, you can perform those operations through SSH.

          "get" a file: ssh remote "cat rfile" > lfile
          "put" a file: ssh remote "cat > rfile" lfile

          And if the admin does some tricky things to only allow certain commands to be executed from the SSH session, they probably aren't stopping those commands

          • by mzs (595629)

            It's funny, but I use rbash (restricted) and rsh (remote) just like that (VxWorks).

            But I think we are agreeing, seems pointless to disable sftp if you let people login instead of restricting to certain commands.

    • by Korin43 (881732)
      Anonymous SFTP? Maybe I'm missing something, but what's the point of encrypting data when it's all public?
      • Re: (Score:3, Insightful)

        by Aladrin (926209)

        Just because it's public data doesn't mean you want anyone else to know what that particular user is doing.

      • by roman_mir (125474) on Wednesday March 10, 2010 @05:00PM (#31431654) Homepage Journal

        Yes, you are missing the point.

        FTP is a fucking mess, I hate it, I wish I could kill it today everywhere. It is a disaster to manage with a firewall. The horrendous idea of using separate random ports for data connections vs. control connections, the active/passive methods; it is pure evil.

        In case you did not understand: SFTP is not FTP tunneled over SSH, it is a proper file transfer protocol that happens to run over a secured link.

        • history of FTP (Score:2, Informative)

          by Anonymous Coward

          FTP is a fucking mess, I hate it, I wish I could kill it today everywhere. It is a disaster to manage with a firewall. The horrendous idea of using separate random ports for data connections vs. control connections, the active/passive methods; it is pure evil.

          At the time of its invention FTP's design made sense.

          TCP allows bi-directional traffic on a port, but TCP was not invented when FTP was first created (1971). The protocol that was around only allowed one-way transmission of data on any connection. So when you FTPed into a machine, the server had to open a connection back to the client to return any data.

          Also remember that firewalls were not invented until the late '80s (early '90s?), so blocking connections back to the client wasn't an issue.

          • by roman_mir (125474)

            I don't dispute any of that, it's obviously true, but FTP should have been either abolished about 20 years ago or at least modified as a protocol standard to transition to a new more sensible implementation. So when the question arises about the reasons of switching to SFTP, well, even disregarding the 'secure' part, the protocol deficiency itself is a valid reason to switch.

            • Since the FireFTP addon to Firefox can support sftp we may see the end of plain FTP soon.
              I really should do some sort of https thing to allow secure upload of files instead of users having to use FTP, but never get around to more than googling in vain for others doing the same thing. Has anyone seen anything like that?
            • by mzs (595629)

              You mean like rcp or uucp? ftp was so dominant simply because the ftp client was wonderfully interactive for its time.

              • by roman_mir (125474)

                no, I mean like ftp. FTP should have been modified as a protocol and implementation over time to be more like other well behaved protocols.

        • by Spit (23158)

          You're obviously not running OpenBSD firewalls. ;)

        • by mzs (595629)

          They don't have to be random. Say you decide that you will allow 20 simultaneous connections; then you allocate a block of 40 (if it's not busy you can have less, but TIME_WAIT after the connection is closed implies you should have some extras) below the ephemeral lower limit. Then in your firewall you open up those 40 to the world or your organization. If nothing is listening on a port then there really is no harm having that port open. If you like you can block outgoing ICMP port unreachable messages. With
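
          As a sketch of that scheme (server software and numbers are illustrative, not from the parent): pin the passive range in the FTP daemon, then open only that block.

          ```shell
          # vsftpd.conf (illustrative values, 40 ports for ~20 concurrent transfers):
          #   pasv_min_port=49100
          #   pasv_max_port=49139

          # Firewall: open only the control port plus that block (iptables syntax):
          iptables -A INPUT -p tcp --dport 21 -j ACCEPT
          iptables -A INPUT -p tcp --dport 49100:49139 -j ACCEPT
          ```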

          • by Sancho (17056)

            That said all the firewalls have very good mechanisms now for watching ftp connection and adding temporary rules for any secondary ports needed.

            Not if the command channel is encrypted. Then the firewall can't read the PORT command.

            FTP really is a mess that needs to go away, but we still get vendors who require it for one reason or another. We even have a couple who sniff the FTP prompt using something like an expect script, so if you're not using a particular version/vendor of FTP, they will fail. Of course, this sort of thing could happen with any protocol.

            Anyway, the guy you replied to obviously has some other issues besides just FTP being a c

      • Re: (Score:3, Interesting)

        by Sancho (17056)

        Arguably, running one less service would be nice. Also, OpenSSH's chrooting is pretty painless for sftp (though arguably, proper chrooting mostly precludes the need for read-only service--having your server read-only does add another layer of security.)

      • by eggnet (75425)

        Encrypting the password.

  • by overlordofmu (1422163) <overlordofmu@gmail.com> on Wednesday March 10, 2010 @04:15PM (#31431078)
    I am reading this article and posting to it through a ssh tunnel using OpenSSH on a Gentoo Linux server at home and putty.exe on a work laptop running XP Pro at work.

    Firefox sees it as a SOCKS 5 proxy at localhost. The tricky part was setting the config key in Firefox called "network.proxy.socks_remote_dns" to true. (Navigate to about:config and filter for "proxy" to find this setting quickly). The corporate network admins use bogus DNS resolution as a firewall.

    I love you, OpenSSH devs. I sincerely thank you.
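
    The setup described boils down to one command plus one Firefox knob (host name is a placeholder; PuTTY users get the same thing from Connection > SSH > Tunnels with a "Dynamic" forward):

    ```shell
    # Open a SOCKS 5 proxy on localhost:1080, tunneled to the home server
    ssh -N -D 1080 user@home.example

    # Then in Firefox: set a SOCKS v5 proxy at localhost:1080 and flip
    # about:config -> network.proxy.socks_remote_dns -> true
    # so DNS lookups also traverse the tunnel.
    ```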
    • by 0100010001010011 (652467) on Wednesday March 10, 2010 @04:35PM (#31431306)

      OpenSSH is nothing short of magic. I too use it to tunnel out of work's firewall.

      Now, Debian Dev. DON'T TOUCH. :)

    • by Ponga (934481)
      Hmm. I too use an SSH tunnel for port redirection to a remote http proxy, but I've never had to set the FF flag you mention as my FF DNS queries go through the proxy "out of the box" - that's my understanding of how a SOCKS compatible proxy should work. Am I wrong here?
      • Re: (Score:3, Informative)

        by Sancho (17056)

        Are you sure they're going through the proxy out of the box? My Firefox had that configuration knob set to "false" by default, and DNS queries are definitely hitting my company's DNS server.

        If I tune the knob to true, they go through the proxy.

        Both cases verified with tcpdump.

      • Re: (Score:3, Interesting)

        by overlordofmu (1422163)
        In my case, they block YouTube with a bogus DNS resolution. Internal DNS gives an intranet IP address (which gives a default intranet page) and my home server DNS gives the correct IP address(es). I tested this again, just now, and YouTube only works for me with that setting ("network.proxy.socks_remote_dns" as true) and is blocked if it is changed to false (which I believe is the default).

        I am using Firefox version 3.5.8, 32-bit, for x86.

        It seems, within Firefox itself, that your DNS queries with SOCK
    • by neiko (846668)
      I use the same setup here at my work in conjunction with FoxyProxy to conditionally load internal sites without using the SSH tunnel. Very handy stuff!
    • by Hatta (162192)

      I do the same thing frequently. I've noticed a weird thing with my configuration. When I'm working through the tunnel, with DNS requests forwarded through the tunnel, and I go to a non-existent domain my ISP (cox) hijacks my NXDOMAIN and serves up a search page (with ads, obv). When I'm at home, I get NXDOMAIN just fine. Can't figure that one out.

    • by owlstead (636356)

      My provider XS4ALL runs an ssh daemon on port 443 of their server. Using an HTTP (Netscape) proxy works just as well (another good reason to keep the ISP's proxy up). Thanks for the remote DNS hint, I didn't think about that (DNS at our company is unrestricted).

      Fortunately I did not have to use it for a while, nowadays the proxy settings of the company proxy are more reasonable. Before that I had trouble retrieving many web pages with "bad words". Including those necessary to do my work.

    • by ilikejam (762039)

      I recently discovered that Thunderbird can also use SOCKS. No need for mutt in a putty session any more!

      • No need for mutt in a putty session any more!

        Sure you don't "need" to, but why wouldn't you want to?!

        Does thunderbird have the same dns issue as firefox (network.proxy.socks_remote_dns)?

        • by ilikejam (762039)

          I like to look at the pretty pictures.

          Remote DNS? No idea. For some reason my work's DNS can see the Internet, so we can resolve everything anyway.

    • by sam0737 (648914)

      You know what, that's the same thing I did for getting over the Great Firewall of China with a server outside of the mainland.

    • by pnutjam (523990)
      Thank you, that is very good to know. I didn't know you could get around the DNS issue for a SOCKS proxy.

      I went ahead and set up my home server for NX (NoMachine) and I run a Firefox window on my desktop that is really on my server. Bonus is I can disconnect it and reconnect it, and it will still be where I left it. The firewall here blocks most ports other than the standard ones, but 22 is open and NX has no problems
  • Please note: (Score:5, Interesting)

    by Anonymous Coward on Wednesday March 10, 2010 @04:25PM (#31431190)

    A brief quote from the project's home page:
    Please take note of our Who uses it page, which lists just some of the vendors who incorporate OpenSSH into their own products -- as a critically important security / access feature -- instead of writing their own SSH implementation or purchasing one from another vendor. This list specifically includes companies like Cisco, Juniper, Apple, Red Hat, and Novell; but probably includes almost all router, switch or unix-like operating system vendors. In the 10 years since the inception of the OpenSSH project, these companies have contributed not even a dime of thanks in support of the OpenSSH project (despite numerous requests).

    So go and DONATE, as I've just done.

    • by tsalmark (1265778)
      I send them a few bucks every time I upgrade server software.
    • Re: (Score:3, Funny)

      by Anonymous Coward

      A brief quote from the project's home page:
      Please take note of our Who uses it page, which lists just some of the vendors who incorporate OpenSSH into their own products -- as a critically important security / access feature -- instead of writing their own SSH implementation or purchasing one from another vendor. This list specifically includes companies like Cisco, Juniper, Apple, Red Hat, and Novell; but probably includes almost all router, switch or unix-like operating system vendors. In the 10 years since the inception of the OpenSSH project, these companies have contributed not even a dime of thanks in support of the OpenSSH project (despite numerous requests).

      So go and DONATE, as I've just done.

      Okay, we get it Theo.

    • Re:Please note: (Score:4, Insightful)

      by Abcd1234 (188840) on Thursday March 11, 2010 @12:41AM (#31434808) Homepage

      In the 10 years since the inception of the OpenSSH project, these companies have contributed not even a dime of thanks in support of the OpenSSH project (despite numerous requests).

      And they don't have to, either morally or legally.

      OpenSSH is released under the BSD license, and the devs know full well that they may not be financially rewarded for their work. To suddenly expect those users to donate cash just because they use the very code you freed is, to say the least, hypocritical. After all, if you wanted to be paid for the work you do, why release it for free to the world under one of the most liberal software licenses possible? Why not a dual license that requires payment for commercial use? Naturally because the BSDs are all about freedom, of course.

      Well, unless they think they're getting screwed financially.

      • Re: (Score:3, Insightful)

        by Gaygirlie (1657131)

        "And they don't have to, either morally or legally."

        Legally, no. But morally? Well, I beg to differ: those companies generate millions of dollars a year and would be in a completely different situation right now if they didn't have OpenSSH to benefit from. As such I see it as rather greedy and selfish not to donate anything at all.

        But alas, this only proves that people have different views of what is morally or ethically acceptable: what I find morally questionable you find completely acceptable, and the sa

        • by Abcd1234 (188840)

          Legally, no. But morally? Well, I beg to differ: those companies generate millions of dollars a year and would be in a completely different situation right now if they didn't have OpenSSH to benefit from.

          Uh, so what? Those developers *chose* to release their code under a license which creates absolutely no obligation on the part of the user. They made that choice because they feel that open, free code is a good thing. So if their users don't give them any cash, why should they be surprised or offended?

  • Why can't they use X.509 certificates like everybody else does? Are they too complex for SSH? Why no smart card support for those really secure connections?

    Maybe we should just use OpenSSL & telnet or something similar, at least OpenSSL has PKCS#11 support nowadays. The only other thing required is a way to multiplex multiple protocols over SSL, but that certainly sounds doable.

    • by mzs (595629)

      Client X.509 certs with TLS are vulnerable to renegotiation attacks. telnet over TLS would be vulnerable to some timing attacks as well if it were not configured carefully. Sometimes simple is better.
