Encryption Operating Systems Security

Secure Private Key Storage for UNIX?

An anonymous reader asks: "Microsoft Windows, from 2000 forward (except ME), offers secure certificate and private key storage at the OS level in what is called a protected store. Offline, it's encrypted by a combination of the user's password and a session key stored on the filesystem. When the OS is running, the stored private keys are available to the logged-in user, optionally encrypted with another password. The keys are stored in protected memory, so no applications can access them without going through the Microsoft CAPI calls. This code is also FIPS 140-1 level 1 (the best one can get for software cryptography modules) compliant." Does any other OS provide this kind of feature at the OS level? If so, which? If not, why?
This functionality (especially certified FIPS 140-1 or FIPS 140-2) would be nice to see in UNIX variants. MacOS's key-chain functionality is similar, but stores at the application level, and is not FIPS compliant. An implementation of the protected store functionality will allow applications like Firefox, Thunderbird and gpg to have one common place to obtain private keys and certificates rather than maintaining their own individual key-stores. An additional application for this would be the ability to use hardware PKCS #11 tokens.

I am wondering why this functionality does not exist at the OS level in most OSes other than Windows. A number of applications on many platforms have this functionality, but it's at the app level, with their own key-stores, and not a standard at the OS level.
  • Well duh.. (Score:4, Insightful)

    by QuantumG ( 50515 ) * <qg@biodome.org> on Thursday March 01, 2007 @07:08PM (#18201240) Homepage Journal
    on Windows it is centralized and conforms to government requirements because, get this, Windows development is centralized, and Microsoft sells Windows to governments. Amazing huh? Now, of course, if you think this is important, and can code, you might feel like going and writing a daemon for handing out certificates, hardening it against attack, etc. Then go write the support into all these various programs that use certificates. But unless you're willing to do that, we'll just have to wait until Red Hat, Novell, or whoever go for some government contracts that require this level of certification.

    • Re: (Score:3, Informative)

      I'm pretty sure that Sun, IBM, and HP sell their UNIX systems to the government, and centrally develop those too.

      He didn't specify Linux, he said UNIX.
      • by QuantumG ( 50515 ) *
        IBM and HP don't have UNIX systems anymore.. and for all I know Solaris has something similar to this.. in any case, gpg isn't going to be top of the list of applications used in a government contract.

        • They what? They don't have UNIX systems anymore? When did this happen? HP-UX isn't just 4 letters with a dash in the middle. It's a currently developed/supported OS and is widely used in government systems.
          • by QuantumG ( 50515 ) *
            HP-UX is dead.. as is AIX.. try to keep up.
            • AIX is dead? Huh? We just bought three 550 systems a month or so ago and they all came with AIX 5, absolutely no problems ordering them and having it preinstalled.

              AIX is far from dead, whatever you may want to believe.
        • Comment removed based on user account deletion
          • by nbvb ( 32836 )
            Linux: "This is the year of Linux on the Desktop" for almost 15 years running .....

            Seriously, AIX and HP-UX both have kick-ass volume management. Check out Dynamic Root Disk from HP... whoa, cool! And it only works if you have a real LVM ...
          • Re: (Score:2, Interesting)

            by linkages ( 131028 )
            If only AIX's memory management were as good as its logical volume manager. mmap on AIX is just broken when it comes to performance.
            If you have more than, say, a dozen processes mmapping a file and one of those processes makes a change, all the others _MUST_ be interrupted to have their in-process memory cleaned up. This becomes an even larger problem when you have hundreds of processes mmapping the same file.
        • Re: (Score:3, Informative)

          So, is this just a hallucination?

          HP has 3 (4?) flavors of UNIX:
          http://welcome.hp.com/country/us/en/prodserv/servers.html [hp.com]
          HP-UX, Linux, Tru64

          They also have UnixWare, though I'm not sure if that's UNIX or an application suite for UNIX, or something that is "kinda like but not really" UNIX.

          VMS is not UNIX, so I won't count that.

          Given that these are "for sale", I don't think "dead" is quite the appropriate term.
          You can drop out Linux since it's not a IBM creation, reducing their number of Unix OSes by 1, but th
    • Re: (Score:2, Interesting)

      by tritab ( 249395 )
      Wow, this is the closest I've seen anyone on Slashdot come to admitting that Microsoft did something better than any Unix / Linux system in a long time!

      But seriously, I've wondered about the same question as the OP and have never found anything good. The closest was setting file system permissions on the key file as someone else mentioned.

      Is it not possible?
      • by QuantumG ( 50515 ) *
        What are you talking about exactly?

        The centralized aspect of Microsoft's solution is the only thing that is seriously different on Linux.. as it is hard to get distributed developers not to duplicate each other's work.

        The keys are encrypted in much the same way.
      • by mysidia ( 191772 )

        It's not really more secure.. if anything it's less secure, it's just more convenient sometimes.

        MS's need to use "protected memory" arises only because, in Windows, you don't necessarily need to be root to mess with other processes' memory areas. If a skilled hacker needed to access that memory store badly enough, they could very likely write a kernel driver or otherwise patch the OS to open the little hole they need.

        Since the encryption of your certificate is bound to your login password, what do you

      • What can the kernel do for you that an application cannot?

        Seriously, this is a somewhat complicated operation, and there's little that the kernel can do to keep your data private that an application would be unable to do. Getting access to another process's memory is not a trivial task; it probably requires some trickery with the MMU, and the kernel's in the way. So you could ask for a system call to prevent this from happening, but is there anything else the kernel can do for itself that it can't do for ap
    Because the author is saying that it uses the password & some session key to encrypt. That means it must be internally generating an encryption key from the password & encrypting with it. As for FIPS, password-based key derivation functions are not allowed in FIPS mode [PBKDF; I think it is PKCS #5]. One key storage solution is using FIPS-compliant hardware to store the keys [using PKCS #11].
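For reference, the derivation step being described (PKCS #5 / PBKDF2) is easy to sketch with Python's standard library. The password, salt size, and iteration count below are illustrative, not what Windows actually uses:

```python
import hashlib, os

# PKCS #5 / PBKDF2: stretch a login password into a fixed-size encryption key.
password = b"correct horse battery staple"  # stand-in for the user's password
salt = os.urandom(16)                       # random per-user salt, stored beside the blob
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)

# The derived key (not the password itself) then encrypts the on-disk key store.
assert len(key) == 32
```

A different password or salt yields a completely different key, which is why the salt must be stored alongside the encrypted blob.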
  • by Anonymous Coward on Thursday March 01, 2007 @07:13PM (#18201318)
    On OS X, the keychain data (certificates, keys, etc) is not managed at the application level. There is a system daemon, securityd, that applications talk to if they need passwords or need data signed / decrypted or if they need credentials for a particular service.
    • by Straker Skunk ( 16970 ) on Thursday March 01, 2007 @07:24PM (#18201458)
      Same idea in KDE, and I'm sure GNOME has a similar mechanism. Whether these are "OS-level" or "application-level" is a subtler question, but this has more to do with the situation that Linux desktop systems don't necessarily have a centrally-planned infrastructure in the manner of Windows or MacOS X, rather than that they haven't addressed this problem at all.
      • this has more to do with the situation that Linux desktop systems don't necessarily have a centrally-planned infrastructure in the manner of Windows or MacOS X, rather than that they haven't addressed this problem at all.

        This lack of a centrally-planned infrastructure is exactly the problem in this case. Some developers, especially government contractors, want assurance of the quality of code, and their idea of assurance is papers stating that a corporation has put money into improving and certifying a given solution to a given government-approved standard.

  • by dgatwood ( 11270 ) on Thursday March 01, 2007 @07:18PM (#18201384) Homepage Journal

    I'm not sure what they're trying to claim here, but unless their definition of OS means "kernel", the Mac OS X (and classic Mac OS, AFAIK) keychain most certainly is an OS-level service. Keychain items can be shared among all applications, though most applications limit access to these items to a list of trusted applications for obvious security reasons.

    I don't know about the question of protected memory or FIPS certification, but the rest of this question just seemed like FUD to me.

  • chmod 600 (Score:2, Informative)

    by Anonymous Coward
    chmod 600 crypt.key

    Just make sure to store the key encrypted with a passphrase :)
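Along the same lines, here is a small Python sketch that closes the race window between creating the file and chmod'ing it; the filename and contents are placeholders:

```python
import os, stat

# Create the key file with mode 0600 from the very first instant, so there is
# no window in which another local user could open it before a later chmod runs.
fd = os.open("crypt.key", os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "wb") as f:
    f.write(b"...passphrase-encrypted key material...\n")

mode = stat.S_IMODE(os.stat("crypt.key").st_mode)
os.unlink("crypt.key")  # cleanup for the example; a real key file would stay put
```

O_EXCL also refuses to follow a pre-planted file or symlink, which a bare `touch` + `chmod` sequence would not.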
  • FIPS Levels (Score:3, Insightful)

    by Jherek Carnelian ( 831679 ) on Thursday March 01, 2007 @07:19PM (#18201410)

    This code also is FIPS 140-1 level 1 (the best one can get for software cryptography modules) compliant."

    That's odd, OpenSSL was just certified to level 2 (FIPS 140-2). [linux.com]
    • Re: (Score:1, Funny)

      by Anonymous Coward
      That's odd, OpenSSL was just certified to level 2 (FIPS 140-2).

      Yeah? My cryptography goes to 11.
    • No (Score:5, Informative)

      by John.P.Jones ( 601028 ) on Thursday March 01, 2007 @07:41PM (#18201666)
      OpenSSL was just certified FIPS 140-2; that is NOT, however, level 2. It is version 2 of the standard. OpenSSL was certified FIPS 140-2 level 1.
  • Eggs and baskets (Score:5, Insightful)

    by El Cubano ( 631386 ) on Thursday March 01, 2007 @07:22PM (#18201442)

    An implementation of the protected store functionality will allow applications like Firefox, Thunderbird and gpg to have one common place to obtain private keys and certificates rather than maintaining their own individual key-stores.

    Maybe it's just me, but I think that putting all your eggs in one basket is not smart. All it would take would be one critical vulnerability to be discovered and all of a sudden a potential attacker can get to all of your keys. Not good if you ask me.

    • Re:Eggs and baskets (Score:5, Informative)

      by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday March 01, 2007 @08:06PM (#18201930) Homepage Journal

      Maybe it's just me, but I think that putting all your eggs in one basket is not smart. All it would take would be one critical vulnerability to be discovered and all of a sudden a potential attacker can get to all of your keys. Not good if you ask me.

      I disagree. Right now, we're putting all our eggs in a bunch of half-assed baskets woven from tissue paper and lunchmeat. I'd much rather trust one well-audited, well-engineered solution than the 100 home rolled ones we have to trust now.

      KDE does this now with KWallet (although without the spiffy kernel-level protections the author claims that Windows supports). If I'm writing a KDE application, I don't have to worry about getting password storage right - some other folks who know a whole lot more about the problem have already taken care of it for me.

      I think this is good in the same way that using libc's strncmp is better than writing your own. Sure, there might be some undiscovered flaw lurking that's just waiting to open our systems to the world, and an environment of heterogeneous strncmp implementations would keep a successful attack from owning everything that links to libc. And yet, I have a lot of faith that the libc version is much better than anything I'm likely to come up with on my own.

      Finally, if an error in strncmp were to be discovered, an upgrade of one library file would fix every dynamically linked program on my system. If each of those programs used their own, then each one would have to be audited to make sure they weren't broken in a similar way. In the same way, an upgrade to KWallet helps every program that uses it. Other programs have to hope that new vulnerabilities are specific to KWallet's own code and not a more general problem.

      The Unix way is to build a tool that does one thing supremely well, then trust it. I think this is a prime candidate for the same treatment.

      By the way, I'm only using KWallet as an example because I'm familiar with it; I'd be even more interested in Theo de Raadt getting a wild hair and writing OpenSecureStore some weekend.

    • by IkeTo ( 27776 )
      > Maybe it's just me, but I think that putting all your eggs in one basket is not smart.

      I don't know whether it is just you, but I'm sure I'm not with you.

      First, the eggs are "quantum entangled": breaking one in a basket is equivalent to breaking the corresponding one in another basket. Most of the time, if you configure multiple ways of storing keys to access something sensitive (say, a server), the same access is required in multiple ways, so you end up having the key (or keys of equivalent functionality)
  • They are quite good if you don't need frequent encryption and signature operations; well supported by OpenSSH, OpenSC and OpenSSL, less so by GnuPG and Firefox, but I didn't experiment much.

    Just be sure to use the 32K version, as the 64K version lacks support.

  • Poor supplicant. You will never reach enlightenment if you still place your trust in secure keys and encryption algorithms. Have not the radar gun vs. radar detectors taught you anything? Has not copy protection taught you anything? Has not DVD, Blu-Ray, or iTunes taught you anything? Has not closed hardware taught you anything? These things are intricacies and crusades of people who are paid to assert that they have completed an impossible task. You ask for feats of engineering when reverse engineer
  • by omnirealm ( 244599 ) on Thursday March 01, 2007 @07:49PM (#18201742) Homepage
    Current versions of the Linux kernel have a key retention [lkml.org] feature. For PKCS#11, there is openCryptoki [sourceforge.net].
    • Any idea if that mechanism addresses the DRAM "memory" effect described in this paper: http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_ del.html [auckland.ac.nz]?
      • by tlhIngan ( 30335 ) <slashdot&worf,net> on Thursday March 01, 2007 @11:24PM (#18203358)

        Any idea if that mechanism addresses the DRAM "memory" effect described in this paper: http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_ [auckland.ac.nz] del.html?


        Having developed for embedded systems, I'm amazed at how well DRAM can retain data. I've had it such that RAM disks were preserved after power cycles (~1 second without power, and SDRAM controllers not initialized until many milliseconds after powerup). There was at one point a hack we had to implement in the bootloader to clear a bit of memory so a power cycle really would start clean.

        Heck, it's a great way when debugging - the OS could log all messages to the screen, but that greatly slows down operations. So we log into a circular RAM buffer. When the board crashes, we power cycle, then inspect the RAM buffer for the last few messages written.

        Out of curiosity, I once experimented to see how long the data was retained: I wrote a data pattern to RAM, read it back, then removed power for varying lengths of time. It can take anywhere from a few seconds to a minute before the data gets hopelessly corrupted. But before then, if you knew what you were looking for, you could find it.
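The circular RAM log buffer described above can be modelled in a few lines of Python (a toy sketch, not the actual firmware code):

```python
from collections import deque

class RingLog:
    """Fixed-capacity in-memory log: writes are cheap, and only the
    last N messages survive, which is all you need after a crash."""
    def __init__(self, capacity=256):
        self.buf = deque(maxlen=capacity)  # oldest entries fall off automatically

    def log(self, msg):
        self.buf.append(msg)

    def dump(self):
        return list(self.buf)  # what you'd inspect after a power cycle

ring = RingLog(capacity=3)
for i in range(10):
    ring.log(f"event {i}")
print(ring.dump())  # only the last three messages remain
```

A bounded deque gives O(1) appends with automatic eviction, which is the same trade-off a fixed-size circular buffer makes in C firmware.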
  • From " Vista encryption 'no threat' to computer forensics [theregister.co.uk]":

    "We're seeing the same concerns with Vista as we saw with XP over the idea that built-in encryption features might frustrate law enforcement efforts. In practice XP has not proved to be a problem for computer forensics and we don't think Vista will be either," said Bill Thompson, director of professional development and training at Guidance Software.

    • by ClamIAm ( 926466 )
      Um, Windows's crypto features don't hurt forensics work mostly because nobody uses them. I'm not sure if you're aware of this, but security systems tend not to work when you don't turn them on.

      However if every Windows user used these features maximally, these forensics people would probably be singing a different tune...
      • by J'raxis ( 248192 )

        I would hope so. In fact the next sentence in the article was Mr Thompson saying that "sometimes people use file wiping utilities or other tools but often they are not configured properly. People accept the default settings, which can leave fragments of data." So, yeah. Idiocy is its own rewar^W punishment. But the article also mentioned earlier that this BitLocker supports a second "recovery" key, so who knows how secure it is even if you use it right?

        • by ClamIAm ( 926466 )
          Well, about the "recovery key":

          However, in corporate environments a BitLocker recovery key can be used to allow examination of target devices.

          This sounds to me more like a key that IT keeps hidden away until an investigation (government or otherwise) needs to be done.

          Even setting this aside, I think you're on the right track: the only people who really know how secure it is are the people who wrote it. And let's not count out exploits...
          • by J'raxis ( 248192 )

            Most likely, but I've heard enough about administrative backdoors in products to be suspicious. Of course, those are just secret passwords, not decryption keys, so it's not the same issue. But I'd be very suspicious of BitLocker, that in non-corporate environments, your copy of the OS might come conveniently pre-configured with a recovery key that only the OEM knows about.

            Not sure what's out there for OSX. Their standard FileVault supports a recovery password

  • It sounds like Linux needs a good dose of DRM to keep those keys secure.
     
  • ...in that form because it's rather domain-specific to Windows. But there are hardware devices out there, and drivers that provide interfaces to said hardware, to provide similar bits.

    Most security doctrines operate with the conviction that physical access to the box, along with enough time and resources, negates all possible security measures.
  • A smart card is the best place to keep your private keys. The key is generated on the card, and never leaves. It is not even possible to get private keys off a smart card, except (perhaps) with an electron microscope.
    • Re: (Score:3, Interesting)

      by mr_death ( 106532 )
      At the RSA conference three years ago, you could bring your smart card to many booths and they would extract the private key in less than 5 minutes. I have no reason to believe that the problem has become any harder.

      True, a smart card (compared to a normal PC) sucks less, but it still sucks.
      • by 12dec0de ( 26853 )

        True, a smart card (compared to a normal PC) sucks less, but it still sucks.

        Hmm... another comment that shows how little anybody here knows about security hard- and software.

        There are so many different types of smartcards, and from those with validated key protection schemes (CC, FIPS, etc.) you will not get the keys at all. But if you are only willing to pay for a memory card, I would consider 5 minutes rather long.

        Regards, lutz
          Hmm... another comment that shows how little anybody here knows about security hard- and software.

          And, of course, you're omniscient, so you know everything.

          And those with validated key protection schemes (CC, FIPS, etc.) you will not get at at all.

          Bullshit. Tamper resistance on these devices is a joke. Data can be sniffed, either on board or between devices. Biometrics are spoofable. PINs are simple. Keys can be found.

          Worshiping at the altar of Common Criteria and FIPS does little more than assure
    • Smart Card
      Oh neat! Is that one of those new fangled daemons for them new fangled Smart Car's I've been hearing so much about?

      ;-)
  • Any current operating system (yes, linux too) has too many built-in security holes (inadvertent or otherwise), which makes secure storage of anything, including private keys, a joke. At some point, the key must appear in the clear to be useful; for that time, the key is vulnerable to sniffing.

    Just as some of the keys for HD DVDs have been found, given a determined adversary, your private key will be found.

    We can talk about "less vulnerable" private key storage, but I don't think that's what you had in mind.
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday March 02, 2007 @02:02AM (#18204192) Journal

    ... but what's magical about the "OS level"? According to Microsoft, Internet Explorer is part of the OS, so anything they say about "OS level" is really irrelevant.

    Offline, it's encrypted by a combination of the user's password and a session key stored on the filesystem.

    We've been mounting home directories on encrypted filesystems for decades, so that's one way to do this. OS X has this built in and very easy to enable.

    When the OS is running, the private keys stored are available to the logged in user, optionally encrypted with another password.

    Which is pretty much how we do this already; just read the file. If the user had a passphrase, use that to decrypt it.

    The keys are stored in protected memory, so no applications can access them without going through the Microsoft CAPI calls.

    Well, on Unix, no application can access any other application's memory, period. End of story.

    There are ways around this -- you could do tricks with kernel memory, or you could read it off the swapfile. However, I believe there is a way to request that a specific chunk of memory never be swapped out, and while it's in RAM, if your kernel's safe, your app is safe. And it's always possible to run without swap, or encrypt your swap.
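The "never swap this out" request mentioned above is mlock(2); here is a rough Linux-only sketch via ctypes, with placeholder key bytes (note that mlock can fail if RLIMIT_MEMLOCK is too low):

```python
import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

key = ctypes.create_string_buffer(b"placeholder key material")
size = ctypes.sizeof(key)

# Pin the pages holding the key into RAM so they can never be written to swap.
if libc.mlock(key, size) != 0:
    print("mlock failed, errno", ctypes.get_errno())  # e.g. RLIMIT_MEMLOCK exhausted

# ... do crypto with the key here ...

# Zero the buffer before unlocking, so no plaintext lingers in reusable pages.
ctypes.memset(key, 0, size)
libc.munlock(key, size)
```

A C program would call mlock()/munlock() directly; the zeroing step matters either way, since unlocking alone does not scrub the pages.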

    On Windows, I believe you can "attach" to a running process with a debugger. On Unix, if you want to debug, you have to start the app in a debugger, because once it's running, the app's memory is its own. Only way you can "attach" then is if the app specifically has a way to do that -- for instance, browser plugins are essentially an app deliberately loading code from somewhere else into itself and running it. But if an app doesn't go out of its way to let you in, you aren't getting in, and if your kernel is owned, so are you, even on Windows.

    MacOS's key-chain functionality is similar, but stores at the application level, and is not FIPS compliant.

    What does FIPS compliance mean?

    And once again, "application level" is a pointless distinction. Yes, there are mechanisms for storing keys at the kernel level, but in my mind, that's less secure because it's much more complex for no good reason.

    An implementation of the protected store functionality will allow applications like Firefox, Thunderbird and gpg to have one common place to obtain private keys and certificates rather than maintaining their own individual key-stores.

    So have them all use libgpg or something. But what is the advantage?

    In Thunderbird, I have a PGP key that I sign my mail with, and I have a password that I use to connect to the server. In Firefox, I have an entirely different set of passwords, and the public keys to some Certificate Authorities. Firefox needs none of the Thunderbird keys/passwords, and vice versa. On the commandline, I have an ssh key, which I use to shell in to other boxes -- which is a key that I don't use in Firefox or Thunderbird.

    What's the advantage of putting these all in the same app? And what's the advantage of that app being "OS level"?

    Ultimately, the only advantage I see is with something like OpenID. It'd be nice if I could use the same keys I use with ssh to gain access to my OpenID server. Unfortunately, I haven't managed to get my hands on a working server implementation of OpenID, so that's moot.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      On Windows, I believe you can "attach" to a running process with a debugger. On Unix, if you want to debug, you have to start the app in a debugger, because once it's running, the app's memory is its own. Only way you can "attach" then is if the app specifically has a way to do that -- for instance, browser plugins are essentially an app deliberately loading
      code from somewhere else into itself and running it. But if an app doesn't go out of its way to let you in, you aren't getting in, and if your kernel is owned, so are you, even on Windows.

      huh?

      what about
      strace -p pid
      gdb program pid

      • SanityInAnarchy has apparently not been doing a lot of development in a UNIX environment. While I don't blame him/her for potentially missing out on the ptrace syscall, as it's not mentioned in Stevens' Advanced Programming in the UNIX Environment, I do find it a bit sad that he/she makes such bold statements about the security of a computer system without checking at least the valid command line parameters to one of the tools he is referring to. Luckily an Anonymous Coward already told the world about two of these.

        For those not familiar with the ptrace syscall, here is some info about linux ptrace:

        • You have to have super-user privileges, the CAP_SYS_PTRACE capability, or be able to send signals to the process you want to attach to. The first part means you are the system administrator, have their permission, or have compromised the system. The last part basically means that you can also poke around at will in "your own" processes (or all processes of the same user account you've hacked).
        • It is possible to read and write all process memory and registers. Basically, this means that you can run, you can obfuscate, but you cannot hide your crypto keys. Not from yourself or a particularly Evil sysadm, that is.
        • Even if the traced process receives a signal from the tracer, it is already too late to do anything about it by the time it regains control over its operation.

        Detecting that you're being traced is possible, but it is equally possible to circumvent such detection by tracing at the correct time, delivering spoofed signals, or modifying memory in the traced process to avoid being detected. In short: if you cannot trust your system administrator and yourself (at least all processes running as you), you are out of luck as to local security. Network security is one step worse, in that you have to trust even more people.

        Oh, and don't use trustno1 as password!
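To make the point concrete, here is a toy Linux sketch in Python/ctypes: a child process volunteers via PTRACE_TRACEME, and its parent then reads a "secret" straight out of the child's memory. The ptrace request constants are the Linux values; this is illustrative, not hardened code:

```python
import ctypes, os, signal, struct

libc = ctypes.CDLL(None, use_errno=True)
libc.ptrace.restype = ctypes.c_long
PTRACE_TRACEME, PTRACE_PEEKDATA, PTRACE_DETACH = 0, 2, 17

buf = ctypes.create_string_buffer(8)       # same address in parent and child after fork
addr = ctypes.c_void_p(ctypes.addressof(buf))

pid = os.fork()
if pid == 0:                               # child: opt in to being traced
    libc.ptrace(PTRACE_TRACEME, 0, None, None)
    buf.raw = b"SECRET!\x00"               # the child's "private" key material
    os.kill(os.getpid(), signal.SIGSTOP)   # stop so the parent can peek
    os._exit(0)

os.waitpid(pid, os.WUNTRACED)              # wait for the child to stop
word = libc.ptrace(PTRACE_PEEKDATA, pid, addr, None)
stolen = struct.pack("l", word)            # one machine word of the child's memory
libc.ptrace(PTRACE_DETACH, pid, None, None)
os.waitpid(pid, 0)                         # reap the child
print(stolen)                              # the parent now holds the child's secret
```

Attaching to an unrelated process of the same user works the same way with PTRACE_ATTACH, subject to the permission rules listed above.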

    • The advantage is that it takes advantage of Windows authentication to do all of the encryption operations, so it greatly reduces the overhead for programmers dealing with the DPAPI. It really seems like an easy thing to dismiss, but then you have to consider how horribly most developers screw up crypto in their applications, and the DPAPI makes a lot more sense.

    • by rastos1 ( 601318 )

      On Windows, I believe you can "attach" to a running process with a debugger. On Unix, if you want to debug, you have to start the app in a debugger, because once it's running, the app's memory is its own.

      That's why man gdb says:

      You can, instead, specify a process ID as a second argument, if you want to debug a running process:

      gdb program 1234

      An implementation of the protected store functionality will allow applications like Firefox, Thunderbird and gpg to have one common place to obtain pr

      • Aha. Then libssl, which seems to be used by everything but gpg.

        It does seem I was entirely wrong about "attaching" to a running process. Fortunately for me, it doesn't actually affect me, as I don't use passphrases much. If you have physical access, or if you have my local user account, you've 0wned me already.

  • For Linux:
    http://swtch.com/plan9port/man/man1/secstored.html [swtch.com]
    http://swtch.com/plan9port/man/man4/factotum.html [swtch.com]

    in which you can store *any* data; factotum will get the individual keys out for you

  • Plan 9's Factotum (Score:2, Interesting)

    by Darren Bane ( 21195 )
    Plan 9 [bell-labs.com] has such a central key repository. It's called Factotum, and the best description is the USENIX paper [usenix.org]. It has been ported to other UNIX-likes by the plan9port [swtch.com] project.
    • by Bob Uhl ( 30977 )
      Plan 9 did a lot of stuff well & smartly, generally out-Unixing Unix. Unfortunately, it came along too late--and was proprietary for too long--making its market impact pretty much nil. Which once again proves that if one wants to change the world, one really ought to free one's code.
  • Maybe it's not in Unix because there are better alternatives for "real" systems?

    For example, the IBM 4764 Cryptographic Coprocessor? They're expensive, but more secure than anything a basic machine / operating system could provide. And if you're developing a system that actually has a need for secure keys, then surely that's a price worth paying?

    In my view, anything like this built into the operating system is a "poor man's" secure store. Something a home user might like, but certainly not something you'd want to use
  • According to the breakms paper from Peter Gutmann, the CryptoAPI has to store the private keys in the clear, in memory, and this is true from Windows 3.1 to XP. No idea about Vista.

    Wait: *you*, the user, need to supply a password to access it, so it looks secure, but the actual key, in memory, is kept in the clear, unencrypted. Don't think that just because the system asks for a password your data is actually secured.
