Encryption Security Supercomputing IT

Parallel Algorithm Leads To Crypto Breakthrough

Hugh Pickens writes "Dr. Dobb's reports that a brute-force cracking algorithm can search the entire 56-bit DES keyspace at a throughput of over 280 billion keys per second, the highest known benchmark speed for 56-bit DES decryption. A key recovery that would take years to perform on a PC, even with GPU acceleration, completes in less than three days on a single hardware-accelerated server with a cluster of 176 FPGAs. The massively parallel algorithm iteratively decrypts fixed-size blocks of data to find keys that decrypt into ASCII numbers. Candidate keys found in this way can then be tested more thoroughly to determine which one is correct." Update by timothy, 2010-01-29 19:05 GMT: Reader Stefan Baumgart writes to point out prior brute-force methods using reprogrammable chips, including Copacobana (PDF), that have achieved even shorter cracking times for 56-bit DES. See also this 2005 book review of Brute Force, about the distributed DES-breaking effort that succeeded in 1997 in cracking a DES-encrypted message.
"'This DES cracking algorithm demonstrates a practical, scalable approach to accelerated cryptography,' says David Hulton, an expert in code cracking and cryptography. 'Previous methods of acceleration using clustered CPUs show increasingly poor results due to non-linear power consumption and escalating system costs as more CPUs are added. Using FPGAs allows us to devote exactly the amount of silicon resources needed to meet performance and cost goals, without incurring significant parallel processing overhead.' Although 56-bit DES is now considered obsolete, having been replaced by newer and more secure Advanced Encryption Standard (AES) encryption methods, DES continues to serve an important role in cryptographic research, and in the development and auditing of current and future block-based encryption algorithms."
Comments:
  • by Colin Smith ( 2679 ) on Friday January 29, 2010 @09:24AM (#30948360)

    Apps could load a custom config into the chip to run faster.

     

    • Re: (Score:3, Interesting)

      by bcmm ( 768152 )
      How fast can an FPGA be reconfigured? I suspect that they would not lend themselves to task switching as readily as CPUs do, and the potential of FPGAs to accelerate day-to-day tasks would be somewhat reduced if only one process could use it at a time.
      • by SmurfButcher Bob ( 313810 ) on Friday January 29, 2010 @10:10AM (#30948820) Journal

        10 years ago called, they want their ideas back - starbridgesystems.com

      • Re: (Score:3, Informative)

        by Zerth ( 26112 )

There are run-time reconfigurable FPGAs on the market, but they still take time to switch (something like 200 microseconds). Not terribly long on our scale, but not exactly fast for a CPU.

        The real problem is that FPGAs are generally more expensive for anything that you can mass-produce. FPGAs shine if you want something custom and parallel, like this cluster, and can be cost-competitive compared to getting your own silicon made for prototypes.

        • Re: (Score:3, Interesting)

          by digitalunity ( 19107 )

          There could be a market for this. I see 2 obvious applications.

          The first application would be photo and video processing. An FPGA that could be configured by Photoshop plugins or video editing programs could see dramatic speedups.

          Another would be finite element analysis in CAD/CAM applications such as Inventor, Simulia, Catia, MathCad, etc.

          I see some desire for GPGPU in these areas, but with a little added complexity I think these applications would see a big speedup even over GPGPU.

      • I would love a generic FPGA with some basic support. I would kill for a DNG -> JPG/TIFF batch processor after 'developing' all my digital negatives.

        Same with encoding x264 video.

        A card that could just do those two things would easily be worth some money.

    • by Anonymous Coward

      Yeah, well Xilinx pursued this in the early 90s with a swappable FPGA with an open architecture. That was discontinued pretty quickly, though.

      The main issue is that apps aren't slow in the right way. Very few apps these days are in fact ALU-bound. With GPU resources and SSE, even fewer need extra ALU power, and the hard limits come from memory access speed (especially random access, as required by a great many algorithms). FPGAs don't really make this any easier (except insofar as they can offer *small*

    • Isn’t that what Transmeta was all about?

      I always wondered why it failed. It certainly wasn’t because of the idea or because of me.

  • by petes_PoV ( 912422 ) on Friday January 29, 2010 @09:26AM (#30948378)
    Apart from the trivial case where some ASCII text or a picture pops up, how can a decrypter know when a block or stream of apparently random data has been successfully decrypted?

    If I were to start transmitting random data and told people it was encrypted with 56-bit DES when in fact it wasn't (it was just random data), then, apart from exhaustively testing it with every possible key, how could they demonstrate that it was NOT encrypted as claimed?
    It does seem to me that one of the problems with decrypting "stuff" is that you need to have some idea of what the "answer" will look like. Without that you can't ever be certain you've succeeded.

    • by Endymion ( 12816 )

      While this is likely an issue in the general case, I suspect that in common use we already do have a good idea of what the "answer" looks like. You likely know what software did the encryption, and therefore what structures in the data to expect. Knowing where the data came from tells you a lot before you even start any of the brute-force math.

      • Yes, I accept your point. Now let's modify the situation to real life. When most governments send encrypted data, they don't just say "ooooh, we've got some sensitive data to send, we'd better encrypt it", as a rising level of encrypted traffic alone tells a baddie that something is going on. Further, you don't just send *all* your traffic encrypted either, as it's still prone to monitoring of traffic volume.

        Instead, you fill the channel, all of the time. Some of the traffic is proper

        • Yes, I accept your point. Now let's modify the situation to real life. When most governments send encrypted data, they don't just say "ooooh, we've got some sensitive data to send, we'd better encrypt it", as a rising level of encrypted traffic alone tells a baddie that something is going on. Further, you don't just send *all* your traffic encrypted either, as it's still prone to monitoring of traffic volume.

          Instead, you fill the channel, all of the time. Some of the traffic is proper encrypted data and some of it is just random padding (or, if you really want to screw with them, encrypted random padding).
          While it may be possible to brute-force one particular message, until you can do brute-force decryption in real time, encryption, even with weak keys, will almost always get through securely. (You could nuance it further by inserting decoy data with artificially weakened keys, too.)

          Now let's switch to this reality. Military encryptors just encrypt the entire classified traffic channel. They don't "fill the channel" with garbage, as it costs real money and bandwidth to do that. Granted, they encrypt all traffic on that channel, so the overall traffic has varying levels of sensitivity or classification, but they certainly don't generate garbage padding to confuse the enemy.

      • Exactly. Why would you crack it, if not because there’s something in there that you wanted? Which implies you know what it is or means.

        And if it’s not in there anyway, then not cracking it is OK. Since, well, you don’t want it anyway.

    • Even if it's ASCII or a picture, just encrypt it twice.
      • Re: (Score:3, Interesting)

        by txoof ( 553270 )

        Even if it's ASCII or a picture, just encrypt it twice.

        I've always wondered what would happen if you were to encrypt a file over and over again with different keys. Would that lead to any greater security, or would it somehow leave more and more obvious clues as to how the data was encrypted? What would happen if you encrypted over and over using the same key?

        • by Anonymous Coward on Friday January 29, 2010 @10:04AM (#30948772)

          Depending on the cipher, this may or may not work.

          A common method to enhance the security of DES is/was to do an encrypt/"decrypt"/encrypt cycle, each with a different key. You may know this method under the name 3DES. While DES is long broken, 3DES is still considered pretty secure, albeit slow. In contrast, using "triple XOR" will most likely not increase the security of a cipher.

          And encrypting multiple times with the same key will, for any reasonably secure cryptosystem*, not increase security. The security increase in 3DES was due to the increased key length (i.e. the combined length of the three keys used in 3DES). Using the same key over and over again does not increase the effective key length.

          *Increasing the number of rounds used internally in a cryptosystem can in some cases increase security. Usually, cryptanalysts start by breaking reduced-round versions of ciphers before breaking the full version.

          • by microbox ( 704317 ) on Friday January 29, 2010 @12:29PM (#30950780)
            And encrypting multiple times with the same key will, for any reasonably secure cryptosystem*, not increase security.

            I understand that from a theoretical point of view, but from a practical point of view: how would you break a doubly encrypted file, even if you knew both algorithms involved? How do you solve the problem of recognizing whether you'd actually decrypted correctly with the first key, so that you can start working on the second key? Haven't you increased the key-space to an exponent of itself (in practical terms), and therefore created something vastly more secure?
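
            The classic answer is the meet-in-the-middle attack: with one known plaintext/ciphertext pair, double encryption with two k-bit keys costs an attacker roughly 2^(k+1) cipher operations plus a 2^k-entry table, not 2^(2k). That is why double DES was never adopted and 3DES was designed the way it was. A toy sketch in Python, using a made-up invertible 8-bit "cipher" purely for illustration:

                from collections import defaultdict

                def enc(p, k):  # toy invertible 8-bit cipher, invented for this demo
                    return ((p + k) & 0xFF) ^ ((k * 7) & 0xFF)

                def dec(c, k):
                    return ((c ^ ((k * 7) & 0xFF)) - k) & 0xFF

                k1, k2, p = 0x3A, 0xC5, 0x42
                c = enc(enc(p, k1), k2)                  # double encryption

                # Meet in the middle: ~2*256 operations instead of 256*256.
                middle = defaultdict(list)
                for g1 in range(256):
                    middle[enc(p, g1)].append(g1)        # forward half
                candidates = [(g1, g2) for g2 in range(256)
                              for g1 in middle[dec(c, g2)]]  # backward half
                assert (k1, k2) in candidates  # false positives are culled
                                               # with a second known pair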
        • by maxume ( 22995 ) on Friday January 29, 2010 @10:13AM (#30948860)

          If the encryption is properly implemented, the repeated cycles should not reveal any information, but it works better to just use a larger key (encrypting twice with 2 different 2-bit keys should be roughly equivalent to encrypting once with a 3-bit key, 4 different 2-bit keys would be equivalent to a 4-bit key, and so on, so just going up to a much larger key is probably easier).

            If the encryption is properly implemented, the repeated cycles should not reveal any information, but it works better to just use a larger key (encrypting twice with 2 different 2-bit keys should be roughly equivalent to encrypting once with a 3-bit key, 4 different 2-bit keys would be equivalent to a 4-bit key, and so on, so just going up to a much larger key is probably easier).

            Now you start getting into the number theoretic part of cryptography relating to groups, rings, etc. What you have said above d

            • No, DES is not a group [springerlink.com].

              Let's say there was a 2-bit version of DES, as in your example. For a 2-bit data block, there are 2^2 = 4 possible values. That means there are 4! = 24 possible permutations, but there are only 2^2 = 4 keys. So not all the possible permutations are generatable by a single key, and what Campbell and Wiener proved in 1993 was that successive applications of DES with different keys produce (in general) one of those "lost" permutations that can't be reached with a single key.

              • No, DES is not a group.

                I agree and I appreciate that you pointed that out. I probably should have stated it. That is why 3DES turns out to be stronger than DES. I was speaking to the more general assertion implied by the parent post: that multiple encryptions with different keys would always yield an increased effective key length.

          • Re: (Score:3, Insightful)

            by tepples ( 727027 )

            it works better to just use a larger key

            That's not always easy, especially if you have sunk a lot of money into crypto hardware that supports only short keys. That's one reason why triple DES took off: existing DES ASICs could still handle it.

        • Even if it's ASCII or a picture, just encrypt it twice.

          I've always wondered what would happen if you were to encrypt a file over and over again

          Not always better. For example, the text you are reading now was encrypted with ROT13. Twice.

        • The canonical example of a bad cypher is an XOR cypher, where every bit in the input stream is XOR'd with a key. If you do this twice, with two separate keys, then you have done the equivalent of XORing it once with the key generated by XORing the two keys together. As such, it is no more secure (but also no less secure).

          With a perfect cypher, there is no relationship between the two applications, so if you encrypt it twice then you double the amount of effort required to crack it. However, for most decent cyphers
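
          The XOR point above is easy to check for yourself; a quick sketch in Python:

              import os

              def xor(data: bytes, key: bytes) -> bytes:
                  return bytes(d ^ k for d, k in zip(data, key))

              msg = b"attack at dawn"
              a, b = os.urandom(len(msg)), os.urandom(len(msg))

              # XORing with A and then B is identical to XORing once with
              # A ^ B, so the second pass buys nothing.
              assert xor(xor(msg, a), b) == xor(msg, xor(a, b))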

          • You seem to confuse several basic notions in cryptography. Firstly, what you are describing as an XOR cipher includes one-time pads (you haven't mentioned whether the key is as big as the input or not), which are certainly not "bad" in any sense.

            With a perfect cipher there should be no information leaked into the ciphertext, or, as Shannon put it, there should be no advantage in knowing the ciphertext. So all perfect ciphers have the property that if you use them multiple times under multiple keys then the effort t

            • Firstly, what you are describing as an XOR cipher includes one-time pads (you haven't mentioned whether the key is as big as the input or not), which are certainly not "bad" in any sense.

              A one-time pad used in this way is an XOR cypher with an infinite key length (or, at least, one as long as the message). My point is still valid for one-time pads. Using two one-time pads with an XOR mechanism does not buy you any extra security. It is equivalent to using a single one-time pad with different values.

              • You also claimed that a perfect cipher would require twice the effort to break after two applications. This seemed to be the thrust of your main point. Do you still think that is valid?

        • Depends on whether the cipher is "idempotent" or not. If a cipher is idempotent, then encrypting with key A followed by key B is the same as encrypting just a single time with some key C. Clearly, this is no more secure than encrypting once, since you only need to find key C. If a cipher is NOT idempotent, the result of encrypting by A and then by B can't be reproduced by any single key C.

    • by Lord Byron II ( 671689 ) on Friday January 29, 2010 @09:42AM (#30948536)

      Exactly right.

      Properly encrypted data should be indistinguishable from random noise.

      The pigeonhole principle applies to the "decrypted" data. Say you have 16 bytes of data protected by a 16-byte key. Then there will be lots of keys that produce non-random "decrypted" sequences. But if you have 1 GB of data and a 16-byte key, then there is likely (depending on the nature of the underlying data) only one key that will produce the decrypted data.

      It's similar to why there can't exist a generic compression algorithm that *always* shrinks a file.

    • by txoof ( 553270 )

      That's a really interesting question. I'm not a math or cryptography person, so I'm just guessing here, but I bet you could somehow measure how disordered the data stream was and make a guess about whether or not it was encrypted. It seems that encrypted data should also have some level of order to it. If you use something like ROT13 (which I know is a cipher, not strictly encryption) to encipher an English message, you can do some simple statistical analysis and make some intelligent guesses as to which le

      • by BrotherBeal ( 1100283 ) on Friday January 29, 2010 @10:12AM (#30948844)

        ... but I bet you could somehow measure how disordered the data stream was and make a guess about whether or not it was encrypted. It seems that encrypted data should also have some level of order to it.

        Encryption doesn't work that way, at least not good encryption. The goal of every encryption scheme is to transform a plaintext input into a ciphertext output that is indistinguishable from random noise. Your example of frequency analysis being used to attack ROT13 shows that it's a terrible encryption algorithm because it leaves so much information about the original message embedded in the transformed output. Every time you hear about an encryption scheme being broken, you're hearing about some way to recover information about the plaintext from the ciphertext. That information is what allows adversaries to beat brute-force decryption (although not always by much: a scheme with a keyspace of 2^n is considered broken if an attack is found that requires only 2^(n-1) keys to be examined).

        The OP brings up an interesting point: knowing when your data is actually decrypted.

        This is why a one-time pad is "perfect". A one-time pad leaves absolutely zero information about the original plaintext apart from its length (and even that can be obfuscated by null padding). That means there is no way for an adversary, even through a brute-force attack, to positively identify the original plaintext. Let's say we encrypt "HELLO WORLD" with a one-time pad, and the output is "ZBCHGRTKOPQ". "ZBCHGRTKOPQ" could be brute-forced by an adversary and produce "HELLO WORLD", but such an attempt would also produce "BUY MUSTARD" or "URINAL TOWN" or any other string of 11 characters (possibly including nulls; remember padding!). All of these are equally plausible if the one-time pad scheme is implemented perfectly. The point is that, depending on the encryption scheme, in a sense you can't always know that you've done it perfectly. Recreated internal structure is a good signal that you have done it correctly, but if you were trying to decrypt something you knew NOTHING about (couldn't tell it from random noise), you'd have a hell of a time telling whether you screwed up your decryption. Make things any clearer?
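
        That "equally plausible" property takes only a few lines of Python to demonstrate: for any ciphertext and any same-length message, there is a pad that "decrypts" the one into the other.

            import os

            def xor(data: bytes, key: bytes) -> bytes:
                return bytes(d ^ k for d, k in zip(data, key))

            plain = b"HELLO WORLD"
            pad = os.urandom(len(plain))
            cipher = xor(plain, pad)        # the real one-time-pad encryption

            fake = b"BUY MUSTARD"           # same length as the real plaintext
            fake_pad = xor(cipher, fake)    # a pad that is just as valid
            assert xor(cipher, fake_pad) == fake  # brute force can't prefer either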

    • That's easy: the (uncompressed) cleartext is going to be considerably less entropic [wikipedia.org] than average, and even most compression algorithms cannot raise the entropy to the level of a truly random message.

      The key-space of 56-bit DES (i.e. 2^56 keys) is considerably smaller than the number of possible messages of that length (around 2^8192 possible bit patterns for even a 1 kB message). If one of those 2^56 keys decrypts the message to a coherent cleartext, it's extremely unlikely to be a chance result.

      If
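
      A rough sketch of that entropy test in Python (single-byte Shannon entropy; the thresholds are illustrative):

          import math, os
          from collections import Counter

          def bits_per_byte(data: bytes) -> float:
              n = len(data)
              return -sum((c / n) * math.log2(c / n)
                          for c in Counter(data).values())

          print(bits_per_byte(b"the quick brown fox " * 50))  # English: ~4 or less
          print(bits_per_byte(os.urandom(1000)))              # encrypted/random: ~7.8+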

    • Re: (Score:2, Insightful)

      by noidentity ( 188756 )
      Any realistic encryption format will include verification information (a checksum at the very least) so the decrypter knows that it was successful. Otherwise it wouldn't even be able to tell you that you mistyped your password.
      • The problem with that is that, to confirm the checksum, your cracker may have to decrypt a much larger portion of the message each time, slowing down the process.

      • Many encryption schemes (for secure messaging, say SSL) include a MAC or HMAC (secure checksum) with a *different* session key. So finding the MAC key is not always the same as finding the encryption key.

        You know the password is correct by generating a key from the password (say, using the PKCS#5 password-based encryption key generation scheme) and sending back an encrypted challenge. Using DES for this scheme is a terrible idea (known plaintext: all the encrypted "characters" are known).

        Anyway, most encrypted

    • The answer to your question is implementation-specific.

      If you're cracking encrypted email, you know you're looking for plain text, so that test is easy. Cracking TrueCrypt partitions? Use whatever algorithm TC uses to determine whether or not a password matches (TC has some deterministic means of distinguishing this, for use in its plausible deniability feature).

      Cracking SSL? Look for the one which decrypts to HTTP.

      Cracking general disk encryption products? Look for filesystem structures.

      The only time the p

    • Except the vast, vast, vast majority of data that's sent isn't random at all. In fact, it's extremely non-random. If you open a file in a hex editor (even a binary file rather than an ASCII one), there are patterns, there is text, there are markers to show you what you have. Heck, in many files the first 4-8 bytes will tell you exactly what kind of file it is. While it's not a super-easy problem, it's easier than you're making it out to be.

      ~D

    • Your point is correct. You have to have additional information about the traffic. Often this comes in the form of frequency analysis of the trial decryption, or a known block of plaintext that is part of the formatting or protocol of the message (the standard example is the date that the Nazis often placed in their Enigma-encrypted messages; if you know the date of the intercept, then you've got a pretty good guess at the plaintext for the first half dozen letters, so it's easier to test i

    • by Eil ( 82413 )

      When brute-forcing a key, you have to have some idea of what the decrypted data looks like. If you're cracking a hard drive, for example, you'd check for a valid partition table. If you're cracking a text file, you'd check it for natural language words. An HTTPS packet probably has HTML tags somewhere inside, etc. In order to crack a packet of encrypted data, you need to know these things:

      * The encryption algorithm
      * The properties of its implementation (key length, any key derivation functions, the kind of

  • What? (Score:5, Insightful)

    by trifish ( 826353 ) on Friday January 29, 2010 @09:30AM (#30948418)

    Parallel Algorithm Leads To Crypto Breakthrough

    Crypto Breakthrough? Huh? What's that supposed to mean?

    I mean, yes, his DES-cracking hardware is about 800x faster than a PC. Where's the "Crypto Breakthrough"?

  • by Shrike82 ( 1471633 ) on Friday January 29, 2010 @09:33AM (#30948440)
    DES is obsolete anyway, though the way the decryption was carried out is fairly interesting. A little bit of homework shows that (apparently) a 56-bit DES key was broken in less than a day [wikipedia.org] over ten years ago. So he's a decade late and 66% less efficient!
    • by arcade ( 16638 ) on Friday January 29, 2010 @09:45AM (#30948570) Homepage

      apparently?

      Loads of us Slashdotters were part of distributed.net's effort.

      I had 3 of my home computers running rc5des, and about 200 university computers running it too. :)

      And you come up with this "apparently" thing?! Less than 20 years old, perhaps? Born in the 90s? Not remembering? Harumpfh!

      • Re: (Score:3, Funny)

        by Shrike82 ( 1471633 )

        I wasn't personally involved in the decryption effort, so I naturally assumed it was probably some kind of scam carried out by a consortium of international security agencies, trying to convince us that all the encrypted pornography on our hard drives wasn't actually safe from outside scrutiny. Of course I could be wrong, so I covered myself both ways by inserting the qualifier "apparently". I'm a child of the 80's since you ask, but sadly at the time of the distributed.net decryption event I was limited to

      • Yeah, yeah, we'll get off your lawn.

  • by Pojut ( 1027544 ) on Friday January 29, 2010 @09:44AM (#30948554) Homepage

    ...reminded of the little box hidden in an answering machine in that movie Sneakers? lulz

  • What a load of crap! (Score:5, Informative)

    by ladadadada ( 454328 ) on Friday January 29, 2010 @10:21AM (#30948936) Homepage

    There has been no "crypto breakthrough".

    What they have done is create a chip that can do 1.6 billion DES operations per second (compared to 250 million for a GPU card) and then put 176 of them in a 4U server (176 x 1.6 billion is roughly the 280 billion keys per second in the headline). This lowers the price-to-performance ratio by around a factor of 6, if you assume that their chip and a GPU cost the same. By the way, this press release (and research) was made by the company that manufactures the chips in question.

    The "massively parallel" algorithm (their term, Dr. Dobbs just copied it) only decrypts a little of the file and looks for ASCII integers because that's what they put in the file before encrypting it. They have not found a way of culling candidate keys without already knowing what sort of data is in the encrypted file. That would be a "crypto breakthrough".

    It's a good bit of technology, with many uses beyond cryptography, that has unfortunately been marred by some overly breathless reporting.

    • With DES in CBC mode you only have to know what is in one of the encoded blocks. If they use a standard padding mechanism, you can take the second-to-last ciphertext block as the IV, brute-force the key, and check that the last block decrypts to something ending in XX XX 80 00 00 00 00 00 or similar.

      Encryption algorithms are supposed to protect against known-plaintext attacks, and for good reason. It's not a "crypto breakthrough" because it is just a more effective way to brute-force DES. Of course this *is* useful for
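
      A sketch of that padding test, assuming Python with the pycryptodome package and ISO/IEC 7816-4 style padding (a 0x80 byte followed by zeros); the function name is illustrative:

          from Crypto.Cipher import DES

          def padding_plausible(key: bytes, ciphertext: bytes) -> bool:
              # Decrypt only the final CBC block, using the second-to-last
              # ciphertext block as the IV (assumes >= 2 blocks).
              iv, last = ciphertext[-16:-8], ciphertext[-8:]
              plain = DES.new(key, DES.MODE_CBC, iv=iv).decrypt(last)
              stripped = plain.rstrip(b"\x00")
              # Roughly 1-in-256 keys pass by chance; survivors get a full re-test.
              return len(stripped) > 0 and stripped[-1] == 0x80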

    • ...that has, unfortunately, been marred by some overly breathless reporting.

      Well, it seems to have gotten someone all hot and bothered.

    • by Rich0 ( 548339 )

      Well, they can look for patterns that suggest the message contains a certain type of content (for example, frequency analysis would easily identify text in almost any language), although having to do that for every key obviously slows the process greatly, and you can't just run the whole thing on a single block.

      Usually you know something about the person sending a message and possible topics under discussion, so that can give you words to grep for, which is probably a lot cheaper than frequen

  • Breakthrough? meh. (Score:2, Informative)

    by gmarsh ( 839707 )

    Brute-forcing DES doesn't require any creative algorithm to be run in parallel. You have 2^56 possible keys, split them amongst 2^n crackers and each cracker has to process 2^(56-n) keys. Not too hard.

    And loading an array of DES cracker cores onto an array of chips isn't novel either, e.g.:

    http://en.wikipedia.org/wiki/EFF_DES_cracker [wikipedia.org] (using ASICs)
    http://www.copacobana.org/ [copacobana.org] (using FPGAs)
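
    The split described above is trivial to express; a sketch in Python (the names are illustrative):

        def key_range(worker: int, log2_workers: int, key_bits: int = 56):
            """Half-open range of raw key values assigned to one cracker:
            worker w sweeps the keys whose top log2_workers bits equal w."""
            span = 1 << (key_bits - log2_workers)
            return worker * span, (worker + 1) * span

        lo, hi = key_range(worker=5, log2_workers=8)  # one of 2^8 = 256 crackers
        # Each cracker independently tests keys lo..hi-1; the only
        # coordination needed is reporting a hit.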

  • Brute-force search of the entire problem space is not an ALGORITHM breakthrough. It is a measure of hardware and of how "embarrassingly parallel" the problem is.

  • Who uses DES? In the VPNs I set up and manage it's at minimum AES-192.
  • I don't use numbers in my passwords.
