Encryption Security Supercomputing IT

Parallel Algorithm Leads To Crypto Breakthrough

Hugh Pickens writes "Dr. Dobbs reports that a brute-force cracking algorithm can analyze the entire 56-bit DES keyspace at a throughput of over 280 billion keys per second, the highest known benchmark speed for 56-bit DES decryption. Running on a single hardware-accelerated server with a cluster of 176 FPGAs, it can accomplish in less than three days a key recovery that would take years to perform on a PC, even with GPU acceleration. The massively parallel algorithm iteratively decrypts fixed-size blocks of data to find keys that decrypt into ASCII numbers. Candidate keys found in this way can then be tested more thoroughly to determine which is correct." Update by timothy, 2010-01-29 19:05 GMT: Reader Stefan Baumgart writes to point out that prior brute-force efforts using reprogrammable chips, including Copacobana (PDF), have achieved even shorter cracking times for 56-bit DES. See also this 2005 book review of Brute Force, about the distributed DES-breaking effort that succeeded in 1997 in cracking a DES-encrypted message.
"'This DES cracking algorithm demonstrates a practical, scalable approach to accelerated cryptography,' says David Hulton, an expert in code cracking and cryptography. 'Previous methods of acceleration using clustered CPUs show increasingly poor results due to non-linear power consumption and escalating system costs as more CPUs are added. Using FPGAs allows us to devote exactly the amount of silicon resources needed to meet performance and cost goals, without incurring significant parallel processing overhead.' Although 56-bit DES is now considered obsolete, having been replaced by newer and more secure Advanced Encryption Standard (AES) encryption methods, DES continues to serve an important role in cryptographic research, and in the development and auditing of current and future block-based encryption algorithms."
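The candidate-key filter the summary describes can be sketched in a few lines of Python. This is only an illustration: `toy_decrypt` below is a stand-in XOR scrambler, not DES, and the 16-bit keyspace is a hypothetical miniature of the real 56-bit search.

```python
# Sketch of the candidate-key filter from the article, using a 16-bit
# XOR "cipher" as a stand-in for DES (assumption: the real system runs
# an equivalent filter across the 56-bit DES keyspace on FPGAs).

def toy_decrypt(block: bytes, key: int) -> bytes:
    # XOR each byte with a key-derived byte stream -- NOT real DES.
    return bytes(b ^ ((key >> (8 * (i % 2))) & 0xFF) for i, b in enumerate(block))

def is_ascii_digits(block: bytes) -> bool:
    # The article's filter: the plaintext must decode to ASCII numbers.
    return all(0x30 <= b <= 0x39 for b in block)

def find_candidates(ciphertext: bytes, keyspace_bits: int = 16) -> list:
    # Brute force: try every key, keep those that decrypt to digits.
    return [k for k in range(2 ** keyspace_bits)
            if is_ascii_digits(toy_decrypt(ciphertext, k))]

secret_key = 0xBEEF
plaintext = b"20100129"  # ASCII digits, as in the article's test data
ciphertext = toy_decrypt(plaintext, secret_key)  # XOR is symmetric
candidates = find_candidates(ciphertext)
print(secret_key in candidates, len(candidates))
```

Note that the filter keeps many false positives alongside the true key; as the article says, the surviving candidates then have to be tested more thoroughly.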
  • by Shrike82 ( 1471633 ) on Friday January 29, 2010 @09:33AM (#30948440)
    DES is obsolete anyway, though the way the decryption was carried out is fairly interesting. A little bit of homework shows that (apparently) a 56-bit DES key was broken in less than a day [wikipedia.org] over ten years ago. So he's a decade late and 66% less efficient!
  • by Lord Byron II ( 671689 ) on Friday January 29, 2010 @09:42AM (#30948536)

    Exactly right.

    Properly encrypted data should be indistinguishable from random noise.

    The pigeon hole principle applies to the "decrypted" data. Say you have 16 bytes of data protected by a 16-byte key. Then, there will be lots of keys that produce non-random "decrypted" sequences. But, if you have 1GB of data and a 16-byte key, then there is likely (depending on the nature of the underlying data) only one key that will produce the decrypted data.

    It's similar to why there can't exist a generic compression algorithm that *always* shrinks a file.
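    A tiny experiment makes the pigeonhole argument concrete. The cipher here is a hypothetical 1-byte XOR keystream, not a real algorithm; only the counting matters: with one byte of ciphertext, dozens of keys "decrypt" to plausible text, while with more data almost all of them are ruled out.

```python
# Count how many keys produce plausible "decryptions" for short vs.
# long ciphertexts (toy 1-byte XOR cipher; assumption for illustration).

def decrypt(ct: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in ct)

def looks_like_text(pt: bytes) -> bool:
    # "Non-random" here means printable ASCII.
    return all(0x20 <= b < 0x7F for b in pt)

def plausible_keys(ct: bytes) -> int:
    # Try all 256 possible keys, count the ones that look valid.
    return sum(looks_like_text(decrypt(ct, k)) for k in range(256))

key = 0x5A
short_ct = decrypt(b"A", key)                   # 1 byte of data
long_ct = decrypt(b"The quick brown fox", key)  # 19 bytes of data

print(plausible_keys(short_ct), plausible_keys(long_ct))
```

The short ciphertext leaves far more plausible keys standing than the long one, exactly as the comment argues.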

  • by arcade ( 16638 ) on Friday January 29, 2010 @09:45AM (#30948570) Homepage

    apparently?

    Loads of us slashdotters were part of distributed.net's effort.

    I had 3 of my home computers running rc5des, and around 200 university computers running it too. :)

    And you come up with this "apparently" thing?! Less than 20 years old, perhaps? Born in the 90s? Not remembering? Harumpfh!

  • by Anonymous Coward on Friday January 29, 2010 @10:04AM (#30948772)

    Depending on the cipher, this may or may not work.

    A common method to enhance the security of DES is/was to do an encrypt/"decrypt"/encrypt cycle, each with a different key. You may know this method under the name 3DES. While DES is long broken, 3DES is still considered pretty secure, albeit slow. In contrast, using "triple-XOR" will most likely not increase the security of a cipher.

    And encrypting multiple times with the same key will, for any reasonably secure cryptosystem*, not increase security. The security increase in 3DES was due to the increased key length (i.e. the combined length of the 3 keys used in 3DES). Using the same key over and over again does not increase the effective key length.

    *Increasing the number of rounds used internally in a cryptosystem can in some cases increase the security. Usually, cryptanalysts start breaking reduced-round versions of ciphers before breaking the full version.
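    The EDE plumbing is easy to sketch. The toy additive cipher below is an assumption for illustration only; unlike DES, its encryptions form a group, so stacking it never adds real strength. But it does show the EDE structure and the compatibility property that motivated "decrypt" in the middle: with all three keys equal, 3DES collapses to single DES.

```python
# EDE (encrypt-decrypt-encrypt) sketch with a toy 1-byte block cipher
# standing in for DES (assumption: modular-add cipher, not real DES).

BLOCK_MOD = 256

def enc(b: int, k: int) -> int:
    return (b + k) % BLOCK_MOD

def dec(b: int, k: int) -> int:
    return (b - k) % BLOCK_MOD

def triple_enc(b: int, k1: int, k2: int, k3: int) -> int:
    # 3DES-style EDE: encrypt with k1, "decrypt" with k2, encrypt with k3.
    return enc(dec(enc(b, k1), k2), k3)

def triple_dec(b: int, k1: int, k2: int, k3: int) -> int:
    return dec(enc(dec(b, k3), k2), k1)

# With k1 == k2 == k3, EDE collapses to a single encryption -- this is
# why 3DES hardware can interoperate with plain single-DES peers.
print(triple_enc(42, 7, 7, 7) == enc(42, 7))
print(triple_dec(triple_enc(42, 1, 2, 3), 1, 2, 3) == 42)
```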

  • by SmurfButcher Bob ( 313810 ) on Friday January 29, 2010 @10:10AM (#30948820) Journal

    10 years ago called, they want their ideas back - starbridgesystems.com

  • by Anonymous Coward on Friday January 29, 2010 @10:12AM (#30948848)

    Yeah, well Xilinx pursued this in the early 90s with a swappable FPGA with an open architecture. That was discontinued pretty quickly, though.

    The main issue is that apps aren't slow in the right way. Very few apps these days are in fact ALU-bound. With GPU resources and SSE, even fewer need extra ALU power, and the hard limits come from memory access speed (especially random access, as required by a great many algorithms). FPGAs don't really make this any easier (except insofar as they can offer *small* local caches which are blisteringly fast).

    Also, any application domain which does have a speed problem tends to get hardware accelerator support pretty quickly - think of H.264 encoding, for instance. Whatever can be done on an FPGA is already done in various other products. Nonetheless, a generic FPGA in every computer would reduce the need for all this custom silicon.

    Personally I like the idea, but I fear it is too "niche" to ever make it as a standard PC component.

    OTOH, what Intel etc. might consider is putting some FPGAable "special instructions" in the ISA, then attaching some FPGA resources that can be programmed in a relatively simple manner to execute some specially-needed instruction. I used to dream of this back in the early 90s when I was writing texture-mapping software ... the bit-twiddling there can take several instructions, whereas a few custom FPGA cells could do the same thing in one inst. But then, the programmer would want to implement pipelining on anything > 1 cycle, and that could be interesting to interface back to the main core.

    For the niche applications where this all makes sense, you can buy some pretty awesome development kits. Pico Computing (mentioned in TFA) look to make interesting products, but there is also the whole XGameStation thing which - thanks to its nature - can easily be repurposed.

  • by maxume ( 22995 ) on Friday January 29, 2010 @10:13AM (#30948860)

    If the encryption is properly implemented, the repeated cycles should not reveal any information, but it works better to just use a larger key (encrypting twice with two different 2-bit keys should be roughly equivalent to encrypting once with a 3-bit key, four different 2-bit keys would be equivalent to a 4-bit key, and so on, so just going up to a much larger key is probably easier).
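    The reason double encryption buys only about one extra bit is the meet-in-the-middle attack. A toy 8-bit cipher (an assumption for illustration, not any real algorithm) shows it recovering both keys with roughly 2^9 cipher operations instead of the naive 2^16:

```python
# Meet-in-the-middle sketch on a toy 8-bit block cipher (XOR + rotate;
# assumption: stands in for any block cipher under double encryption).

def enc(b: int, k: int) -> int:
    x = b ^ k
    return ((x << 3) | (x >> 5)) & 0xFF  # rotate left by 3

def dec(b: int, k: int) -> int:
    x = ((b >> 3) | (b << 5)) & 0xFF     # rotate right by 3
    return x ^ k

plain, k1, k2 = 0x41, 0x17, 0xC3
cipher = enc(enc(plain, k1), k2)

# Phase 1: 2^8 encryptions of the known plaintext, stored in a table.
forward = {}
for ka in range(256):
    forward.setdefault(enc(plain, ka), []).append(ka)

# Phase 2: 2^8 decryptions of the ciphertext, matched against the table.
# Total work ~2^9, versus 2^16 for brute-forcing both keys together.
matches = [(ka, kb) for kb in range(256)
           for ka in forward.get(dec(cipher, kb), [])]
print((k1, k2) in matches)
```

More known plaintext/ciphertext pairs would narrow `matches` down to the true key pair; this same time/memory trade-off is why 3DES with three 56-bit keys offers about 112 bits of security, not 168.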

  • What a load of crap! (Score:5, Informative)

    by ladadadada ( 454328 ) on Friday January 29, 2010 @10:21AM (#30948936) Homepage

    There has been no "crypto breakthrough".

    What they have done is create a chip that can do 1.6 billion DES operations per second (compared to 250 million for a GPU card) and then put 176 of them in a 4U server. This lowers the price-to-performance ratio by around a factor of 6 if you assume that their chip and a GPU are the same price. By the way, this press release (and research) was made by the company that manufactures the chips in question.

    The "massively parallel" algorithm (their term, Dr. Dobbs just copied it) only decrypts a little of the file and looks for ASCII integers because that's what they put in the file before encrypting it. They have not found a way of culling candidate keys without already knowing what sort of data is in the encrypted file. That would be a "crypto breakthrough".

    It's a good bit of technology with many uses beyond cryptography that has, unfortunately, been marred by some overly breathless reporting.

  • Breakthrough? meh. (Score:2, Informative)

    by gmarsh ( 839707 ) on Friday January 29, 2010 @10:24AM (#30948960)

    Brute-forcing DES doesn't require any creative algorithm to be run in parallel. You have 2^56 possible keys, split them amongst 2^n crackers and each cracker has to process 2^(56-n) keys. Not too hard.

    And loading an array of DES cracker cores onto an array of chips isn't novel either, ie:

    http://en.wikipedia.org/wiki/EFF_DES_cracker [wikipedia.org] (using ASICs)
    http://www.copacobana.org/ [copacobana.org] (using FPGAs)
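    The keyspace split the comment describes is just arithmetic; a minimal sketch (function name is made up):

```python
# Partition a 2^56 DES keyspace among 2^n workers: worker i scans the
# contiguous slice [i * 2^(56-n), (i+1) * 2^(56-n)).

def worker_range(worker_id: int, n_workers_log2: int, key_bits: int = 56):
    chunk = 1 << (key_bits - n_workers_log2)  # 2^(56-n) keys per worker
    start = worker_id * chunk
    return start, start + chunk

# 2^10 = 1024 crackers, each responsible for 2^46 keys:
ranges = [worker_range(i, 10) for i in range(1 << 10)]
print(ranges[0], ranges[-1])
```

The slices are contiguous and cover the whole keyspace with no overlap, which is why brute-forcing DES is called "embarrassingly parallel".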

  • by Zerth ( 26112 ) on Friday January 29, 2010 @10:59AM (#30949412)

    There are run-time reconfigurable FPGAs on the market, but they still take time to switch (something like 200 microseconds). Not terribly long on our scale, but not exactly fast for the CPU.

    The real problem is that FPGAs are generally more expensive for anything that you can mass-produce. FPGAs shine if you want something custom and parallel, like this cluster, and can be cost-competitive compared to getting your own silicon made for prototypes.

  • Re:What? (Score:3, Informative)

    by DigitAl56K ( 805623 ) * on Friday January 29, 2010 @12:22PM (#30950648)

    "What?" indeed.

    This is exactly the same technique as the EFF DES cracker used in 1998, except using FPGAs instead of custom chips.

    http://w2.eff.org/Privacy/Crypto/Crypto_misc/DESCracker/HTML/19980716_eff_des_faq.html#howsitwork [eff.org]

  • by keckbug ( 1525803 ) on Friday January 29, 2010 @12:56PM (#30951220)
    While I can't argue that the NSA hasn't been able to decode DES from an early point, it's not all doom and gloom. From the same Wikipedia page are several sources confirming that Lucifer, as initially conceived by IBM, was vulnerable to differential cryptanalysis, and that the NSA's modifications, back in 1974, significantly strengthened the subsequent DES algorithm. This wasn't confirmed until 1990. Before we dismiss any government involvement out of hand, let's consider all sides. Relevant passage from Wikipedia: Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique back in the 1970s. This was indeed the case: in 1994, Don Coppersmith published some of the original design criteria for the S-boxes. According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret.
  • by billstewart ( 78916 ) on Friday January 29, 2010 @01:04PM (#30951378) Journal

    It turns out that there are attacks on 3DES that mean the effective strength is still only 112 bits, not 168, even if you're using three different keys. Since two keys are just as strong as three, you can use a 128-bit or 160-bit source of randomness to generate them.

  • by fatphil ( 181876 ) on Friday January 29, 2010 @07:51PM (#30957216) Homepage
    Cracking cost is not proportional to time. Cracking cost is proportional to the amount of equipment used times the time it's being used. In order to reduce the time cost by 2^56, you need to multiply the equipment cost by 2^56 - that's what MITM does. The total real cost has remained the same - 2^168. The Biham approach, as applied by Lucks to 3DES, while reducing the time to 2^90, increases the space to 2^88, for a total cost of 2^178 - a net loss. 2 keys are clearly weaker than 3.
