In Face of Flame Malware, Microsoft Will Revamp Windows Encryption Keys 100
coondoggie writes "Starting next month, updated Windows operating systems will reject encryption keys smaller than 1024 bits, which could cause problems for customer applications accessing Web sites and email platforms that use the keys. The cryptographic policy change is part of Microsoft's response to security weaknesses that came to light after Windows Update became an unwitting party to Flame Malware attacks, and affects Windows XP, Windows Server 2003, Windows Server 2003 R2, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2 operating systems."
Well at least they're making a change (Score:5, Informative)
If only there was a standards group, like NIST, that could determine what the acceptable key lengths were.
Oh yeah, NIST does have a publication on this topic and stated that 1024 bit keys were no longer acceptable back in ... 2010. [nist.gov]
By the way, is it really 1024-bit encryption keys as stated in the article? I thought the encryption keys were symmetric and it's the signature on the public key that's 1024 bits.
Re:Er, export restrictions? (Score:5, Informative)
Because RSA-2048 keys (twice the length of RSA-1024) take about four times as long to operate on (http://www.cryptopp.com/benchmarks.html). RSA-15360 (which is roughly the strength of AES-256 (http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf, page 63)) would take about (15360/1024)^3 = 3300 times as long as RSA-1024 (http://www.design-reuse.com/articles/7409/ecc-holds-key-to-next-gen-cryptography.html). This isn't a big deal for your local PC, where a single signature verification might take 250 ms rather than the sub-ms that it does with RSA-1024, but it has huge impacts on the servers that you're talking to - imagine increasing your server load by 330,000%.
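The parent's numbers can be reproduced with a quick sketch. Verification with a small public exponent grows roughly quadratically with modulus size, while private-key operations grow roughly cubically; the exponents below are rule-of-thumb assumptions, not measurements:

```python
# Rule-of-thumb RSA cost scaling as the modulus grows:
# verification (small public exponent) ~ quadratic,
# private-key ops (sign/decrypt) ~ cubic. Assumptions, not benchmarks.

def slowdown(bits, exponent, baseline=1024):
    """Estimated slowdown factor versus a 1024-bit baseline."""
    return (bits / baseline) ** exponent

for bits in (2048, 15360):
    print(f"RSA-{bits}: verify ~{slowdown(bits, 2):.0f}x, "
          f"sign/decrypt ~{slowdown(bits, 3):.0f}x vs RSA-1024")
```

Under these assumptions RSA-2048 verification is ~4x slower and RSA-15360 private-key operations come out at 15^3 = 3375x, close to the parent's ~3300x figure.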
Re:Moles at Microsoft and apple (Score:5, Informative)
Yes and no. Open source doesn't guarantee security. For example, BIND had a long history of bugs (many of which involved security) due to poor design prior to version 9. You didn't need a mole or any malicious intent when the software was so full of big holes you could drive your car through them. OpenBSD had an alleged FBI back door [marc.info] in the news a couple years ago that had lain unnoticed for years.
Then again, there are examples of open source uncovering security issues. A quick google search uncovered this old one [freebsd.org] and this more recent one [freebsd.org]. By the way, if it sounds like I'm picking on BSD, I was searching for that FBI link. The other stuff just popped up. I know the various BSDs have a reputation for stability and security.
Re:Well at least they're making a change (Score:2, Informative)
Correct... sort of.
For secure email, the sender encrypts with a 1024-bit public key. The recipient uses the matching private key to decrypt; only holders of that key can do so.
For signing, the signer encrypts a hash of the message with a 1024-bit private key. The verifier uses the public key to decrypt it. If the verifier can do that and the hashes match, it's a legit signature. Anything encrypted with the private key is by definition signed. The 1024-bit public key isn't the signature; it's the key used to verify that the message was encrypted with the matching private key, since only holders of the private key can make messages decryptable by the public key. There is no separate 'signature' so to speak, only messages that have been encrypted with your private key. The 'signature' part comes from the idea that the message was encrypted with your private key and that only you have it.
When you send a CSR to be signed by say Verisign, you are sending your public key. They use one of their private keys to sign your public key. Anyone can use Verisign's public key to recover your public key which is then used to send encrypted messages to you. You know Verisign vouches that your public key is really your public key so everyone knows only you will have the private key.
In theory anyway...
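A minimal sketch of the "sign by encrypting with the private key" idea above, using textbook RSA with toy primes. Everything here is for illustration only: the primes are absurdly small and real signatures also require padding (e.g. PSS), which this omits:

```python
# Toy textbook-RSA signature demo: "signing" raises a hash to the
# private exponent d mod n; "verifying" raises the signature to the
# public exponent e and compares hashes. Illustration only.
import hashlib

p, q = 61, 53                  # toy primes (never use sizes like this)
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def toy_hash(message: bytes) -> int:
    # Reduce a real hash into the tiny modulus so the demo works.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(toy_hash(message), d, n)       # "encrypt" hash with private key

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == toy_hash(message)  # "decrypt" with public key

sig = sign(b"hello")
print(verify(b"hello", sig))     # True
print(verify(b"tampered", sig))  # almost certainly False
```

Anyone holding the public key `(n, e)` can run `verify`, but only the holder of `d` could have produced a signature that checks out, which is exactly the point the parent is making.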
For SSL sessions, the following happens (in essence - the basic idea holds though I may be wrong on some specifics):
1.) You get the site's signed (encrypted with Verisign's private key) public key (1024-bit, valid for a year or two).
2.) You decrypt the signed key with Verisign's public key (much longer length and validity), obtaining the site's public key and proof of the site's identity.
3.) Algorithms, protocols, and such are negotiated: the highest common denominator for all parties.
4.) You generate symmetric session keys (RC4, 128-bit perhaps), encrypt them with the site's public key, and send them along.
5.) The site decrypts your message with its private key to obtain the session keys.
6.) The session keys are used for traffic for the remainder of the session.
A 1024-bit key is used one way or another in each of these cases, but it is not the only key length involved.
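The session-key exchange described above can be sketched end to end with toy stand-ins. The tiny RSA modulus and the SHA-256-based XOR keystream below are illustrative assumptions, not the algorithms TLS actually negotiates:

```python
# Toy sketch of the hybrid handshake: the client wraps a random
# symmetric session key with the site's (toy) RSA public key; the site
# unwraps it with its private key; both sides then use the symmetric
# key for traffic. Tiny textbook RSA + a hash keystream, demo only.
import hashlib, secrets

# Site's toy RSA key pair (illustration only).
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

# Client generates a session key and wraps it for the site.
session_key = secrets.randbelow(n - 2) + 2
wrapped = pow(session_key, e, n)

# Site unwraps the session key with its private key.
recovered = pow(wrapped, d, n)
assert recovered == session_key

def keystream_xor(key: int, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream = hashlib.sha256(str(key).encode()).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

# Both sides now share the session key and encrypt traffic with it.
ciphertext = keystream_xor(session_key, b"GET / HTTP/1.1")
plaintext = keystream_xor(recovered, ciphertext)
print(plaintext)  # b'GET / HTTP/1.1'
```

The expensive asymmetric operation happens once, to move the session key; everything after that uses the fast symmetric cipher, which is why the handshake is structured this way.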
Re:Moles at Microsoft and apple (Score:5, Informative)
The only way out of this is to use an open source operating system where you can do your own code review
Have you ever tried to do this? I have tried, and trust me, no single person can review all of the software that runs on their system. There are a lot of places where a back door could be hiding, especially if you are talking about cryptography. Even something as seemingly innocuous as the default environment variables that programs see could be part of a back door (in case anyone does not know, the length of the environment variables can affect the alignment of memory, which can affect cache misses and potentially amplify a side channel).
Have you reviewed the millions of lines in the Linux kernel? Have you reviewed OpenSSL? Have you reviewed GnuPG? Have you reviewed glibc, libstdc++, ld, bash, gcc, your python interpreter, your X server, your email client, your web browser, etc?
Re:Moles at Microsoft and apple (Score:5, Informative)
Well said.
Nothing compels you to run Microsoft's encryption APIs either. They are convenient and well documented, so most programmers do use them, but you can write or bring your own from any platform you trust. If your platform is backdoored, none of this will help you much.
The assertion that there are backdoors, despite no one ever finding one and despite every single person in the chain of knowledge for the last 20+ years keeping their mouth shut right into the grave, strains credulity.
Re:Moles at Microsoft and apple (Score:4, Informative)
I wonder how long it would take a modern Cray or a cluster of 1,000 computers to crack a 2048-bit cert?
Throwing 1000 computer instances at the problem does not make the difference you think it does.
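A rough sketch of why: splitting a brute-force search across N machines only subtracts log2(N) bits of effective work. For simplicity this uses a 128-bit symmetric keyspace (RSA-2048's strength is commonly rated around 112 bits), and the per-machine guess rate is a made-up assumption:

```python
import math

# Splitting brute force across N machines divides time by N, which
# removes only log2(N) bits of effective security.
RATE = 1e9        # assumed guesses/sec per machine (made up)
machines = 1000

def effective_bits(key_bits, n_machines):
    return key_bits - math.log2(n_machines)

print(f"1000 machines cut a 128-bit search to ~{effective_bits(128, machines):.0f} bits")
years = 2 ** 128 / (RATE * machines) / (3600 * 24 * 365)
print(f"~{years:.1e} years at these rates")
```

A thousand machines buys you just under 10 bits, which is why the answer to the parent's question is "effectively forever" either way.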
Re:Moles at Microsoft and apple (Score:5, Informative)
Not to mention:
http://www.schneier.com/crypto-gram-9909.html#NSAKeyinMicrosoftCryptoAPI [schneier.com]
Re:Moles at Microsoft and apple (Score:5, Informative)
Even if you know that it's the square of the power required to crack 1024-bit certs, which themselves are the square of the power to crack 512-bit certs, which are themselves the square of the power to crack 256-bit certs... when you are ignorant of how much power THAT is, you are still just guessing.
No organization on earth considers breaking 256-bit hashes/encryption trivial. That's a 1 followed by a whopping 77 zeros. That's only about 3 zeros away from the number of baryons in the entire visible universe.
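Those figures are easy to check, treating "256-bit" as a 2**256 brute-force keyspace and using the commonly quoted ~10**80 estimate for baryons in the observable universe:

```python
# Checking the arithmetic above: 2**256 has 78 digits (a 1 followed by
# 77 zeros, roughly), and ~10**80 baryons is only a few hundred times
# larger than that keyspace.
keyspace = 2 ** 256
print(len(str(keyspace)))    # 78 digits
print(10 ** 80 // keyspace)  # only a few hundred times more baryons than keys
```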
Re:Moles at Microsoft and apple (Score:4, Informative)
Except the OpenBSD back-door claim was never proven and was dismissed by basically everyone; subsequent audits of the code and check-ins haven't revealed anything suspicious.
It was basically someone who wanted to get their name in the papers, that's all.