Topics: Microsoft, Encryption, Security, Software, Technology

In Face of Flame Malware, Microsoft Will Revamp Windows Encryption Keys

coondoggie writes "Starting next month, updated Windows operating systems will reject encryption keys smaller than 1024 bits, which could cause problems for customer applications accessing Web sites and email platforms that use the keys. The cryptographic policy change is part of Microsoft's response to security weaknesses that came to light after Windows Update became an unwitting party to the Flame malware attacks, and affects Windows XP, Windows Server 2003, Windows Server 2003 R2, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2 operating systems."
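
For illustration, the key-length floor described in the summary amounts to a check like the following, sketched in Python using the third-party "cryptography" package. The function and error handling are illustrative only; this is not Microsoft's actual implementation.

    # Sketch: reject certificates whose RSA keys fall below the new floor.
    # Illustrative only; this is not Microsoft's code.
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa

    MIN_RSA_BITS = 1024  # keys smaller than this are rejected after the update

    def check_certificate(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey) and key.key_size < MIN_RSA_BITS:
            raise ValueError("certificate key is %d bits; the minimum is %d"
                             % (key.key_size, MIN_RSA_BITS))
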
  • Fact is, domestic and foreign govt agencies have moles working at Microsoft and Apple to insert back doors or defeat encryption at the source. This is how stuff like Flame happens. The only way out of this is to use an open source operating system where you can do your own code review, and where one guy doesn't have a bottleneck of control. Same goes for iOS vs. Android.
    • by rsmith-mac ( 639075 ) on Wednesday July 11, 2012 @06:06PM (#40622275)

      That's a pretty serious "fact". And not to sound like a smart-ass Wikipedia editor, but some kind of citation would be great.

      One can certainly believe there are moles at Microsoft/Apple. One can even reasonably assume that the United States Government has the power to compel Microsoft/Apple to do things that are in the U.S.'s best interests. However, for a foreign mole to be able to insert back doors into the Windows source code - which I would add is fairly well vetted since most governments and educational institutions have read access to the source - would be quite remarkable to the point of being unbelievable.

      • by lightknight ( 213164 ) on Wednesday July 11, 2012 @06:11PM (#40622359) Homepage

        Indeed. Why have a mole try to alter the code, and run the risk of being discovered, when you have a copy of the source, and can find existing bugs to use?

        • by ozmanjusri ( 601766 ) <.moc.liamtoh. .ta. .bob_eissua.> on Wednesday July 11, 2012 @07:32PM (#40623219) Journal

          True, but as ITWorld's Kevin Fogarty says:

          Still, the assumption seems to be true metaphorically, if not physically, so it's safer to assume Microsoft and its software have both been compromised. Given the track record of Stuxnet, Duqu and Flame for compromising everything they're aimed at, that assumption isn't even much of a stretch.

          http://www.itworld.com/security/281553/researcher-warns-stuxnet-flame-show-microsoft-may-have-been-infiltrated-nsa-cia [itworld.com]

          Personally, I use Linux because it's lower maintenance and less overhead, and gets out of my way when I'm working, but if I was a business lead, I'd certainly be avoiding Windows for anything requiring data security. The wonder is that we're not seeing users suing over compromised data/systems.

          • by Anonymous Coward

            Well, there are reasons for that.

            First, the EULA specifically gives no guarantee your data will be preserved, uncompromised, or protected. Yes, that's right: the end user has no guarantee the software he/she/it uses is safe, and Microsoft is not responsible for anything. Great, hm?
            Now, nothing is guaranteed with Linux either, but at least every end user can go through the source code to discover faults. Most end users will not do that or are not capable of doing it, but at least the chances of discovery are better.

            • "but at least every end user can go trough the sourcecode to discover faults"

              Yes, sure they can. And every user has such ability, and kernel level hacking skills. And each and every individual and company should employ people to do this.

              Yessss, right. While this is philosophically a nice point, I have to say that the real world aspect is delusional. And I'll add another point. Its open source. If Governments can insert code into MS codebases, they can do so with any open source. The fact is they might only

              • by Eythian ( 552130 )
                Well, no, you can't just insert code because something is open source. Trusted people and other reviewers look it over first.
          • Re: (Score:3, Insightful)

            Personally, I use Linux because it's lower maintenance and less overhead, and gets out of my way when I'm working, but if I was a business lead, I'd certainly be avoiding Windows for anything requiring data security. The wonder is that we're not seeing users suing over compromised data/systems.

            I know, right... What are the chances that, out of the bazillion open source projects that go into your average Linux distribution, any of them could be infiltrated by a three-letter agency from this or any other nation... Impossible.... totally ...utterly..... impossible... ..right...?

            I know some people will say, "Well, it's open source, others would have the code and just know." Just like they knew about that Debian "SSL patch"... Or any of hundreds of "innocent" security bugs later discovered by attackers…

            • How long was kernel.org compromised? Without anyone knowing?

              We don't know. At least 17 days was the initial assessment, but kernel.org has, despite claims about openness, never gotten around to writing up a post-mortem.

              Many of us would like to know what the attack vector was. How a compromised *user* account could lead to the kernel being rooted. How a relatively amateurish infection (Phalanx is an old, known, and pretty trivial rootkit) could penetrate so deeply into infrastructure run by the people most knowledgeable about Linux *in the world*.

              Was the actual attack which…

        • when you have a copy of the source, and can find existing bugs to use?

          ...how about paying someone to deliberately place a couple of such bugs?
          --
          “Forget Jesus. The stars died so you could be born.”

          • Because it's difficult to place bugs in the source code that won't be discovered during quality testing (laugh with me, but yes, some companies do do it) or by your fellow developers.

      • It's far more reasonable to believe that there are moles at Intel/AMD. Their designs are not nearly as well vetted, since most governments and educational institutions don't have access to the source. It's also the smart place to put a backdoor, since it's MUCH harder for the target to remove it if found.
        • Helloooo, have you heard of the WAKEUP WHILE POWER DOWN feature of all the newest Intel CPUs? No? Really? Where are you coming from? The Moon?
      • by icebike ( 68054 ) * on Wednesday July 11, 2012 @09:02PM (#40623903)

        Well said.

        Nothing compels you to run Microsoft's encryption APIs either. They are convenient, and well documented, so most programmers do use them, but you can write or bring your own from any platform you trust. If your platform is backdoored none of this will help you much.
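
        To make the "bring your own" point concrete, here is a minimal sketch in Python using the third-party "cryptography" package in place of the OS vendor's crypto API. The library choice is illustrative, and, as noted above, it buys you nothing if the platform underneath is backdoored.

          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.asymmetric import padding, rsa

          # Generate a key and sign entirely in userland libraries.
          private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
          message = b"data signed without the platform's crypto stack"
          signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

          # verify() raises InvalidSignature on failure, returns None on success.
          private_key.public_key().verify(signature, message,
                                          padding.PKCS1v15(), hashes.SHA256())
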

        The assertion that there are backdoors requires that no one has ever found one, and that every single person in the chain of knowledge for the last 20+ years has kept their mouth shut right into the grave.

      • Do you happen to know how COM/DCOM works? By design? And how easy it was some 10 years ago to "sniff" your IE without any trace? Don't bother, it is not a fish.
      • However, for a foreign mole to be able to insert back doors into the Windows source code - which I would add is fairly well vetted since most governments and educational institutions have read access to the source - would be quite remarkable to the point of being unbelievable.

        Yep. Anything that totally compromises the system would have been discovered by now. There are a lot of people interested in finding such things.

    • by nzac ( 1822298 )

      Yes, but if the key length is sufficiently large they lose plausible deniability.
      No one's going to believe that anyone brute-forced a 2048-bit cert key, but if you start mentioning MD5 and keys of 1024 bits or less, then it could be anyone.

    • by The Snowman ( 116231 ) on Wednesday July 11, 2012 @07:09PM (#40623025)

      The only way out of this is to use an open source operating system where you can do your own code review, and where one guy doesn't have a bottleneck of control.

      Yes and no. Open source doesn't guarantee security. For example, BIND had a long history of bugs (many of which involved security) due to poor design prior to version 9. You didn't need a mole or any malicious intent when the software was so full of big holes you could drive your car through them. OpenBSD had an alleged FBI back door [marc.info] in the news a couple years ago that had lain unnoticed for years.

      Then again, there are examples of open source uncovering security issues. A quick Google search uncovered this old one [freebsd.org] and this more recent one [freebsd.org]. By the way, if it sounds like I'm picking on BSD, I was searching for that FBI link. The other stuff just popped up. I know the various BSDs have a reputation for stability and security.

      • by Anonymous Coward

        I think the question that GP hinted at was: if you cannot do code review, can it ever be secure? Code that is closed source leaves only one way to measure security, and that's past performance. Based on past performance, you can make an educated guess about how secure something is. In that sense, new products and new market actors with closed software should never, ever, be trusted in anything requiring security. Without a past performance track record there is a complete lack of evidence to provide…

    • by cavreader ( 1903280 ) on Wednesday July 11, 2012 @08:00PM (#40623465)

      Why do people assume there is a large group of developers that actually understand OS source code and are capable of locating and correcting any problems found? Most of the people with the necessary skills to do this are already busy working for companies that actually pay them for their services. The vast majority of security issues are discovered by companies and individuals who specialize in this area and expect payment for their services. OS troubleshooting and development also requires well stocked labs to test all of the different permutations of hardware and software behaviors. The low hanging fruit has already been grabbed, which forces deeper analysis of the OS code to locate potential issues and determine the impact any proposed changes will have. Just because someone is halfway competent in application development does not mean they have the skills needed to understand OS development. OS development is quite different from application development. Just downloading the OS source code and building it can be a gigantic pain in the ass when trying to sort out all of the dependencies and compiler configurations for a particular environment.

      If you want a secure system you are better off making sure the system administrators and application developers are doing their jobs. Some of the most harmful security issues have exploited known issues that were corrected well before someone started exploiting them. And those happen because system administrators failed to stay current on their security-related service packs.

    • by betterunixthanunix ( 980855 ) on Wednesday July 11, 2012 @08:46PM (#40623795)

      The only way out of this is to use an open source operating system where you can do your own code review

      Have you ever tried to do this? I have tried, and trust me, no single person can review all of the software that runs on their system. There are a lot of places where a back door could be hiding, especially if you are talking about cryptography. Even something as seemingly innocuous as the default environment variables that programs see could be part of a back door (in case anyone does not know, the length of the environment variables can affect the alignment of memory, which can affect cache misses and potentially amplify a side channel).

      Have you reviewed the millions of lines in the Linux kernel? Have you reviewed OpenSSL? Have you reviewed GnuPG? Have you reviewed glibc, libstdc++, ld, bash, gcc, your python interpreter, your X server, your email client, your web browser, etc?

      • Have you reviewed the millions of lines in the Linux kernel? Have you reviewed OpenSSL? Have you reviewed GnuPG? Have you reviewed glibc, libstdc++, ld, bash, gcc, your python interpreter, your X server, your email client, your web browser, etc?

        There was a story a while ago with some reasonable mathematics that showed that if your CPU delivers a predictable incorrect result for _one_ 64x64 bit multiplication then this could be used to crack certain encryption. And that would be totally undetectable.
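
        The story being remembered here is most likely the RSA-CRT fault attack (Boneh-DeMillo-Lipton, later refined by Lenstra): one wrong multiplication during one signature is enough to leak the private key. A toy sketch under that assumption, in Python 3.8+ (for pow(e, -1, m)), with deliberately tiny primes for readability:

          from math import gcd

          # Toy RSA-CRT parameters; real keys use primes of 512+ bits each.
          p, q, e = 1000003, 1000033, 65537
          n = p * q
          d = pow(e, -1, (p - 1) * (q - 1))
          dp, dq = d % (p - 1), d % (q - 1)
          q_inv = pow(q, -1, p)

          def crt_sign(m, fault=False):
              sp = pow(m, dp, p)        # half of the signature, computed mod p
              if fault:
                  sp ^= 1               # one wrong bit in one multiplication chain
              sq = pow(m, dq, q)        # the other half, computed mod q
              return sq + q * (((sp - sq) * q_inv) % p)  # Garner recombination

          m = 123456789
          assert pow(crt_sign(m), e, n) == m       # a correct signature verifies

          s_bad = crt_sign(m, fault=True)          # one faulty signature...
          assert gcd((pow(s_bad, e, n) - m) % n, n) == q  # ...factors the modulus
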

    • Fact is, domestic and foreign govt agencies have moles working at Microsoft and Apple to insert back doors or defeat encryption at the source. This is how stuff like Flame happens. The only way out of this is to use an open source operating system where you can do your own code review, and where one guy doesn't have a bottleneck of control. Same goes for iOS vs. Android.

      If that rabid nonsense that you are posting about Microsoft and Apple were true, what makes you think that the Linux code that you look at is the Linux code that gets compiled and shipped?

    • by Eskarel ( 565631 )

      This is only true if your compiler hasn't been compromised [bell-labs.com], or the one that compiled it, or the one that compiled that one, and on and on.

      The reality is that no matter how clever you are, how long you spend reading the source code for your favorite operating system, or how well you understand the results of that reading, you have to trust someone some time.

      Even aside from that, the number of people who truly understand the source and design of any given OS completely could probably be counted without resorting to…

    • I'm sure MS gives them what they want. What's the worst thing that happens here? You have to buy newer products as MS changes something to look like they're on your side. I'm sure that's keeping people at MS awake at night.
  • IIRC, crypto algorithms that use keys that large qualify as munitions and are subject to ITAR export regulations. Which means a lot of people with legal licenses will be (legally, anyway) prevented from making use of any Windows feature which requires a key length of 1024 bits or more.

    This also raises the question of why they allowed shorter keys to begin with... o_o

    • And the second amendment means I have the right to encryption.
      • by Anonymous Coward

        And the second amendment means I have the right to encryption.

        Assuming that you mean the 2nd amendment to the US Constitution, then you are simply WRONG.

        The parent post said EXPORT restrictions, so it was referring to users outside the US. Users outside the US aren't covered by the US Constitution.

    • Why are we screwing with 1024-bit keys? Why aren't we using keys that are 1048576 bits?
      • Re: (Score:3, Interesting)

        by nzac ( 1822298 )

        Because doubling the key length increases the required computation time roughly sevenfold. Increasing compute time by a factor of 7^10 is a little extreme, when just doubling it is good for a while.

      • by FrankSchwab ( 675585 ) on Wednesday July 11, 2012 @06:29PM (#40622625) Journal

        Because RSA-2048 keys (twice the length of RSA-1024) take about four times as long to operate on (http://www.cryptopp.com/benchmarks.html). RSA-15360 (which is roughly the strength of AES-256 (http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf, page 63)) would take about (15360/1024)^3 = 3300 times as long as RSA-1024 (http://www.design-reuse.com/articles/7409/ecc-holds-key-to-next-gen-cryptography.html). This isn't a big deal for your local PC, where a single signature verification might take 250 ms rather than the sub-ms that it does with RSA-1024, but it has huge impacts on the servers that you're talking to - imagine increasing your server load by 330,000%.
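
        Those ratios are easy to sanity-check on your own machine; a rough benchmark sketch in Python with the third-party "cryptography" package (absolute numbers will vary; the ratio between key sizes is the point):

          import time
          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.asymmetric import padding, rsa

          for bits in (1024, 2048, 4096):
              key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
              start = time.perf_counter()
              for _ in range(50):
                  key.sign(b"benchmark", padding.PKCS1v15(), hashes.SHA256())
              elapsed = time.perf_counter() - start
              print("RSA-%d: %.2f ms per signature" % (bits, elapsed / 50 * 1000))
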

        • by Bengie ( 1121981 )
          We should just go with ECC already. http://en.wikipedia.org/wiki/Elliptic_curve_cryptography [wikipedia.org]

          Faster than RSA-1024 by a few factors, and about the strength of a 256-bit symmetric key, putting it at "universe lifetimes" for how long it takes to break.
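
          For comparison, this is what switching looks like in practice: ECDSA over the P-256 curve (roughly 128-bit symmetric strength), sketched in Python with the third-party "cryptography" package; the message and names are illustrative:

            from cryptography.hazmat.primitives import hashes
            from cryptography.hazmat.primitives.asymmetric import ec

            key = ec.generate_private_key(ec.SECP256R1())   # 256-bit curve key
            signature = key.sign(b"message", ec.ECDSA(hashes.SHA256()))
            # verify() raises InvalidSignature if the check fails.
            key.public_key().verify(signature, b"message",
                                    ec.ECDSA(hashes.SHA256()))
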
          • I agree, but how about we stop giving out patents on number theory and revoke all previous patents on crypto? Seriously, ECC is a patent minefield, and those patents are holding back attempts at deploying more efficient crypto and crypto that can be used in innovative ways (like IBE, and yes, I am looking at you, Voltage Security).
      • Why are we screwing with 1024-bit keys?

        We are not supposed to be; 2048 bits should be considered the minimum going forward.

        Why aren't we using keys that are 1048576-bits?

        It would take too long to encrypt anything, and there are diminishing returns as key sizes grow so large. If you are using more than 16384-bit keys, you are doing it wrong -- if you really need security that far into the future, you should be using ECC (which is more efficient in terms of key size) or something that is secure even in the presence of a quantum computer, like McEliece.

        Also, keep in mind that…

    • by morcego ( 260031 ) on Wednesday July 11, 2012 @06:12PM (#40622373)

      IIRC, crypto algorithms that use keys that large qualify as munitions and are subject to ITAR export regulations. Which means a lot of people with legal licenses will be (legally, anyway) prevented from making use of any Windows feature which requires a key length of 1024 bits or more.

      Maybe... if your time machine works and they are all sent back to 1997. Because since then it has no longer been restricted by ITAR and can be freely exported...

    • I recently received news that my credit card was involved in a sizeable bank hack. The take? Over $20,000 in Asia, well over the card's (previous) limit, and the bank says I'm not responsible for anything I didn't charge. Now I can prove my physical location, and my measly charges were on the other side of the world from the fraud.

      If encryption is part of this hack or any such security failures, we can't afford any more security theatre and still survive financially.
  • by Anonymous Coward

    ISPs have been rejecting CSRs with less-than-1024-bit keys for a long, long time. Looks like Windows is forcing a long-overdue change back at the server, but I suspect providers have already forced most hands earlier.

    • by sapgau ( 413511 )

      Wrong. ISPs, as in Internet Service Providers, don't care what you are doing with your bits... It's the CAs (Certificate Authorities) who are not issuing certificates for CSRs with 1024-bit keys.
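
      For anyone generating a new CSR, the baseline CAs expect today looks like this, sketched in Python with the third-party "cryptography" package (2048-bit key, SHA-256; the common name is a placeholder):

        from cryptography import x509
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa
        from cryptography.x509.oid import NameOID

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        csr = (
            x509.CertificateSigningRequestBuilder()
            .subject_name(x509.Name([
                x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com"),
            ]))
            .sign(key, hashes.SHA256())
        )
        print(csr.public_bytes(serialization.Encoding.PEM).decode())
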

  • by joeflies ( 529536 ) on Wednesday July 11, 2012 @06:12PM (#40622375)

    If only there were a standards group, like NIST, that could determine what the acceptable key lengths were.

    Oh yeah, NIST does have a publication on this topic, and stated that 1024-bit keys were no longer acceptable back in ... 2010. [nist.gov]

    By the way, is it really 1024-bit encryption keys as stated in the article? I thought that the encryption keys were symmetric and it's the signature of the public key that's 1024 bits.

    • by Goaway ( 82658 )

      Those are also encryption keys. They are most often used for signing, or for encrypting a symmetric key, but they are encryption keys.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Correct... sort of.

      For secure email, the sender encrypts with a 1024-bit public key. The recipient uses the matching private key to decrypt. Only holders of that key can do so.

      For signing, the signer encrypts with a 1024-bit private key. The verifier uses the public key to decrypt. If the verifier can do that and the hashes match, it's a legit signature. Anything encrypted with the private key is by definition signed. The 1024-bit public key isn't the signature; it's the key used to verify the message.
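
      That description maps onto textbook RSA, where decrypting and signing really are the same modular exponentiation. A toy sketch of the mechanics (tiny primes and no padding, purely for illustration; real systems always pad):

        # Textbook RSA with toy numbers (Python 3.8+ for pow(e, -1, m)).
        p, q, e = 61, 53, 17
        n = p * q                           # public modulus (3233)
        d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

        # Encrypt with the public key; decrypt with the private key.
        m = 65
        c = pow(m, e, n)
        assert pow(c, d, n) == m

        # Sign (a hash) with the private key; verify with the public key.
        h = 123                             # stand-in for a message hash
        s = pow(h, d, n)
        assert pow(s, e, n) == h
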

  • by Anonymous Coward

    AFAIK, the problem with Flame was a trust problem, not a bit-strength problem. They allowed Terminal Services certificates signed by Microsoft to be used to sign application code, and the certificate chain still passed. Presumably those TS certs could have been 2048-bit or higher.
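
    In miniature, the check that should have stopped this is the Extended Key Usage check on the certificate. A sketch in Python with the third-party "cryptography" package (the function name is illustrative):

      from cryptography import x509
      from cryptography.x509.oid import ExtendedKeyUsageOID

      def allows_code_signing(cert):
          try:
              eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
          except x509.ExtensionNotFound:
              return False  # no EKU extension at all: refuse, don't assume
          return ExtendedKeyUsageOID.CODE_SIGNING in eku.value
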

  • I had to update my certs 2 years ago to meet PCI compliance. Honestly, I'm shocked vendors still allow 1024-bit certs to be distributed.

    The tin-foil-hat guy inside me says this is great for vendors, who will get to charge fees to upgrade everyone's certs....
