
Google Moves Post-Quantum Encryption Timeline Up To 2029 (cyberscoop.com) 68

Google has moved up its post-quantum encryption migration target to 2029. "This new timeline reflects migration needs for the PQC era in light of progress on quantum computing hardware development, quantum error correction, and quantum factoring resource estimates," said vice president of security engineering Heather Adkins and senior staff cryptography engineer Sophie Schmieg in a blog post. CyberScoop reports: Google is replacing outdated encryption across its devices, systems and data with new algorithms vetted by the National Institute of Standards and Technology. Those algorithms, developed over a decade by NIST and independent cryptographers, are designed to protect against future attacks from quantum computers. While Google has said it is on track to migrate its own systems ahead of the 2035 timeline provided in NIST guidelines, last month leaders at the company teased an updated timeline for migration and called on private businesses and other entities to act more urgently to prepare.

Unlike the federal government, private businesses face no mandate to migrate to quantum-resistant encryption on any particular schedule, or even to do so at all. Adkins and Schmieg said the hope is that other businesses will view Google's aggressive timeframe as a signal to follow suit. "As a pioneer in both quantum and PQC, it's our responsibility to lead by example and share an ambitious timeline," they wrote. "By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry."


  • NIST algorithms (Score:4, Interesting)

    by Valgrus Thunderaxe ( 8769977 ) on Friday March 27, 2026 @07:17PM (#66065418)
    Wasn't NIST shown to have been compromised by the NSA? Is this still the case?
    • NSA pressured NIST to include a compromised component (the Dual_EC_DRBG random number generator) in its elliptic-curve crypto standards.

      It allowed anyone holding a private key (presumably the NSA) to greatly reduce the difficulty of breaking anything that used that part of the suite.
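
      For the curious, the trapdoor shape is simple enough to demo. A minimal
      sketch in Python on a made-up toy curve (NOT the real Dual_EC_DRBG
      parameters; real outputs are also truncated, so the real attack
      additionally brute-forces ~16 missing bits per block):

          from math import gcd

          p, a, b = 1019, 3, 5   # toy curve y^2 = x^3 + 3x + 5 mod 1019

          def add(P1, P2):
              # affine point addition; None is the point at infinity
              if P1 is None: return P2
              if P2 is None: return P1
              (x1, y1), (x2, y2) = P1, P2
              if x1 == x2 and (y1 + y2) % p == 0:
                  return None
              if P1 == P2:
                  lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
              else:
                  lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
              x3 = (lam * lam - x1 - x2) % p
              return (x3, (lam * (x1 - x3) - y1) % p)

          def mul(k, Pt):
              R = None
              while k:
                  if k & 1: R = add(R, Pt)
                  Pt = add(Pt, Pt); k >>= 1
              return R

          def order(Pt):
              m, T = 1, Pt
              while T is not None:
                  T = add(T, Pt); m += 1
              return m

          # find a base point of non-tiny order by brute force (toy only)
          pts = ((x, y) for x in range(p) for y in range(p)
                 if (y * y - x * x * x - a * x - b) % p == 0)
          P = next(pt for pt in pts if order(pt) > 10)
          n = order(P)

          # the backdoor: whoever chose Q also knows d with P == d*Q
          d = next(k for k in range(2, n) if gcd(k, n) == 1)
          Q = mul(pow(d, -1, n), P)

          def step(s):
              # one Dual_EC-style step: new state, then one output block
              T = mul(s % n, P)
              if T is None: return None
              s_new = T[0]
              U = mul(s_new % n, Q)
              return None if U is None else (s_new, U[0])

          # generator: secret seed, two consecutive outputs r1, r2
          for s0 in range(2, n):
              t1 = step(s0)
              t2 = step(t1[0]) if t1 else None
              if t1 and t2: break
          (s1, r1), (s2, r2) = t1, t2

          # attacker: knows only d and the single output r1
          y = next(v for v in range(p) if (v * v - r1**3 - a * r1 - b) % p == 0)
          R = (r1, y)                # lift r1 back onto the curve (= +/- s1*Q)
          s2_guess = mul(d, R)[0]    # d*(s1*Q) = s1*(d*Q) = s1*P -> next state
          assert s2_guess == s2
          assert mul(s2_guess % n, Q)[0] == r2   # r2 predicted from r1 alone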

    • Re:NIST algorithms (Score:5, Informative)

      by gweihir ( 88907 ) on Friday March 27, 2026 @11:06PM (#66065620)

      No idea. But what we have in "post quantum" crypto is all laughably weak against conventional attacks and laughably unverified. We have had finalists of competitions broken with low effort (one laptop) and the like. Moving to these algorithms is an excessively bad idea.

      • No idea. But what we have in "post quantum" crypto is all laughably weak against conventional attacks and laughably unverified. We have had finalists of competitions broken with low effort (one laptop) and the like. Moving to these algorithms is an excessively bad idea.

        There were several finalists for the post-quantum cryptographic submissions and one (SIKE) was found to have a mathematical flaw and was dropped from further consideration. One flawed approach does not make all of the others "laughably weak against conventional attacks and laughably unverified."

        NIST would not be actively testing and then publishing finalized standards if the new algorithms could be broken by a laptop over a single weekend.

        https://www.nist.gov/news-even... [nist.gov]

        • by gweihir ( 88907 )

          You have an irrational trust in an agency that has published intentionally compromised algorithms before. Well, there are tons of fools around. You fit right in.

          • Because 'some Slashdot user' is trustworthy.

          • You have an irrational trust in an agency that has published intentionally compromised algorithms before. Well, there are tons of fools around. You fit right in.

            The only one that is known for sure is the ECC-based PRNG. Nothing has been disclosed about their choice of elliptic curve parameters themselves for the NIST-approved elliptic curves, but so far no indications have been found that they may have selected them deviously - and the fact remains that ECC was first discovered in academia: the NSA were caught by surprise on this. As for DES, they tinkered with its initial design on NSA's advice all right, and it was only decades later that it was revealed why they

      • There's also the obligatory woof! woof! woof! comment [iacr.org] that's mandatory in any discussion of quantum.
      • by bjoast ( 1310293 )
        Which is why the general direction is towards hybrid schemes, where non-PQC algorithms are combined with the new PQC algorithms.
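
        Schematically it looks like the sketch below ("mlkem" here is a
        hypothetical stand-in for a real ML-KEM binding such as liboqs's;
        the X25519/HKDF parts use the pyca/cryptography package):

          # hybrid KEM sketch: X25519 (classical) + ML-KEM (post-quantum)
          from cryptography.hazmat.primitives.asymmetric.x25519 import (
              X25519PrivateKey,
          )
          from cryptography.hazmat.primitives.kdf.hkdf import HKDF
          from cryptography.hazmat.primitives import hashes
          import mlkem  # hypothetical: keygen(), encap(pk), decap(sk, ct)

          # receiver publishes one classical and one PQC public key
          x_priv = X25519PrivateKey.generate()
          kem_pk, kem_sk = mlkem.keygen()

          # sender derives two independent shared secrets
          e_priv = X25519PrivateKey.generate()
          ss_classical = e_priv.exchange(x_priv.public_key())
          kem_ct, ss_pq = mlkem.encap(kem_pk)

          # both secrets feed one KDF, so recovering the session key
          # requires breaking BOTH X25519 and ML-KEM
          session_key = HKDF(
              algorithm=hashes.SHA256(), length=32, salt=None,
              info=b"hybrid-x25519-mlkem",
          ).derive(ss_classical + ss_pq)

        The receiver recomputes the same two secrets from its own private
        keys (x_priv.exchange(...) and mlkem.decap(kem_sk, kem_ct)); this
        concatenate-then-KDF shape is essentially what TLS's hybrid
        X25519MLKEM768 group does.
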
        • by gweihir ( 88907 )

          These do not work and cannot work. The core needs to be 100% quantum with full entanglement or you are not getting the gains. And that gets you two really bad bottlenecks: number of effective qubits (with very likely exponentially increasing effort) and length of the computation (with very likely exponentially increasing effort).

          All "hybrid" schemes get you is that it becomes easier to hide how utterly pathetic real-world QC performance is. I mean, factoring 35 is not even a challenge for a dog-slow, ultra l

      • They're fine against conventional attacks; I don't know where you're getting that idea from. It's true that one finalist turned out to have a catastrophic flaw, but that doesn't mean they're generally worthless. What it actually means is that we have less confidence in their primitives than we do in conventional primitives, since they haven't been around for nearly as long. Whether you think the increased risk of a break being found is more or less than the risk of someone attacking your conventional algorithms

        • by gweihir ( 88907 )

          Sorry, but that is a KISS violation. If there is no credible threat (and there is none from QCs at this time), it is utterly irrational and decreases security to add countermeasures for non-credible threats. A push to do so does raise a couple of questions though, like why the push exists, and no good answers are given. And that makes the whole endeavor a giant red flag.

          As to resistance of these algorithms, it will require something like 20-30 years of research to bring them up to current classic

          • If you see no threat from quantum computers, then remaining completely classical for simplicity and performance is indeed the logical choice. But whether there's a threat from them is a very hard question to evaluate, since predicting technological development is very difficult, so one shouldn't jump to suspicion simply because other people have made the opposite judgement.
            • by gweihir ( 88907 )

              The facts are _really_ clear: The current factorization record without trickery and deception is 21. Might as well predict that "magic" will break classical crypto in any meaningful time with about the same level of justification.

      • No idea. But what we have in "post quantum" crypto is all laughably weak against conventional attacks and laughably unverified.

        This isn't true.

        Yes, one of the finalists was broken, utterly. There are no successful attacks against ML-DSA, ML-KEM or SLH-DSA, and they have good security proofs. Note that "successful attack" and "security proof" both have specialized meanings to cryptographers. A successful attack is one that reduces the security even a little from what it theoretically should be, even if the reduction still leaves the algorithm completely unbreakable in practice. A security proof is a proof that the construction i

    • Wasn't NIST shown to have been compromised by the NSA? Is this still the case?

      No.

      What was shown is that one random number generation algorithm was found to have been backdoored at the NSA's request. There is no evidence that this has ever happened with any of the other NIST-standardized algorithms, and it's also known that the NSA has stepped in to strengthen other NIST algorithms (notably DES -- the NSA both strengthened it by improving the S-boxes and weakened it by asking for a smaller key size, though that wasn't a secret weakening; everyone understands the implic

  • I've started to see warnings in the terminal when using keyed SSH, something along the lines of "vulnerable to store-now-decrypt-later attacks". I assume this is due to not using elliptic-curve codes in my PK generation?

    • by Entrope ( 68843 ) on Friday March 27, 2026 @07:56PM (#66065462) Homepage

      Elliptic curve crypto is vulnerable to the same kind of theoretical quantum attacks as integer-factorization cryptography. You currently need to use algorithms with unfortunate trade-offs (large public keys or large signatures/key agreements) to get resistance to quantum attacks.

      Assuming quantum computers ever factor numbers larger than 21 without cheating or falling back to deterministic algorithms, at least.

      • by Tomahawk ( 1343 ) on Friday March 27, 2026 @08:59PM (#66065510) Homepage
        The public/private key part can be big and slow, as it's only used during the initial handshaking and login anyway. I'm not going to notice an extra couple of tenths of a second logging in.

        After that everything is (much much faster) symmetric encryption.

        The symmetric algorithm needs to resist quantum attacks too, though. AES-256 is still considered quantum-resistant, for now at least, so we're good.
        • by gweihir ( 88907 )

          AES-256 will remain quantum resistant forever. QCs only get you a halving of the bits for block-ciphers. Hence AES-256 gets you a computational safety of 2^128 and that is unbreakable in this universe, even more so with dog-slow QCs that cannot do long computations and are about the most unsuitable mechanism imaginable for brute-forcing anything. The real threat to AES is conventional attacks getting within reach (reducing the effective key-length to something like 80 bit), but AES is built on top
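
          The back-of-the-envelope arithmetic behind that halving (Grover's
          algorithm needs ~sqrt(N) serial evaluations, and it parallelizes
          badly: M machines only buy a factor of sqrt(M)):

            # Grover search over a k-bit keyspace takes ~2^(k/2) serial
            # oracle calls, so a cipher keeps k/2 bits of security.
            for key_bits in (128, 192, 256):
                print(f"AES-{key_bits}: ~2^{key_bits // 2} Grover "
                      f"iterations ({2 ** (key_bits // 2):.2e})")
            # AES-128 -> 2^64 (uncomfortable), AES-256 -> 2^128 (hopeless);
            # 100 QPUs running Grover only buy a ~10x speedup.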

          • AES-256 will remain quantum resistant forever. QCs only get you a halving of the bits for block-ciphers.

            These statements are too strong -- in both directions!

            First, although Grover's algorithm is proven to be the optimal quantum algorithm for generalized search, you don't necessarily need a generalized search algorithm to break a block cipher. Block ciphers have internal structure that may be exploitable by quantum algorithms. Indeed, researchers have made some progress in designing quantum algorithms to break Feistel network-based ciphers (which AES is not, but the previous standard cipher, DES, is). The

            • "quantum resistant forever" is too strong.

              I've only taken fairly general master's level courses in quantum information and regular cryptography, but I agree with this overall sentiment. My math professors used to say that no asymmetric encryption scheme has been proved unbreakable; we only know that they haven't been broken so far. Assuming something is unbreakable is like saying Fermat's last theorem is unprovable — until one day it's proved. So to me "post quantum cryptography" is essentially a buzzword.

              • "quantum resistant forever" is too strong.

                I've only taken fairly general master's level courses in quantum information and regular cryptography, but I agree with this overall sentiment. My math professors used to say that no asymmetric encryption scheme has been proved unbreakable; we only know that they haven't been broken so far. Assuming something is unbreakable is like saying Fermat's last theorem is unprovable — until one day it's proved. So to me "post quantum cryptography" is essentially a buzzword.

                Yes, but... I think you're confusing some things. We're talking about AES, which is a symmetric encryption algorithm, not asymmetric.

                Of course, no cryptographic construction has been "proven" secure, in the sense that mathematicians use the word "prove", neither symmetric nor asymmetric. Asymmetric schemes have an additional challenge, though, which is that they have to have some sort of "trapdoor function" that mathematically relates a public key and a private key, and the public key has to be published to the a

              • by gweihir ( 88907 )

                You seem to have slept through that course, because you do not even have the very basics right. First, for the ElGamal asymmetric scheme, there is a security proof. This proof does not extend to all possible attacks, but is pretty strong and the limitations can be fixed. Second, AES is a _symmetric_ algorithm. This is so basic that I must conclude you would have failed that course if there was any real examination. And lastly, you seem to have a reading disability, because I nowhere claimed that AES is un

        • by Entrope ( 68843 )

          Sure, it is not a big problem for SSH. It is a problem when you connect to a web site, especially as certificate lifetimes get shorter: you need the whole certificate chain from a root (that your browser trusts) to the web server, which means at least two public keys and signatures and often more.

          The NIST-approved post-quantum options and PK/sig sizes (in bytes, for "security level 1", which is the lowest) are CRYSTALS-Dilithium2 (1312 / 2420), Falcon-512 (897 / 666, but computationally expensive) or SPHIN
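
          To see what that does to a handshake, a rough tally for a
          hypothetical two-certificate chain moved entirely to Dilithium2
          (sizes from the figures above):

            # bytes on the wire: 2 public keys, 2 certificate signatures,
            # plus the server's own handshake (CertificateVerify) signature
            pk, sig = 1312, 2420
            print(2 * (pk + sig) + sig)  # 9884, vs a few hundred for P-256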

      • by gweihir ( 88907 )

        It is far worse: ECC uses significantly shorter keys, hence the QCs needed to break it are exponentially easier to build. May still be out of reach, but the safety margins are much smaller.

        The reality of things is that unless you have stuff that needs to stay secret for, say, > 20 years, classical algorithms with currently recommended key-lengths are entirely fine. And there is not a lot of things that really need to stay secret that long. The whole push to actually put post-quantum crypto in production

        • by Entrope ( 68843 )

          Yup. I'm waiting for any quantum computer to actually break a non-trivial public key, even of a laughably small order (like RSA130, which was factored by classical computers 30 years ago). Lots of people get famous for papers based on theoretical quantum gates that nobody knows how to realize.

          • by gweihir ( 88907 )

            Well, my current estimate is +5 effective qubits every 50 years. That linear scaling may be massively overestimating things, chances are the real scaling is inverse exponential, but let's assume it is linear for the moment. RSA130 needs around 450 effective qubits in a long calculation. We are currently able to factor 21, i.e. 5 bits. Hence we may see RSA130 fall to a QC in something like 4500 years.

            I have absolutely no problem with QCs as physics experiments and for advancing some areas of Math. But pushing the
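
            Taking the comment's own tongue-in-cheek assumptions at face
            value, the arithmetic does check out:

              # linear extrapolation under the stated assumptions
              qubits_now, qubits_needed = 5, 450   # factoring 21 vs RSA130
              rate = 5 / 50                        # qubits per year
              print((qubits_needed - qubits_now) / rate)   # 4450.0 years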

    • That's in all of the latest versions of ssh, which is or can be installed on more than just Macs. For example, the latest version of RHEL has that warning too.

      The warning can be disabled through the WarnWeakCrypto option in ssh_config, if you wish. Obviously that doesn't actually "fix" the issue being warned about, but it hides the warning and stops it continually popping up.
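
      For example, in ssh_config (WarnWeakCrypto requires a recent OpenSSH;
      the hybrid KEX names below depend on your version and build, and
      "old-server" is a made-up host):

        # silence the warning for one legacy host (hides, doesn't fix)
        Host old-server
            WarnWeakCrypto no

        # better: prefer post-quantum hybrid key exchange where supported
        Host *
            KexAlgorithms ^mlkem768x25519-sha256,sntrup761x25519-sha512@openssh.com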

      • by gweihir ( 88907 )

        The sad thing is that at this time, disabling the warning is probably the most secure thing to do. Of course, that comes with other problems.

    • by gweihir ( 88907 )

      Probably. The whole thing is deeply irrational (or active sabotage by the USA), hence ignoring the alerts may be the safer choice. But it may also not be. The whole thing is a complete mess and there seem to be real surveillance-fascism interests at work in the background.

  • by battingly ( 5065477 ) on Friday March 27, 2026 @08:17PM (#66065482)

    I would imagine there are already many encrypted password repositories, acquired through breaches, just waiting to be cracked when the quantum hardware is up to the task. There's not much that the new encryption algorithms can do about that particular issue.

    • Unless quantum computing becomes cheap and comparatively widely available quite quickly after becoming viable, passwords seem like they'll be a manageable problem. Nobody likes rotating them; but it's merely tedious to do, and the passwords themselves are of zero interest unless they are still being accepted. If it does go from 'not possible' to 'so cheap we can just churn through them in bulk' overnight, that could ruin some people's days; but if there's any interval of 'nope, the fancy physics machine in the
    • by gweihir ( 88907 ) on Friday March 27, 2026 @11:05PM (#66065618)

      Quantum hardware may never be up to the task. They cannot even factorize 35 at this time (https://eprint.iacr.org/2025/1237). The whole thing is a mirage and a bad idea that refuses to die.

      Incidentally, even if they ever become able to do tasks of meaningful size, QCs are completely unsuitable for reversing hashes and that is what cracking passwords needs.

      • by parityshrimp ( 6342140 ) on Friday March 27, 2026 @11:25PM (#66065638)

        This is very true. gweihir is 100% correct: quantum computing isn't computing and is never going to work.

        The real tragedy is all the companies and scientists spending so much time and money researching this technology and improving the state of the art. They could all save themselves a whole lot of wasted effort by listening to gweihir and not bothering. A shame.

        In other news, that newfangled device Bardeen and Brattain just cooked up is a mere laboratory curiosity and has abysmal gain. Call me when it has a gain of over 100, is smaller than 10 mm^3, and I can buy one for less than a nickel.

        • by gweihir ( 88907 )

          I guess you think Peter Gutmann has no clue as well. You are a fool.

          • Hey, I'm agreeing with you. I'm completely flabbergasted that all of the experts working in this field think they're accomplishing something and haven't sought out the superior knowledge that you and Peter Gutmann possess. It's disappointing that all of these companies and smart people are wasting so much time and so many resources as a result.

      • QCs are completely unsuitable for reversing hashes and that is what cracking passwords needs.

        Translation: we don't currently have a quantum algorithm for reversing hashes. But there was a time, not that long ago, when we didn't have a quantum algo for factorization either. However, I don't expect to see a quantum algo for hash reversal any time soon, because the whole problem of reversing hashes is pretty complex.

        Factorization as a classical problem is essentially trivial, in that there are very simple classical algorithms for it. They just take a lot of time to run. But coming up with an effic
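
        The "trivial but slow" part is easy to make concrete. Trial division
        is a complete, correct classical factoring algorithm in a few lines;
        it just needs ~sqrt(n) steps, which is hopeless for RSA-sized n:

          def factor(n: int) -> list[int]:
              # returns the prime factorization of n, smallest first
              factors, d = [], 2
              while d * d <= n:
                  while n % d == 0:
                      factors.append(d)
                      n //= d
                  d += 1
              if n > 1:
                  factors.append(n)
              return factors

          print(factor(21))           # [3, 7] -- the current quantum record
          print(factor(1009 * 1013))  # instant classically at toy sizes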

    • QC is very marginally better against symmetric algorithms than traditional computers. Plus it remains to be seen whether QCs will ever amount to anything other than lab curiosities with no practical applications.
  • by Uldis Segliņš ( 4468089 ) on Friday March 27, 2026 @10:27PM (#66065578)
    This looks like shifting the goal posts after realizing that they can't reach the quantum computer. Any 5 years now. Just like fusion, just like AGI, just like self-driving and colonizing Mars, any day now. Show me a practically working one. Show me its build method's scalability. Show me that your machine can solve anything more than a few very narrow use-case problems. Haven't seen any proof yet. Until you do the homework, I'm not gonna believe one non-quantum bit of your claims, regardless of your size. It ceases to be magic when you look at the details.
    • The self-driving thing is sorta happening.
      • Even BYD is experiencing a drop in sales because of highly publicized problems with unexpected stops and accelerations by their "God's Eye" self-driving, which can use two LIDAR units. Apparently God never learned to drive. Or wants to kill people, as evidenced by past history.

        • I guess it's Waymo vs Apollo right now, and from what I can tell, both are pretty good when geofenced and nothing weird happens.
          • Well, we now know Waymo is all smoke and mirrors. They have remote drivers in the Philippines. Don't know anything about Apollo.

            • We all know that Waymos had remote drivers for oh shit moments. We all know that Waymo is not 100% there yet, but it's close and getting closer. It's not like your Waymo is 100% or even 5% driven by some Filipino. 19,234 miles per disengagement according to Waymo, and getting better. One remote driver per 40 vehicles. "pretty good when geofenced and nothing weird happens". - what part of that is confusing to you?
      • by gweihir ( 88907 )

        Yes. Not quite there, may take another 20 years or so, but I had an opportunity to see where they were 35 years ago, and they were already deep in the details back then. But the thing is, self-driving is a classical problem, and classical problems can be divided, parallelized, special-cased, maps put into databases, etc. Self-driving is conceptually _easy_. The practical aspects are not. None of that is true for quantum computations. Quantum computations are all-or-nothing and you cannot bre

          • There needs to be a breakthrough for quantum to happen, yes, but there also needs to be a lot of other work done. But then, thirty years ago, you didn't have LED light bulbs, and they've taken over almost completely. In the meantime, I know that Curtis Priem is trying to do for quantum what he helped to do for processing.
          • by gweihir ( 88907 )

            You cannot know whether there will be a breakthrough. At this time, all that is known says there will not be one.

            As to breakthroughs having happened in other areas, that is not an indicator that one will happen here. That is just a form of survivorship bias.

            • Finding a name for a bias doesn't make the bias actually present. I expect that there will be a breakthrough in quantum computing because there is a lot of effort being put into that area. Historically speaking, that will be enough to get the job done eventually. Several areas of engineering where I thought we would not see breakthroughs, like speech to text and self-driving, have seen breakthroughs thanks to modern AI. Solar and battery tech keep improving by leaps and bounds, as does light technology.
    • Post-Quantum means after it happens. The period of time in which people prepare for things is generally before they happen.

        • Yeah, look at global warming or the legal system, to take somewhat simpler stuff. Or even simpler: the nuclear arms race, the doomsday clock. We only act when something that matters has happened; until then we just watch it approaching. And it seems another one is incoming with AGI: when it gets done, we are done too. So far all the "prepare for doomsday" talk looks just like any other marketing with a few vested interests behind it.
        • by gweihir ( 88907 )

          We are not going to get AGI this century. The people that claim that are lying (Altman) or are delusional. AGI is not a question of throwing more computing power at the problem. Something fundamental is missing and we have no idea what. Also note that most humans may not actually have any meaningful amount of general intelligence. Only about 10-15% are independent thinkers and can fact-check. And that is basically what AGI would need to be able to do to qualify. Unless we find out a lot more, we cannot even

          • We are not going to get AGI this century.

            You cannot possibly know that.

            AGI is not a question of throwing more computing power at the problem. Something fundamental is missing and we have no idea what.

            This seems plausible, but it implies that you cannot possibly know whether we're going to get AGI this century. If it's true, it means that we'll get AGI when we discover that as-yet-missing knowledge, and there's no way to predict when that might happen. It might have happened yesterday and we just don't know it yet. What is certain is that (a) the knowledge exists and (b) we're looking for it, really hard.

          • You are totally right that us reaching AGI soon is not a reality. But that does not mean we should ignore the risk of human wipeout when it does happen; we need tight control in place before it gets too late.
        • Y2K is a better example. Y2K could have been a catastrophe. A decade before it happened, we started working to fix all the systems. Hundreds of millions of dollars (maybe billions) were spent on Y2K remediation. Then Y2K came and... nothing much happened. Lots of people pointed and said "Haha! All that money spent fixing the problem was a waste!", but they were wrong. All of the money spent fixing the problem fixed the problem.

          This is what we have to do with cryptography and quantum computers. If we

          • So far I don't see anything tremendous besides money wasted.
            • So far I don't see anything tremendous besides money wasted.

              I'm sure you'd have said the same about Y2K. It's a good thing that some people have more foresight.

    • by gweihir ( 88907 )

      QCs exist. With extreme effort and some trickery, they can even factorize 21 now (35 is still a fail at this time). That is 5 effective qubits in a somewhat complex computation. It makes for a nice physics experiment. But that is after about 50 years of research. And it looks very likely that QC effort scales exponentially in two dimensions of the size of the computation (qubits and steps in the computation). Hence, if we progress at this speed, we may be able to factor 10 bit numbers with a QC in, say, 5

  • What "progress"? (Score:5, Insightful)

    by gweihir ( 88907 ) on Friday March 27, 2026 @11:02PM (#66065614)

    They are hallucinating hard. The current actual quantum factorization record is not even 35 (that attempt failed; overview in https://eprint.iacr.org/2025/1... [iacr.org]).

    While crypto-agility is a good idea, there is no threat from Quantum "Computing" and there may never be one.

  • Google moving the deadline up and saying "because our own quantum tech is progressing faster than we thought"* sounds like using one of their branches to spin another.

    * Paraphrased

    • by gweihir ( 88907 )

      Indeed. Also note that "basically no progress" can be a lot faster than "basically no progress". At the glacial pace that QCs are making, and with the laughably low performance they currently have (factoring 21 after 50 years of research, seriously???) relative speeds are strongly subject to meaningless artefacts.

  • The target is now 2029.

    WHAT WAS IT BEFORE, you dufus?!
  • There are a lot of denialists in this thread. But while it's easy to crap on a technology that isn't yet truly developed (and one I don't really understand), this announcement seems like one that should be taken far more seriously.

    I can't see Google changing a 13-year horizon to a three-year horizon without it being very serious. Three years is a very short window for a technology to "not exist".

    • Another possibility is that they are attempting to drum up interest in a field that has been promising to revolutionize things for a couple of decades now, and whose actual practical achievements remain extremely limited.
