Scientific Breakthrough Gives New Hope To Building Quantum Computers (ft.com) 83
Google has achieved a major breakthrough in quantum error correction that could enable practical quantum computers by 2030, the company announced in a paper published Monday in Nature. The research demonstrated significant error reduction when scaling up from 3x3 to 7x7 grids of quantum bits, with errors dropping by half at each step. The advance addresses quantum computing's core challenge of maintaining stable quantum states, which typically last only microseconds.
Google's new quantum chip, manufactured in-house, maintains quantum states for nearly 100 microseconds -- five times longer than previous versions. The company aims to build a full-scale system with about 1 million qubits, projecting costs around $1 billion by decade's end.
IBM, Google's main rival, questioned the scalability of Google's "surface code" error correction approach, claiming it would require billions of qubits. IBM is pursuing an alternative three-dimensional design requiring new connector technology expected by 2026. The breakthrough parallels the first controlled nuclear chain reaction in 1942, according to MIT physics professor William Oliver, who noted that both achievements required years of engineering to realize theoretical predictions from decades earlier.
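The "errors dropping by half at each step" claim can be pictured with a toy model of exponential error suppression in surface codes: each increase of the code distance by 2 divides the logical error rate by a constant factor Λ. The Λ value and the distance-3 error rate below are illustrative placeholders, not figures from the paper.

```python
# Toy model of exponential error suppression in a surface code.
# Assumes the logical error rate is divided by a constant factor `lam`
# each time the code distance d grows by 2 (values are hypothetical).

def logical_error_rate(eps_d3: float, lam: float, d: int) -> float:
    """Logical error per cycle at odd code distance d, given the rate at d=3."""
    assert d >= 3 and d % 2 == 1, "code distance must be an odd integer >= 3"
    return eps_d3 / lam ** ((d - 3) / 2)

# With lam = 2, the rate halves at each step: d=3 -> d=5 -> d=7.
for d in (3, 5, 7):
    print(d, logical_error_rate(3e-3, 2.0, d))
```

With Λ above the break-even value of 1, making the grid bigger makes the logical qubit better; below 1, scaling up makes things worse, which is the crux of the scalability debate in the comments below.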
Further reading: Google: Meet Willow, our state-of-the-art quantum chip.
and it will arrive the same time (Score:1)
Re:and it will arrive the same time (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: A reason not to break up Google (Score:2)
Errm, DeepMind, hello?
It's often the VC-funded small startups who can do blue-sky stuff without having to ship something to keep shareholders happy every six months.
Re: (Score:2, Insightful)
Parent is correct.
DeepMind and other "smaller companies" could not do pure research like this without Google providing an abundant stream of funding and freedom. Even the Venture Capital markets are highly averse to "moon-shot" type efforts where the investment required is huge and probability of success is low, even though a successful venture would have massive returns.
Bell Labs produced many incredible moon-shot innovations until the breakup. Starved of funding and focus, it withered and died. Xerox
Scalability, Schmalabity (Score:3)
"IBM, Google's main rival, questioned the scalability of Google's "surface code" error correction approach"
I believe I questioned the scalability, right here on Slashdot, in the discussion around the original surface code paper. The numbers in that paper showed the error correction could not put a lid on the BER as the number of qubits and iterations increased.
The text of TFS is underwhelming. "Dropped by half". So you had 10 errors, now you have 5. Your algorithm still doesn't work.
I haven't read the paper yet - I guess I've got to go and do that now just so I can see why it won't work.
Re:Scalability, Schmalabity (Score:4, Interesting)
Sadly the paper is behind a $30 paywall.
I read the abstract. The claim seems weak -- "beyond break-even", i.e. they do better than not having error correction. Well I should cocoa, but is it enough? 63us decoder time for a 1.1us cycle. So how does that work? 1.1us, then you spend 63us waiting for the error correction before doing the next cycle? They've upped it from a distance-5 to a distance-7 code -- a distance-7 code can correct 3 errors in (if I read it right) the 101 qubits comprising the logical qubit. That's not enough.
Perhaps a real quantum computer academic can tell me where I'm misreading it? I'm just a humble cryptography implementor trying to work out if I should be worried.
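The distance-7 numbers in the comment above can be sanity-checked with a back-of-envelope sketch. This assumes the common "rotated" surface-code layout (d² data qubits plus d²−1 measurement qubits), which gives 97 qubits rather than the 101 quoted, so the chip presumably carries a few extra qubits beyond this minimal count.

```python
# Back-of-envelope parameters for a distance-d "rotated" surface code:
# d*d data qubits, d*d - 1 measurement qubits, and up to floor((d-1)/2)
# correctable errors per round. (The 101-qubit figure quoted above
# presumably includes extra qubits beyond this minimal layout.)

def surface_code_params(d: int) -> tuple[int, int]:
    assert d >= 3 and d % 2 == 1, "code distance must be an odd integer >= 3"
    total_qubits = d * d + (d * d - 1)  # data + measurement qubits
    correctable = (d - 1) // 2          # errors correctable per round
    return total_qubits, correctable

print(surface_code_params(5))  # → (49, 2)
print(surface_code_params(7))  # → (97, 3)
```

So a distance-7 logical qubit costs roughly a hundred physical qubits to correct only 3 simultaneous errors, which is why estimates for a useful machine run to hundreds of thousands or millions of physical qubits.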
Re: Scalability, Schmalabity (Score:3)
I don't think cryptographers will have to worry for a decade or two yet, given no one has yet produced any kind of basic usable functionality, never mind the quantum equivalent of ENIAC.
Re: (Score:2)
The result I'm interested in is whether it is possible or not. We don't have an answer either way.
If the noise always grows, or the energy to create low-entropy matter or something else conspires to prevent it being possible, then that is good news for stored data.
If QEC can work in a scalable way and BER can be reduced such that the ECC beats the BER, then you have a problem for stored data.
Being crypto minded, my personal opsec led me to keep anything secret that went into a cloud or transited across the interne
Re:Scalability, Schmalabity (Score:4, Informative)
Sadly the paper is behind a $30 paywall.
I believe this is the preprint: https://arxiv.org/pdf/2408.136... [arxiv.org]
Re: (Score:2)
Sadly the paper is behind a $30 paywall.
I believe this is the preprint: https://arxiv.org/pdf/2408.136... [arxiv.org]
Thank you.
Baloney computing (Score:4, Insightful)
They still haven't shown they can factorize a number greater than 35. That's the true test of a useful quantum computer: "can it factorize large numbers?" So far the record is the number 35. A human can do that in his head in a few seconds, if not instantly. It means our most powerful quantum computer is pathetic. I'm not saying we ought to give up, but man, we have a long way to go.
Re:Baloney computing (Score:4, Informative)
Re:Baloney computing (Score:4, Funny)
Umm, not 35 integers. 35. The number 35. 7 x 5.
Re:Baloney computing (Score:4, Funny)
Umm, not 35 integers. 35. The number 35. 7 x 5.
Wait - how did you come up with those two factors? Do you have Quantum computer of your own!?
Re: (Score:2)
I just simultaneously multiply all primes up to N/2 together in my head, in parallel, and then pull up the one that results in N.
Re: (Score:2)
I used a Time Machine to go back to the beginning of the universe and watched God making the numbers. He was so lazy instead of making 35 a prime number he just used a 7 and a 5 he had lying around.
Re: (Score:1)
So the question I would ask is, is there _anything_ these qu
Re: (Score:3)
Sure. They can produce random numbers quickly with very controllable correlation structures. The current state of the art is competitive with large clusters of conventional computers running the latest algorithms, certainly much, much faster than anything we could do in the 1990s, never mind on a single desktop.
That doesn't sound all that useful, but it's what you need for doing quantum simulations. The most useful of those is probably quantum chemistry, where we're up to simulating fairly simple molecules
Re: (Score:2)
Re: (Score:2)
Generating univariate Gaussian random numbers is about the easiest thing you can do.
Re: (Score:2)
Re: (Score:2)
You know, there are a lot of interesting things to learn in the world. For example, you could spend half an hour actually learning about quantum computing [quantum.country], which is probably a lot less time than you've spent reading breathless pop sci articles and dumb Slashdot comments about it.
Re: (Score:2)
I guess it's finally time to replace my AES-6 encryption.
Re: (Score:2)
Yep. Even a slow 4-bit MCU does orders of magnitude better. And that is after 50 years of research. Any other mechanism would have been dropped as a failure long ago. But some people think QCs are magic, so more money is wasted.
Re: (Score:2)
I think the Chinese demonstrated factoring a 36 bit number, not 35.
36 bits is basically a number in the 69 billion range. This is still small enough that a classical computer can brute-force it fairly easily, or a human can do it with pencil and paper, and maybe a calculator.
Of course, the recommended bit length for RSA is 4096, so there's still a ways to go.
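The "brute force fairly easily" claim above is easy to verify classically: plain trial division cracks a semiprime of roughly this size almost instantly. The number below is an illustrative ~33-bit semiprime, not the one from the Chinese experiment.

```python
# Classical sanity check: trial division factors a small semiprime
# almost instantly, illustrating why factoring demos at this scale say
# nothing about cryptographic-strength keys. (n below is illustrative.)

def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n (or n itself if n is prime)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1 + (f & 1)  # after 2, test odd candidates only
    return n

n = 4295229443  # = 65537 * 65539, a ~33-bit semiprime
p = trial_factor(n)
print(p, n // p)  # → 65537 65539
```

A 2048-bit RSA modulus is about 2000 bits larger; trial division (and every known classical algorithm) is hopeless there, which is the gap a cryptographically relevant quantum computer would have to close.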
Re: (Score:2)
What about this?
https://www.nature.com/article... [nature.com]
It says 23 bits (8219999 = 32749x251).
Was there something special about this method that doesn't allow it to be considered a general case?
Re: (Score:2)
It's true that quantum computers will eventually need a killer app that justifies the cost, but it's OK if it's not factorization. Not sure how factorization could even be that killer app. As far as I'm aware, big number factorization is only used for encryption. It would just force people to move on to something else for encryption. More like the Y2K problem than the spreadsheet...
I'll believe it when I see it (Score:4, Insightful)
Wake me when I can factor 1024-bit RSA keys. The Nintendo DSi and I have unfinished business.
I'm a pessimist, so I'm guessing--with no evidence--that we will find out that keeping N qubits coherent requires energy exponential in N, meaning that quantum computers are mostly useless.
Re: (Score:2)
It shouldn't require energy exponential in n. Just build it beyond Earth's orbit, and use a shade to keep the sun from heating it. You might need a large radiator, but the default temperature would be about 3.5K. (That's a bit warmer than 2.7K which you could get with a more distant orbit.)
Re: (Score:3)
expected by 2026... (Score:3)
Quantum Computers have been just a few years away like cold fusion, flying cars, and self driving cars for decades. I think I'll wait to get my hopes up until there are real reproducible results.
Also, the main use I've seen for QC is factoring large numbers which would break current most popular encryption methods. I guess that could be useful except there are already several alternative solutions ready to go as soon as QC is a legit threat.
Re: (Score:3)
Re: (Score:2)
I think FSD will happen, eventually. Flying cars exists for very special cases. But Musk has been promising INVESTORS and the world for a decade now that FSD will be here "next year". And not even limited FSD, he's stated a person can summon a car across the country and it can find its way to you without any human intervention.
Re: (Score:2)
They are getting there and will get there. Fact is, he's trying, unlike all the other automakers. Without Elon promising it and pushing it relentlessly, it would have taken Benz 50 years to first try to implement it. It would have taken decades to get there incrementally with basic ADAS features. Thanks to Elon pushing to get it, we'll have it. Same thing with robotics. It would have taken Boston Dynamics decades to make a useful humanoid robot, and that's if they hadn't run out of money.
Re: expected by 2026... (Score:2)
Umm, Boston Dynamics have produced a human-like robot, unlike Musk and his farcical demonstration unit, which was never seen again. I suggest you check out Boston Dynamics' YouTube channel.
As for flying cars, give me a fucking break. They're a cartoon fantasy that for many reasons will remain a toy for the rich, just like helicopters are now.
Re: (Score:2, Insightful)
You seem to have a lot of faith in someone who has yet to build anything beyond an electric car. I will give him credit for throwing a lot of money at the problem though.
Re: (Score:1)
Well, he has been dabbling in rocketry too...
Re: (Score:2)
Re: expected by 2026... (Score:1)
Wow, they even break cars!? That is dangerous and unnecessary.
Re: (Score:2)
Re: (Score:3)
Waymo, on the other hand, is delivering an exponentially growing number of paid driverless rides per month, per the graph I linked to. And the deal they just inked for Miami shows they are open to partnering with other companies for operations, which is scalable.
Re:expected by 2026... (Score:4, Informative)
Some people bought a Model 3 on the promise that "next year" you could rent it out as a robotaxi during the day and have a net negative cost.
They count on 80% of people forgetting what they hear.
Re: expected by 2026... (Score:3)
Because who wouldn't want to buy an expensive vehicle then rent it out unsupervised to be totally trashed inside. Some people are fucking dumb.
Good grief (Score:2)
The list of authors is a full page of tiny type, all by itself!
I remember when people complained about Physical Review papers which had 50 authors...
Wait a minute (Score:2)
Re: (Score:2)
Yes, and many have been claiming that QCs will turn out to be impossible. My guess is that they'll be possible, but of really limited utility.
Re: (Score:2)
QCs are definitely possible. We have them, but they are tiny. But QCs of useful size are a completely different proposition. The scaling of all known mechanisms (including this new one) is so abysmally bad that they may well never scale high enough to become useful. So far, the scaling seems to be inverse exponential with effort. Unless and until that can at the very least be made linear, QCs are a completely lost cause.
Re: (Score:2)
Yeah, I should have said QCs that are better than a classical computer and are general computers rather than just, e.g., relaxation machines.
Re: (Score:2)
They are like a quantum computer co-processor. Ideally they will solve certain types of problems quickly just like a vector processor unit does, ie, not great for general computing but amazing at what they do.
Re: (Score:2)
For varying values of "quickly". For the problems they are useful for, if they ever scale high enough, "quickly" can mean weeks, months or years.
Re: (Score:2)
Re: (Score:2)
Do you mean "exponentially"? Because that is what conventional computers have done for a long time and seem to have stopped doing in the last 10 years or so.
Re: (Score:2)
Re: (Score:2)
I see. Well, my impression is they scale inverse-exponentially. That makes some sense, because all qubits have to be entangled and that gets progressively harder. There is no such requirement with conventional computers. You do have a communication (interconnect) limit there, but it is much, much more benign.
Re: (Score:2)
So, is five years out the new ten years out? (Score:2)
Do I now need to start doing that when people make five-year predictions as well?
Re: (Score:2)
Looks like it. This prediction here is simply a complete lie, nothing else. The "could" that usually gets thrown in does not make the lie any better.
Are large scale quantum computers even possible? (Score:2)
If we assume quantum mechanics is the fundamental theory of reality, then isn't the universe some huge quantum computer? If that is true, how can a computer simulate itself?
Re: (Score:3, Insightful)
If we assume the Sun and its planets use the fundamental theory of gravity, isn't the solar system some huge analog computer? Hey, this is fun, anyone can make shit up.
Re: (Score:2)
To simulate the entire universe with perfect accuracy would require a computer of equal complexity.
However, we tend to simplify the models, limit their scope, and run them at vastly lower resolution than reality.
This is how we can simulate electrons using electronics.
Re: (Score:2)
>This is how we can simulate electrons using electronics.
But how well does that scale? Also, one of quantum computers' use cases is exactly that, because classical computers are so bad at it.
Re: (Score:2)
You can never run a simulation more complex than the simulator. The technology you use is irrelevant, this is a fundamental limit of information.
"Does that scale" doesn't really apply.
Re: (Score:2)
Re: (Score:2)
That argument is nonsense. Using a part of a complex system as a computer is not a problem for the complex system. Otherwise you would run into this effect already with electronic computers or even an Abacus.
That said, the problem with QCs is that they scale abysmally bad, probably inverse exponential with effort. That means they will never scale to relevant sizes.
Re: (Score:2)
>That argument is nonsense. Using a part of a complex system as a computer is not a problem for the complex system. Otherwise you would run into this effect already with electronic computers or even an Abacus.
Actually, a complex system talking about or referring to itself is indeed a huge problem. See Gödel's incompleteness theorems and the halting problem.
>Otherwise you would run into this effect already with electronic computers or even an Abacus.
You do run into these types of problems.. at least at sufficient scal
Re: (Score:1)
>That argument is nonsense. Using a part of a complex system as a computer is not a problem for the complex system. Otherwise you would run into this effect already with electronic computers or even an Abacus.
Actually, a complex system talking about or referring to itself is indeed a huge problem. See Gödel's incompleteness theorems and the halting problem.
These are problems with theories, not with physical reality.
>Otherwise you would run into this effect already with electronic computers or even an Abacus.
You do run into these types of problems.. at least at sufficient scale. An Abacus is bad at math where the numbers are larger than the Abacus itself. Electronic computers are bad at quantum chemistry.
No, you do not. Your argument is complete nonsense. Obviously, any practical QC will be of limited size just as well.
Re: (Score:2)
>These are problems with theories, not with physical reality.
Theories don't exist in physical reality? The model of a physical system doesn't share the same complexity?
> Your argument is complete nonsense.
You don't get the argument to begin with. Anyway we do agree that QCs are BS.
Re: (Score:2)
>These are problems with theories, not with physical reality.
Theories don't exist in physical reality? The model of a physical system doesn't share the same complexity?
That is a beginner's question. Obviously, a model never has the complexity of the thing it models. That is the very purpose of a model.
While theories can be _described_ in physical reality, the problems the theories have do not transfer to physical reality for that reason alone. You need a bit more. And for the two you quoted: incompleteness is a property of formal systems, which do not exist in physical reality and can only be described there. The halting problem assumes a computing mechanism that is
Re: (Score:2)
And GPUs are great at massively parallel multiply-and-accumulate, which translates into matrix operations being really fast. This is a large part of why generative AIs are built the way they are, because their workflow decomposes into matrix multiplication, which GPUs are good at. I'm not so convinced that's actually the ideal way to do it, but it's the expedient way because the hardware is already good at that.
Last I heard, the chatter was about how AI was going to eat quantum computing's lunch in many (ma
Re: (Score:2)
All my ugly monkey NFTs worthless by 2030 (Score:2)
Need to sell them right now to some sucker...
Still complete bullshit (Score:2)
These things are wayyyyyyyyyyyyyyy too small to be useful. If they are now just wayyyyyyyyyyyyyy too small, that does not change much. And no, 2030 is simply a complete lie. Maybe in several hundred years. Or maybe never.