Supercomputing

IBM Says It's Cracked Quantum Error Correction (ieee.org) 26

Edd Gent reporting for IEEE Spectrum: IBM has unveiled a new quantum computing architecture it says will slash the number of qubits required for error correction. The advance will underpin its goal of building a large-scale, fault-tolerant quantum computer, called Starling, that will be available to customers by 2029. Because of the inherent unreliability of the qubits (the quantum equivalent of bits) that quantum computers are built from, error correction will be crucial for building reliable, large-scale devices. Error-correction approaches spread each unit of information across many physical qubits to create "logical qubits." This provides redundancy against errors in individual physical qubits.

One of the most popular approaches is known as a surface code, which requires roughly 1,000 physical qubits to make up one logical qubit. This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream," Jay Gambetta, the vice president of IBM Quantum, said in a press briefing. Around 2019, the company began to investigate alternatives. In a paper published in Nature last year, IBM researchers outlined a new error-correction scheme called quantum low-density parity check (qLDPC) codes that would require roughly one-tenth of the number of qubits that surface codes need. Now, the company has unveiled a new quantum-computing architecture that can realize this new approach.
"We've cracked the code to quantum error correction and it's our plan to build the first large-scale, fault-tolerant quantum computer," said Gambetta, who is also an IBM Fellow. "We feel confident it is now a question of engineering to build these machines, rather than science."
AMD

New Supercomputing Record Set - Using AMD's Instinct GPUs (tomshardware.com) 23

"AMD processors were instrumental in achieving a new world record," reports Tom's Hardware, "during a recent Ansys Fluent computational fluid dynamics simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory."

The article points out that Frontier was the fastest supercomputer in the world until it was beaten by Lawrence Livermore Lab's El Capitan — with both computers powered by AMD GPUs: According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly...
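The article's arithmetic is easy to verify, and it hints at how much of the machine sat idle. Note that the node count below is Frontier's publicly listed configuration (9,408 nodes with four MI250X accelerators each), an assumption not stated in the article:

```python
# Figures from the article
cpu_hours, gpu_hours = 38.5, 1.5
print(f"speedup: {cpu_hours / gpu_hours:.1f}x")  # 25.7x, i.e. "more than 25 times faster"

# Rough utilization estimate (assumes Frontier's published configuration
# of 9,408 nodes x 4 MI250X accelerators; not a figure from the article)
total_gpus = 9_408 * 4
used_gpus = 1_024
print(f"fraction of Frontier's GPUs used: {used_gpus / total_gpus:.1%}")
```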

Given those numbers, the Ansys Fluent CFD simulator apparently used only a fraction of the compute available on Frontier. That means it has the potential to run even faster if it can utilize all of the supercomputer's accelerators. It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.

Math

JPMorgan Says Quantum Experiment Generated Truly Random Numbers (financialpost.com) 111

JPMorgan Chase used a quantum computer from Honeywell's Quantinuum to generate and mathematically certify truly random numbers -- an advancement that could significantly enhance encryption, security, and financial applications. The breakthrough was validated with help from U.S. national laboratories and has been published in the journal Nature. From a report: Between May 2023 and May 2024, cryptographers at JPMorgan wrote an algorithm for a quantum computer to generate random numbers, which they ran on Quantinuum's machine. The US Department of Energy's supercomputers were then used to test whether the output was truly random. "It's a breakthrough result," Marco Pistoia, project lead and head of Global Technology Applied Research at JPMorgan, told Bloomberg in an interview. "The next step will be to understand where we can apply it."

Applications could ultimately include more energy-efficient cryptocurrency, online gambling, and any other activity hinging on complete randomness, such as deciding which precincts to audit in elections.
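The certification protocol itself is the hard part and well beyond a snippet, but the "test whether the output was truly random" step can be illustrated with the standard NIST SP 800-22 frequency (monobit) test. This is a generic statistical check, not the DOE's actual validation method, and it highlights why the quantum result matters: statistical tests can reject bad output, but they can never prove true randomness.

```python
import math
import random

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: are 0s and 1s balanced?"""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

# A p-value >= 0.01 is merely *consistent* with randomness; a heavily
# biased stream (e.g. all ones) yields a p-value near zero.
bits = [random.getrandbits(1) for _ in range(100_000)]
print(monobit_pvalue(bits))
```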

Supercomputing

Supercomputer Draws Molecular Blueprint For Repairing Damaged DNA (phys.org) 10

Using the Summit supercomputer at the Department of Energy's Oak Ridge National Laboratory, researchers have modeled a key component of nucleotide excision repair (NER) called the pre-incision complex (PInC), which plays a crucial role in DNA damage repair. Their study, published in Nature Communications, provides new insights into how the PInC machinery orchestrates precise DNA excision, potentially leading to advancements in treating genetic disorders, preventing premature aging, and understanding conditions like xeroderma pigmentosum and Cockayne syndrome. Phys.Org reports: "Computationally, once you assemble the PInC, molecular dynamics simulations of the complex become relatively straightforward, especially on large supercomputers like Summit," [said lead investigator Ivaylo Ivanov, a chemistry professor at Georgia State University]. Nanoscale Molecular Dynamics, or NAMD, is a molecular dynamics code specifically designed for supercomputers and is used to simulate the movements and interactions of large biomolecular systems that contain millions of atoms. Using NAMD, the research team ran extensive simulations. The number-crunching power of the 200-petaflop Summit supercomputer -- capable of performing 200,000 trillion calculations per second -- was essential in unraveling the functional dynamics of the PInC complex on a timescale of microseconds. "The simulations showed us a lot about the complex nature of the PInC machinery. It showed us how these different components move together as modules and the subdivision of this complex into dynamic communities, which form the moving parts of this machine," Ivanov said.
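As a rough illustration of what a molecular dynamics code computes (NAMD itself is a large parallel code spreading this work over millions of atoms; this toy integrator only shows the core time-stepping idea), here is a velocity Verlet loop on a single particle in a harmonic well:

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, dt, steps):
    """Core loop of a molecular dynamics integrator: advance positions
    and velocities in small time steps under a force field."""
    f = force_fn(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * f * dt**2
        f_new = force_fn(pos)
        vel = vel + 0.5 * (f + f_new) * dt
        f = f_new
    return pos, vel

# Toy example: one particle, harmonic restoring force (force = -x)
pos, vel = velocity_verlet(np.array([1.0]), np.array([0.0]),
                           lambda x: -x, dt=0.01, steps=1000)
print(pos, vel)  # oscillates, with total energy approximately conserved
```

Production MD replaces the toy force with bonded and non-bonded interatomic forces, which is where supercomputer-scale parallelism becomes essential for reaching microsecond timescales.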

The findings are significant in that mutations in XPF and XPG can lead to severe human genetic disorders. They include xeroderma pigmentosum, which is a condition that makes people more susceptible to skin cancer, and Cockayne syndrome, which can affect human growth and development, lead to impaired hearing and vision, and speed up the aging process. "Simulations allow us to zero in on these important regions because mutations that interfere with the function of the NER complex often occur at community interfaces, which are the most dynamic regions of the machine," Ivanov said. "Now we have a much better understanding of how and from where these disorders manifest."

ISS

Axiom Space and Red Hat Will Bring Edge Computing to the International Space Station (theregister.com) 7

Axiom Space and Red Hat will collaborate to launch Data Center Unit-1 (AxDCU-1) to the International Space Station this spring. It's a small data processing prototype (powered by lightweight, edge-optimized Red Hat Device Edge) that will demonstrate initial Orbital Data Center (ODC) capabilities.

"It all sounds rather grand for something that resembles a glorified shoebox," reports the Register. Axiom Space said: "The prototype will test applications in cloud computing, artificial intelligence, and machine learning (AI/ML), data fusion and space cybersecurity."

Space is an ideal environment for edge devices. Connectivity to datacenters on Earth is severely constrained, so the more processing that can be done before data is transmitted to a terrestrial receiving station, the better. Tony James, chief architect, Science and Space at Red Hat, said: "Off-planet data processing is the next frontier, and edge computing is a crucial component. With Red Hat Device Edge and in collaboration with Axiom Space, Earth-based mission partners will have the capabilities necessary to make real-time decisions in space with greater reliability and consistency...."

The Red Hat Device Edge software used by Axiom's device combines Red Hat Enterprise Linux, the Red Hat Ansible Platform, and MicroShift, a lightweight Kubernetes container orchestration service derived from Red Hat OpenShift. The plan is for Axiom Space to host hybrid cloud applications and cloud-native workloads on-orbit. Jason Aspiotis, global director of in-space data and security, Axiom Space, told The Register that the hardware itself is a commercial off-the-shelf unit designed for operation in harsh environments... "AxDCU-1 will have the ability to be controlled and utilized either via ground-to-space or space-to-space communications links. Our current plans are to maintain this device on the ISS. We plan to utilize this asset for at least two years."

The article notes that HPE has also "sent up a succession of Spaceborne computers — commercial, off-the-shelf supercomputers — over the years to test storage, recovery, and operational potential on long-duration missions." (They apparently use Red Hat Enterprise Linux.) "At the other end of the scale, the European Space Agency has run Raspberry Pi computers on the ISS for years as part of the AstroPi educational outreach program."

Axiom Space says its Orbital Data Center is designed to "reduce delays traditionally associated with orbital data processing and analysis." By utilizing Earth-independent cloud storage and edge processing infrastructure, Axiom Space ODCs will enable data to be processed closer to its source, such as spacecraft or satellites, bypassing the need for terrestrial data centers. This architecture alleviates reliance on costly, slow, intermittent, or contested network connections, enabling more secure and quicker decision-making in space.

The goal is to allow Axiom Space and its partners to have access to real-time processing capabilities, laying the foundation for increased reliability and improved space cybersecurity with extensive applications. Use cases for ODCs include but are not limited to supporting Earth observation satellites with in-space and lower latency data storage and processing, AI/ML training on-orbit, multi-factor authentication and cyber intrusion detection and response, supervised autonomy, in-situ space weather analytics and off-planet backup & disaster recovery for critical infrastructure on Earth.

Supercomputing

Amazon Uses Quantum 'Cat States' With Error Correction (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Following up on Microsoft's announcement of a qubit based on completely new physics, Amazon is publishing a paper describing a very different take on quantum computing hardware. The system mixes two different types of qubit hardware to improve the stability of the quantum information they hold. The idea is that one type of qubit is resistant to errors, while the second can be used for implementing an error-correction code that catches the problems that do happen. While there have been more effective demonstrations of error correction in the past, a number of companies are betting that Amazon's general approach is the best route to getting logical qubits that are capable of complex algorithms. So, in that sense, it's an important proof of principle. Amazon's quantum computing approach combines cat qubits for data storage and transmons for error correction.

Cat qubits are quantum bits that distribute their superposition state across multiple photons in a resonator, making them highly resistant to bit flip errors. Transmons are superconducting qubits that help detect and correct phase flip errors by enabling weak measurements without destroying the quantum state. Meanwhile, a phase flip is a quantum error that alters the relative phase of a qubit's superposition state without changing its probability distribution. Unlike a bit flip, which swaps a qubit's state probabilities, a phase flip changes how the quantum states interfere, potentially disrupting quantum computations.
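The bit-flip/phase-flip distinction is easy to see in a small state-vector simulation. This is a generic textbook illustration, not a model of Amazon's hardware: a phase flip leaves measurement probabilities untouched, but changes how amplitudes interfere.

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)   # the state (|0> + |1>)/sqrt(2)

X = np.array([[0, 1], [1, 0]])   # bit flip: swaps the |0> and |1> amplitudes
Z = np.array([[1, 0], [0, -1]])  # phase flip: negates the |1> amplitude

# A phase flip leaves the probability distribution unchanged...
print(np.abs(plus)**2, np.abs(Z @ plus)**2)   # both ~[0.5, 0.5]

# ...but interference reveals it: after a Hadamard, the original state
# lands deterministically on |0>, the phase-flipped state on |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.abs(H @ plus)**2)        # ~[1, 0]
print(np.abs(H @ (Z @ plus))**2)  # ~[0, 1]
```

This is why phase flips disrupt quantum computations even though they are invisible to a direct measurement, and why Amazon needs the transmons to catch them.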

By alternating cat qubits with transmons, Amazon reduces the number of hardware qubits needed for error correction. Their tests show that increasing qubits lowers the error rate, proving the system's effectiveness. However, rare bit flips still cause entire logical qubits to fail, and transmons remain prone to both bit and phase flips. If you're still entangled in this story without decohering into pure quantum chaos, kudos to you!
AI

DeepSeek Accelerates AI Model Timeline as Market Reacts To Low-Cost Breakthrough (reuters.com) 25

Chinese AI startup DeepSeek is speeding up the release of its R2 model following the success of January's R1, which outperformed many US competitors at a fraction of the cost and triggered a $1 trillion-plus market selloff. The Hangzhou-based firm had planned a May release but now wants R2 out "as early as possible," Reuters reported Tuesday.

The upcoming model promises improved coding capabilities and reasoning in multiple languages beyond English. DeepSeek's competitive advantage stems from its parent company High-Flyer's early investment in computing power, including two supercomputing clusters acquired before U.S. export bans on advanced Nvidia chips. The second cluster, Fire-Flyer II, comprised approximately 10,000 Nvidia A100 chips. DeepSeek's cost-efficiency comes from innovative architecture choices like Mixture-of-Experts (MoE) and multihead latent attention (MLA).
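The article names Mixture-of-Experts as a source of DeepSeek's cost efficiency without explaining it. A toy numpy router (illustrative only; DeepSeek's production MoE is far more elaborate, with shared experts and load balancing) shows the core idea: each input activates only a few expert networks, so compute per token stays small even as total parameters grow.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix; a router scores
# experts per input and only the top_k of them actually run.
experts = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
router = rng.normal(size=(d, n_experts)) / np.sqrt(d)

def moe_layer(x):
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                    # pick top_k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                             # softmax over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d)
y = moe_layer(x)
print(y.shape)  # same output size, but only 2 of the 4 experts were computed
```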

According to Bernstein analysts, DeepSeek's pricing was 20-40 times cheaper than OpenAI's equivalent models. The competitive pressure has already forced OpenAI to cut prices and release a scaled-down model, while Google's Gemini has introduced discounted access tiers.
Supercomputing

Microsoft Reveals Its First Quantum Computing Chip, the Majorana 1 (cnbc.com) 31

After two decades of quantum computing research, Microsoft has unveiled its first quantum chip: the Majorana 1. CNBC reports: Microsoft's quantum chip employs eight topological qubits using indium arsenide, which is a semiconductor, and aluminum, which is a superconductor. A new paper in the journal Nature describes the chip in detail. Microsoft won't be allowing clients to use its Majorana 1 chip through the company's Azure public cloud, as it plans to do with its custom artificial intelligence chip, Maia 100. Instead, Majorana 1 is a step toward a goal of a million qubits on a chip, following extensive physics research.

Rather than rely on Taiwan Semiconductor or another company for fabrication, Microsoft is manufacturing the components of Majorana 1 itself in the U.S. That's possible because the work is unfolding at a small scale. "We want to get to a few hundred qubits before we start talking about commercial reliability," Jason Zander, a Microsoft executive vice president, told CNBC. In the meantime, the company will engage with national laboratories and universities on research using Majorana 1.

Supercomputing

The IRS Is Buying an AI Supercomputer From Nvidia (theintercept.com) 150

According to The Intercept, the IRS is set to purchase an Nvidia SuperPod AI supercomputer to enhance its machine learning capabilities for tasks like fraud detection and taxpayer behavior analysis. From the report: With Elon Musk's so-called Department of Government Efficiency installing itself at the IRS amid a broader push to replace federal bureaucracy with machine-learning software, the tax agency's computing center in Martinsburg, West Virginia, will soon be home to a state-of-the-art Nvidia SuperPod AI computing cluster. According to the previously unreported February 5 acquisition document, the setup will combine 31 separate Nvidia servers, each containing eight of the company's flagship Blackwell processors designed to train and operate artificial intelligence models that power tools like ChatGPT. The hardware has not yet been purchased and installed, nor is a price listed, but SuperPod systems reportedly start at $7 million. The setup described in the contract materials notes that it will include a substantial memory upgrade from Nvidia.

Though small compared to the massive AI-training data centers deployed by companies like OpenAI and Meta, the SuperPod is still a powerful and expensive setup using the most advanced technology offered by Nvidia, whose chips have facilitated the global machine-learning spree. While the hardware can be used in many ways, it's marketed as a turnkey means of creating and querying an AI model. Last year, the MITRE Corporation, a federally funded military R&D lab, acquired a $20 million SuperPod setup to train bespoke AI models for use by government agencies, touting the purchase as a "massive increase in computing power" for the United States.

How exactly the IRS will use its SuperPod is unclear. An agency spokesperson said the IRS had no information to share on the supercomputer purchase, including which presidential administration ordered it. A 2024 report by the Treasury Inspector General for Tax Administration identified 68 different AI-related projects underway at the IRS; the Nvidia cluster is not named among them, though many were redacted. But some clues can be gleaned from the purchase materials. "The IRS requires a robust and scalable infrastructure that can handle complex machine learning (ML) workloads," the document explains. "The Nvidia Super Pod is a critical component of this infrastructure, providing the necessary compute power, storage, and networking capabilities to support the development and deployment of large-scale ML models."

The document notes that the SuperPod will be run by the IRS Research, Applied Analytics, and Statistics division, or RAAS, which leads a variety of data-centric initiatives at the agency. While no specific uses are cited, it states that this division's Compliance Data Warehouse project, which is behind this SuperPod purchase, has previously used machine learning for automated fraud detection, identity theft prevention, and generally gaining a "deeper understanding of the mechanisms that drive taxpayer behavior."

Supercomputing

Quantum Teleportation Used To Distribute a Calculation (arstechnica.com) 58

An anonymous reader quotes a report from Ars Technica: In today's issue of Nature, a team at Oxford University describes using quantum teleportation to link two pieces of quantum hardware that were located about 2 meters apart, meaning they could easily have been in different rooms entirely. Once linked, the two pieces of hardware could be treated as a single quantum computer, allowing simple algorithms to be performed that involved operations on both sides of the 2-meter gap. [...] The Oxford team was simply interested in a proof-of-concept, and so used an extremely simplified system. Each end of the 2-meter gap had a single trap holding two ions, one strontium and one calcium. The two atoms could be entangled with each other, getting them to operate as a single unit.

The calcium ion served as a local memory and was used in computations, while the strontium ion served as one of the two ends of the quantum network. An optical cable between the two ion traps allowed photons to entangle the two strontium ions, getting the whole system to operate as a single unit. The key thing about the entanglement processes used here is that a failure to entangle left the system in its original state, meaning that the researchers could simply keep trying until the qubits were entangled. The entanglement event would also lead to a photon that could be measured, allowing the team to know when success had been achieved (this sort of entanglement with a success signal is termed "heralded" by those in the field).

The researchers showed that this setup allowed them to teleport with a specific gate operation (controlled-Z), which can serve as the basis for any other two-qubit gate operation -- any operation you might want to do can be done by using a specific combination of these gates. After performing multiple rounds of these gates, the team found that the typical fidelity was in the area of 70 percent. But they also found that errors typically had nothing to do with the teleportation process and were the product of local operations at one of the two ends of the network. They suspect that using commercial hardware, which has far lower error rates, would improve things dramatically. Finally, they performed a version of Grover's algorithm, which can, with a single query, identify a single item from an arbitrarily large unordered list. The "arbitrary" aspect is set by the number of available qubits; in this case, having only two qubits, the list maxed out at four items. Still, it worked, again with a fidelity of about 70 percent.
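Grover's algorithm at this scale is small enough to simulate exactly. The sketch below is an idealized simulation of the two-qubit (four-item) case, not the teleported-gate implementation: one oracle query plus one diffusion step drives all probability onto the marked item.

```python
import numpy as np

def grover_4_items(marked):
    """One-query Grover search over a 4-item list (two qubits)."""
    n = 4
    state = np.full(n, 1 / np.sqrt(n))   # uniform superposition over the list
    state[marked] *= -1                  # oracle: flip the marked item's sign
    state = 2 * state.mean() - state     # diffusion: inversion about the mean
    return np.abs(state)**2              # measurement probabilities

print(grover_4_items(2))  # all probability on index 2: one query suffices
```

An ideal run puts probability 1 on the marked item; the experiment's roughly 70 percent fidelity is hardware error layered on top of this ideal behavior.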

While the work was done with trapped ions, almost every type of qubit in development can be controlled with photons, so the general approach is hardware-agnostic. And, given the sophistication of our optical hardware, it should be possible to link multiple chips at various distances, all using hardware that doesn't require the best vacuum or the lowest temperatures we can generate. That said, the error rate of the teleportation steps may still be a problem, even if it was lower than the basic hardware rate in these experiments. The fidelity there was 97 percent, meaning an error rate of roughly 3 percent, which is higher than the hardware error rates of most qubits and high enough that we couldn't execute many of these operations before the probability of an error became unacceptably high.
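That closing concern can be made concrete with a little arithmetic. If each teleported gate succeeds with 97 percent fidelity and errors compound independently (an assumption; real error channels can be more complicated), the odds of an error-free run drop below 50 percent after only about two dozen gates:

```python
fidelity = 0.97   # per teleported gate, from the article
ops, f = 0, 1.0
while f > 0.5:    # how many gates until success odds fall below 50%?
    f *= fidelity
    ops += 1
print(ops)  # 23: only ~23 teleported gates before errors dominate
```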

Supercomputing

Google Says Commercial Quantum Computing Applications Arriving Within 5 Years (msn.com) 38

Google aims to release commercial quantum computing applications within five years, challenging Nvidia's prediction of a 20-year timeline. "We're optimistic that within five years we'll see real-world applications that are possible only on quantum computers," founder and lead of Google Quantum AI Hartmut Neven said in a statement. Reuters reports: Real-world applications Google has discussed are related to materials science - applications such as building superior batteries for electric cars - creating new drugs and potentially new energy alternatives. [...] Google has been working on its quantum computing program since 2012 and has designed and built several quantum chips. By using quantum processors, Google said it had managed to solve a computing problem in minutes that would take a classical computer more time than the history of the universe.

Google's quantum computing scientists announced another step on the path to real world applications within five years on Wednesday. In a paper published in the scientific journal Nature, the scientists said they had discovered a new approach to quantum simulation, which is a step on the path to achieving Google's objective.

Supercomputing

Quantum Computer Built On Server Racks Paves the Way To Bigger Machines (technologyreview.com) 27

An anonymous reader quotes a report from MIT Technology Review: A Canadian startup called Xanadu has built a new quantum computer it says can be easily scaled up to achieve the computational power needed to tackle scientific challenges ranging from drug discovery to more energy-efficient machine learning. Aurora is a "photonic" quantum computer, which means it crunches numbers using photonic qubits -- information encoded in light. In practice, this means combining and recombining laser beams on multiple chips using lenses, fibers, and other optics according to an algorithm. Xanadu's computer is designed in such a way that the answer to an algorithm it executes corresponds to the final number of photons in each laser beam. This approach differs from one used by Google and IBM, which involves encoding information in properties of superconducting circuits.

Aurora has a modular design that consists of four similar units, each installed in a standard server rack that is slightly taller and wider than the average human. To make a useful quantum computer, "you copy and paste a thousand of these things and network them together," says Christian Weedbrook, the CEO and founder of the company. Ultimately, Xanadu envisions a quantum computer as a specialized data center, consisting of rows upon rows of these servers. This contrasts with the industry's earlier conception of a specialized chip within a supercomputer, much like a GPU. [...]

Xanadu's 12 qubits may seem like a paltry number next to IBM's 1,121, but Tiwari says this doesn't mean that quantum computers based on photonics are running behind. In his opinion, the number of qubits reflects the amount of investment more than it does the technology's promise. [...] Xanadu's next goal is to improve the quality of the photons in the computer, which will ease the error correction requirements. "When you send lasers through a medium, whether it's free space, chips, or fiber optics, not all the information makes it from the start to the finish," he says. "So you're actually losing light and therefore losing information." The company is working to reduce this loss, which means fewer errors in the first place. Xanadu aims to build a quantum data center, with thousands of servers containing a million qubits, in 2029.
The company published its work on chip design optimization and fabrication in the journal Nature.
Supercomputing

Nvidia CEO: Quantum Computers Won't Be Very Useful for Another 20 Years (pcmag.com) 48

Nvidia CEO Jensen Huang said quantum computers won't be very useful for another 20 years, causing stocks in this emerging sector to plunge more than 40% for a total market value loss of over $8 billion. "If you kind of said 15 years for very useful quantum computers, that'd probably be on the early side. If you said 30, it's probably on the late side. But if you picked 20, I think a whole bunch of us would believe it," Huang said during a Q&A with analysts. PCMag reports: The field of quantum computing hasn't gotten nearly as much hype as generative AI and the tech giants promoting it in the past few years. Part of the reason quantum computers aren't currently that helpful is their error rates. Nord Quantique CEO Julien Lemyre previously told PCMag that quantum error correction is the future of the field, and his firm is working on a solution. The errors that qubits, the basic unit of information in a quantum machine, currently make result in quantum computers being largely unhelpful. It's an essential hurdle to overcome, but we don't currently know if or when quantum errors will be eliminated.

Chris Erven, CEO and co-founder of Kets Quantum, believes quantum computers will eventually pose a significant threat to cybersecurity. "China is making some of the largest investments in quantum computing, pumping in billions of dollars into research and development in the hope of being the first to create a large-scale, cryptographically relevant machine," Erven tells PCMag in a statement. "Although they may be a few years away from being fully operational, we know a quantum computer will be capable of breaking all traditional cyber defenses we currently use. So they, and others, are actively harvesting now, to decrypt later."
"The 15 to 20-year timeline seems very realistic," said Ivana Delevska, investment chief of Spear Invest, which holds Rigetti and IonQ shares in an actively managed ETF. "That is roughly what it took Nvidia to develop accelerated computing."
Supercomputing

Microsoft, Atom Computing Leap Ahead On the Quantum Frontier With Logical Qubits (geekwire.com) 18

An anonymous reader quotes a report from GeekWire: Microsoft and Atom Computing say they've reached a new milestone in their effort to build fault-tolerant quantum computers that can show an advantage over classical computers. Microsoft says it will start delivering the computers' quantum capabilities to customers by the end of 2025, with availability via the Azure cloud service as well as through on-premises hardware. "Together, we are co-designing and building what we believe will be the world's most powerful quantum machine," Jason Zander, executive vice president at Microsoft, said in a LinkedIn posting.

Like other players in the field, Microsoft's Azure Quantum team and Atom Computing aim to capitalize on the properties of quantum systems -- where quantum bits, also known as qubits, can process multiple values simultaneously. That's in contrast to classical systems, which typically process ones and zeros to run algorithms. Microsoft has been working with Colorado-based Atom Computing on hardware that uses the nuclear spin properties of neutral ytterbium atoms to run quantum calculations. One of the big challenges is to create a system that can correct the errors that turn up during the calculations due to quantum noise. The solution typically involves knitting together "physical qubits" to produce an array of "logical qubits" that can correct themselves.

In a paper posted to the arXiv preprint server, members of the research team say they were able to connect 256 noisy neutral-atom qubits using Microsoft's qubit-virtualization system in such a way as to produce a system with 24 logical qubits. "This represents the highest number of entangled logical qubits on record," study co-author Krysta Svore, vice president of advanced quantum development for Microsoft Azure Quantum, said today in a blog posting. "Entanglement of the qubits is evidenced by their error rates being significantly below the 50% threshold for entanglement." Twenty of the system's logical qubits were used to perform successful computations based on the Bernstein-Vazirani algorithm, which is used as a benchmark for quantum calculations. "The logical qubits were able to produce a more accurate solution than the corresponding computation based on physical qubits," Svore said. "The ability to compute while detecting and correcting errors is a critical component to scaling to achieve scientific quantum advantage."
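Bernstein-Vazirani is simple enough to simulate classically, which is part of why it makes a good benchmark. The sketch below is an idealized simulation, not the neutral-atom logical-qubit run: a single phase-oracle query followed by a Hadamard transform recovers a hidden bit string exactly, whereas a classical algorithm needs one oracle query per bit.

```python
import numpy as np
from itertools import product

def bernstein_vazirani(s_bits):
    """Recover the hidden string s from one oracle query: the oracle
    imprints phase (-1)^(s.x) on each basis state, and a Hadamard
    transform then concentrates all amplitude on |s>."""
    n = len(s_bits)
    xs = list(product([0, 1], repeat=n))
    # State after the initial Hadamards and the single phase-oracle query
    amps = np.array([(-1)**(sum(si * xi for si, xi in zip(s_bits, x)) % 2)
                     for x in xs]) / np.sqrt(2**n)
    # Final Hadamards: amplitude on y is sum_x (-1)^(x.y) a_x / sqrt(2^n)
    out = np.array([sum(amps[i] * (-1)**(sum(a * b for a, b in zip(x, y)) % 2)
                        for i, x in enumerate(xs)) for y in xs]) / np.sqrt(2**n)
    return xs[int(np.argmax(np.abs(out)))]

print(bernstein_vazirani((1, 0, 1)))  # recovers the hidden string (1, 0, 1)
```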

Supercomputing

'El Capitan' Ranked Most Powerful Supercomputer In the World (engadget.com) 44

Lawrence Livermore National Laboratory's "El Capitan" supercomputer is now ranked as the world's most powerful, achieving a High-Performance Linpack (HPL) score of 1.742 exaflops on the latest TOP500 list. Engadget reports: El Capitan is only the third "exascale" computer, meaning it can perform more than a quintillion calculations in a second. The other two, called Frontier and Aurora, claim the second and third place slots on the TOP500 now. Unsurprisingly, all of these massive machines live within government research facilities: El Capitan is housed at Lawrence Livermore National Laboratory; Frontier is at Oak Ridge National Laboratory; Argonne National Laboratory claims Aurora. [Cray Computing] had a hand in all three systems.

El Capitan has more than 11 million combined CPU and GPU cores, built on AMD Instinct MI300A APUs that pair 24-core 4th-gen EPYC processors rated at 1.8GHz with integrated GPUs. It's also relatively efficient, as such systems go, squeezing out an estimated 58.89 gigaflops per watt. If you're wondering what El Capitan is built for, the answer is addressing nuclear stockpile safety, but it can also be used for nuclear counterterrorism.
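Those two figures together imply the machine's power draw, assuming the HPL score and the efficiency number describe the same benchmark run:

```python
rmax_flops = 1.742e18   # 1.742 exaflops (HPL score)
flops_per_watt = 58.89e9  # 58.89 gigaflops per watt
watts = rmax_flops / flops_per_watt
print(f"{watts / 1e6:.1f} MW")  # roughly 30 MW during the benchmark
```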

Supercomputing

With First Mechanical Qubit, Quantum Computing Goes Steampunk (science.org) 14

An anonymous reader quotes a report from Science Magazine: Qubits, the strange devices at the heart of a quantum computer that can be set to 0, 1, or both at once, could hardly be more different from the mechanical clockwork used in the earliest computers. Today, most quantum computers rely on qubits made out of tiny circuits of superconducting metal, individual ions, photons, or other things. But now, physicists have made a working qubit from a tiny, moving machine, an advance that echoes back to the early 20th century when the first computers employed mechanical switches. "For many years, people were thinking it would be impossible to make a qubit from a mechanical system," says Adrian Bachtold, a condensed matter physicist at the Institute of Photonic Sciences who was not involved in the work, published today in Science. Stephan Durr, a quantum physicist at the Max Planck Institute for Quantum Optics, says the result "puts a new system on the map," which could be used in other experiments—and perhaps to probe the interface of quantum mechanics and gravity. [...]

The new mechanical qubit is unlikely to run more mature competition off the field any time soon. Its fidelity -- a measure of how well experimenters can set the state they desire -- is just 60%, compared with greater than 99% for the best qubits. For that reason, "it's an advance in principle," Bachtold says. But Durr notes that a mechanical qubit might serve as a supersensitive probe of forces, such as gravity, that don't affect other qubits. And ETHZ researchers hope to take their demonstration a step further by using two mechanical qubits to perform simple logical operations. "That's what Igor is working on now," [says Yiwen Chu, a physicist at ETH Zurich]. If they succeed, the physical switches of the very first computers will have made a tiny comeback.
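To see why 60% fidelity makes this "an advance in principle" rather than in practice, note that fidelity compounds across repeated operations. A quick illustration, treating the reported figures as per-operation fidelities (a simplification, since the article's 60% refers to state preparation):

```python
# How per-operation fidelity compounds over a sequence of operations.
# Simplified model: total fidelity ~ (per-op fidelity) ** (number of ops).
mechanical = 0.60   # reported fidelity of the new mechanical qubit
best = 0.99         # ">99%" cited for the best conventional qubits

for n_ops in (1, 5, 10):
    print(f"{n_ops:2d} ops: mechanical {mechanical**n_ops:.4f}, "
          f"best {best**n_ops:.4f}")
```

After just ten operations the mechanical qubit's compounded fidelity falls below 1%, while the best conventional qubits remain above 90%.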

Supercomputing

IBM Boosts the Amount of Computation You Can Get Done On Quantum Hardware (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: There's a general consensus that we won't be able to consistently perform sophisticated quantum calculations without the development of error-corrected quantum computing, which is unlikely to arrive until the end of the decade. It's still an open question, however, whether we could perform limited but useful calculations at an earlier point. IBM is one of the companies that's betting the answer is yes, and on Wednesday, it announced a series of developments aimed at making that possible. On their own, none of the changes being announced are revolutionary. But collectively, changes across the hardware and software stacks have produced much more efficient and less error-prone operations. The net result is a system that supports the most complicated calculations yet on IBM's hardware, leaving the company optimistic that its users will find some calculations where quantum hardware provides an advantage. [...]

Wednesday's announcement was based on the introduction of the second version of its Heron processor, which has 133 qubits. That qubit count is still beyond the reach of simulation on classical computers, provided the processor operates with sufficiently low error rates. IBM VP Jay Gambetta told Ars that Revision 2 of Heron focused on getting rid of what are called TLS (two-level system) errors. "If you see this sort of defect, which can be a dipole or just some electronic structure that is caught on the surface, that is what we believe is limiting the coherence of our devices," Gambetta said. This happens because the defects can resonate at a frequency that interacts with a nearby qubit, causing the qubit to drop out of the quantum state needed to participate in calculations (called a loss of coherence). By making small adjustments to the frequencies at which the qubits operate, it's possible to avoid these problems. This can be done when the Heron chip is being calibrated before it's opened for general use.
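The calibration step described above can be thought of as a detuning search: pick an operating frequency for each qubit that stays as far as possible from known defect resonances. A toy sketch of that idea (the frequencies and the brute-force search are illustrative, not IBM's actual calibration procedure):

```python
# Toy calibration: choose a qubit operating frequency that maximizes
# the minimum detuning from known TLS (two-level system) defect
# resonances, avoiding the coherence-destroying interactions.
def pick_frequency(candidates, defect_freqs):
    """Return the candidate frequency farthest from every defect (GHz)."""
    return max(candidates,
               key=lambda f: min(abs(f - d) for d in defect_freqs))

defects = [4.93, 5.11, 5.27]                        # hypothetical defects, GHz
candidates = [4.90 + 0.01 * i for i in range(41)]   # 4.90..5.30 GHz grid
best = pick_frequency(candidates, defects)
print(f"Operate qubit at {best:.2f} GHz")           # midpoint of widest gap
```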

Separately, the company has done a rewrite of the software that controls the system during operations. "After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that," Gambetta said. The result is a dramatic speed-up. "Something that took 122 hours now is down to a couple of hours," he told Ars. Since people are paying for time on this hardware, that's good for customers now. However, it could also pay off in the longer run, as some errors can occur randomly, so less time spent on a calculation can mean fewer errors. Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. [...] The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it's still easier to do error mitigation calculations than simulate the quantum computer's behavior on the same hardware, there's still the risk of it becoming computationally intractable. But IBM has also taken the time to optimize that, too. "They've got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU," Gambetta told Ars. "So I think it's a combination of both."
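Error mitigation of the kind described above works by post-processing noisy results rather than fixing the hardware. The article doesn't spell out IBM's exact scheme, but one widely used technique, zero-noise extrapolation, conveys the idea: run the circuit at deliberately amplified noise levels, then extrapolate the measured values back to the zero-noise limit. A minimal sketch on synthetic data:

```python
# Zero-noise extrapolation (ZNE), illustrated with synthetic data.
# Run at noise scale factors 1x, 2x, 3x; fit; extrapolate to 0x.
def zne(scales, noisy_values):
    """Linear least-squares fit of value vs. noise scale, evaluated at 0."""
    n = len(scales)
    mean_x = sum(scales) / n
    mean_y = sum(noisy_values) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(scales, noisy_values))
             / sum((x - mean_x) ** 2 for x in scales))
    return mean_y - slope * mean_x   # intercept = estimate at zero noise

# Synthetic example: true expectation value 1.0, degraded linearly by noise.
scales = [1.0, 2.0, 3.0]
measured = [1.0 - 0.12 * s for s in scales]   # 0.88, 0.76, 0.64
print(f"Mitigated estimate: {zne(scales, measured):.3f}")
```

The catch the article alludes to is that the classical post-processing itself grows expensive with qubit count, which is why IBM moved parts of it onto GPUs.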

Supercomputing

Google Identifies Low Noise 'Phase Transition' In Its Quantum Processor (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: Back in 2019, Google made waves by claiming it had achieved what has been called "quantum supremacy" -- the ability of a quantum computer to perform operations that would take a wildly impractical amount of time to simulate on standard computing hardware. That claim proved to be controversial, in that the operations were little more than a benchmark that involved getting the quantum computer to behave like a quantum computer; separately, improved ideas about how to perform the simulation on a supercomputer cut the time required down significantly.

But Google is back with a new exploration of the benchmark, described in a paper published in Nature on Wednesday. It uses the benchmark to identify what it calls a phase transition in the performance of its quantum processor and uses it to identify conditions where the processor can operate with low noise. Taking advantage of that, they again show that, even giving classical hardware every potential advantage, it would take a supercomputer a dozen years to simulate things.

Supercomputing

IBM Opens Its Quantum-Computing Stack To Third Parties (arstechnica.com) 7

An anonymous reader quotes a report from Ars Technica, written by John Timmer: [P]art of the software stack that companies are developing to control their quantum hardware includes software that converts abstract representations of quantum algorithms into the series of commands needed to execute them. IBM's version of this software is called Qiskit (although it was made open source and has since been adopted by other companies). Recently, IBM made a couple of announcements regarding Qiskit, both benchmarking it in comparison to other software stacks and opening it up to third-party modules. [...] Right now, the company is supporting six third-party Qiskit functions that break down into two categories.

The first can be used as stand-alone applications and are focused on providing solutions to problems for users who have no expertise programming quantum computers. One calculates the ground-state energy of molecules, and the second performs optimizations. The remainder are focused on letting users get more out of existing quantum hardware, which tends to be error prone. Some errors occur more often than others: they can stem from quirks of individual hardware qubits, or simply because some operations are more error prone than others. These can be handled in two ways. The first is to design the circuit being executed to avoid the situations most likely to produce an error. The second is to examine the final state of the algorithm, assess whether errors likely occurred, and adjust to compensate for them. Third parties are providing software that can handle both.
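Designing a circuit to avoid error-prone situations often amounts to removing operations before they ever run, since every executed gate is another chance for an error. A toy compiler pass (illustrative only, not Qiskit's actual transpiler) that cancels adjacent self-inverse gates on the same qubits:

```python
# Toy circuit-optimization pass: two adjacent identical self-inverse
# gates (X, H, Z, CNOT, ...) compose to the identity, so drop the pair.
# Fewer physical gates executed means fewer chances for an error.
SELF_INVERSE = {"x", "h", "z", "cx"}

def cancel_pairs(circuit):
    """circuit: list of (gate_name, qubits_tuple). Returns a reduced list."""
    out = []
    for op in circuit:
        if out and out[-1] == op and op[0] in SELF_INVERSE:
            out.pop()          # the two gates compose to the identity
        else:
            out.append(op)
    return out

circ = [("h", (0,)), ("x", (1,)), ("x", (1,)),
        ("cx", (0, 1)), ("cx", (0, 1)), ("h", (0,))]
print(cancel_pairs(circ))   # cancellations cascade: the whole circuit is identity
```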

One of those third parties is Q-CTRL, and we talked to its CEO, Michael Biercuk. "We build software that is really focused on everything from the lowest level of hardware manipulation, something that we call quantum firmware, up through compilation and strategies that help users map their problem onto what has to be executed on hardware," he told Ars. (Q-CTRL is also providing the optimization tool that's part of this Qiskit update.) "We're focused on suppressing errors everywhere that they can occur inside the processor," he continued. "That means the individual gate or logic operations, but it also means the execution of the circuit. There are some errors that only occur in the whole execution of a circuit as opposed to manipulating an individual quantum device." Biercuk said Q-CTRL's techniques are hardware agnostic and have been demonstrated on machines that use very different types of qubits, like trapped ions. While the sources of error on the different hardware may be distinct, the manifestations of those problems are often quite similar, making it easier for Q-CTRL's approach to work around the problems.

Those work-arounds include things like altering the properties of the microwave pulses that perform operations on IBM's hardware, and replacing the portion of Qiskit that converts an algorithm to a series of gate operations. The software will also perform operations that suppress errors that can occur when qubits are left idle during the circuit execution. As a result of all these differences, he claimed that using Q-CTRL's software allows the execution of more complex algorithms than are possible via Qiskit's default compilation and execution. "We've shown, for instance, optimization with all 156 qubits on [an IBM] system, and importantly -- I want to emphasize this word -- successful optimization," Biercuk told Ars. "What it means is you run it and you get the right answer, as opposed to I ran it and I kind of got close."
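Suppressing errors on idle qubits is commonly done with dynamical decoupling: inserting pulse pairs that compose to the identity but refocus slow environmental noise accumulated while the qubit waits. A schematic pass in that spirit (illustrative only; Q-CTRL's actual pulse sequences are more sophisticated than a single X-X echo):

```python
# Schematic dynamical decoupling: fill long idle windows with an X-X
# pulse pair. The pair is logically a no-op (X followed by X = identity),
# but it echoes away low-frequency dephasing on the waiting qubit.
def insert_dd(schedule, idle_threshold_ns=100, pulse_ns=20):
    """schedule: sorted list of (start_ns, duration_ns, op) on one qubit."""
    out = []
    prev_end = 0
    for start, dur, op in schedule:
        gap = start - prev_end
        if gap >= idle_threshold_ns:
            mid = prev_end + gap // 2
            out.append((mid - pulse_ns, pulse_ns, "x"))   # first echo pulse
            out.append((mid, pulse_ns, "x"))              # undoes the first
        out.append((start, dur, op))
        prev_end = start + dur
    return out

sched = [(0, 40, "h"), (400, 40, "cx")]   # 360 ns idle gap between gates
print(insert_dd(sched))
```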

Supercomputing

As Quantum Computing Threats Loom, Microsoft Updates Its Core Crypto Library (arstechnica.com) 33

An anonymous reader quotes a report from Ars Technica: Microsoft has updated a key cryptographic library with two new encryption algorithms designed to withstand attacks from quantum computers. The updates were made last week to SymCrypt, a core cryptographic code library for handling cryptographic functions in Windows and Linux. The library, started in 2006, provides operations and algorithms developers can use to safely implement secure encryption, decryption, signing, verification, hashing, and key exchange in the apps they create. The library supports federal certification requirements for cryptographic modules used in some governmental environments. Despite the name, SymCrypt supports both symmetric and asymmetric algorithms. It's the main cryptographic library Microsoft uses in products and services including Azure, Microsoft 365, all supported versions of Windows, Azure Stack HCI, and Azure Linux. The library provides cryptographic security used in email security, cloud storage, web browsing, remote access, and device management. Microsoft documented the update in a post on Monday. The updates are the first steps in implementing a massive overhaul of encryption protocols that incorporate a new set of algorithms that aren't vulnerable to attacks from quantum computers. [...]

The first new algorithm Microsoft added to SymCrypt is called ML-KEM. Previously known as CRYSTALS-Kyber, ML-KEM is one of three post-quantum standards formalized last month by the National Institute of Standards and Technology (NIST). The KEM in the new name is short for key encapsulation. KEMs can be used by two parties to negotiate a shared secret over a public channel. Shared secrets generated by a KEM can then be used with symmetric-key cryptographic operations, which aren't vulnerable to Shor's algorithm when the keys are of a sufficient size. [...] The other algorithm added to SymCrypt is the NIST-recommended XMSS. Short for eXtended Merkle Signature Scheme, it's based on "stateful hash-based signature schemes." These algorithms are useful in very specific contexts such as firmware signing, but are not suitable for more general uses. Monday's post said Microsoft will add additional post-quantum algorithms to SymCrypt in the coming months. They are ML-DSA, a lattice-based digital signature scheme, previously called Dilithium, and SLH-DSA, a stateless hash-based signature scheme previously called SPHINCS+. Both became NIST standards last month and are formally referred to as FIPS 204 and FIPS 205.
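The KEM workflow described above has a fixed shape regardless of the underlying math: one party encapsulates against a public key, the other decapsulates with the private key, and both end up holding the same shared secret. The deliberately insecure toy below shows only that interface shape; it is not ML-KEM (which rests on lattice problems) and must never be used for real cryptography:

```python
# Toy KEM illustrating the keygen / encapsulate / decapsulate interface.
# INSECURE demo of the API shape only -- not ML-KEM, not for real use.
import hashlib
import os

def keygen():
    sk = os.urandom(32)                          # private key
    pk = hashlib.sha256(b"pub" + sk).digest()    # derived "public key" (toy)
    return pk, sk

def encapsulate(pk):
    ct = os.urandom(32)                          # ciphertext sent on the wire
    ss = hashlib.sha256(pk + ct).digest()        # sender's shared secret
    return ct, ss

def decapsulate(sk, ct):
    pk = hashlib.sha256(b"pub" + sk).digest()    # recompute public key
    return hashlib.sha256(pk + ct).digest()      # receiver's shared secret

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
ss_receiver = decapsulate(sk, ct)
print("Shared secrets match:", ss_sender == ss_receiver)
```

In practice the shared secret would then key a symmetric cipher such as AES, which, as the summary notes, remains safe from Shor's algorithm at sufficient key sizes.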
In Monday's post, Microsoft Principal Product Manager Lead Aabha Thipsay wrote: "PQC algorithms offer a promising solution for the future of cryptography, but they also come with some trade-offs. For example, these typically require larger key sizes, longer computation times, and more bandwidth than classical algorithms. Therefore, implementing PQC in real-world applications requires careful optimization and integration with existing systems and standards."
