Chrome

Google Chrome Is Finally Coming To ARM64 Linux (nerds.xyz) 35

BrianFagioli writes: Google says it will finally release Chrome for ARM64 Linux in the second quarter of 2026, bringing the company's full browser to a platform that has existed for years without official support. Until now, Linux users running Arm hardware have largely relied on Chromium builds or unofficial packages if they wanted something close to Chrome. Google says the new build will include the same features found on other platforms, including Google account syncing, Chrome Web Store extensions, built-in translation, Safe Browsing protections, and Google Password Manager.

The timing reflects how ARM hardware is becoming more common across the Linux ecosystem, from developer laptops to AI systems. Google also pointed to NVIDIA's DGX Spark, a compact AI supercomputing device built on the Grace Blackwell architecture, which will support installing Chrome through NVIDIA's package management tools. For many Linux users, the announcement feels like a "finally" moment, as ARM64 Linux systems have been widespread for years despite the absence of an official Chrome build.

China

China Releases First Homegrown Quantum Computing OS (globaltimes.cn) 33

The Global Times reports: China's first domestically developed quantum computer operating system, Origin Pilot, has been made available for online download, the Global Times learned from the Anhui Quantum Computing Engineering Research Center on Wednesday. A Chinese scientist said that while several quantum computing operating system efforts are underway worldwide, this is the first developed in China, where it is seen as part of the country's broader push for technological independence and for advancing quantum computing.

The center said the release marks the world's first open-source quantum computer operating system available for public download, which is expected to lower development barriers and support the growth of China's quantum computing ecosystem. Developed by Hefei-based Origin Quantum Computing Technology Co, the company behind China's third-generation superconducting quantum computer, Origin Wukong, Origin Pilot was first launched in 2021 and has gone through multiple rounds of iteration and upgrade.

The developer describes it as an integrated quantum-classical-intelligent computing operating system compatible with major hardware approaches, including superconducting qubits, trapped ions and neutral atoms. It is now deployed on the company's Origin Wukong series and is available to external users, the company said. Guo Guoping, chief scientist of Origin Quantum and director at the Anhui Quantum Computing Engineering Research Center, told the Global Times that a quantum operating system is the "soft heart" of the quantum computing ecosystem. He said the decision to make Origin Pilot available globally marks a shift in China's quantum computing industry from closed-door tech innovation to broader open-source ecosystem development.
Dou Menghan, head of the research team, said: "Users can quickly integrate with quantum chips of multiple physical types and, using autonomous programming frameworks such as QPanda, execute quantum computing jobs across different physical quantum chips to support both research and commercialization needs."
Supercomputing

Mexico Unveils Plans To Build Most Powerful Supercomputer In Latin America (apnews.com) 22

An anonymous reader quotes a report from the Associated Press: Mexico unveiled plans Wednesday to build what it claims will be Latin America's most powerful supercomputer -- a project the government says will help the country capitalize on the rapidly evolving uses of artificial intelligence and exponentially expand the country's computing capacity. Dubbed "Coatlicue" for the Mexica goddess considered the earth mother, the supercomputer would be seven times more powerful than the region's current leader in Brazil, said Jose Merino, head of the Telecommunications and Digital Transformation Agency.

President Claudia Sheinbaum said during her morning news briefing that the location for the project had not been decided yet, but construction will begin next year. "We're very excited," said Sheinbaum, an academic and climate scientist. "It is going to allow Mexico to fully get in on the use of artificial intelligence and the processing of data that today we don't have the capacity to do." Merino said that Mexico's most powerful supercomputer operates at 2.3 petaflops; a petaflop is a unit of computing speed equal to one quadrillion operations per second. Coatlicue would have a capacity of 314 petaflops.
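A quick sanity check on those figures (a sketch using only the 2.3 and 314 petaflop numbers quoted above):

```python
# One petaflop = 10**15 floating-point operations per second.
PETAFLOP = 10**15

current = 2.3 * PETAFLOP    # Mexico's current leader, per Merino
coatlicue = 314 * PETAFLOP  # Coatlicue's planned capacity

# Coatlicue would be roughly 136x faster than Mexico's current machine,
# well beyond the "seven times more powerful than Brazil's leader" claim.
print(f"speedup over current Mexican leader: {coatlicue / current:.0f}x")
```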

AI

Amazon Pledges Up To $50 Billion To Expand AI, Supercomputing For US Government 15

Amazon is committing up to $50 billion to massively expand AI and supercomputing capacity for U.S. government cloud regions, adding 1.3 gigawatts of high-performance compute and giving federal agencies access to its full suite of AI tools. Reuters reports: The project, expected to break ground in 2026, will add nearly 1.3 gigawatts of artificial intelligence and high-performance computing capacity across AWS Top Secret, AWS Secret and AWS GovCloud regions by building data centers equipped with advanced compute and networking technologies.

Under the latest initiative, federal agencies will gain access to AWS' comprehensive suite of AI services, including Amazon SageMaker for model training and customization, Amazon Bedrock for deploying models and agents, as well as foundation models such as Amazon Nova and Anthropic Claude. The federal government seeks to develop tailored AI solutions and drive cost-savings by leveraging AWS' dedicated and expanded capacity.
Supercomputing

A Quantum Error Correction Breakthrough? (harvard.edu) 39

The dream of quantum computers has been hampered by the challenge of error correction, writes the Harvard Gazette, since qubits "are inherently susceptible to slipping out of their quantum states and losing their encoded information."

But in a newly-published paper, a research team "combined various methods to create complex circuits with dozens of error correction layers," an approach that "suppresses errors below a critical threshold — the point where adding qubits further reduces errors rather than increasing them." "For the first time, we combined all essential elements for a scalable, error-corrected quantum computation in an integrated architecture," said Mikhail Lukin, co-director of the Quantum Science and Engineering Initiative, Joshua and Beth Friedman University Professor, and senior author of the new paper. "These experiments — by several measures the most advanced that have been done on any quantum platform to date — create the scientific foundation for practical large-scale quantum computation..."

"There are still a lot of technical challenges remaining to get to very large-scale computer with millions of qubits, but this is the first time we have an architecture that is conceptually scalable," said lead author Dolev Bluvstein, Ph.D. '25, who did the research during his graduate studies at Harvard and is now an assistant professor at Caltech. "It's going to take a lot of effort and technical development, but it's becoming clear that we can build fault-tolerant quantum computers...."

Hartmut Neven, vice president of engineering at the Google Quantum AI team, said the new paper came amid an "incredibly exciting" race between qubit platforms. "This work represents a significant advance toward our shared goal of building a large-scale, useful quantum computer," he said... With recent advances, Lukin believes the core elements for building quantum computers are falling into place. "This big dream that many of us had for several decades, for the first time, is really in direct sight," he said.

"In theory, a system of 300 quantum bits can store more information than the number of particles in the known universe..." the article points out.

"The new paper represents an important advance in a three-decade pursuit of quantum error correction."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Supercomputing

A New Ion-Based Quantum Computer Makes Error Correction Simpler (technologyreview.com) 10

An anonymous reader quotes a report from MIT Technology Review: The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. Like all other existing quantum computers, Helios is not powerful enough to execute the industry's dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum's machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google's and IBM's. "Helios is an important proof point in our road map about how we'll scale to larger physical systems," says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum's majority owner.

Located at Quantinuum's facility in Colorado, Helios comprises a myriad of components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium. These components all sit within a chamber that is cooled to about 15 Kelvin (-432.67 degrees Fahrenheit), on top of an optical table. Users can access the computer by logging in remotely over the cloud. [...] Helios is noteworthy for its qubits' precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer's qubit error rates are low to begin with, which means it doesn't need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in an operation known as entanglement and found that they behaved as expected 99.921% of the time. "To the best of my knowledge, no other platform is at this level," says Islam.

[...] Besides increasing the number of qubits on its chip, another notable achievement for Quantinuum is that it demonstrated error correction "on the fly," says David Hayes, the company's director of computational theory and design. That's a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than chips known as FPGAs, also used in the industry. Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Quantinuum's predecessor, with the claim that it "rivals the best classical approaches in expanding our understanding of magnetism." Along with announcing the introduction of Helios, the company has used the machine to simulate the behavior of electrons in a high-temperature superconductor.
Quantinuum is expanding its Helios line with a new system in Minnesota. It's also started developing its fourth-generation quantum computer, Sol, set for 2027 with 192 qubits. Then, a fifth-generation system, Apollo, is expected in 2029 with thousands of qubits and full fault tolerance.
Supercomputing

Nvidia's New Product Merges AI Supercomputing With Quantum (thequantuminsider.com) 14

NVIDIA has introduced NVQLink, an open system architecture that directly connects quantum processors with GPU-based supercomputers. The Quantum Insider reports: The new platform connects the high-speed, high-throughput performance of NVIDIA's GPU computing with quantum processing units (QPUs), allowing researchers to manage the intricate control and error-correction workloads required by quantum devices. According to a NVIDIA statement, the system was developed with guidance from researchers at major U.S. national laboratories including Brookhaven, Fermi, Lawrence Berkeley, Los Alamos, MIT Lincoln, Oak Ridge, Pacific Northwest, and Sandia.

Qubits, the basic units of quantum information, are extremely sensitive to noise and decoherence, making them prone to errors. Correcting and stabilizing these systems requires near-instantaneous feedback and coordination with classical processors. NVQLink is meant to meet that demand by providing an open, low-latency interconnect between quantum processors, control systems, and supercomputers -- effectively creating a unified environment for hybrid quantum applications.

The architecture offers a standardized, open approach to quantum integration, aligning with the company's CUDA-Q software platform to enable researchers to develop, test, and scale hybrid algorithms that draw simultaneously on CPUs, GPUs, and QPUs. The U.S. Department of Energy (DOE) -- which oversees several of the participating laboratories -- framed NVQLink as part of a broader national effort to sustain leadership in high-performance computing, according to NVIDIA.

AMD

IBM Says Conventional AMD Chips Can Run Quantum Computing Error Correction Algorithm (reuters.com) 23

IBM announced that its quantum error-correction algorithm can now run in real time on standard AMD field-programmable gate array (FPGA) chips -- a major step toward making quantum computing more practical and affordable. Reuters reports: In June, IBM said it had developed an algorithm to run alongside quantum chips that can address such errors. In a research paper seen by Reuters to be published on Monday, IBM will show it can run those algorithms in real time on a type of chip called a field programmable gate array manufactured by AMD.

Jay Gambetta, director of IBM research, said the work showed that IBM's algorithm not only works in the real world, but can operate on a readily available AMD chip that is not "ridiculously expensive." "Implementing it, and showing that the implementation is actually 10 times faster than what is needed, is a big deal," Gambetta said in an interview. IBM has a multi-year plan to build a quantum computer called Starling by 2029. Gambetta said the algorithm work disclosed Friday was completed a year ahead of schedule.

Supercomputing

Europe Hopes To Join Competitive AI Race With Supercomputer Jupiter (france24.com) 41

Europe on Friday inaugurated Jupiter, its first exascale supercomputer and the most powerful AI machine on the continent. Built in Germany with 24,000 Nvidia chips, the 500-million-euro system aims to close the AI gap with the US and China while also advancing climate modeling, neuroscience, and renewable energy research. France 24 reports: Based at Juelich Supercomputing Centre in western Germany, it is Europe's first "exascale" supercomputer -- meaning it will be able to perform at least one quintillion (or one billion billion) calculations per second. The United States already has three such computers, all operated by the Department of Energy. Jupiter is housed in a centre covering some 3,600 square meters (38,000 square feet) -- about half the size of a football pitch -- containing racks of processors, and packed with about 24,000 Nvidia chips, which are favored by the AI industry.

Half the 500 million euros ($580 million) to develop and run the system over the next few years comes from the European Union and the rest from Germany. Its vast computing power can be accessed by researchers across numerous fields as well as companies for purposes such as training AI models. "Jupiter is a leap forward in the performance of computing in Europe," Thomas Lippert, head of the Juelich centre, told AFP, adding that it was 20 times more powerful than any other computer in Germany. [...]

Yes, Jupiter will require on average around 11 megawatts of power, according to estimates -- equivalent to the energy used to power thousands of homes or a small industrial plant. But its operators insist that Jupiter is the most energy-efficient among the fastest computer systems in the world. It uses the latest, most energy-efficient hardware, has water-cooling systems and the waste heat that it generates will be used to heat nearby buildings, according to the Juelich centre.

Supercomputing

Scientists Make 'Magic State' Breakthrough After 20 Years (livescience.com) 38

An anonymous reader quotes a report from Live Science: In a world first, scientists have demonstrated an enigmatic phenomenon in quantum computing that could pave the way for fault-tolerant machines that are far more powerful than any supercomputer. The process, called "magic state distillation," was first proposed 20 years ago, but its use in logical qubits has eluded scientists ever since. It has long been considered crucial for producing the high-quality resources, known as "magic states," needed to fulfill the full potential of quantum computers. [...] Now, however, scientists with QuEra say they have demonstrated magic state distillation in practice for the first time on logical qubits. They outlined their findings in a new study published July 14 in the journal Nature.

In the study, using the Gemini neutral-atom quantum computer, the scientists distilled five imperfect magic states into a single, cleaner magic state. They performed this separately on a Distance-3 and a Distance-5 logical qubit, demonstrating that it scales with the quality of the logical qubit. "A greater distance means better logical qubits. A Distance-2, for instance, means that you can detect an error but not correct it. Distance-3 means that you can detect and correct a single error. Distance-5 would mean that you can detect and correct up to two errors, and so on, and so on," [explained Yuval Boger, chief commercial officer at QuEra who was not personally involved in the research]. "So the greater the distance, the higher fidelity of the qubit is -- and we liken it to distilling crude oil into a jet fuel."
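Boger's rule of thumb follows directly from coding theory: a distance-d code detects up to d-1 errors and corrects up to (d-1)//2. A minimal sketch:

```python
def code_capabilities(d: int) -> tuple[int, int]:
    """For an error-correcting code of distance d, return
    (detectable errors, correctable errors)."""
    return d - 1, (d - 1) // 2

for d in (2, 3, 5):
    detect, correct = code_capabilities(d)
    print(f"Distance-{d}: detects {detect}, corrects {correct}")
# Distance-2 detects one error but corrects none; Distance-3 corrects
# one; Distance-5 corrects up to two -- matching the quote above.
```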

As a result of the distillation process, the fidelity of the final magic state exceeded that of any input. This proved that fault-tolerant magic state distillation worked in practice, the scientists said. This means that a quantum computer that uses both logical qubits and high-quality magic states to run non-Clifford gates is now possible. "We're seeing sort of a shift from a few years ago," Boger said. "The challenge was: can quantum computers be built at all? Then it was: can errors be detected and corrected? Us and Google and others have shown that, yes, that can be done. Now it's about: can we make these computers truly useful? And to make one computer truly useful, other than making them larger, you want them to be able to run programs that cannot be simulated on classical computers."

Supercomputing

IBM Says It's Cracked Quantum Error Correction (ieee.org) 26

Edd Gent reporting for IEEE Spectrum: IBM has unveiled a new quantum computing architecture it says will slash the number of qubits required for error correction. The advance will underpin its goal of building a large-scale, fault-tolerant quantum computer, called Starling, that will be available to customers by 2029. Because of the inherent unreliability of the qubits (the quantum equivalent of bits) that quantum computers are built from, error correction will be crucial for building reliable, large-scale devices. Error-correction approaches spread each unit of information across many physical qubits to create "logical qubits." This provides redundancy against errors in individual physical qubits.

One of the most popular approaches is known as a surface code, which requires roughly 1,000 physical qubits to make up one logical qubit. This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream," Jay Gambetta, the vice president of IBM Quantum, said in a press briefing. Around 2019, the company began to investigate alternatives. In a paper published in Nature last year, IBM researchers outlined a new error-correction scheme called quantum low-density parity check (qLDPC) codes that would require roughly one-tenth of the number of qubits that surface codes need. Now, the company has unveiled a new quantum-computing architecture that can realize this new approach.
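Using the article's rough figures (about 1,000 physical qubits per surface-code logical qubit, with qLDPC needing roughly a tenth of that), the hardware savings compound quickly. A sketch treating those two ratios as given:

```python
SURFACE_CODE_RATIO = 1000                # physical qubits per logical qubit (article's figure)
QLDPC_RATIO = SURFACE_CODE_RATIO // 10   # "roughly one-tenth," per the qLDPC paper

def physical_qubits(logical: int, ratio: int) -> int:
    """Total physical qubits needed for a given logical-qubit count."""
    return logical * ratio

# A hypothetical 200-logical-qubit machine:
print(physical_qubits(200, SURFACE_CODE_RATIO))  # 200,000 under a surface code
print(physical_qubits(200, QLDPC_RATIO))         # 20,000 under qLDPC
```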
"We've cracked the code to quantum error correction and it's our plan to build the first large-scale, fault-tolerant quantum computer," said Gambetta, who is also an IBM Fellow. "We feel confident it is now a question of engineering to build these machines, rather than science."
AMD

New Supercomputing Record Set - Using AMD's Instinct GPUs (tomshardware.com) 23

"AMD processors were instrumental in achieving a new world record," reports Tom's Hardware, "during a recent Ansys Fluent computational fluid dynamics simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory."

The article points out that Frontier was the fastest supercomputer in the world until it was beaten by Lawrence Livermore Lab's El Capitan — with both computers powered by AMD GPUs: According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly...
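The quoted speedup checks out; a back-of-the-envelope check using the article's figures:

```python
cpu_hours = 38.5   # original run on 3,700 CPU cores
gpu_hours = 1.5    # run on 1,024 MI250X accelerators plus EPYC CPUs

speedup = cpu_hours / gpu_hours
print(f"{speedup:.1f}x faster")   # ~25.7x, i.e. "more than 25 times faster"
```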

Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.

Math

JPMorgan Says Quantum Experiment Generated Truly Random Numbers (financialpost.com) 111

JPMorgan Chase used a quantum computer from Honeywell's Quantinuum to generate and mathematically certify truly random numbers -- an advancement that could significantly enhance encryption, security, and financial applications. The breakthrough was validated with help from U.S. national laboratories and has been published in the journal Nature. From a report: Between May 2023 and May 2024, cryptographers at JPMorgan wrote an algorithm for a quantum computer to generate random numbers, which they ran on Quantinuum's machine. The US Department of Energy's supercomputers were then used to test whether the output was truly random. "It's a breakthrough result," Marco Pistoia, project lead and head of Global Technology Applied Research at JPMorgan, told Bloomberg in an interview. "The next step will be to understand where we can apply it."
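For context on what "testing whether the output was truly random" involves: classical statistical screens, such as the monobit frequency test from NIST's SP 800-22 suite, can check a bit stream for bias but can never prove true randomness, which is precisely the gap the quantum certification protocol closes. A minimal sketch of one such classical screen (a standard illustration, not the protocol JPMorgan and the DOE labs actually ran):

```python
import random
from math import erfc, sqrt

def monobit_pvalue(bits: list[int]) -> float:
    """NIST SP 800-22-style frequency (monobit) test: are 0s and 1s balanced?

    A p-value above ~0.01 means the stream passes this one screen. Passing
    classical tests is necessary but never sufficient for true randomness.
    """
    s = sum(1 if b else -1 for b in bits)
    return erfc(abs(s) / sqrt(2 * len(bits)))

stream = [random.getrandbits(1) for _ in range(10_000)]
print(f"p-value: {monobit_pvalue(stream):.3f}")
```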

Applications could ultimately include more energy-efficient cryptocurrency, online gambling, and any other activity hinging on complete randomness, such as deciding which precincts to audit in elections.

Supercomputing

Supercomputer Draws Molecular Blueprint For Repairing Damaged DNA (phys.org) 10

Using the Summit supercomputer at the Department of Energy's Oak Ridge National Laboratory, researchers have modeled a key component of nucleotide excision repair (NER) called the pre-incision complex (PInC), which plays a crucial role in DNA damage repair. Their study, published in Nature Communications, provides new insights into how the PInC machinery orchestrates precise DNA excision, potentially leading to advancements in treating genetic disorders, preventing premature aging, and understanding conditions like xeroderma pigmentosum and Cockayne syndrome. Phys.Org reports: "Computationally, once you assemble the PInC, molecular dynamics simulations of the complex become relatively straightforward, especially on large supercomputers like Summit," [said lead investigator Ivaylo Ivanov, a chemistry professor at Georgia State University]. Nanoscale Molecular Dynamics, or NAMD, is a molecular dynamics code specifically designed for supercomputers and is used to simulate the movements and interactions of large biomolecular systems that contain millions of atoms. Using NAMD, the research team ran extensive simulations. The number-crunching power of the 200-petaflop Summit supercomputer -- capable of performing 200,000 trillion calculations per second -- was essential in unraveling the functional dynamics of the PInC complex on a timescale of microseconds. "The simulations showed us a lot about the complex nature of the PInC machinery. It showed us how these different components move together as modules and the subdivision of this complex into dynamic communities, which form the moving parts of this machine," Ivanov said.

The findings are significant in that mutations in XPF and XPG can lead to severe human genetic disorders. They include xeroderma pigmentosum, which is a condition that makes people more susceptible to skin cancer, and Cockayne syndrome, which can affect human growth and development, lead to impaired hearing and vision, and speed up the aging process. "Simulations allow us to zero in on these important regions because mutations that interfere with the function of the NER complex often occur at community interfaces, which are the most dynamic regions of the machine," Ivanov said. "Now we have a much better understanding of how and from where these disorders manifest."

ISS

Axiom Space and Red Hat Will Bring Edge Computing to the International Space Station (theregister.com) 7

Axiom Space and Red Hat will collaborate to launch Data Center Unit-1 (AxDCU-1) to the International Space Station this spring. It's a small data processing prototype (powered by lightweight, edge-optimized Red Hat Device Edge) that will demonstrate initial Orbital Data Center (ODC) capabilities.

"It all sounds rather grand for something that resembles a glorified shoebox," reports the Register. Axiom Space said: "The prototype will test applications in cloud computing, artificial intelligence, and machine learning (AI/ML), data fusion and space cybersecurity."

Space is an ideal environment for edge devices. Connectivity to datacenters on Earth is severely constrained, so the more processing that can be done before data is transmitted to a terrestrial receiving station, the better. Tony James, chief architect, Science and Space at Red Hat, said: "Off-planet data processing is the next frontier, and edge computing is a crucial component. With Red Hat Device Edge and in collaboration with Axiom Space, Earth-based mission partners will have the capabilities necessary to make real-time decisions in space with greater reliability and consistency...."

The Red Hat Device Edge software used by Axiom's device combines Red Hat Enterprise Linux, the Red Hat Ansible Platform, and MicroShift, a lightweight Kubernetes container orchestration service derived from Red Hat OpenShift. The plan is for Axiom Space to host hybrid cloud applications and cloud-native workloads on-orbit. Jason Aspiotis, global director of in-space data and security, Axiom Space, told The Register that the hardware itself is a commercial off-the-shelf unit designed for operation in harsh environments... "AxDCU-1 will have the ability to be controlled and utilized either via ground-to-space or space-to-space communications links. Our current plans are to maintain this device on the ISS. We plan to utilize this asset for at least two years."

The article notes that HPE has also "sent up a succession of Spaceborne computers — commercial, off-the-shelf supercomputers — over the years to test storage, recovery, and operational potential on long-duration missions." (They apparently use Red Hat Enterprise Linux.) "At the other end of the scale, the European Space Agency has run Raspberry Pi computers on the ISS for years as part of the AstroPi educational outreach program."

Axiom Space says their Orbital Data Center is designed to "reduce delays traditionally associated with orbital data processing and analysis." By utilizing Earth-independent cloud storage and edge processing infrastructure, Axiom Space ODCs will enable data to be processed closer to its source, spacecraft or satellites, bypassing the need for terrestrial-based data centers. This architecture alleviates reliance on costly, slow, intermittent or contested network connections, creating more secure and quicker decision-making in space.

The goal is to allow Axiom Space and its partners to have access to real-time processing capabilities, laying the foundation for increased reliability and improved space cybersecurity with extensive applications. Use cases for ODCs include but are not limited to supporting Earth observation satellites with in-space and lower latency data storage and processing, AI/ML training on-orbit, multi-factor authentication and cyber intrusion detection and response, supervised autonomy, in-situ space weather analytics and off-planet backup & disaster recovery for critical infrastructure on Earth.

Supercomputing

Amazon Uses Quantum 'Cat States' With Error Correction (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Following up on Microsoft's announcement of a qubit based on completely new physics, Amazon is publishing a paper describing a very different take on quantum computing hardware. The system mixes two different types of qubit hardware to improve the stability of the quantum information they hold. The idea is that one type of qubit is resistant to errors, while the second can be used for implementing an error-correction code that catches the problems that do happen. While there have been more effective demonstrations of error correction in the past, a number of companies are betting that Amazon's general approach is the best route to getting logical qubits that are capable of complex algorithms. So, in that sense, it's an important proof of principle. Amazon's quantum computing approach combines cat qubits for data storage and transmons for error correction.

Cat qubits are quantum bits that distribute their superposition state across multiple photons in a resonator, making them highly resistant to bit flip errors. Transmons are superconducting qubits that help detect and correct phase flip errors by enabling weak measurements without destroying the quantum state. Meanwhile, a phase flip is a quantum error that alters the relative phase of a qubit's superposition state without changing its probability distribution. Unlike a bit flip, which swaps a qubit's state probabilities, a phase flip changes how the quantum states interfere, potentially disrupting quantum computations.
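The bit flip vs. phase flip distinction is easy to see with the Pauli X and Z operators acting on a two-amplitude state vector (a toy illustration using plain Python, no quantum library assumed):

```python
from math import sqrt

# A qubit state as a length-2 vector [amplitude of |0>, amplitude of |1>].
def bit_flip(state):      # Pauli X: swaps the |0> and |1> amplitudes
    a, b = state
    return [b, a]

def phase_flip(state):    # Pauli Z: negates the |1> amplitude
    a, b = state
    return [a, -b]

def probabilities(state): # measurement probabilities in the computational basis
    return [abs(a) ** 2 for a in state]

ket0 = [1, 0]
plus = [1 / sqrt(2), 1 / sqrt(2)]    # the superposition (|0> + |1>)/sqrt(2)

# A bit flip swaps outcome probabilities: X|0> = |1>.
print(bit_flip(ket0))                # [0, 1]

# A phase flip leaves the probability distribution untouched...
print(probabilities(plus), probabilities(phase_flip(plus)))

# ...but changes the state itself (|+> becomes |->), which alters how
# amplitudes interfere in subsequent operations.
print(phase_flip(plus))
```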

By alternating cat qubits with transmons, Amazon reduces the number of hardware qubits needed for error correction. Their tests show that increasing qubits lowers the error rate, proving the system's effectiveness. However, rare bit flips still cause entire logical qubits to fail, and transmons remain prone to both bit and phase flips. If you're still entangled in this story without decohering into pure quantum chaos, kudos to you!
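The division of labor described above (cat qubits suppress bit flips, leaving mostly phase flips for a simple code to catch) is what lets a repetition-style code suffice. A toy classical analogue of repetition-code decoding, assuming independent flips:

```python
from collections import Counter

def encode(bit: int, n: int = 3) -> list[int]:
    """Repetition code: store one logical bit in n copies."""
    return [bit] * n

def decode(codeword: list[int]) -> int:
    """Majority vote recovers the logical bit, correcting up to (n-1)//2 flips."""
    return Counter(codeword).most_common(1)[0][0]

word = encode(1)        # [1, 1, 1]
word[0] ^= 1            # one flip: [0, 1, 1]
print(decode(word))     # 1 -- the single error is corrected

word[1] ^= 1            # a second flip: [0, 0, 1]
print(decode(word))     # 0 -- two flips exceed the 3-copy code's budget
```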
AI

DeepSeek Accelerates AI Model Timeline as Market Reacts To Low-Cost Breakthrough (reuters.com) 25

Chinese AI startup DeepSeek is speeding up the release of its R2 model following the success of January's R1, which outperformed many US competitors at a fraction of the cost and triggered a $1 trillion-plus market selloff. The Hangzhou-based firm had planned a May release but now wants R2 out "as early as possible," Reuters reported Tuesday.

The upcoming model promises improved coding capabilities and reasoning in multiple languages beyond English. DeepSeek's competitive advantage stems from its parent company High-Flyer's early investment in computing power, including two supercomputing clusters acquired before U.S. export bans on advanced Nvidia chips. The second cluster, Fire-Flyer II, comprised approximately 10,000 Nvidia A100 chips. DeepSeek's cost-efficiency comes from innovative architecture choices like Mixture-of-Experts (MoE) and multi-head latent attention (MLA).
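For readers unfamiliar with the term, a Mixture-of-Experts layer routes each input to a small subset of "expert" networks, so only a fraction of the model's parameters run per token. The sketch below is a generic top-k router in NumPy with made-up sizes and random weights; it is an illustration of the idea, not DeepSeek's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, purely illustrative.
d_model, n_experts, top_k = 16, 8, 2

# Each "expert" is a small weight matrix; a linear router scores them.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]   # indices of the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (16,)
```

The cost saving comes from the routing step: here only 2 of the 8 experts are evaluated per token, and real MoE models scale that same sparsity to hundreds of billions of parameters.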

According to Bernstein analysts, DeepSeek's pricing was 20-40 times cheaper than OpenAI's equivalent models. The competitive pressure has already forced OpenAI to cut prices and release a scaled-down model, while Google's Gemini has introduced discounted access tiers.
Supercomputing

Microsoft Reveals Its First Quantum Computing Chip, the Majorana 1 (cnbc.com) 31

After two decades of quantum computing research, Microsoft has unveiled its first quantum chip: the Majorana 1. CNBC reports: Microsoft's quantum chip employs eight topological qubits using indium arsenide, which is a semiconductor, and aluminum, which is a superconductor. A new paper in the journal Nature describes the chip in detail. Microsoft won't be allowing clients to use its Majorana 1 chip through the company's Azure public cloud, as it plans to do with its custom artificial intelligence chip, Maia 100. Instead, Majorana 1 is a step toward a goal of a million qubits on a chip, following extensive physics research.

Rather than rely on Taiwan Semiconductor or another company for fabrication, Microsoft is manufacturing the components of Majorana 1 itself in the U.S. That's possible because the work is unfolding at a small scale. "We want to get to a few hundred qubits before we start talking about commercial reliability," Jason Zander, a Microsoft executive vice president, told CNBC. In the meantime, the company will engage with national laboratories and universities on research using Majorana 1.

Supercomputing

The IRS Is Buying an AI Supercomputer From Nvidia (theintercept.com) 150

According to The Intercept, the IRS is set to purchase an Nvidia SuperPod AI supercomputer to enhance its machine learning capabilities for tasks like fraud detection and taxpayer behavior analysis. From the report: With Elon Musk's so-called Department of Government Efficiency installing itself at the IRS amid a broader push to replace federal bureaucracy with machine-learning software, the tax agency's computing center in Martinsburg, West Virginia, will soon be home to a state-of-the-art Nvidia SuperPod AI computing cluster. According to the previously unreported February 5 acquisition document, the setup will combine 31 separate Nvidia servers, each containing eight of the company's flagship Blackwell processors designed to train and operate artificial intelligence models that power tools like ChatGPT. The hardware has not yet been purchased and installed, nor is a price listed, but SuperPod systems reportedly start at $7 million. The contract materials also note that the setup will include a substantial memory upgrade from Nvidia.

Though small compared to the massive AI-training data centers deployed by companies like OpenAI and Meta, the SuperPod is still a powerful and expensive setup using the most advanced technology offered by Nvidia, whose chips have facilitated the global machine-learning spree. While the hardware can be used in many ways, it's marketed as a turnkey means of creating and querying an AI model. Last year, the MITRE Corporation, a federally funded military R&D lab, acquired a $20 million SuperPod setup to train bespoke AI models for use by government agencies, touting the purchase as a "massive increase in computing power" for the United States.

How exactly the IRS will use its SuperPod is unclear. An agency spokesperson said the IRS had no information to share on the supercomputer purchase, including which presidential administration ordered it. A 2024 report by the Treasury Inspector General for Tax Administration identified 68 different AI-related projects underway at the IRS; the Nvidia cluster is not named among them, though many were redacted. But some clues can be gleaned from the purchase materials. "The IRS requires a robust and scalable infrastructure that can handle complex machine learning (ML) workloads," the document explains. "The Nvidia Super Pod is a critical component of this infrastructure, providing the necessary compute power, storage, and networking capabilities to support the development and deployment of large-scale ML models."

The document notes that the SuperPod will be run by the IRS Research, Applied Analytics, and Statistics division, or RAAS, which leads a variety of data-centric initiatives at the agency. While no specific uses are cited, it states that this division's Compliance Data Warehouse project, which is behind this SuperPod purchase, has previously used machine learning for automated fraud detection, identity theft prevention, and generally gaining a "deeper understanding of the mechanisms that drive taxpayer behavior."

Supercomputing

Quantum Teleportation Used To Distribute a Calculation (arstechnica.com) 58

An anonymous reader quotes a report from Ars Technica: In today's issue of Nature, a team at Oxford University describes using quantum teleportation to link two pieces of quantum hardware that were located about 2 meters apart, meaning they could easily have been in different rooms entirely. Once linked, the two pieces of hardware could be treated as a single quantum computer, allowing simple algorithms to be performed that involved operations on both sides of the 2-meter gap. [...] The Oxford team was simply interested in a proof-of-concept, and so used an extremely simplified system. Each end of the 2-meter gap had a single trap holding two ions, one strontium and one calcium. The two atoms could be entangled with each other, getting them to operate as a single unit.

The calcium ion served as a local memory and was used in computations, while the strontium ion served as one of the two ends of the quantum network. An optical cable between the two ion traps allowed photons to entangle the two strontium ions, getting the whole system to operate as a single unit. The key thing about the entanglement processes used here is that a failure to entangle left the system in its original state, meaning that the researchers could simply keep trying until the qubits were entangled. The entanglement event would also lead to a photon that could be measured, allowing the team to know when success had been achieved (this sort of entanglement with a success signal is termed "heralded" by those in the field).
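The repeat-until-success pattern described here can be modeled as a simple geometric process: each attempt heralds success with some probability, and a failure leaves the qubits in their original state, ready for another try. The success probability below is an arbitrary illustrative value, not a figure from the Oxford experiment:

```python
import random

random.seed(0)

def attempts_until_heralded(p=0.1):
    """Toy model of heralded entanglement: retry until the herald photon
    signals success. Failures are harmless, so we just loop."""
    attempts = 1
    while random.random() >= p:  # no herald detected -> try again
        attempts += 1
    return attempts

trials = [attempts_until_heralded() for _ in range(10000)]
print(sum(trials) / len(trials))  # mean ~ 1/p = 10 attempts
```

The key property being modeled is that failed attempts carry no cost beyond time -- which is precisely what the heralding signal buys the experimenters.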

The researchers showed that this setup allowed them to teleport a specific gate operation (a controlled-Z), which can serve as the basis for any other two-qubit gate operation -- any operation you might want to do can be done by using a specific combination of these gates. After performing multiple rounds of these gates, the team found that the typical fidelity was in the area of 70 percent. But they also found that errors typically had nothing to do with the teleportation process and were the product of local operations at one of the two ends of the network. They suspect that using commercial hardware, which has far lower error rates, would improve things dramatically. Finally, they performed a version of Grover's algorithm, which can, with a single query, identify a single item from an arbitrarily large unordered list. The "arbitrary" aspect is set by the number of available qubits; in this case, having only two qubits, the list maxed out at four items. Still, it worked, again with a fidelity of about 70 percent.
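Both claims in this paragraph -- that a controlled-Z plus single-qubit gates suffices to build other two-qubit gates, and that two-qubit Grover search finds one of four items in a single query -- can be checked with a small state-vector calculation. This is a plain, noiseless simulation, not a model of the Oxford hardware:

```python
import numpy as np

# 1) A controlled-Z conjugated by Hadamards on the target qubit is a CNOT,
#    showing how CZ plus single-qubit gates builds other two-qubit gates.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CZ = np.diag([1.0, 1, 1, -1])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
assert np.allclose(np.kron(I, H) @ CZ @ np.kron(I, H), CNOT)

# 2) Two-qubit Grover search: one oracle query plus one diffusion step
#    finds the marked item among four with certainty on an ideal device.
def grover_2qubit(marked):
    psi = np.full(4, 0.5)       # uniform superposition over the 4 items
    psi[marked] *= -1           # oracle: flip the sign of the marked item
    psi = 2 * psi.mean() - psi  # diffusion: reflect amplitudes about the mean
    return np.abs(psi) ** 2     # measurement probabilities

print(grover_2qubit(2))  # the marked item (index 2) has probability 1
```

With two qubits the single-iteration result is exact; the roughly 70 percent fidelity reported above is the gap between this ideal calculation and the noisy networked hardware.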

While the work was done with trapped ions, almost every type of qubit in development can be controlled with photons, so the general approach is hardware-agnostic. And, given the sophistication of our optical hardware, it should be possible to link multiple chips at various distances, all using hardware that doesn't require the best vacuum or the lowest temperatures we can generate. That said, the error rate of the teleportation steps may still be a problem, even if it was lower than the basic hardware error rate in these experiments. The fidelity there was 97 percent, meaning an error rate of roughly 3 percent -- higher than the native gate error rates of most qubit hardware, and high enough that we couldn't execute too many of these operations before the probability of errors becomes unacceptably high.
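A quick calculation shows why a 97 percent per-teleportation fidelity limits circuit depth: the probability that every teleported gate in a circuit succeeds decays geometrically with the number of gates.

```python
# Probability that a circuit avoids any teleportation error, assuming
# independent errors at 97% fidelity per teleported gate.
for n_gates in (1, 10, 50, 100):
    print(n_gates, round(0.97 ** n_gates, 3))
```

By around 100 teleported gates the circuit succeeds less than 5 percent of the time, which is why the authors see the teleportation fidelity itself as the next thing to improve.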
