Supercomputing

E=mc^2 Verified In Quantum Chromodynamic Calculation 268

chirishnique and other readers sent in an AFP story about a heroic supercomputer computation that has verified Einstein's most famous equation at the level of subatomic particles for the first time. "A brainpower consortium led by Laurent Lellouch of France's Centre for Theoretical Physics, using some of the world's mightiest supercomputers, has set down the calculations for estimating the mass of protons and neutrons, the particles at the nucleus of atoms. ... [T]he mass of gluons is zero and the mass of quarks is only five per cent. Where, therefore, is the missing 95 per cent? The answer, according to the study published in the US journal Science on Thursday, comes from the energy from the movements and interactions of quarks and gluons. ... [E]nergy and mass are equivalent, as Einstein proposed in his Special Theory of Relativity in 1905." Update: 11/21 15:50 GMT by KD : New Scientist has a slightly more technical look at the accomplishment.
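The mass budget the AFP summary describes can be written down as a rough equation. This is an illustrative sketch only, using the 5%/95% split quoted above; the symbols are not taken from the paper itself:

```latex
% Rough bookkeeping per the article: a nucleon's rest energy is almost
% entirely the energy of quark/gluon motion and interactions, not quark
% rest mass. By mass-energy equivalence,
\begin{align}
  E &= mc^2
  \quad\Longrightarrow\quad
  m_{\text{nucleon}} = \frac{E_{\text{total}}}{c^2}, \\
  m_{\text{nucleon}}\,c^2 &\approx
  \underbrace{\sum_{q} m_q c^2}_{\sim 5\%}
  \;+\;
  \underbrace{E_{\text{quark/gluon dynamics}}}_{\sim 95\%}.
\end{align}
```

The lattice-QCD computation, in effect, evaluates the second term from first principles and recovers the measured nucleon mass.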
Supercomputing

Windows Breaks Into Supercomputer Top 10 294

yanx0016 writes "Wow, that's some news this week at SuperComputing 08. Apparently Microsoft Windows HPC Server 2008, with a Chinese hardware OEM (Dawning), made #10 on the Top500 list, edging out #11 by only 600 Gflops. Folks were shocked to see Microsoft getting so serious around HPC; I think we are only beginning to see a glimpse of Microsoft in the HPC field."
Supercomputing

New Top 500 Supercomputer List 138

geaux and other readers let us know that the new Top 500 Supercomputer list is out. The top two both break the Petaflops barrier: LANL's IBM "RoadRunner" and ORNL's Cray XT5 "Jaguar." (Contrary to our discussion a few days back, IBM's last-minute upgrade of RoadRunner salvaged the top spot for Big Blue. Kind of like bidding on eBay.) The top six all run in excess of 400 Teraflops. HP has more systems in the top 500 than IBM, reversing the order of the previous list. Both Intel and AMD issued press releases crowing over their wins, and both are correct — AMD highlights its presence in 7 of the top 10, while Intel boasts that 379 of the top 500 use their chips.
Supercomputing

Jaguar, World's Most Powerful Supercomputer 154

Protoclown writes "The National Center for Computational Sciences (NCCS), located at Oak Ridge National Laboratory (ORNL) in Tennessee, has upgraded the Jaguar supercomputer to 1.64 petaflops for use by scientists and engineers working in areas such as climate modeling, renewable energy, materials science, fusion and combustion. The current upgrade is the result of an addition of 200 cabinets of the Cray XT5 to the existing 84 cabinets of the XT4 Jaguar system. Jaguar is now the world's most powerful supercomputer available for open scientific research."
Math

Achieving Mathematical Proofs Via Computers 209

eldavojohn writes "A special issue of Notices of the American Mathematical Society (AMS) provides four beautiful articles illustrating formal proof by computation. PhysOrg has a simpler article on these assistant mathematical computer programs and states 'One long-term dream is to have formal proofs of all of the central theorems in mathematics. Thomas Hales, one of the authors writing in the Notices, says that such a collection of proofs would be akin to the sequencing of the mathematical genome.' You may recall a similar quest we discussed."
The Internet

Amazon Beefs Up Its Cloud Ahead of MS Announcement 89

Amazon has announced several major improvements to its EC2 service for cloud computing. The service is now in production (no longer beta); it offers a service-level agreement; and Windows and SQL Server are available in beta form. ZDNet points out that all this news is intended to take some wind out of Microsoft's sails as MS is expected to introduce its own cloud services next week at its Professional Developers Conference.
Supercomputing

Greenspan Tells Congress Bad Data Hurt Wall Street 496

CWmike writes "Former Federal Reserve chairman Alan Greenspan has long praised technology as a tool to limit risks in financial markets. In 2005, he said better risk scoring by high-performance computing made it possible for lenders to extend credit to subprime borrowers. But today Greenspan told Congress that the data fed into financial systems was often a case of garbage in, garbage out. Christopher Cox, chairman of the Securities and Exchange Commission, told the committee that bad code led the credit rating agencies to give AAA ratings to mortgage-backed securities that didn't deserve them. Explaining in his testimony what failed, Cox noted a 2004 decision to rely on the computer models for assessing risks — a decision that essentially outsourced regulatory duties to Wall Street firms themselves."
Supercomputing

New State of Matter Could Extend Moore's Law 329

rennerik writes "Scientists at McGill University in Montreal say they've discovered a new state of matter that could help extend Moore's Law and allow for the fabrication of more tightly packed transistors, or a new kind of transistor altogether. The researchers call the new state of matter 'a quasi-three-dimensional electron crystal.' It was discovered using a device cooled to a temperature about 100 times colder than intergalactic space, following the application of the most powerful continuous magnetic field on Earth."
Supercomputing

Cray's CX1 Desktop Supercomputer, Now For Sale 294

ocularb0b writes "Cray has announced the CX1 desktop supercomputer. Cray teamed with Microsoft and Intel to build the new machine, which supports up to 8 nodes, a total of 64 cores, and 64 GB of memory per node. The CX1 can be ordered online with prices starting at $25K and a choice of Linux or Windows HPC. This should be a pretty big deal for smaller schools and scientists waiting in line for time on the world's big computing centers, as well as 3D and VFX shops."
Supercomputing

eBay Makes Huge Gains In Parallel Efficiency 47

CurtMonash writes "Parallel Efficiency is a simple metric that divides the actual work your parallel CPUs do by the sum of their total capacity. If you can get your parallel efficiency up, it's like getting free servers, free floor space, and some free power as well. eBay reports that it amazed even itself by increasing overall PE from 50% to 80% in about 6 months — across tens of thousands of servers. The secret sauce was data warehouse-based analytics. I.e., eBay instrumented its own network to do minute-by-minute status checks, then crunched the resulting data to find bottlenecks that needed removing. Obviously, savings are in the many millions of dollars. eBay has been offering some glimpses into its analytic efforts this year, and the PE savings are one of the most concrete examples they're offering to validate all this analytic cleverness."
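The metric itself is trivial to compute; the hard part is the instrumentation. A minimal sketch with hypothetical fleet numbers (eBay's actual server counts and ratings are not public at this level of detail), using the 50%-to-80% figures from the story:

```python
def parallel_efficiency(useful_work, total_capacity):
    """Fraction of aggregate CPU capacity spent doing useful work."""
    return useful_work / total_capacity

# Hypothetical fleet: 10,000 servers, each rated at 100 work-units/hour.
servers = 10_000
capacity = servers * 100.0          # 1,000,000 units/hour raw capacity

# At the reported starting point of 50% PE, half of that is useful work.
useful = 500_000.0
print(parallel_efficiency(useful, capacity))   # 0.5

# "Free servers": the same useful work at 80% PE needs far fewer machines.
servers_needed = useful / (100.0 * 0.8)
print(servers_needed)                          # 6250.0 instead of 10,000
```

The last line is the whole business case: raising PE from 50% to 80% lets the same workload run on roughly 62% of the hardware, which is where the floor-space and power savings come from.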
Supercomputing

CERN Launches Huge LHC Computing Grid 46

RaaVi writes "Yesterday CERN launched the largest computing grid in the world, which is destined to analyze the data coming from the world's biggest particle accelerator, the Large Hadron Collider. The computing grid consists of more than 140 computer centers from around the world working together to handle the expected 10-15 petabytes of data the LHC will generate each year." The Worldwide LHC Computing Grid will initially handle data for up to 7,000 scientists around the world. Though the LHC itself is down for some lengthy repairs, an event called GridFest was held yesterday to commemorate the occasion. The LCG will run alongside the LHC@Home volunteer project.
Microsoft

Microsoft To Release Cloud-Oriented Windows OS 209

CWmike writes "Within a month, Microsoft will unveil what CEO Steve Ballmer called 'Windows Cloud.' The operating system, which will likely have a different name, is intended for developers writing cloud-computing applications, said Ballmer, who spoke to an auditorium of IT managers at a Microsoft-sponsored conference in London. Ballmer was short on details, saying more information would spoil the announcement. Windows Cloud is a separate project from Windows 7, the operating system that Microsoft is developing to succeed Windows Vista."
Supercomputing

Red Hat HPC Linux Cometh 34

Slatterz writes "Red Hat will announce its first high-performance computing optimised distro, Red Hat HPC, on 7 October. The distro is a step forward from the current Red Hat Enterprise Linux for HPC Compute Nodes. Part of the new distro was, incidentally, created by a small Project Kusu team in Singapore. Kusu is the foundation for Platform Open Cluster Stack (OCS), which is an integral feature of Red Hat HPC. It might be a sign of things to come, as more hardware and software development moves to the Far East — even top-of-the-line computing performance."
NASA

NASA Upgrades Weather Research Supercomputer 71

Cowards Anonymous writes "NASA's Center for Computational Sciences is nearly tripling the performance of a supercomputer it uses to simulate Earth's climate and weather, and our planet's relationship with the Sun. NASA is deploying a 67-teraflop machine that takes advantage of IBM's iDataPlex servers, new rack-mount products originally developed to serve heavily trafficked social networking sites."
Supercomputing

The Supercomputer Race 158

CWmike writes "Every June and November a new list of the world's fastest supercomputers is revealed. The latest Top 500 list marked the scaling of computing's Mount Everest — the petaflops barrier. IBM's 'Roadrunner' topped the list, burning up the bytes at 1.026 petaflops. A computer to die for if you are a supercomputer user for whom no machine ever seems fast enough? Maybe not, says Richard Loft, director of supercomputing research at the National Center for Atmospheric Research in Boulder, Colo.: 'The Top 500 list is only useful in telling you the absolute upper bound of the capabilities of the computers ... It's not useful in terms of telling you their utility in real scientific calculations.' The problem with the rankings: a decades-old benchmark called Linpack, which is Fortran code that measures the speed of processors on floating-point math operations. One possible fix: invoking specialization. Loft says of petaflops, peak performance, benchmark results, positions on a list — 'it's a little shell game that everybody plays. ... All we care about is the number of years of climate we can simulate in one day of wall-clock computer time. That tells you what kinds of experiments you can do.' State-of-the-art systems today can simulate about five years per day of computer time, he says, but some climatologists yearn to simulate 100 years in a day."
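Loft's metric makes the gap concrete with simple arithmetic. A sketch using the throughput figures quoted in the story:

```python
def wall_days(simulated_years, sim_years_per_day):
    """Wall-clock days a climate run takes at a given simulation throughput."""
    return simulated_years / sim_years_per_day

# A century-long climate experiment at the ~5 simulated years/day Loft cites:
print(wall_days(100, 5))     # 20.0 wall-clock days
# At the 100 years/day some climatologists want, the same run fits in a day:
print(wall_days(100, 100))   # 1.0
```

That 20x gap is exactly what a Linpack ranking cannot tell you, since it measures peak floating-point throughput rather than sustained throughput on a real model.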
Supercomputing

Unholy Matrimony? Microsoft and Cray 358

fetusbear writes with a ZDNet story that says "'Microsoft and Cray are set to unveil on September 16 the Cray CX1, a compact supercomputer running Windows HPC Server 2008. The pair is expected to tout the new offering as "the most affordable supercomputer Cray has ever offered," with pricing starting at $25,000.' Although this would be the lowest-cost hardware ever offered by Cray, it would also be the most expensive desktop ever offered by Microsoft."
Supercomputing

One Data Center To Rule Them All 112

1sockchuck writes "Weta Digital, the New Zealand studio that created the visual effects for the 'Lord of the Rings' movie trilogy, has launched a new 'extreme density' data center to provide the computing horsepower for its digital renderings. Weta is running four clusters, each equipped with 156 of HP's new 2-in-1 blade servers, which use liquid cooling to manage the heat loads. The Weta render farms currently hold spots 219 through 222 on the Top 500 list of the world's fastest supercomputers."
Supercomputing

$208 Million Petascale Computer Gets Green Light 174

coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) finalized the contract with IBM to build the world's first sustained petascale computational system. Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications. A petaflop equals about 1 quadrillion calculations per second. The cores will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."
Supercomputing

IBM Open Sources Supercomputer Code 77

eldavojohn writes "IBM announced at the LinuxWorld conference that it is now hosting its entire supercomputing software stack as open source through the University of Illinois. From the article: 'The software will initially support Red Hat Enterprise Linux 5.2 and IBM Power6 processors. IBM is planning to add support for Power 575 supercomputing servers and IBM x86 platforms such as System x 3450 servers, BladeCenter servers and System x iDataPlex servers. The stack includes several distinct software tools that have been tested and integrated by IBM. These include the Extreme Cluster Administration Toolkit (xCAT), originally developed for large clusters based on Intel's commodity x86 architecture but now modified for clusters based on IBM's own Power architecture. xCAT is used in the National Nuclear Security Administration's Roadrunner Project at Los Alamos National Laboratory in New Mexico — a hybrid cluster currently ranked by the official Top 500 list as the world's most powerful supercomputer.' For several years, Linux has been a strong tool for supercomputing."