Supercomputing

Cray's CX1 Desktop Supercomputer, Now For Sale 294

ocularb0b writes "Cray has announced the CX1 desktop supercomputer. Cray teamed with Microsoft and Intel to build the new machine, which supports up to 8 nodes, for a total of 64 cores, with 64GB of memory per node. The CX1 can be ordered online with starting prices of $25K and a choice of Linux or Windows HPC. This should be a pretty big deal for smaller schools and scientists waiting in line for time on the world's big computing centers, as well as for 3D and VFX shops."
Supercomputing

eBay Makes Huge Gains In Parallel Efficiency 47

CurtMonash writes "Parallel Efficiency is a simple metric that divides the actual work your parallel CPUs do by the sum of their total capacity. If you can get your parallel efficiency up, it's like getting free servers, free floor space, and some free power as well. eBay reports that it amazed even itself by increasing overall PE from 50% to 80% in about 6 months — across tens of thousands of servers. The secret sauce was data warehouse-based analytics. I.e., eBay instrumented its own network to do minute-by-minute status checks, then crunched the resulting data to find bottlenecks that needed removing. Obviously, savings are in the many millions of dollars. eBay has been offering some glimpses into its analytic efforts this year, and the PE savings are one of the most concrete examples they're offering to validate all this analytic cleverness."
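The metric itself is simple enough to sketch. Here is a minimal illustration in Python, assuming per-server utilization samples are already being collected (the variable names and sample data are hypothetical, not eBay's actual instrumentation):

# Parallel efficiency: actual work done divided by total capacity.
# utilization[i][t] = fraction of capacity server i used during interval t (0.0 to 1.0).

def parallel_efficiency(utilization):
    """Return aggregate parallel efficiency across all servers and intervals."""
    used = sum(sum(samples) for samples in utilization)
    capacity = sum(len(samples) for samples in utilization)  # each interval = 1 unit of capacity
    return used / capacity

# Example: three servers sampled over four intervals.
samples = [
    [0.9, 0.8, 0.7, 0.9],   # busy server
    [0.4, 0.5, 0.3, 0.4],   # moderately loaded
    [0.1, 0.0, 0.2, 0.1],   # mostly idle: a bottleneck candidate to investigate
]
print(f"PE = {parallel_efficiency(samples):.0%}")   # ~44% for this toy fleet

Going from 50% to 80% on a real fleet means the same hardware delivers 1.6x the useful work, which is where the "free servers" framing comes from.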
Supercomputing

CERN Launches Huge LHC Computing Grid 46

RaaVi writes "Yesterday CERN launched the largest computing grid in the world, which is destined to analyze the data coming from the world's biggest particle accelerator, the Large Hadron Collider. The computing grid consists of more than 140 computer centers from around the world working together to handle the expected 10-15 petabytes of data the LHC will generate each year." The Worldwide LHC Computing Grid will initially handle data for up to 7,000 scientists around the world. Though the LHC itself is down for some lengthy repairs, an event called GridFest was held yesterday to commemorate the occasion. The LCG will run alongside the LHC@Home volunteer project.
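For scale, a back-of-the-envelope conversion of that annual volume into a sustained transfer rate (a rough sketch only; the real data flow is bursty and tiered across the grid's computer centers):

# Rough average bandwidth implied by 10-15 PB/year, spread evenly over a year.
SECONDS_PER_YEAR = 365 * 24 * 3600

for petabytes in (10, 15):
    bytes_total = petabytes * 10**15
    mb_per_s = bytes_total / SECONDS_PER_YEAR / 10**6
    print(f"{petabytes} PB/year ~ {mb_per_s:.0f} MB/s sustained")   # ~317 and ~476 MB/s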
Microsoft

Microsoft To Release Cloud-Oriented Windows OS 209

CWmike writes "Within a month, Microsoft will unveil what CEO Steve Ballmer called 'Windows Cloud.' The operating system, which will likely have a different name, is intended for developers writing cloud-computing applications, said Ballmer, who spoke to an auditorium of IT managers at a Microsoft-sponsored conference in London. Ballmer was short on details, saying more information would spoil the announcement. Windows Cloud is a separate project from Windows 7, the operating system that Microsoft is developing to succeed Windows Vista."
Supercomputing

Red Hat HPC Linux Cometh 34

Slatterz writes "Red Hat will announce its first high-performance computing optimised distro, Red Hat HPC, on 7 October. The distro is a step forward from the current Red Hat Enterprise Linux for HPC Compute Nodes. Part of the new distro was, incidentally, created by a small Project Kusu team in Singapore. Kusu is the foundation for Platform Open Cluster Stack (OCS), which is an integral feature of Red Hat HPC. It might be a sign of things to come, as more hardware and software development moves to the Far East — even top-of-the-line computing performance."
NASA

NASA Upgrades Weather Research Supercomputer 71

Cowards Anonymous writes "NASA's Center for Computational Sciences is nearly tripling the performance of a supercomputer it uses to simulate Earth's climate and weather, and our planet's relationship with the Sun. NASA is deploying a 67-teraflop machine that takes advantage of IBM's iDataPlex servers, new rack-mount products originally developed to serve heavily trafficked social networking sites."
Supercomputing

The Supercomputer Race 158

CWmike writes "Every June and November a new list of the world's fastest supercomputers is revealed. The latest Top 500 list marked the scaling of computing's Mount Everest — the petaflops barrier. IBM's 'Roadrunner' topped the list, burning up the bytes at 1.026 petaflops. A computer to die for if you are a supercomputer user for whom no machine ever seems fast enough? Maybe not, says Richard Loft, director of supercomputing research at the National Center for Atmospheric Research in Boulder, Colo. The Top 500 list is only useful in telling you the absolute upper bound of the capabilities of the computers ... It's not useful in terms of telling you their utility in real scientific calculations. The problem with the rankings: a decades-old benchmark called Linpack, which is Fortran code that measures the speed of processors on floating-point math operations. One possible fix: Invoking specialization. Loft says of petaflops, peak performance, benchmark results, positions on a list — 'it's a little shell game that everybody plays. ... All we care about is the number of years of climate we can simulate in one day of wall-clock computer time. That tells you what kinds of experiments you can do.' State-of-the-art systems today can simulate about five years per day of computer time, he says, but some climatologists yearn to simulate 100 years in a day."
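Loft's preferred yardstick reduces to a single number, simulated years per day of wall-clock time, and the gap he describes is easy to quantify. A quick sketch using only the figures quoted above:

# Simulated-years-per-wallclock-day: the metric Loft cares about, as opposed to Linpack flops.
def speedup_needed(current_years_per_day, target_years_per_day):
    """How much faster the real workload must run, all else equal, to hit the target."""
    return target_years_per_day / current_years_per_day

current = 5     # "about five years per day of computer time" on today's state of the art
target = 100    # what some climatologists yearn to simulate in a day
print(f"Required speedup on the real workload: {speedup_needed(current, target):.0f}x")   # 20x

A 20x gap on the actual climate code is a very different target than another decimal place on a Linpack score, which is exactly Loft's point.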
Supercomputing

Unholy Matrimony? Microsoft and Cray 358

fetusbear writes with a ZDNet story that says "'Microsoft and Cray are set to unveil on September 16 the Cray CX1, a compact supercomputer running Windows HPC Server 2008. The pair is expected to tout the new offering as "the most affordable supercomputer Cray has ever offered," with pricing starting at $25,000.' Although this would be the lowest cost hardware ever offered by Cray, it would also be the most expensive desktop ever offered by Microsoft."
Supercomputing

One Data Center To Rule Them All 112

1sockchuck writes "Weta Digital, the New Zealand studio that created the visual effects for the 'Lord of the Rings' movie trilogy, has launched a new 'extreme density' data center to provide the computing horsepower for its digital renderings. Weta is running four clusters, each equipped with 156 of HP's new 2-in-1 blade servers and using liquid cooling to manage the heat loads. The Weta render farms hold spots 219 through 222 on the current Top 500 list of the world's fastest supercomputers."
Supercomputing

$208 Million Petascale Computer Gets Green Light 174

coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently, as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) finalized the contract with IBM to build the world's first sustained-petascale computational system. Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications. A petaflop equals about 1 quadrillion calculations per second. The cores will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."
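Spread across the quoted core count, a sustained petaflop works out to a fairly modest per-core figure. A quick sanity check using only the numbers in the summary (round figures, for illustration):

# Per-core throughput implied by 1 sustained petaflop over 200,000 cores.
sustained_flops = 1e15          # 1 petaflop = ~1 quadrillion calculations per second
cores = 200_000

per_core = sustained_flops / cores
print(f"{per_core / 1e9:.0f} GFLOPS per core, sustained")          # 5 GFLOPS/core

# Memory per core implied by "more than a petabyte of memory".
memory_bytes = 1e15
print(f"~{memory_bytes / cores / 1e9:.0f} GB of globally addressable memory per core")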
Supercomputing

IBM Open Sources Supercomputer Code 77

eldavojohn writes "IBM announced at the LinuxWorld conference that it is now hosting all of its supercomputing stack software as open source at the University of Illinois. From the article: 'The software will initially support Red Hat Enterprise Linux 5.2 and IBM Power6 processors. IBM is planning to add support for Power 575 supercomputing servers and IBM x86 platforms such as System x 3450 servers, BladeCenter servers and System x iDataPlex servers. The stack includes several distinct software tools that have been tested and integrated by IBM. These include the Extreme Cluster Administration Toolkit (xCAT), originally developed for large clusters based on Intel's commodity x86 architecture but now modified for clusters based on IBM's own Power architecture. xCAT is used in the National Nuclear Security Administration's Roadrunner Project at Los Alamos National Laboratory in New Mexico — a hybrid cluster currently ranked by the official Top 500 list as the world's most powerful supercomputer.' For several years, Linux has been a strong tool for supercomputing."
Technology

Opening Quantum Computing To the Public 191

director_mr writes "Tom's Hardware is running a story with an interesting description of a 28-qubit quantum computer developed by D-Wave Systems, which intends to open up use of the machine to the public. It is particularly good at pattern recognition, it operates at 10 millikelvin, and it is shielded to limit electromagnetic interference to one nanotesla in three dimensions across the whole chip. Could this be the first successful commercial quantum computer?"
Programming

The Father of Multi-Core Chips Talks Shop 90

pacopico writes "Stanford professor Kunle Olukotun designed the first mainstream multi-core chip, crafting what would become Sun Microsystems' Niagara product. Now he's heading up Stanford's Pervasive Parallelism Lab, where researchers are looking at systems with hundreds of cores that might power robots, 3-D virtual worlds, and insanely big server applications. The Register just interviewed Olukotun about this work and the future of multi-core chips. Weird and interesting stuff."
Software

BOINC Now Available For GPU/CUDA 20

GDI Lord writes "BOINC, open-source software for volunteer computing and grid computing, has posted news that GPU computing has arrived! The GPUGRID.net project from the Barcelona Biomedical Research Park uses CUDA-capable NVIDIA chips to create an infrastructure for biomolecular simulations. (Currently available for Linux64; other platforms to follow soon. To participate, follow the instructions on the web site.) I think this is great news, as GPUs have shown amazing potential for parallel computing."
Supercomputing

IBM's Eight-Core, 4-GHz Power7 Chip 425

pacopico writes "The first details on IBM's upcoming Power7 chip have emerged. The Register is reporting that IBM will ship an eight-core chip running at 4.0 GHz. The chip will support four threads per core and fit into some huge systems. For example, the University of Illinois is going to house a 300,000-core machine that can hit 10 petaflops. It'll have 620 TB of memory and support 5 PB/s of memory bandwidth. Optical interconnects, anyone?"
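Those headline numbers imply a lot of work per core and per clock cycle. A rough sketch of the arithmetic, using only the figures quoted above:

# Implied per-core and per-cycle throughput for the quoted 10-petaflop, 300,000-core machine.
peak_flops = 10e15
cores = 300_000
clock_hz = 4.0e9                              # 4.0 GHz per the report

per_core = peak_flops / cores                 # ~33 GFLOPS per core
flops_per_cycle = per_core / clock_hz         # ~8 floating-point ops per cycle per core
print(f"{per_core / 1e9:.1f} GFLOPS/core, ~{flops_per_cycle:.0f} flops per cycle per core")

# Aggregate memory bandwidth relative to memory size: 5 PB/s over 620 TB.
print(f"Memory could be swept ~{5e15 / 620e12:.0f}x per second at full bandwidth")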
Supercomputing

Simple Mod Turns Diodes Into Photon Counters 118

KentuckyFC writes "The standard way to detect single photons is to use an avalanche photodiode in which a single photon can trigger an avalanche of current. These devices have an important drawback, however. They cannot distinguish the arrival of a single photon from the simultaneous arrival of two or more. But a team of physicists in the UK has found a simple mod that turns avalanche photodiodes into photon counters. They say that in the first instants after the avalanche forms, its current is proportional to the number of photons that have struck. All you have to do is measure it at this early stage. That's like turning a Fiat 500 into a Ferrari. Photon counting is one of the enabling technologies behind optical quantum computing. A number of schemes are known in which it is necessary to count the arrival of 0, 1 or 2 photons at specific detectors (abstract). With such a cheap detector now available (as well as decent photon guns), we could see dramatic progress in this field in the coming months."
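The claimed trick amounts to sampling the avalanche current early and dividing by the calibrated single-photon response. A minimal sketch of that idea follows; the numbers and names are hypothetical, not the UK team's actual calibration:

# Estimate photon number from the early avalanche current, assuming the current sampled
# just after the avalanche starts is roughly proportional to the number of absorbed photons.

def count_photons(early_current_amps, single_photon_current_amps):
    """Round the early-stage current to the nearest multiple of the one-photon response."""
    return round(early_current_amps / single_photon_current_amps)

I_SINGLE = 2.0e-6     # hypothetical calibrated one-photon early current: 2 microamps

for measured in (1.9e-6, 4.1e-6, 6.2e-6):
    print(f"{measured * 1e6:.1f} uA -> {count_photons(measured, I_SINGLE)} photon(s)")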
Supercomputing

Cool/Weird Stuff To Do On a Cluster? 608

Gori writes "I'm a researcher at a university. Our group mainly does Agent Based Modeling of interdisciplinary problems (think massive simulations where technology, policy, and economics meet). Recently, we managed to get a bunch of money for a High Performance Cluster to run our stuff on. The code is mostly written in Java. Our IT support people are very capable of setting up a stable cluster that will run Java perfectly. But where's the fun in that? What I'm trying to figure out is other, more far-out and interesting things to do with this machine — think 500+ Opteron cores, 2 GB RAM per core, a gigabit interconnect with some badass switches, a massive storage array, plus a bunch of UltraSPARC boxes. So at times when there's no stuff to crunch, I'd like to boot the thing up with a 'weird' system image and geek around in the name of science. Try fancy ways of building models, dynamically adding all sorts of hardware to it, etc. Have different schedulers compete for resources. Imagine a Matlab vs. Boinc vs. ProActive shootout. Maybe run Plan 9 on it? Most of us are not CE/CS people, but we are geeky enough. So, what would be the coolest and most far-out thing you would do with this kind of hardware?"
Microsoft

Fastest-Ever Windows HPC Cluster 216

An anonymous reader links to an eWeek story which says that Microsoft's "fastest-yet homegrown supercomputer, running the U.S. company's new Windows HPC Server 2008, debuted in the top 25 of the world's top 500 fastest supercomputers, as tested and operated by the National Center for Supercomputing Applications. ... Most of the cores were made up of Intel Xeon quad-core chips. Storage for the system was about 6 terabytes," and asks "I wonder how the uptime compares? When machines scale to this size, they tend to quirk out in weird ways."
Supercomputing

"Intrepid" Supercomputer Fastest In the World 122

Stony Stevenson writes "The US Department of Energy's (DoE) high performance computing system is now the fastest supercomputer in the world for open science, according to the Top 500 list of the world's fastest computers. The list was announced this week during the International Supercomputing Conference in Dresden, Germany. IBM's Blue Gene/P, known as 'Intrepid,' is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall. The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings. According to the list, 74.8 percent of the world's supercomputers (some 374 systems) use Intel processors, a rise of 4 percent in six months. This represents the firm's biggest-ever slice of the supercomputer cake."
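The quoted figures hang together. A quick check of the implied Linpack efficiency and Intel's share, using only the numbers reported above:

# Linpack efficiency: achieved speed divided by theoretical peak.
peak_tflops = 557.0
linpack_tflops = 450.3
print(f"Linpack efficiency: {linpack_tflops / peak_tflops:.1%}")   # ~80.8%

# Intel's share of the Top 500 systems.
intel_systems = 374
print(f"Intel share: {intel_systems / 500:.1%}")                   # 74.8%, as reported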
