IBM

Cray Replaces IBM To Build $188M Supercomputer 99

wiredmikey writes "Supercomputer maker Cray today said that the University of Illinois' National Center for Supercomputing Applications (NCSA) awarded the company a contract to build a supercomputer for the National Science Foundation's Blue Waters project. The supercomputer will be powered by new 16-core AMD Opteron 6200 Series processors (formerly code-named 'Interlagos'), a next-generation GPU from NVIDIA called 'Kepler,' and a new integrated storage solution from Cray. IBM was originally selected to build the supercomputer in 2007, but terminated the contract in August 2011, saying the project was more complex and required significantly increased financial and technical support beyond its original expectations. Once fully deployed, the system is expected to deliver sustained performance of more than one petaflops on demanding scientific applications."
Japan

Fujitsu Announces 16-core SPARC64 IXfx (and the Supercomputer It Powers) 68

First time accepted submitter A12m0v writes with a link to Fujitsu's announcement of its next generation of supercomputer, from which he pastes: "PRIMEHPC FX10 runs on the newly-developed SPARC64 IXfx processors, which offer a very significant boost in performance over the SPARC64 VIIIfx processor on which they are based and which power the K computer. Each processor has 16 cores and achieves world-class standalone performance levels of 236.5 gigaflops and performance per watt of over 2 gigaflops." Not that K is any slouch.
Supercomputing

Japanese Supercomputer K Hits 10.51 Petaflops 125

coondoggie writes "The Japanese supercomputer ranked #1 on the Top 500 list of the fastest supercomputers broke its own record this week, exceeding 10 quadrillion calculations per second (10.51 petaflops), according to its operators, Fujitsu and Riken. The supercomputer 'K' consists of 864 racks comprising a total of 88,128 interconnected CPUs, and has a theoretical calculation speed of 11.28 petaflops, the companies said."
China

China Builds 1-Petaflop Homegrown Supercomputer 185

MrSeb writes "Drawing yet another battle line between the incumbent oligarchs of the West and the developing hordes of the East, China has unveiled a new supercomputer that uses entirely homegrown processors — 8,704 of them, to be exact. The computer is called Sunway BlueLight MPP and it has a peak performance of just over 1 petaflop, placing it at around 15th among the fastest supercomputers in the world. Sunway uses the ShenWei SW-3 1600, a 16-core, 64-bit MIPS-compatible (RISC) CPU. The process used to make the chips is not known, but it is likely 65 or 45nm, a few generations behind Intel's latest and greatest. Each of the 139,264 cores runs at 1.1GHz, the entire system has 150TB of memory and 2PB of storage, and of course it's water-cooled. The ShenWei chips are based on the Loongson/Godson architecture, which China — as in, the country itself — probably reverse engineered from a DEC Alpha CPU in 2001 and has been developing ever since. Sunway is significant for two reasons: a) It's very low-power; it consumes just one megawatt, about half that of its contemporaries and one-seventh that of the US's Jaguar — and b) This is China's first significant supercomputer to be built without Intel or AMD processors."
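A quick sanity check of those figures, as a minimal Python sketch using only the numbers quoted above (the Jaguar power draw is inferred here from the "one-seventh" comparison, not taken from an official spec):

```python
# Back-of-envelope check of the Sunway BlueLight figures above.
cpus = 8704               # ShenWei processors in the system
cores_per_cpu = 16
print(cpus * cores_per_cpu)   # 139264 cores, matching the quoted total

# Power: ~1 MW for Sunway, described as one-seventh of Jaguar's draw,
# which would put Jaguar on the order of 7 MW (inferred, not official).
sunway_mw = 1.0
print(sunway_mw * 7)          # ~7 MW implied for Jaguar
```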
Supercomputing

Jaguar Supercomputer Being Upgraded To Regain Fastest Cluster Crown 89

MrSeb writes with an article at ExtremeTech about the Titan supercomputer. From the article: "Cray, AMD, Nvidia, and the Department of Energy have announced that the Oak Ridge National Laboratory's Jaguar supercomputer will soon be upgraded to yet again become the fastest HPC installation in the world. The new, mighty-morphing computer will feature thousands of Cray XK6 blades, each one accommodating up to four 16-core AMD Opteron 6200 (Interlagos) chips and four Nvidia Tesla 20-series GPGPU coprocessors. The Jaguar name will be suitably inflated, too: the new behemoth will be called Titan. The exact specs of Titan haven't been revealed, but the Jaguar supercomputer currently sports 200 cabinets of Cray XT5 blades — and each cabinet, in theory, can be upgraded to hold 24 XK6 blades. That's a total of 4,800 blades, or 38,400 processors: 19,200 Opteron 6200s and 19,200 Tesla GPUs. ... that's 307,200 CPU cores — and with 512 shaders in each Tesla chip, that's 9,830,400 compute units. In other words, Titan should be capable of massive parallelism, with more than one million concurrent operations. When the upgrade is complete, towards the end of 2012, Titan will be capable of between 10 and 20 petaflops, and should recapture the crown of Fastest Supercomputer in the World from the Japanese 'K' computer."
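For anyone checking the arithmetic, here is a short Python sketch that reproduces the summary's estimates from the figures it quotes (the per-cabinet blade count and per-blade CPU/GPU counts are the article's assumptions, not confirmed Titan specs):

```python
# Reproduce the Titan back-of-envelope numbers quoted above.
cabinets = 200                 # Jaguar's current cabinet count
blades_per_cabinet = 24        # XK6 blades each cabinet can hold, per the article
cpus_per_blade = 4             # 16-core Opteron 6200s per XK6 blade
gpus_per_blade = 4             # Tesla 20-series coprocessors per XK6 blade
cores_per_cpu = 16
shaders_per_gpu = 512

blades = cabinets * blades_per_cabinet        # 4,800 blades
opterons = blades * cpus_per_blade            # 19,200 CPUs
teslas = blades * gpus_per_blade              # 19,200 GPUs
print(opterons + teslas)                      # 38,400 processors
print(opterons * cores_per_cpu)               # 307,200 CPU cores
print(teslas * shaders_per_gpu)               # 9,830,400 GPU compute units
```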
Education

Michael Nielsen's Free Video Courseware On Quantum Computing 54

New submitter quax writes "Michael Nielsen, who co-authored the book on Quantum Computing, released a set of short video lectures on his blog this summer (link to Google cache). They make a great introduction to the subject. But here's the catch: Due to other work responsibilities, he stopped short of completing the course, and will only complete it if he sees enough interest in the videos. Let's show him some numbers."
Australia

New Supercomputer Boosts Aussie SKA Telescope Bid 32

angry tapir writes "Australian academic supercomputing consortium iVEC has acquired another major supercomputer, Fornax, to be based at the University of Western Australia, to further the country's ability to conduct data-intensive research. The SGI GPU-based system, also known as iVEC@UWA, is made up of 96 nodes, each containing two 6-core Intel Xeon X5650 CPUs, an NVIDIA Tesla C2050 GPU, 48GB of RAM and 7TB of storage. All up, the system has 1152 cores and 96 GPUs, plus an additional dedicated 500TB global filesystem on fabric-attached storage. The system is a boost to the Australian-NZ bid to host the Square Kilometre Array radio telescope."
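The totals follow directly from the per-node specs; a tiny Python sketch of that arithmetic (the aggregate node-local storage figure is derived here, not quoted in the summary):

```python
# Fornax node math from the specs quoted above.
nodes = 96
cpus_per_node, cores_per_cpu = 2, 6    # two 6-core Xeon X5650s per node
gpus_per_node = 1                      # one Tesla C2050 per node
storage_tb_per_node = 7

print(nodes * cpus_per_node * cores_per_cpu)   # 1152 CPU cores
print(nodes * gpus_per_node)                   # 96 GPUs
print(nodes * storage_tb_per_node)             # 672 TB node-local storage (derived)
```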
IBM

Behind the Parting of IBM and Blue Waters 36

An anonymous reader writes "The News-Gazette has an article about the troubled Blue Waters supercomputer project, providing some new information about why IBM and the University of Illinois parted ways back in August. Quoting: 'More than three dozen changes, most suggested by IBM, would have delayed the Blue Waters project by a year ... The requested changes caused friction as early as December 2010, eight months before IBM pulled out, leaving the project to look for a new vendor for the supercomputer. Documents released under the Freedom of Information Act show Big Blue and the Big U asserting their rights in lengthy and increasingly testy, but always polite, language. In the documents, IBM suggested that if changes were not made, the project would become overly expensive.'"
Supercomputing

Will Quantum Computing Make It Out of the Lab? 129

alphadogg writes "Researchers have been working on quantum systems for more than a decade, in the hopes of developing super-tiny, super-powerful computers. And while there is still plenty of excitement surrounding quantum computing, significant roadblocks are causing some to question whether quantum computing will ever make it out of the lab. 'Artur Ekert, professor of quantum physics at the Mathematical Institute, University of Oxford, says physicists today can only control a handful of quantum bits, which is adequate for quantum communication and quantum cryptography, but nothing more. He notes that it will take a few more domesticated qubits to produce quantum repeaters and quantum memories, and even more to protect and correct quantum data. "Add still a few more qubits, and we should be able to run quantum simulations of some quantum phenomena and so forth. But when this process arrives at 'a practical quantum computer' is very much a question of defining what 'a practical quantum computer' really is. The best outcome of our research in this field would be to discover that we cannot build a quantum computer for some very fundamental reason; then maybe we would learn something new and something profound about the laws of nature," Ekert says.'"
Networking

Ask Slashdot: Best Use For a New Supercomputing Cluster? 387

Supp0rtLinux writes "In about two weeks' time I will be receiving everything necessary to build the largest x86_64-based supercomputer on the east coast of the U.S. (at least until someone takes the title away from us). It's spec'd to start with 1200 dual-socket six-core servers. We primarily do life-science/health/biology related tasks on our existing (fairly small) HPC. We intend to continue this usage, but also to open it up for new uses (energy comes to mind). Additionally, we'd like to lease access to recoup some of our costs. So, what's the best Linux distro for something of this size and scale? Any that include a chargeback option/module? Additionally, due to cost contracts, we have to choose either InfiniBand or 10Gb Ethernet for the backend: which would Slashdot readers go with if they had to choose? Either way, all nodes will have four 1Gbps Ethernet ports. Finally, all nodes include only a basic onboard GPU. We intend to put powerful GPUs into the PCIe slots and open up the new HPC for GPU-related crunching. Any suggestions on the most powerful Linux-friendly PCIe GPU available?"
AI

IBM's Watson To Help Diagnose, Treat Cancer 150

Lucas123 writes "IBM's Jeopardy-playing supercomputer, Watson, will be turning its data compiling engine toward helping oncologists diagnose and treat cancer. According to IBM, the computer is being assembled in the Richmond, Va. data center of WellPoint, the country's largest Blue Cross, Blue Shield-based healthcare company. Physicians will be able to input a patient's symptoms and Watson will use data from a patient's electronic health record, insurance claims data, and worldwide clinical research to come up with both a diagnosis and treatment based on evidence-based medicine. 'If you think about the power of [combining] all our information along with all that comparative research and medical knowledge... that's what really creates this game changing capability for healthcare,' said Lori Beer, executive vice president of Enterprise Business Services at WellPoint."
Data Storage

IBM Building 120PB Cluster Out of 200,000 Hard Disks 290

MrSeb writes "Smashing all known records by some margin, IBM Research Almaden, California, has developed hardware and software technologies that will allow it to strap together 200,000 hard drives to create a single storage cluster of 120 petabytes — 120 million gigabytes. The data repository, which currently has no name, is being developed for an unnamed customer, but with a capacity of 120PB, its most likely use is as storage for a governmental (or Facebook) supercomputer. With IBM's GPFS (General Parallel File System), over 30,000 files can be created per second — and with massive parallelism, no doubt thanks to the 200,000 individual drives in the array, single files can be read or written at several terabytes per second."
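As a rough illustration of what those totals imply per drive (IBM hasn't published the drive capacities, so this is only the average implied by the quoted figures):

```python
# Average capacity per drive implied by the quoted totals.
total_pb = 120
drives = 200_000
gb_per_drive = total_pb * 1_000_000 / drives   # 120 PB = 120 million GB
print(gb_per_drive)                            # 600.0 GB per drive, on average
```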
IBM

NCSA and IBM Part Ways Over Blue Waters 76

An anonymous reader writes "IBM has terminated its contract with NCSA for the petascale Blue Waters system that was expected to go online in the next year. The reason stated was that NCSA found IBM's technology 'was more complex and required significantly increased financial and technical support by IBM beyond its original expectations.' The IT community is now wondering whether NCSA will be renting out space in the new data center being built to house Blue Waters, or whether it will go with another vendor."
Data Storage

IBM Speeds Storage With Flash: 10B Files In 43 Min 76

CWmike writes "With an eye toward helping tomorrow's data-deluged organizations, IBM researchers have created a super-fast storage system capable of scanning 10 billion files in 43 minutes. This system handily bested their previous system, demonstrated at Supercomputing 2007, which scanned 1 billion files in three hours. Key to the increased performance was the use of speedy flash memory to store the metadata that the storage system uses to locate requested information. Traditionally, metadata repositories reside on disk, access to which slows operations. (See IBM's whitepaper.)"
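For a sense of scale, here is a quick Python comparison of the two demos' scan rates, using only the figures quoted above:

```python
# Metadata scan rates for the two GPFS demonstrations mentioned above.
files_2011, minutes_2011 = 10_000_000_000, 43     # 10 billion files in 43 minutes
files_2007, minutes_2007 = 1_000_000_000, 180     # 1 billion files in three hours

rate_2011 = files_2011 / (minutes_2011 * 60)      # ~3.9 million files/second
rate_2007 = files_2007 / (minutes_2007 * 60)      # ~93,000 files/second
print(rate_2011, rate_2007, rate_2011 / rate_2007)  # roughly a 42x improvement
```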
Supercomputing

Breakthrough Toward Quantum Computing 61

redwolfe7707 writes "Qubit registers have been a hard thing to construct; this looks to be a substantial advance in the multiple entanglements required for their use. Quoting: 'Olivier Pfister, a professor of physics in the University of Virginia's College of Arts & Sciences, has just published findings in the journal Physical Review Letters demonstrating a breakthrough in the creation of massive numbers of entangled qubits, more precisely a multilevel variant thereof called Qmodes. ... Pfister and researchers in his lab used sophisticated lasers to engineer 15 groups of four entangled Qmodes each, for a total of 60 measurable Qmodes, the most ever created. They believe they may have created as many as 150 groups, or 600 Qmodes, but could measure only 60 with the techniques they used.'" In related news, research published in the New Journal of Physics (abstract) shows "how quantum and classical data can be interlaced in a real-world fiber optics network, taking a step toward distributing quantum information to the home, and with it a quantum internet."
The Almighty Buck

Banks' Big Upgrade: Meet Real-Time Processing 89

CWmike writes "It has been years since the banking industry made any large investments in core IT systems, but some of the largest financial services firms in the U.S. are now in the midst of rolling out multi-million-dollar projects, say industry experts. About a decade ago, they began replacing decades-old COBOL-based core systems with open, Web-enabled apps. Now, they are spending more than $100 million to replace aging systems, converting to real-time mobile applications for retail services such as savings and checking accounts and lending systems. The idea behind going real-time: grab more business — and money — from customers. 'Five of the top 20 banks are engaged in some sort of core banking replacement, and we expect to see another three or four in the next 12 months,' said Fiaz Sindhu, who leads Accenture's North American core banking practice. 'They're looking at those upgrades as a path to growth.'"
Supercomputing

JPMorgan Rolls Out FPGA Supercomputer 194

An anonymous reader writes "As heterogeneous computing starts to take off, JPMorgan has revealed it is using an FPGA-based supercomputer to process risk on its credit portfolio. 'Prior to the implementation, JP Morgan would take eight hours to do a complete risk run, and an hour to run a present value, on its entire book. If anything went wrong with the analysis, there was no time to re-run it. It has now reduced that to about 238 seconds, with an FPGA time of 12 seconds.' Also mentioned is a Stanford talk given in May."
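The quoted times work out to a speedup of roughly two orders of magnitude; a back-of-envelope Python check (the "end to end" and "FPGA only" labels are simply how the quote's numbers are being read here):

```python
# Speedup implied by the risk-run times quoted above.
old_run_s = 8 * 3600       # eight-hour full risk run before the FPGA system
new_run_s = 238            # end-to-end time after the change
fpga_only_s = 12           # time spent in the FPGA itself

print(old_run_s / new_run_s)     # ~121x faster end to end
print(old_run_s / fpga_only_s)   # 2400x versus the FPGA compute time alone
```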
Supercomputing

A Million Node Supercomputer 116

An anonymous reader writes "Microcomputing veteran Steve Furber, in his role as ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, has called upon some old friends for his latest project: a brain-simulating supercomputer based on more than a million ARM processors." More detailed information can be found in the research paper.
AMD

AMD Gains In the TOP500 List 77

MojoKid writes "AMD recently announced that its share of the TOP500 supercomputer list has grown 15 percent in the past six months. The company credits industry trends, upgrade paths, and competitive pricing for the increase. Of the 68 Opteron-based systems on the list, more than half use Opteron 6100 series processors. The inflection point came with AMD's launch of its Magny-Cours architecture more than a year ago; the lineup ranges from the twelve-core Opteron 6180 SE at 2.5GHz at one end to two low-power parts at the other. Magny-Cours adoption is important: companies typically don't upgrade HPC clusters with new CPUs, but AMD is billing its next-gen Interlagos architecture as a drop-in option for Magny-Cours. As such, it'll offer up to 2x the cores as well as equal or faster clock speeds."
