Education

Indiana University Dedicates Biggest College-Owned Supercomputer 83

Indiana University has replaced its supercomputer, Big Red, with a new system predictably named Big Red II. At the dedication, HPC scientist Paul Messina said: "It's important that this is a university-owned resource. ... Here you have the opportunity to have your own faculty, staff and students get access with very little difficulty to this wonderful resource." From the article: "Big Red II is a Cray-built machine, which uses both GPU-enabled and standard CPU compute nodes to deliver a petaflop -- or 1 quadrillion floating-point operations per second -- of max performance. Each of the 344 CPU nodes uses two 16-core AMD Abu Dhabi processors, while the 676 GPU nodes use one 16-core AMD Interlagos and one NVIDIA Kepler K20."
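The node counts above imply the following CPU core totals (a quick sketch in Python using only the figures quoted in the summary; the petaflop rating also counts the GPUs' throughput):

    # Core counts implied by the Big Red II node specs quoted above
    cpu_nodes, sockets_per_node, cores_per_socket = 344, 2, 16
    gpu_nodes = 676
    cpu_cores = cpu_nodes * sockets_per_node * cores_per_socket  # 11,008 Opteron cores
    hybrid_cores = gpu_nodes * cores_per_socket                  # 10,816 cores alongside the K20s
    print(cpu_cores + hybrid_cores)  # 21,824 CPU cores total, plus 676 Kepler GPUs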
Power

Harvard Grid Computing Project Discovers 20k Organic Photovoltaic Molecules 125

Lucas123 writes "In June, Harvard's Clean Energy Project plans to release to solar power developers a list of the top 20,000 organic compounds, any one of which could be used to make cheap, printable photovoltaic cells (PVCs). The CEP used the computing resources of IBM's World Community Grid to run the computational chemistry needed to find the best molecules for organic photovoltaics, culling the list from about 7 million candidates. About 6,000 computers are part of the project at any one time. If successful, the crowdsourcing-style project, which has been crunching data for the past two-plus years, could lead to PVCs that cost about as much as paint to cover a one-square-meter wall." The big thing here is that they've discovered a lot of organic molecules with the potential for 10% or better conversion efficiency, roughly on par with the best current PV material and twice as efficient as other available organic PV materials.
United States

US Gov't Blocks Sales To Russian Supercomputer Maker 116

Nerval's Lobster writes "T-Platforms, which manufactured the fastest supercomputer in Russia (and twenty-sixth fastest in the world), has been placed on the IT equivalent of the no-fly list. In March, the U.S. Department of Commerce's Bureau of Industry and Security added T-Platforms' businesses in Germany, Russia and Taiwan to the 'Entity List,' which includes those believed to be acting contrary to the national security or foreign policy interests of the United States. U.S. IT companies are essentially banned from doing business with T-Platforms, especially with regards to HPC hardware such as microprocessors, which could be used for what the government views as illegal purposes. The rule, discovered by HPCWire, was published in March. According to the rule, Commerce's End-User Review Committee (ERC) believes that T-Platforms may be assisting the Russian government and military conduct nuclear research — which, given historical tensions between the two countries, apparently falls outside the bounds of permitted use. An email address that T-Platforms listed for its German office bounced, and Slashdot was unable to reach executives at its Russian headquarters for comment."
IBM

First Petaflop Supercomputer To Shut Down 84

An anonymous reader writes "In 2008 Roadrunner was the world's fastest supercomputer. Now that the first system to break the petaflop barrier has lost a step on today's leaders it will be shut down and dismantled. In its five years of operation, the Roadrunner was the 'workhorse' behind the National Nuclear Security Administration's Advanced Simulation and Computing program, providing key computer simulations for the Stockpile Stewardship Program."
Supercomputing

'Blue Waters' Supercomputer Lucky To Exist 39

Nerval's Lobster writes "One could argue that the University of Illinois' "Blue Waters" supercomputer, scheduled to officially open for business March 28, is lucky to be alive. The 11.6 petaflop supercomputer, commissioned by the University and the National Science Foundation (NSF), will rank in the upper echelon of the world's fastest machines—its compute power would place it third on the current list, just above Japan's K Computer. However, the system will not be submitted to the TOP500 list because of concerns with the way the list is calculated, officials said. University officials and the NSF are lucky to have a machine at all. That's due in part to IBM, which reportedly backed out of the contract when the company determined that it couldn't make a profit. The university then turned to Cray, which would have had to replace what was presumably a POWER or Xeon installation with the current mix of AMD CPUs and Nvidia GPU coprocessors. Allen Blatecky, director of NSF's Division of Advanced Cyberinfrastructure, told Fox that pulling the plug was a 'real possibility.' And Cray itself had to work to find the parts necessary for the supercomputer to begin at least trial operations in the fall of 2012."
Bug

Too Much Gold Delays World's Fastest Supercomputer 111

Nerval's Lobster writes "The fastest supercomputer in the world, Oak Ridge National Laboratory's 'Titan,' has been delayed because an excess of gold on its motherboard connectors has prevented it from working properly. Titan was originally turned on last October and climbed to the top of the Top500 list of the fastest supercomputers shortly thereafter. Problems with Titan were first discovered in February, when the supercomputer just missed its stability requirement. At that time, the problems with the connectors were isolated as the culprit, and ORNL decided to take some of Titan's 200 cabinets offline and ship their motherboards back to the manufacturer, Cray, for repairs. The connectors affected the ability of the GPUs in the system to talk to the main processors. Oak Ridge Today's John Huotari noted the problem was due to too much gold mixed in with the solder."
Graphics

NVIDIA GeForce GTX TITAN Uses 7.1 Billion Transistor GK110 GPU 176

Vigile writes "NVIDIA's new GeForce GTX TITAN graphics card is being announced today, utilizing the GK110 GPU first announced in May of 2012 for the HPC and supercomputing markets. The GPU touts 4.5 TFLOPS of computing horsepower provided by 2,688 single-precision cores and 896 double-precision cores, along with a 384-bit memory bus and 6GB of on-board memory, double the frame buffer of AMD's Radeon HD 7970. At 7.1 billion transistors and a 551 mm^2 die size, GK110 is very close to the reticle limit for current lithography technology! The GTX TITAN introduces a new GPU Boost revision based on real-time temperature monitoring, plus support for monitor refresh rate overclocking that will entice gamers; at a $999 price tag, the card could also be one of the best GPGPU options on the market." HotHardware says the card "will easily be the most powerful single-GPU powered graphics card available when it ships, with relatively quiet operation and lower power consumption than the previous generation GeForce GTX 690 dual-GPU card."
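The 4.5 TFLOPS figure is consistent with the core count, assuming GK110's base clock of roughly 837 MHz (our assumption; the clock isn't stated in the summary):

    # Peak single precision = cores x 2 FLOPs/cycle (fused multiply-add) x clock
    sp_cores = 2688
    base_clock_ghz = 0.837                       # assumed base clock, not given above
    print(sp_cores * 2 * base_clock_ghz / 1000)  # ~4.5 TFLOPS single precision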
Math

New Largest Known Prime Number: 2^57,885,161-1 254

An anonymous reader writes with news from Mersenne.org, home of the Great Internet Mersenne Prime Search: "On January 25th at 23:30:26 UTC, the largest known prime number, 2^57,885,161-1, was discovered on GIMPS volunteer Curtis Cooper's computer. The new prime number, 2 multiplied by itself 57,885,161 times, less one, has 17,425,170 digits. With 360,000 CPUs peaking at 150 trillion calculations per second, GIMPS — now in its 17th year — is the longest continuously-running global 'grassroots supercomputing' project in Internet history."
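Mersenne candidates like this one are verified with the Lucas-Lehmer test. Here is a toy Python version; GIMPS itself relies on heavily optimized FFT multiplication, so naive big-integer code like this would take ages at p = 57,885,161:

    # Lucas-Lehmer: M_p = 2^p - 1 is prime iff s_(p-2) == 0 (for odd prime p)
    def is_mersenne_prime(p):
        m = (1 << p) - 1                # the Mersenne number 2^p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    print([p for p in (3, 5, 7, 11, 13) if is_mersenne_prime(p)])  # [3, 5, 7, 13]

The digit count also checks out: floor(57,885,161 x log10 2) + 1 = 17,425,170.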
Education

IBM's Watson Goes To College To Extend Abilities 94

An anonymous reader writes in with news that IBM's Jeopardy-winning supercomputer is going back to school: "A modified version of the powerful IBM Watson computer system, able to understand natural spoken language and answer complex questions, will be provided to Rensselaer Polytechnic Institute in New York, making it the first university to receive such a system. IBM announced Wednesday that the Watson system is intended to enable upstate New York-based RPI to find new uses for Watson and deepen the system's cognitive computing capabilities - for example, by broadening the volume, types, and sources of data Watson can draw upon to answer questions."
IBM

Stanford Uses Million-Core Supercomputer To Model Supersonic Jet Noise 66

coondoggie writes "Stanford researchers said this week they had used a supercomputer with 1,572,864 compute cores to predict the noise generated by a supersonic jet engine. 'Computational fluid dynamics simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be. And yet, despite the additional computing horsepower, the difficulty of the calculations only becomes more challenging with more cores. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.'"
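The point about bottlenecks is essentially Amdahl's law. A rough illustration (our numbers, not Stanford's): at Sequoia's 1,572,864 cores, even a 0.1% serial fraction caps the speedup near 1,000x:

    # Amdahl's law: speedup on n cores when a fraction f of the work is serial
    def speedup(f, n):
        return 1.0 / (f + (1.0 - f) / n)

    n = 1_572_864
    for f in (0.001, 0.0001, 0.00001):
        print(f, round(speedup(f, n)))  # ~999x, ~9,937x, ~94,028x respectively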
Supercomputing

DOE Asks For 30-Petaflop Supercomputer 66

Nerval's Lobster writes "The U.S. Department of Science has presented a difficult challenge to vendors: deliver a supercomputer with roughly 10 to 30 petaflops of performance, yet filled with energy-efficient multi-core architecture. The draft copy (.DOC) of the DOE's requirements provide for two systems: 'Trinity,' which will offer computing resources to the Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL), during the 2016-2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 'Hopper' supercomputer first deployed in 2010 for the DOE facilities. Hopper debuted at number five in the list of Top500 supercomputers, and can crunch numbers at the petaflop level. The DOE wants a machine with performance at between 10 to 30 times Hopper's capabilities, with the ability to support one compute job that could take up over half of the available compute resources at any one time."
Supercomputing

Three-Mile-High Supercomputer Poses Unique Challenges 80

Nerval's Lobster writes "Building and operating a supercomputer at more than three miles above sea level poses some unique problems, the designers of the recently installed Atacama Large Millimeter/submillimeter Array (ALMA) Correlator discovered. The ALMA computer serves as the brains behind the ALMA astronomical telescope, a partnership between Europe, North American, and South American agencies. It's the largest such project in existence. Based high in the Andes mountains in northern Chile, the telescope includes an array of 66 dish-shaped antennas in two groups. The telescope correlator's 134 million processors continually combine and compare faint celestial signals received by the antennas in the ALMA array, which are separated by up to 16 kilometers, enabling the antennas to work together as a single, enormous telescope, according to Space Daily. The extreme high altitude makes it nearly impossible to maintain on-site support staff for significant lengths of time, with ALMA reporting that human intervention will be kept to an absolute minimum. Data acquired via the array is archived at a lower-altitude support site. The altitude also limited the construction crew's ability to actually build the thing, requiring 20 weeks of human effort just to unpack and install it."
Supercomputing

Supercomputer Repossessed By State, May Be Sold In Pieces 123

1sockchuck writes "A supercomputer that was the third-fastest machine in the world in 2008 has been repossessed by the state of New Mexico and will likely be sold in pieces to three universities in the state. The state has been unable to find a buyer for the Encanto supercomputer, which was built and maintained with $20 million in state funding. The supercomputer had the enthusiastic backing of Gov. Bill Richardson, who saw the project as an economic development tool for New Mexico. But the commercial projects did not materialize, and Richardson's successor, Susana Martinez, says the supercomputer is a 'symbol of excess.'"
Supercomputing

Einstein@Home Set To Break Petaflops Barrier 96

hazeii writes "Einstein@home, the distributed computing project searching for the gravitational waves predicted to exist by Albert Einstein, looks set to breach the 1 petaflops barrier around midnight UTC tonight. Put into context, if it were in the Top500 Supercomputers list, it would come in at number 24. I'm sure there are plenty of Slashdot readers who can contribute enough CPU and GPU cycles to push them well over 1,000 teraflops — and maybe even discover a pulsar in the process." From their forums: "At 14:45 we had 989.2 TFLOPS with an increase of 1.3 TFLOPS/h. In principle that's enough to reach 1001.1 TFLOPS at midnight (UTC), but very often, like yesterday, between 22:45 and 22:50 there occurs a drop of about 5 TFLOPS. So we will very likely hit 1 PFLOPS in the early morning tomorrow."
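The forum projection is a plain linear extrapolation; checking the quoted figures (14:45 UTC to midnight is 9.25 hours):

    # Linear extrapolation of the throughput figures quoted above
    start, rate, hours = 989.2, 1.3, 9.25  # TFLOPS, TFLOPS/h, 14:45 -> 00:00 UTC
    print(start + rate * hours)            # ~1001.2 TFLOPS, close to the forum's 1001.1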
Space

All Systems Go For Highest Altitude Supercomputer 36

An anonymous reader writes "One of the most powerful supercomputers in the world has now been fully installed and tested at its remote, high altitude site in the Andes of northern Chile. It's a critical part of the Atacama Large Millimeter/submillimeter Array (ALMA), the most elaborate ground-based astronomical telescope in history. The special-purpose ALMA correlator has over 134 million processors and performs up to 17 quadrillion operations per second, a speed comparable to the fastest general-purpose supercomputer in operation today."
Supercomputing

Supercomputers' Growing Resilience Problems 112

angry tapir writes "As supercomputers grow more powerful, they'll also grow more vulnerable to failure, thanks to the increasing amount of built-in componentry. Today's high-performance computing (HPC) systems can have 100,000 nodes or more — with each node built from multiple components of memory, processors, buses and other circuitry. Statistically speaking, all these components will fail at some point, and they halt operations when they do so, said David Fiala, a Ph.D. student at North Carolina State University, during a talk at SC12. Today's techniques for dealing with system failure may not scale very well, Fiala said."
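A rough sense of scale (our numbers, not Fiala's): if each node fails independently once every five years on average, a 100,000-node system sees a failure somewhere about every half hour:

    # System MTBF shrinks linearly with node count (independent failures assumed)
    node_mtbf_hours = 5 * 365 * 24       # assumed: one failure per node every 5 years
    nodes = 100_000
    print(node_mtbf_hours / nodes * 60)  # ~26.3 minutes between failures system-wide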
Supercomputing

Titan Tops Top500 Supercomputing List 52

miller60 writes "The new Top500 list of the world's most powerful supercomputers is out, and the new champion is Titan, the new and improved system that previously ruled the Top500 as Jaguar. Oak Ridge Labs' Titan knocked Livermore Labs' Sequoia system out of the top spot, with a Linpack benchmark of more than 17 petaflops. Check out the full list, or an illustrated guide to the top 10."
Intel

Cray Unveils XC30 Supercomputer 67

Nerval's Lobster writes "Cray has unveiled a XC30 supercomputer capable of high-performance computing workloads of more than 100 petaflops. Originally code-named 'Cascade,' the system relies on Intel Xeon processors and Aries interconnect chipset technology, paired with Cray's integrated software environment. Cray touts the XC30's ability to utilize a wide variety of processor types; future versions of the platform will apparently feature Intel Xeon Phi and Nvidia Tesla GPUs based on the Kepler GPU computing architecture. Cray leveraged its work with DARPA's High Productivity Computing Systems program in order to design and build the XC30. Cray's XC30 isn't the only supercomputer aiming for that 100-petaflop crown. China's Guangzhou Supercomputing Center recently announced the development of a Tianhe-2 supercomputer theoretically capable of 100 petaflops, but that system isn't due to launch until 2015. Cray also faces significant competition in the realm of super-computer makers: it only built 5.4 percent of the systems on the Top500 list, compared to IBM with 42.6 percent and Hewlett-Packard with 27.6 percent."
China

China Building a 100-petaflop Supercomputer Using Domestic Processors 154

concealment writes "As the U.S. launched what's expected to be the world's fastest supercomputer at 20 petaflops, China is building a machine intended to be five times faster when it is deployed in 2015. China's Tianhe-2 supercomputer will run at 100 petaflops (quadrillion floating-point calculations per second), according to the Guangzhou Supercomputing Center, where the machine will be housed. Tianhe-2 could help keep China competitive with the future supercomputers of other countries, as industry experts estimate machines will start reaching 1,000-petaflop performance by 2018." And, naturally, it's planned to use a domestically developed MIPS processor.
Supercomputing

Titan Supercomputer Debuts for Open Scientific Research 87

hypnosec writes "The Oak Ridge National Laboratory has unveiled a new supercomputer, Titan, which it claims is the world's most powerful, capable of 20 petaflops of performance. The Cray XK7 supercomputer contains a total of 18,688 nodes, each based on a 16-core AMD Opteron 6274 processor and an Nvidia Tesla K20 graphics processing unit (GPU). To be used for researching climate change and other data-intensive tasks, the supercomputer is equipped with more than 700 terabytes of memory."
