Cloud

1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2 54

An anonymous reader writes "In honor of Doc Brown, Great Scott! Ars has an interesting article about a 1.21 PetaFLOPS (RPeak) supercomputer created on Amazon EC2 Spot Instances. According to HPC software company Cycle Computing's blog, it ran Professor Mark Thompson's research to find new, more efficient materials for solar cells. As Professor Thompson puts it: 'If the 20th century was the century of silicon materials, the 21st will be all organic. The question is how to find the right material without spending the entire 21st century looking for it.' El Reg points out this 'virty super's low cost.' Will the cloud democratize access to HPC for research?"
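For readers curious about the mechanics, the core ingredient is simply bidding for spare EC2 capacity at spot prices. Below is a minimal, hypothetical sketch using the boto3 library; the AMI ID, instance type, bid price, and security group are placeholders, not Cycle Computing's actual configuration.

```python
# Minimal sketch: bidding for spare EC2 capacity at spot prices with boto3.
# The AMI ID, instance type, bid price, and security group are placeholders,
# not the configuration Cycle Computing actually used.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.25",                      # maximum hourly bid in USD
    InstanceCount=10,                      # a small slice of a large cluster
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-12345678",         # hypothetical HPC worker image
        "InstanceType": "c3.8xlarge",
        "SecurityGroups": ["hpc-workers"], # hypothetical security group
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```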
Supercomputing

Scientists Using Supercomputers To Puzzle Out Dinosaur Movement 39

Nerval's Lobster writes "Scientists at the University of Manchester in England figured out how the largest animal ever to walk on Earth, the 80-ton Argentinosaurus, actually walked on earth. Researchers led by Bill Sellers, Rudolfo Coria and Lee Margetts at the N8 High Performance Computing facility in northern England used a 320 gigaflop/second SGI High Performance Computing Cluster supercomputer called Polaris to model the skeleton and movements of Argentinosaurus. The animal was able to reach a top speed of about 5 mph, with 'a slow, steady gait,' according to the team (PDF). Extrapolating from a few feet of bone, paleontologists were able to estimate the beast weighed between 80 and 100 tons and grew up to 115 feet in length. Polaris not only allowed the team to model the missing parts of the dinosaur and make them move, it did so quickly enough to beat the deadline for PLOS ONE Special Collection on Sauropods, a special edition of the site focusing on new research on sauropods that 'is likely to be the "de facto" international reference for Sauropods for decades to come,' according to a statement from the N8 HPC center. The really exciting thing, according to Coria, was how well Polaris was able to fill in the gaps left by the fossil records. 'It is frustrating there was so little of the original dinosaur fossilized, making any reconstruction difficult,' he said, despite previous research that established some rules of weight distribution, movement and the limits of dinosaurs' biological strength."
Cloud

Qcloud Puts Quantum Chip In the Cloud For Coders To Experiment 73

hypnosec writes "Quantum computers are currently available in very few labs, usually bankrolled by major organizations like Google and NASA. However, a new project called 'Qcloud' aims to break those barriers by making quantum computing available to everyone. The University of Bristol announced the launch of Qcloud today at the British Science Festival 2013, with the goal of making quantum computing resources available to researchers across the globe. Claimed to be the first open-access system of its kind, the quantum chip is located at the Center for Quantum Photonics at the University of Bristol. Researchers can remotely access the processor over the internet for their computational needs. Those looking to test their ideas on the processor would be required to first practice and hone their skills using an online simulator. The university has made tutorials available to researchers so they can learn how to tune the processor and change its output as required. Once they are confident in their skills, researchers can ask for permission to access the real quantum photonic chip."
AI

IBM Devises Software For Its Experimental Brain-Modeling Chips 33

alphadogg writes "Following up on work commissioned by the U.S. Defense Advanced Research Projects Agency (DARPA), IBM has developed a programming paradigm, and associated simulator and basic software library, for its experimental SyNAPSE processor. The work suggests the processors could be used for extremely low-power yet computationally powerful sensor systems. 'Our end goal is to create a brain in a box,' said Dharmendra Modha, an IBM Research senior manager who is the principal investigator for the project. The work is a continuation of a DARPA project to design a system that replicates the way a human processes information." Also at SlashBI.
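To give a flavor of the kind of spiking unit such neuromorphic hardware is built around, here is a generic leaky integrate-and-fire neuron in Python. It is an illustration only, not IBM's corelet programming paradigm or its SyNAPSE software library.

```python
# Generic leaky integrate-and-fire neuron: membrane voltage integrates its
# input, leaks back toward rest, and emits a spike when it crosses threshold.
# Illustration only; this is not IBM's corelet paradigm or SyNAPSE library.
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)   # leaky integration
        if v >= v_thresh:               # threshold crossing -> spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

current = np.concatenate([np.zeros(50), 2.0 * np.ones(200)])  # step input
print(lif_neuron(current))
```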
Supercomputing

US Intel Agencies To Build Superconducting Computer 73

dcblogs writes "The Director of National Intelligence is soliciting help to develop a superconducting computer. The goal of the government's solicitation is 'to demonstrate a small-scale computer based on superconducting logic and cryogenic memory that is energy efficient, scalable, and able to solve interesting problems.' The NSA, in particular, has had a long interest in superconducting technology, but 'significant technical obstacles prevented exploration of superconducting computing,' the government said in its solicitation. Those innovations include cryogenic memory designs that allow operation of memory and logic in close proximity within the cold environment, as well as much faster switching speeds. U.S. intelligence agencies don't disclose the size of their systems, but the NSA is building a data center in Utah with a 65 MW power supply."
Supercomputing

National Weather Service Upgrades Storm-Tracking Supercomputers 34

Nerval's Lobster writes "Just in time for hurricane season, the National Weather Service has finished upgrading the supercomputers it uses to track and model super-storms. 'These improvements are just the beginning and build on our previous success. They lay the foundation for further computing enhancements and more accurate forecast models that are within reach,' National Weather Service director Louis W. Uccellini wrote in a statement. The National Weather Service's 'Tide' supercomputer — along with its 'Gyre' backup — are capable of operating at a combined 213 teraflops. The National Oceanic and Atmospheric Administration (NOAA), which runs the Service, has asked for funding that would increase that supercomputing power even more, to 1,950 teraflops. The National Weather Service uses that hardware for projects such as the Hurricane Weather Research and Forecasting (HWRF) model, a complex bit of forecasting that allows the organization to more accurately predict storms' intensity and movement. The HWRF can leverage real-time data taken from Doppler radar installed in the NOAA's P3 hurricane hunter aircraft."
Earth

Same Programs + Different Computers = Different Weather Forecasts 240

knorthern knight writes "Most major weather services (US NWS, Britain's Met Office, etc) have their own supercomputers, and their own weather models. But there are some models which are used globally. A new paper has been published, comparing outputs from one such program on different machines around the world. Apparently, the same code, running on different machines, can produce different outputs due to accumulation of differing round-off errors. The handling of floating-point numbers in computing is a field in its own right. The paper apparently deals with 10-day weather forecasts. Weather forecasts are generally done in steps of 1 hour. I.e. the output from hour 1 is used as the starting condition for the hour 2 forecast. The output from hour 2 is used as the starting condition for hour 3, etc. The paper is paywalled, but the abstract says: 'The global model program (GMP) of the Global/Regional Integrated Model system (GRIMs) is tested on 10 different computer systems having different central processing unit (CPU) architectures or compilers. There exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems. The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.'"
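The effect is easy to reproduce in miniature: regroup a floating-point sum the way different compilers or optimization levels might, then let an iterated nonlinear model amplify the discrepancy. The logistic map below is a stand-in for a weather model, not the GRIMs code.

```python
# Round-off in miniature: the same sum, associated differently (as different
# compilers and optimization levels are free to do), yields two different
# floating-point answers, and an iterated nonlinear map then amplifies that
# tiny difference. The logistic map stands in for a weather model here.
a = (0.1 + 0.2) + 0.3        # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)        # 0.6
print(a == b, a - b)         # False, ~1.1e-16

x, y = a, b                  # iterate a chaotic map from each value
for _ in range(80):
    x = 3.9 * x * (1.0 - x)
    y = 3.9 * y * (1.0 - y)
print(abs(x - y))            # the two "forecasts" have fully diverged
```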
Supercomputing

Supercomputer Becomes Massive Router For Global Radio Telescope 60

Nerval's Lobster writes "Astrophysicists at MIT and the Pawsey supercomputing center in Western Australia have discovered a whole new role for supercomputers working on big-data science projects: They've figured out how to turn a supercomputer into a router. (Make that a really, really big router.) The supercomputer in this case is a Cray Cascade system with a top performance of 0.3 petaflops — to be expanded to 1.2 petaflops in 2014 — running on a combination of Intel Ivy Bridge, Haswell and MIC processors. The machine, which is still being installed at the Pawsey Centre in Kensington, Western Australia and isn't scheduled to become operational until later this summer, had to go to work early after researchers switched on the world's most sensitive radio telescope June 9. The Murchison Widefield Array is a 2,000-antenna radio telescope located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia, built with the backing of universities in the U.S., Australia, India and New Zealand. Though it is the most powerful radio telescope in the world right now, it is only one-third of the Square Kilometer Array — a spread of low-frequency antennas that will be spread across a kilometer of territory in Australia and Southern Africa. It will be 50 times as sensitive as any other radio telescope and 10,000 times as quick to survey a patch of sky. By comparison, the Murchison Widefield Array is a tiny little thing stuck out as far in the middle of nowhere as Australian authorities could find to keep it as far away from terrestrial interference as possible. Tiny or not, the MWA can look farther into the past of the universe than any other human instrument to date. What it has found so far is data — lots and lots of data. More than 400 megabytes of data per second come from the array to the Murchison observatory, before being streamed across 500 miles of Australia's National Broadband Network to the Pawsey Centre, which gets rid of most of it as quickly as possible."
Supercomputing

Adapteva Parallella Supercomputing Boards Start Shipping 98

hypnosec writes "Adapteva has started shipping its $99 Parallella parallel processing single-board supercomputer to initial Kickstarter backers. Parallella is powered by Adapteva's 16-core and 64-core Epiphany multicore processors that are meant for parallel computing unlike other commercial off-the-shelf (COTS) devices like Raspberry Pi that don't support parallel computing natively. The first model to be shipped has the following specifications: a Zynq-7020 dual-core ARM A9 CPU complemented with Epiphany Multicore Accelerator (16 or 64 cores), 1GB RAM, MicroSD Card, two USB 2.0 ports, optional four expansion connectors, Ethernet, and an HDMI port." They are also releasing documentation, examples, and an SDK (brief overview, it's Free Software too). And the device runs GNU/Linux for the non-parallel parts (Ubuntu is the suggested distribution).
Biotech

Sculpting Nanoflows With Supercomputers 11

aarondubrow writes "Researchers reported results in Nature Communications on a new way of sculpting tailor-made fluid flows by placing tiny pillars in microfluidic channels [abstract; article is paywalled]. The method could allow clinicians to better separate white blood cells in a sample, increase mixing in industrial applications, and more quickly perform lab-on-a-chip-type operation. Using the Ranger and Stampede supercomputers, the researchers ran more than 1,000 simulations representing combinations of speeds, thicknesses, heights or offsets that produce unique flows. This library of transformations will help the broader community design and use sculpted fluid flows."
Supercomputing

Meet the Stampede Supercomputing Cluster's Administrator (Video) 34

UT Austin tends not to do things by half measures, as illustrated by the Texas Advanced Computing Center, which has been home to an evolving family of supercomputing clusters. The latest of these, Stampede, was first mentioned here back in 2011, before it was actually constructed. In the time since, Stampede has been not only completed, but upgraded; it's just completed a successful six months since its last major update — the labor-intensive installation of Xeon Phi processors throughout 106 densely packed racks. I visited TACC, camera in hand, to take a look at this megawatt-eating electronic hive (well, herd) and talk with director of high-performance computing Bill Barth, who has insight into what it's like both as an end-user (both commercial and academic projects get to use Stampede) and as an administrator on such a big system.
Space

With Catastrophes In Mind, Supercomputing Project Simulates Space Junk Collision 15

aarondubrow writes "Researchers at The University of Texas at Austin developed a fundamentally new way of simulating fabric impacts that captures the fragmentation of the projectiles and the shock response of the target. Running hundreds of simulations on supercomputers at the Texas Advanced Computing Center, they assisted NASA in the development of ballistic limit curves that predict whether a shield will be perforated when hit by a projectile of a given size and speed. The framework they developed also allows them to study the impact of projectiles on body armor materials and to predict the response of different fabric weaves upon impact." With thousands of known pieces of man-made space junk, as well plenty of natural ones, it's no idle concern.
Virtualization

Cray X-MP Simulator Resurrects Piece of Computer History 55

An anonymous reader writes "If you have a fascination with old supercomputers, like I do, this project might tickle your interest: A functional simulation of a Cray X-MP supercomputer, which can boot to its old batch operating system, called COS. It's complete with hard drive and tape simulation (no punch card readers, sorry) and consoles. Source code and binaries are available. You can also read about the journey that got me there, like recovering the OS image from a 30 year old hard drive or reverse-engineering CRAY machine code to understand undocumented tape drive operation and disk file-systems."
Supercomputing

Breaking Supercomputers' Exaflops Barrier 96

Nerval's Lobster writes "Breaking the exaflops barrier remains a development goal for many who research high-performance computing. Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through. Indeed, Tianhe-2 did pretty well when it was finally revealed — knocking the U.S.-based Titan off the top of the Top500 list of the world's fastest supercomputers. Yet despite sustained performance of 33 petaflops to 35 petaflops and peaks ranging as high as 55 petaflops, even the world's fastest supercomputer couldn't make it past (or even close to) the big barrier. Now, the HPC market is back to chattering over who'll first build an exascale computer, and how long it might take to bring such a platform online. Bottom line: It will take a really long time, combined with major breakthroughs in chip design, power utilization and programming, according to Nvidia chief scientist Bill Dally, who gave the keynote speech at the 2013 International Supercomputing Conference last week in Leipzig, Germany. In a speech he called 'Future Challenges of Large-scale Computing' (and in a blog post covering similar ground), Dally described some of the incredible performance hurdles that need to be overcome in pursuit of the exaflops barrier."
AI

The Men Trying To Save Us From the Machines 161

nk497 writes "Are you more likely to die from cancer or be wiped out by a malevolent computer? That thought has been bothering one of the co-founders of Skype so much he teamed up with Oxbridge researchers in the hopes of predicting what machine super-intelligence will mean for the world, in order to mitigate the existential threat of new technology – that is, the chance it will destroy humanity. That idea is being studied at the University of Oxford's Future of Humanity Institute and the newly launched Centre for the Study of Existential Risk at the University of Cambridge, where philosophers look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations — and try to avoid being outsmarted by technology."
The Military

Fear of Thinking War Machines May Push U.S. To Exascale 192

dcblogs writes "Unlike China and Europe, the U.S. has yet to adopt and fund an exascale development program, and concerns about what that means to U.S. security are growing darker and more dire. If the U.S. falls behind in HPC, the consequences will be 'in a word, devastating,' Selmer Bringsford, chair of the Department. of Cognitive Science at Rensselaer Polytechnic Institute, said at a U.S. House forum this week. 'If we were to lose our capacity to build preeminently smart machines, that would be a very dark situation, because machines can serve as weapons.' The House is about to get a bill requiring the Dept. of Energy to establish an exascale program. But the expected funding level, about $200 million annually, 'is better than nothing, but compared to China and Europe it's at least 10 times too low,' said Earl Joseph, an HPC analyst at IDC. David McQueeney, vice president of IBM research, told lawmakers that HPC systems now have the ability to not only deal with large data sets but 'to draw insights out of them.' The new generation of machines are being programmed to understand what the data sources are telling them, he said."
Intel

Intel Announces New Enterprise Xeons, More Powerful Xeon Phi Cards 57

MojoKid writes "Intel announced a set of new enterprise products today aimed at furthering its strengths in the TOP500 supercomputing market. As of today, the Chinese Tianhe-2 supercomputer (aka Milky Way 2) is now the fastest supercomputer on the planet at roughly 54 PFLOPS. Intel is putting its own major push behind heterogeneous computing with the Tianhe-2. Each node contains two Ivy Bridge sockets and three Xeon Phi cards. Each node, therefore, contains 422.4 GFLOP/s in Ivy Bridge performance — but 3.43 TFLOP/s worth of Xeon Phi. In addition, we'll see new Xeons based on this technology later this year in the 22nm E5-2600 V2 family; the new chips will be built on Ivy Bridge technology and will offer up to 12 cores / 24 threads. The new Xeons, however, aren't really the interesting part of the story. Today, Intel is adding cards to the current Xeon Phi lineup — the 7120P, 3120P, 3120A, and 5120D. The 3120P and 3120A are the same card — the 'P' is passively cooled, while the 'A' integrates a fan. Both of these solutions have 57 cores and 6GB of RAM. Intel states that they offer ~1 TFLOPS of performance, which puts them on par with the 5110P that launched last year, but with slightly less memory and presumably a lower price point. At the top of the line, Intel is introducing the 7120P and 7120X — the 7120P comes with an integrated heat spreader, the 7120X doesn't. Clock speeds are higher on this card; it has 61 cores instead of 60, 16GB of GDDR5, and 352GBps of memory bandwidth. Customers who need lots of cores and not much RAM can opt for one of the cheaper 3100 cards, while the 7100 family allows for much greater data sets."
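As a back-of-the-envelope check on those per-node figures, the sketch below assumes 2.2 GHz Ivy Bridge cores retiring 8 double-precision flops per cycle, an assumption that happens to reproduce the quoted number; the Xeon Phi figure is simply the quoted total divided across three cards.

```python
# Back-of-the-envelope check of the quoted per-node figures. The 8 double-
# precision flops per cycle at 2.2 GHz is an assumption that reproduces
# the 422.4 GFLOP/s number; the Xeon Phi figure is simply the quoted total
# divided by three cards.
ivy_sockets, ivy_cores, ivy_ghz, flops_per_cycle = 2, 12, 2.2, 8
ivy_gflops = ivy_sockets * ivy_cores * ivy_ghz * flops_per_cycle
print(round(ivy_gflops, 1), "GFLOP/s from the two Ivy Bridge sockets")  # 422.4

phi_cards = 3
phi_tflops_each = 3.43 / phi_cards
print(round(phi_tflops_each, 2), "TFLOP/s per Xeon Phi card")           # ~1.14
```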
Supercomputing

China Bumps US Out of First Place For Fastest Supercomputer 125

An anonymous reader writes "China's Tianhe-2 is the world's fastest supercomputer, according to the latest semiannual Top 500 list of the 500 most powerful computer systems in the world. Developed by China's National University of Defense Technology, the system appeared two years ahead of schedule and will be deployed at the National Supercomputer Center in Guangzho, China, before the end of the year."
Databases

A Database of Brains 25

aarondubrow writes "Researchers recently created OpenfMRI, a web-based, supercomputer-powered tool that makes it easier for researchers to process, share, compare and rapidly analyze fMRI brain scans from many different studies. Applying supercomputing to the fMRI analysis allows researchers to conduct larger studies, test more hypotheses, and accommodate the growing spatial and time resolution of brain scans. The ultimate goal is to collect enough brain data to develop a bottom-up understanding of brain function."
AI

When Will My Computer Understand Me? 143

aarondubrow writes "For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software, with mixed results. Enabled by supercomputers at the Texas Advanced Computing Center, University of Texas researchers are using new methods to more accurately represent language so computers can interpret it. Recently, they were awarded a grant from DARPA to combine distributional representation of word meanings with Markov logic networks to better capture the human understanding of language."
