IBM's Blue Gene Runs Continuously At 1 Petaflop
An anonymous reader writes "ZDNet is reporting on IBM's claim that the Blue Gene/P will continuously operate at more than 1 petaflop. It is actually capable of 3 quadrillion operations a second, or 3 petaflops. IBM claims that at 1 petaflop, Blue Gene/P is performing more operations than a 1.5-mile-high stack of laptops! 'Like the vast majority of other modern supercomputers, Blue Gene/P is composed of several racks of servers lashed together in clusters for large computing tasks, such as running programs that can graphically simulate worldwide weather patterns. Technologies designed for these computers trickle down into the mainstream, while conventional technologies and components are used to cut the costs of building these systems. The chip inside Blue Gene/P consists of four PowerPC 450 cores running at 850MHz each. A 2x2-foot circuit board containing 32 of the Blue Gene/P chips can churn out 435 billion operations a second. Thirty-two of these boards can be stuffed into a 6-foot-high rack.'"
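Sanity-checking the numbers in the summary (assuming the usual figure of 4 flops per cycle from the PowerPC 450's dual floating-point unit):

    4 cores x 850 MHz x 4 flops/cycle = 13.6 gigaflops per chip
    32 chips x 13.6 gigaflops = ~435 gigaflops per board (matching the quote)
    32 boards x 435 gigaflops = ~13.9 teraflops per rack
    72 racks x 13.9 teraflops = ~1 petaflop, and 216 racks = ~3 petaflops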
For those keeping score at home... (Score:2, Informative)
google calculator (Score:1, Informative)
1.5 miles = 2.414016 kilometers
2 feet = 0.6096 meters
6 feet = 1.8288 meters
Depends on what you mean by real world. (Score:5, Informative)
How many of these are "real world"? Well, medical and CFD applications are significant, but hardly what you'd call mainstream; the ray tracing may have been used on Titanic at a smaller scale, but IMAX is under no threat at this time.
The Dawn of Petaflop Computing! (Score:5, Informative)
While the new IBM Blue Gene/P system is impressive, I'm more curious to see what sort of new supercomputer Andreas Bechtolsheim [nytimes.com] of Sun Microsystems has put together.
Here's an interesting quote about Bechtolsheim from the article:
Re:I'm waiting for the next generation (Score:3, Informative)
THAT is what we can't do. Even if we could mount instrumentation in every square meter of the Earth AND its atmosphere to get a map of its current state, and we configured the machine to predict the interactions of all those currents, we would still be lost. Aside from tracking the output of the sun, the weather model would need to account for ocean currents, tides, bonfires and heating systems, volcanoes, body heat, pig sex, etc.
That is right, my friend: every time you pull out and shoot a load on her stomach, the weather model would have to take it into account, because the air disturbed might be the first link in a chain of complex interactions that leads to a hurricane that devastates Louisiana... again (because there are actually people so ignorant that they are going to rebuild a city in the same bad location).
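That's the butterfly effect in the literal sense. A toy illustration (just the standard Lorenz '63 equations with their usual parameters, nothing to do with any real forecast model) of how a one-in-a-billion perturbation wrecks a "forecast":

    /* Lorenz '63: two trajectories starting 1e-9 apart in x,
     * integrated with forward Euler; watch the separation blow up. */
    #include <stdio.h>
    #include <math.h>

    static void step(double *x, double *y, double *z, double dt) {
        const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;
        double dx = sigma * (*y - *x);
        double dy = *x * (rho - *z) - *y;
        double dz = *x * *y - beta * *z;
        *x += dt * dx;  *y += dt * dy;  *z += dt * dz;
    }

    int main(void) {
        double x1 = 1.0, y1 = 1.0, z1 = 1.0;
        double x2 = 1.0 + 1e-9, y2 = 1.0, z2 = 1.0;  /* one part in a billion */
        const double dt = 0.001;
        for (int i = 1; i <= 40000; i++) {           /* 40 time units */
            step(&x1, &y1, &z1, dt);
            step(&x2, &y2, &z2, dt);
            if (i % 5000 == 0)
                printf("t=%5.1f  separation=%g\n", i * dt,
                       sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2) + (z1-z2)*(z1-z2)));
        }
        return 0;
    }

Compile with gcc lorenz.c -lm; by t ~ 30 the separation is as large as the attractor itself, so the two runs are effectively unrelated weather.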
It is petaflops, not petaflop. (Score:2, Informative)
Re:Depends on what you mean by real world. (Score:5, Informative)
Let's start with a Slashdotting of NASA...
Re:The Dawn of Petaflop Computing! (Score:5, Informative)
Sun's design is affordable, probably has pretty decent peak performance, and has reasonable power and memory per node. However, it's not as exotic as IBM's design. The IBM design has fantastic flops/watt and flops/square-foot numbers, but each node is really wimpy, which forces you to use a LOT of nodes for any problem, which increases the necessary amount of communication. Some problems work really well; others, not so much.
IBM has limited Blue Gene to a small number of customers, all with fairly large systems. I suspect that's because it's very difficult to port an application to the system and get good performance.
Re:How far behind are desktops from super-computer (Score:5, Informative)
You'll notice that 98% of the supercomputers sold in the last 10 years use server processors. (Blue Gene actually uses an embedded-systems processor, but it's the same idea.) However, in the late '80s, putting 256 processors in a super was cutting edge. In the '90s, a few thousand. Soon you'll see a quarter million cores. So supers are actually getting faster at a higher rate than desktops are, at least by most measures.
Re:But are they available on the market (Score:1, Informative)
There is also OpenMP (more of an extension to C/C++ than just a header: you need pragmas and compiler support to use it); I find it easy to fall into race conditions with it, because you really need to think about what you are doing.
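For example, here is a minimal sketch (my own toy example, nothing project-specific) of how easily the race shows up, and the one-line reduction fix:

    #include <stdio.h>

    int main(void) {
        long sum = 0;

        /* RACE: all threads update sum with no synchronization; updates get lost */
        #pragma omp parallel for
        for (long i = 0; i < 1000000; i++)
            sum += i;
        printf("racy sum:    %ld\n", sum);   /* usually wrong */

        sum = 0;
        /* Fix: reduction gives each thread a private copy, combined at the end */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < 1000000; i++)
            sum += i;
        printf("correct sum: %ld\n", sum);   /* 499999500000 */
        return 0;
    }

Compile with gcc -fopenmp; without the reduction clause, the first loop is exactly the kind of race that's easy to miss.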
Technically, pthreads ought to be able to provide enough functionality to get up and running if your environment acts as a single machine, or even the System.Threading namespace if you have the ability to run managed code. However, you then have no control over whether your thread gets its own CPU (unless the OS guarantees it). In most cases that isn't actually necessary: your algorithm can be written in such a way that it doesn't matter whether it is running on an 8-CPU system or a 2^32-CPU system (except that time to completion will vary); the trouble comes in with optimizations.
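As an illustration of writing the algorithm so the CPU count doesn't matter (a toy sketch; the thread count here is just picked arbitrarily), a plain pthreads split looks like:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 8     /* same answer on 2 CPUs or 2^32; only wall time changes */
    #define N 1000000

    typedef struct { long lo, hi, sum; } chunk_t;

    /* Each worker sums its own disjoint slice; no shared writes, no races */
    static void *partial_sum(void *arg) {
        chunk_t *c = arg;
        for (long i = c->lo; i < c->hi; i++)
            c->sum += i;
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        chunk_t chunk[NTHREADS];
        long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            chunk[t].lo = t * (N / NTHREADS);
            chunk[t].hi = (t + 1) * (N / NTHREADS);
            chunk[t].sum = 0;
            pthread_create(&tid[t], NULL, partial_sum, &chunk[t]);
        }
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += chunk[t].sum;         /* combine after the workers finish */
        }
        printf("total: %ld\n", total);
        return 0;
    }

Whether those 8 threads land on 2 cores or 8 only changes the time to completion, not the answer.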
Recently I have been experimenting with simple web services on a server that farm out pieces of the solution in a distributed fashion, attempting a brute force on a salted SHA-1 hash from a database where the salt is known:
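A minimal sketch of what a client-side worker in that setup might look like (the salt, target digest, and four-character keyspace are placeholders I made up; a real client would fetch its slice of the keyspace from the server's web service and report back over HTTP):

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    static const char *salt = "s4lt";            /* stand-in: the known salt */
    static unsigned char target[SHA_DIGEST_LENGTH]; /* fill in the digest to crack */

    /* Hash salt+candidate and compare against the target digest */
    static int try_candidate(const char *cand) {
        unsigned char digest[SHA_DIGEST_LENGTH];
        char buf[256];
        int len = snprintf(buf, sizeof buf, "%s%s", salt, cand);
        SHA1((const unsigned char *)buf, (size_t)len, digest);
        return memcmp(digest, target, sizeof digest) == 0;
    }

    int main(void) {
        char cand[5] = "aaaa";   /* this worker's slice of the keyspace */
        for (;;) {
            if (try_candidate(cand)) {
                printf("match: %s\n", cand);  /* report back to the server */
                return 0;
            }
            int i = 3;                        /* odometer increment, 'a'..'z' */
            while (i >= 0 && ++cand[i] > 'z')
                cand[i--] = 'a';
            if (i < 0)
                break;                        /* slice exhausted */
        }
        puts("no match in this slice");
        return 1;
    }

Build with gcc client.c -lcrypto. The server side is then mostly bookkeeping: hand out disjoint slices and collect results.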