Supercomputing / IBM / Hardware

IBM's Blue Gene Runs Continuously At 1 Petaflop

An anonymous reader writes "ZDNet is reporting on IBM's claim that the Blue Gene/P will continuously operate at more than 1 petaflop. It is actually capable of 3 quadrillion operations a second, or 3 petaflops. IBM claims that at 1 petaflop, Blue Gene/P is performing more operations than a 1.5-mile-high stack of laptops! 'Like the vast majority of other modern supercomputers, Blue Gene/P is composed of several racks of servers lashed together in clusters for large computing tasks, such as running programs that can graphically simulate worldwide weather patterns. Technologies designed for these computers trickle down into the mainstream while conventional technologies and components are used to cut the costs of building these systems. The chip inside Blue Gene/P consists of four PowerPC 450 cores running at 850MHz each. A 2x2 foot circuit board containing 32 of the Blue Gene/P chips can churn out 435 billion operations a second. Thirty two of these boards can be stuffed into a 6-foot-high rack.'"

  • Re:I'm ignorant. (Score:3, Informative)

    by pytheron ( 443963 ) on Tuesday June 26, 2007 @01:14PM (#19651837) Homepage
    If you have a large dataset or input domain to perform work upon, split it into X chunks and process each chunk on its own CPU. Hence supercomputers are usually most useful for problems that have large datasets or input domains; a minimal sketch of the idea follows below.
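    A minimal sketch of that split-into-chunks pattern, assuming ordinary C++ threads and a placeholder summing task (nothing here is Blue Gene specific; the chunk sizes and the workload are purely illustrative):

        // Data-parallel sketch: split one large array into per-CPU chunks,
        // let each worker process its chunk, then combine the partial results.
        #include <algorithm>
        #include <iostream>
        #include <numeric>
        #include <thread>
        #include <vector>

        int main() {
            const std::size_t n_workers = std::max(1u, std::thread::hardware_concurrency());
            std::vector<double> data(1 << 22, 1.0);          // the "large dataset"
            std::vector<double> partial(n_workers, 0.0);     // one result slot per worker
            std::vector<std::thread> pool;

            const std::size_t chunk = data.size() / n_workers;
            for (std::size_t w = 0; w < n_workers; ++w) {
                std::size_t lo = w * chunk;
                std::size_t hi = (w + 1 == n_workers) ? data.size() : lo + chunk;  // last worker takes the remainder
                pool.emplace_back([&, w, lo, hi] {
                    partial[w] = std::accumulate(data.begin() + lo, data.begin() + hi, 0.0);
                });
            }
            for (auto& t : pool) t.join();

            std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
        }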
  • by Chysn ( 898420 ) on Tuesday June 26, 2007 @01:16PM (#19651865)
    ...the next step (10**18) is the "exaflop."
  • google calculator (Score:1, Informative)

    by Anonymous Coward on Tuesday June 26, 2007 @01:24PM (#19651975)
    I wonder if I will ever be able to read slashdot articles without using the google calculator...

    1.5 mile = 2.414016 kilometers
    2 "foot" = 0.6096 meters
    6 feet = 1.8288 meters

  • How high? (Score:4, Informative)

    by Anonymous Coward on Tuesday June 26, 2007 @01:32PM (#19652135)
    Well, the stack of laptops might be tall, but even the 216 racks would only stack up to about a quarter of a mile (216 racks x 6 ft per rack is roughly 1,300 ft).
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday June 26, 2007 @01:36PM (#19652209) Homepage Journal
    If you include medical imaging, then computed tomography and computational fluid dynamics are heavily dependent on 3D FFTs, which are in turn heavily parallelizable. In extreme cases (raytracing, for example) where there is next to zero communication between nodes, you get linear scaling with the number of nodes for as many nodes as you like. Well, in the case of raytracing, up to the resolution your "camera" works at. On a modern display, you may be talking one million or so distinct originating points at three colours, typically using "bundles" of rays (normally 64 rays in size) to eliminate unwanted effects. With something like 250 million cores, you could actually generate an animated feature film from raw data files at the time of showing.

    How many of these are "real world"? Well, medical and CFD applications are significant, but hardly what you'd call mainstream, and the raytracing may have been used in Titanic on a smaller scale, but IMAX is under no threat at this time.
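    To make the zero-communication point concrete, here is a toy sketch of the kind of block decomposition ray tracing allows: each node renders only its own band of scanlines and never has to talk to the others. The trace_bundle function is just a placeholder for the real per-bundle work, not an actual renderer:

        // Zero-communication work split: node 'rank' of 'nodes' renders only its own
        // band of scanlines, so adding nodes scales almost linearly.
        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct Pixel { float r, g, b; };

        // Placeholder for tracing a 64-ray bundle and shading the result.
        Pixel trace_bundle(int x, int y) { return Pixel{ float(x % 256) / 255.f, float(y % 256) / 255.f, 0.f }; }

        std::vector<Pixel> render_band(int width, int height, int rank, int nodes) {
            // Contiguous block decomposition of the scanlines.
            int rows_per_node = (height + nodes - 1) / nodes;
            int row_begin = rank * rows_per_node;
            int row_end   = std::min(height, row_begin + rows_per_node);

            std::vector<Pixel> band;
            for (int y = row_begin; y < row_end; ++y)
                for (int x = 0; x < width; ++x)
                    band.push_back(trace_bundle(x, y));   // each pixel is independent of every other
            return band;
        }

        int main() {
            // e.g. node 3 of 16 rendering a 1920x1080 frame
            std::vector<Pixel> band = render_band(1920, 1080, 3, 16);
            std::printf("rendered %zu pixels\n", band.size());
        }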

  • Re:I'm ignorant. (Score:3, Informative)

    by jellomizer ( 103300 ) * on Tuesday June 26, 2007 @01:42PM (#19652323)
    Sure, you can sort in O(n^(1/2) * log n) time by using the Shear Sort Algorithm [chula.ac.th] on an n^(1/2) x n^(1/2) mesh of processors (a sequential sketch of the idea is below).
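    For anyone curious how that works: shear sort alternately sorts the rows of a grid (in opposite directions on alternate rows) and then the columns, and after roughly log2(n) + 1 passes the grid is sorted in snake order. On a real mesh every row (or column) in a pass is sorted by its own processors in parallel; the sequential simulation below just shows the algorithm itself:

        // Sequential simulation of shear sort on an r x r grid.
        // log2(n) + 1 passes is more than enough; extra passes leave a sorted grid unchanged.
        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <vector>

        void shear_sort(std::vector<std::vector<int>>& g) {
            int r = (int)g.size();
            int passes = (int)std::ceil(std::log2(double(r * r))) + 1;
            for (int p = 0; p < passes; ++p) {
                // Phase 1: sort rows, alternating direction (snake order).
                for (int i = 0; i < r; ++i) {
                    if (i % 2 == 0) std::sort(g[i].begin(), g[i].end());
                    else            std::sort(g[i].rbegin(), g[i].rend());
                }
                // Phase 2: sort every column top-to-bottom.
                for (int j = 0; j < r; ++j) {
                    std::vector<int> col(r);
                    for (int i = 0; i < r; ++i) col[i] = g[i][j];
                    std::sort(col.begin(), col.end());
                    for (int i = 0; i < r; ++i) g[i][j] = col[i];
                }
            }
        }

        int main() {
            std::vector<std::vector<int>> g = {{9, 2, 7}, {4, 8, 1}, {6, 3, 5}};
            shear_sort(g);
            // prints 1 2 3 / 6 5 4 / 7 8 9: sorted in snake (boustrophedon) order
            for (auto& row : g) { for (int v : row) std::printf("%d ", v); std::printf("\n"); }
        }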
  • by i_like_spam ( 874080 ) on Tuesday June 26, 2007 @01:47PM (#19652399) Journal
    This announcement is part of the International Supercomputing Conference [supercomp.de], which just kicked off today. The new Top500 list [top500.org] will also be announced shortly.

    While the new IBM Blue Gene/P system is impressive, I'm more curious to see what sort of new supercomputer Andreas Bechtolsheim [nytimes.com] of Sun Microsystems has put together.

    Here's an interesting quote about Bechtolsheim from the article:

    'He's a perfectionist,' said Eric Schmidt, Google's chief executive, who worked with Mr. Bechtolsheim beginning in 1983 at Sun. 'He works 18 hours a day and he's very disciplined. Every computer he has built has been the fastest of its generation.'
  • by shaitand ( 626655 ) on Tuesday June 26, 2007 @02:19PM (#19652881) Journal
    Even with that computing power, weather would be impossible to calculate, and it isn't because of a lack of understanding either. In order to calculate weather you don't just need to know how weather works; you need precise data on every variable across the globe, and those measurements would need to be taken at a resolution that is simply insane. If you had a fast enough machine it could even catch up with the current weather from that snapshot, but the snapshot would have to be exact and all measurements would have to be taken simultaneously.

    THAT is what we can't do. Even if we could mount instrumentation in every square meter of the earth AND its atmosphere to get our current status map, and configured the machine to predict the interactions of those currents, we would still be lost. Aside from tracking the output of the sun, the weather system would need to account for ocean currents, tides, bonfires and heating systems, volcanoes, body heat, pig sex, etc.

    That's right, my friend: every time you pull out and shoot a load on her stomach, the weather system would have to take it into account, because the air disturbed might be the first of a chain of complex interactions that leads to a hurricane that devastates Louisiana... again (because there are actually people so ignorant that they are going to rebuild a city in the same bad location).
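    That sensitivity to tiny differences in the starting conditions is exactly what Lorenz ran into with his 1963 convection model. A toy sketch (not a weather model, just the three Lorenz equations integrated twice from starting points that differ by one part in a billion) shows how quickly two "almost identical" atmospheres stop resembling each other:

        // Two Lorenz-63 trajectories whose starting points differ by 1e-9 in x.
        // Within a few dozen model time units they bear no resemblance to each other,
        // which is the basic obstacle to long-range forecasting.
        #include <cmath>
        #include <cstdio>

        struct State { double x, y, z; };

        State step(State s, double dt) {
            const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;
            // simple forward-Euler integration; crude, but fine for the demonstration
            return { s.x + dt * sigma * (s.y - s.x),
                     s.y + dt * (s.x * (rho - s.z) - s.y),
                     s.z + dt * (s.x * s.y - beta * s.z) };
        }

        int main() {
            State a{1.0, 1.0, 1.0};
            State b{1.0 + 1e-9, 1.0, 1.0};   // a perturbation far below any real measurement error
            const double dt = 0.002;
            for (int i = 0; i <= 20000; ++i) {
                if (i % 4000 == 0)
                    std::printf("t = %5.1f   |x_a - x_b| = %g\n", i * dt, std::fabs(a.x - b.x));
                a = step(a, dt);
                b = step(b, dt);
            }
        }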
  • by Anonymous Coward on Tuesday June 26, 2007 @03:01PM (#19653505)
    When the previous generation (BG/L) was released, a rack (1024 nodes, 2048 cores) would cost about US$1.5m. Apparently IBM sells them considerably cheaper now, with BG/P around the corner...
  • by bommai ( 889284 ) on Tuesday June 26, 2007 @04:08PM (#19654439)
    Contrary to what most people think, the singular way of writing floating-point speed is not FLOP but FLOPS, because the S is not a plural marker: FLOPS stands for Floating-point Operations Per Second. So I chuckle every time I read "1 PETAFLOP." Guys, just turn off your singular/plural alarm and say it with me: 1 and only 1 PETAFLOPS.
  • by Anonymous Coward on Tuesday June 26, 2007 @04:27PM (#19654725)
    BG/P will support 2 GB standard for each compute node; a compute node has a four-core processor. An option for 4 GB of memory is also available. On BG/L, whose compute nodes have two-core processors, the initial memory configuration at Livermore was 512 MB per compute node. Since 2007, BG/L has offered 1 GB of memory as the standard configuration.
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday June 26, 2007 @04:28PM (#19654751) Homepage Journal
    Thank you for the compliment. It's equally nice to know that there are active questioners on Slashdot determined to stretch the quality to the limits. In the spirit of providing information, though, I'll add a few links for the perusal and amusement of all. I'm hard on some of the software, but that's not because I could do better. If anything, it's because I have confidence the authors could.

    Let's start with a Slashdotting of NASA...

    • Kerrighed [kerrighed.org] is an up-and-coming clustering system for Linux. I saw it demonstrated at SC|05 - and was less than impressed. It needed a lot of work at that point. However, it looks like it has improved a lot since then, and it would be unreasonable to not mention it.
    • MOSIX [mosix.org] is the second-oldest clustering technology to gain a fan following to rival Star Trek. It's very good, though hard to get if you're not in academia. Arguably for entirely fair reasons.
    • OpenMOSIX [sourceforge.net] was originally a fork from MOSIX but is now essentially its own clustering technology. Development is nowhere near the speed I'd like and it needs far more eyes, but it is well known and highly regarded. Moshe Bar is also one of the coolest developers I've encountered.
    • DAKOTA [sandia.gov] is a program for profiling parallel applications and should be useful in telling you where you are gaining and losing.
    • HPC Toolkit [rice.edu] is another toolkit for profiling HPC applications.
    • is yet another profiler for parallel software. Between this and the others I've listed, you should have more information than sequential programmers ever get to work with.
    • Performance API [utk.edu] is a facility used by most of the profiling software to provide an architecture-independent view of performance counters. I have it on good authority that some (now former)
  • by flaming-opus ( 8186 ) on Tuesday June 26, 2007 @04:35PM (#19654819)
    It appears that Sun's design is less revolutionary. It's just a bunch of off-the-shelf blade servers strung together with InfiniBand. They use the same cabinets, power supplies, etc. as the regular blade server offerings for non-technical computing. It also runs a regular clustered Linux OS, rather than a supercomputer-specific OS as Blue Gene does. The big differentiator of the Sun system is the massive 3000-port InfiniBand switch. I'm sure it's not actually a 3000-port switch, but a bunch of small switches packed together, running over printed circuit boards rather than cables.

    Sun's design is affordable, and probably has a pretty decent max performance and pretty reasonable power/memory per node. However, it's not as exotic as IBM's design. The IBM design has fantastic flops/watt and flops/square-foot performance. However, each node is really wimpy, which forces you to use a LOT of nodes for any problem, which increases the necessary amount of communication. Some problems work really well; others, not so much.

    IBM has limited Blue Gene to a small number of customers, all with fairly large systems. I suspect that's because it's very difficult to port an application to the system and get good performance.
  • by flaming-opus ( 8186 ) on Tuesday June 26, 2007 @04:47PM (#19655013)
    A tricky question, but not all that interesting. A fast server processor is within a factor of 4 of the fastest supercomputer processor in the world. That does not mean that you can do equivalent work with the server processor. Among other things, the processing performance (gigaflops) of a CPU is no longer the interesting part of a supercomputer. (It never really was.) Memory bandwidth, interconnect bandwidth and latency, and I/O performance are the more interesting features of supers. 12-year-old Cray processors still have five times the memory bandwidth of modern PC processors, and twenty times the I/O bandwidth.

    You'll notice that 98% of the supercomputers sold in the last 10 years use server processors. (Blue Gene actually uses an embedded-systems processor, but it's the same idea.) However, in the late '80s, putting 256 processors in a super was cutting edge. In the '90s, a few thousand. Soon you'll see a quarter million cores. So supers are actually getting faster at a higher rate than desktops are, at least by most measures.

  • by Goalie_Ca ( 584234 ) on Tuesday June 26, 2007 @04:48PM (#19655019)
    One of the problems with working with, say, 3D MRI data is that for various reasons the FFT just can't be broken up into chunks of arbitrary sizes. I think at most I've broken a data set up into 24 chunks, but then padding etc. becomes a worry. Also, you have to pretty much avoid all IPC or Amdahl's law kicks in fast and hard. Ironically, some of the easiest algorithms to break up across several CPUs are things like convolution; the irony is that these also take less time to compute on a single CPU than it takes to load and store the file.
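    For a feel of how hard Amdahl's law kicks in, here is the usual formulation (speedup = 1 / (s + (1 - s)/N) for serial fraction s on N CPUs) evaluated for a problem where only 5% of the work resists parallelization; the 5% figure and the core counts are just illustrative:

        // Amdahl's law: even a small serial (or IPC-bound) fraction caps the speedup,
        // no matter how many CPUs you throw at the problem.
        #include <cstdio>

        double amdahl_speedup(double serial_fraction, double cpus) {
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus);
        }

        int main() {
            const double serial = 0.05;                      // 5% of the run can't be parallelized
            for (double n : {2.0, 24.0, 1024.0, 262144.0})   // from a workstation up to a BG-sized core count
                std::printf("%8.0f CPUs -> %6.1fx speedup (hard ceiling: %.0fx)\n",
                            n, amdahl_speedup(serial, n), 1.0 / serial);
        }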
  • by Anonymous Coward on Tuesday June 26, 2007 @06:28PM (#19656281)
    Basically, the easiest way to do it (speaking as someone who has done it) is MPICH (http://www-unix.mcs.anl.gov/mpi/mpich/). Anyone familiar with C or C++ can use it in a relatively simple manner (it is, after all, just another header file). You can set up a rather simple Beowulf-style cluster and run the environment on a Linux network without much trouble.
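    A minimal MPI sketch of that "split the work by rank" pattern, assuming MPICH or any other MPI implementation (build with mpicc/mpicxx, launch with mpiexec); the per-element arithmetic is just a placeholder workload:

        // Minimal MPI work-splitting example: every rank sums its own slice of the data
        // and rank 0 collects the total. Run with e.g. "mpiexec -n 4 ./a.out".
        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            // Each rank takes a contiguous slice of 0 .. N-1.
            const long N = 1000000;
            long begin = rank * (N / size);
            long end   = (rank == size - 1) ? N : begin + N / size;

            double local = 0.0;
            for (long i = begin; i < end; ++i) local += i * 0.001;   // placeholder work

            double total = 0.0;
            MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) std::printf("total = %f\n", total);

            MPI_Finalize();
        }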

    There is also OpenMP (more of an extension to C/C++ than just a header; you need pragmas and such to use it). I find it easy to fall into race conditions with it, because you really need to think about what you are doing.
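    On the race-condition point, the classic trap is several threads updating one shared variable inside a parallel loop; OpenMP's reduction clause is the standard fix. A small sketch (compile with something like -fopenmp; the loop body is arbitrary):

        // OpenMP: the naive shared update races; the reduction clause makes it correct.
        #include <cstdio>
        #include <omp.h>

        int main() {
            const int N = 1000000;
            double sum = 0.0;

            // WRONG (data race): every thread would write 'sum' at the same time.
            // #pragma omp parallel for
            // for (int i = 0; i < N; ++i) sum += i * 0.001;

            // Correct: each thread keeps a private partial sum, combined at the end.
            #pragma omp parallel for reduction(+ : sum)
            for (int i = 0; i < N; ++i) sum += i * 0.001;

            std::printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        }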

    Technically, pthreads ought to provide enough functionality to get up and running if your environment acts as a single machine, or even the System.Threading namespace if you have the ability to run managed code. However, you then don't have control over whether your thread gets its own CPU (unless it is guaranteed by the OS). In most cases that isn't actually necessary; your algorithm can be written in such a way that it doesn't matter whether it is running on an 8-CPU system or a 2^32-CPU system (except that the time to completion will vary); the troubles come in with optimizations.
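    For completeness, a bare-pthreads sketch of the same chunking idea; the thread count is fixed in the code and the OS scheduler decides whether each thread really gets its own CPU, exactly as described above. The summing loop is again only a placeholder:

        // Plain pthreads version: the OS decides whether each thread gets its own CPU.
        #include <cstdio>
        #include <pthread.h>

        const int kThreads = 8;
        const long kN = 1000000;
        double partial[kThreads];

        void* worker(void* arg) {
            long id = (long)arg;                      // which chunk this thread owns
            long begin = id * (kN / kThreads);
            long end   = (id == kThreads - 1) ? kN : begin + kN / kThreads;
            double s = 0.0;
            for (long i = begin; i < end; ++i) s += i * 0.001;
            partial[id] = s;                          // no sharing, so no locking needed
            return nullptr;
        }

        int main() {
            pthread_t tid[kThreads];
            for (long i = 0; i < kThreads; ++i) pthread_create(&tid[i], nullptr, worker, (void*)i);
            double total = 0.0;
            for (int i = 0; i < kThreads; ++i) { pthread_join(tid[i], nullptr); total += partial[i]; }
            std::printf("total = %f\n", total);
        }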

    Recently I have been experimenting with simple web services on a server to farm out pieces of the solution in a distributed fashion, attempting to brute-force a salted SHA-1 hash from a database where you know the salt:
    on server:

    // work item handed from the server to clients
    class infoBlock { public string hash; public string salt; public string prefix; public bool finished; }

    bool success = false;    // server state: set once any client has found the password

    infoBlock getWorkItem() {
        // once the hash is cracked, tell clients to stop asking for work
        if (success) return new infoBlock { finished = true };
        // otherwise hand out the stored hash and salt plus the next candidate prefix
        return new infoBlock { hash = hash, salt = salt, prefix = nextPrefix() };
    }

    void finish(string password) {
        success = true;
        // store the plaintext password alongside its hash and salt
    }
    on clients:

    // keep requesting work until the server says the search is over
    infoBlock wi = getWorkItem();
    while (!wi.finished) {
        // try every password with this prefix against the salted hash
        resultBlock results = processWorkItem(wi);
        if (results.success) {
            // report the recovered plaintext back to the server
            finish(results.plaintext);
        }
        wi = getWorkItem();
    }
