Transmeta Technology

Efficient Supercomputing with Green Destiny

gManZboy writes: "Is it an oxymoron to have an efficient supercomputer? Wu-Chun Feng (Los Alamos National Laboratory) doesn't believe so - Green Destiny and its children are Transmeta-based supercomputers that Feng thinks are fast enough, at a fraction of the heat/energy/cost, according to ACM Queue." 240 processors running at 5.2 kW (or less!) is nothing to sneeze at. The article offers up this question: might there be other metrics that are important to supercomputing, rather than relying solely on processing speed?
  • Holy crap! (Score:5, Funny)

    by Anonymous Coward on Wednesday November 19, 2003 @11:43PM (#7517181)
    I knew that sword was beefy, but that's insane!
  • Most definitely nothing to sneeze at. I have to ask, though: how long ago was it insane for a supercomputer to put out as much heat as the average enthusiast PC puts out today?
    • Re:Indeed... (Score:2, Informative)

      by ergo98 ( 9391 )
      While obviously there is a bit of hyperbole in your statement (I highly doubt there are many systems defined as "supercomputers" that consume anything less than hundreds of kW... certainly not the 400-500W that a worst-case enthusiast consumes), I really wonder if home computing has gotten that much worse. Around 11 years ago I remember getting a 350W power supply for my 386-33 (with Diamond Speedstar 24x!), and this was pretty much par for the course - of course the CPU itself consumed much less power (I th
      • I most definitely concede that supercomputers have traditionally consumed far more power than modern enthusiast machines. I'm more curious about the heat output (I know the figures cited in the original article referred to power consumption, but heat was mentioned). My post was a bit vague; a clarification might help. I wonder how long ago that type of heat output would have been considered extreme in a supercomputer, or perhaps at least in one node making up a supercomputer.
        • Worst case enthusiast? Hell, my bedroom draws about 3 kW... let's not even talk about the living room!
        • Easy answer: never. In 1976, Crays were water cooled [tmeg.com], and later they were cooled with Freon [thocp.net]. So no, your little desktop has nothing on supercomputers when it comes to cooling requirements.
        • Re:Indeed... (Score:2, Interesting)

          by sam0ht ( 46606 )
          I'm more curious about the heat output (I know the figures cited in the original article referred to power consumption, but heat was mentioned)

          The heat output is precisely the same as the power input. All electrical power used by the PC is eventually converted into heat in the room, so a 450W PC consumes 450W of electricity and provides 450W of heat.

          Incidentally, if you have a 500W heater in your room, you could replace it with a 500W PC for no extra electrical cost, and the same effect in terms of keep

          • To my knowledge, that's incorrect. Let's say you want to build a router from PC parts: a VIA Eden with fanless cooling, no CD/DVD, no HD, integrated video, and a system that runs from a floppy. It would require under 100W. If you installed a 400W power supply, the system would not eat 400W of power. It would eat only as much as the system requires (under 100W).
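
            A quick sketch tying the two points above together: the PSU rating is only a ceiling, the actual wall draw is set by the load plus the PSU's conversion loss, and essentially all of it ends up as heat in the room. The component wattages and the efficiency figure below are hypothetical placeholders, not measurements.

            ```python
            # Rough power/heat estimate for a small fanless router build.
            # All wattages here are illustrative guesses, not measured values.
            component_draw_w = {
                "VIA Eden board + CPU": 15,
                "RAM": 5,
                "floppy + misc": 5,
            }

            psu_rating_w = 400       # nameplate ceiling, NOT the actual draw
            psu_efficiency = 0.70    # assumed conversion efficiency at light load

            dc_load_w = sum(component_draw_w.values())   # what the parts ask for
            wall_draw_w = dc_load_w / psu_efficiency     # what the wall actually supplies
            heat_output_w = wall_draw_w                  # all of it ends up as heat

            print(f"DC load: {dc_load_w} W")
            print(f"Wall draw: {wall_draw_w:.0f} W (well under the {psu_rating_w} W rating)")
            print(f"Heat into the room: {heat_output_w:.0f} W")
            ```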
    • how long ago was it insane for a supercomputer to put out as much heat as the average enthusiast PC puts out today?

      Oh, wait. I thought you said as much heat as the average PC enthusiast. Never mind.

  • Perhaps.... (Score:3, Insightful)

    by whiteranger99x ( 235024 ) on Wednesday November 19, 2003 @11:44PM (#7517198) Journal
    How about the footprint and weight they take up as a metric to consider? ;)
  • by mrnick ( 108356 ) on Wednesday November 19, 2003 @11:46PM (#7517207) Homepage
    The MHz war has been going on for soooo long that everyone just accepted that more MHz meant a faster machine. Well, 64-bit computers are placing chip manufacturers in a position where they have to market on a platform that declares that MHz doesn't really matter.

    I think the question is a bit naive though, as everyone knows a hundred software tools to rate the performance of CPUs rather than just relying on MHz.

    Nick Powers
    • I think it is about time that we shift from thinking "is this one faster than that one?" to "each of these is fast enough for what I do - what other features matter?"

    • by Alpha State ( 89105 ) on Thursday November 20, 2003 @12:08AM (#7517328) Homepage

      Just look at cars - time was, the only thing many people would look at was cubic inches or horsepower. Now most people who buy a car are more concerned with other features - passenger comfort, style, efficiency. I would guess this is a shift from car-oriented people buying cars to everyone buying cars as they became more of a necessity.

      Computer manufacturers are only just starting to see this, making smaller, quieter, cooler-running machines. Hopefully they'll continue to look at what their customers actually need rather than simply putting out chips with higher clock speeds.

      • Now most people who buy a car are more concerned with other features - passenger comfort, style, efficiency.

        Whatever happened to safety?


        Computer manufacturers are only just starting to see this, making smaller, quieter, cooler-running machines. Hopefully they'll continue to look at what their customers actually need rather than simply putting out chips with higher clock speeds.

        You are talking about computer manufacturers as if they are all in the same business. It's the chipset manufacturers that
      • Now most people who buy a car are more concerned with other features - passenger comfort, style, efficiency.

        I don't know anything about you, but I'm now absolutely certain you're not a resident of the United States.

      • Exactly.

        Looking for cars, of course I can't help but look first at "The One Number", how many horsepower (BTW, OT, does the rest of the world measure automobile performance in Watts?)

        These days, though, I'm very impressed with cars that have a high ratio of horsepower to gas mileage.

        In the metric system, I guess that would reduce to kilograms per cubic second.

        Maybe there's a similar ranking for peak torque vs gas mileage; it would be interesting to see a ranking of cars on this basis...

  • Imagine a (kinder, gentler) Beowulf cluster of these!

    Speaking of which, why hasn't anyone made an OpenMosix cluster-in-a-box yet?
  • WTF? (Score:3, Insightful)

    by Anonymous Coward on Wednesday November 19, 2003 @11:50PM (#7517230)
    Why bother? If you have to sacrifice computational power for energy efficiency, then what is the point of having a supercomputer? Isn't compute power the whole purpose of having a supercomputer?
    • Re:WTF? (Score:1, Informative)

      by Anonymous Coward
      The premise of the article was that compute power is NOT the only consideration when operating a supercomputer.
    • Re:WTF? (Score:5, Interesting)

      by Anonymous Coward on Thursday November 20, 2003 @12:39AM (#7517479)
      No, supercomputers that can do a lot of image processing cannot waste power simply because it might be available.

      Modest supercomputers are used in the military on airframes. Power consumption is important for at least two reasons. First is the wattage and power draw. Second, and more subtle, is that the cooling requirements while flying at high altitude become more important than simple fan noise. Pentiums burn up no matter what you do. PowerPCs@10Watts with conduction cooling will survive.

      • ...cooling requirements while flying at high altitude become more important than simple fan noise

        I would have thought that abundant cool air supply wouldn't be a problem for high-flying aircraft. Duct in outside air and use a simple heat exchanger if worried about airborne contaminants.

        What's the outside air temp at 10,000 feet? Manage it with hot exhaust gas if a steady temperature is required. Where's the worry, freezing the chips?
    • You are absolutely right that people do not need to worry about power consumption AS LONG AS THE DAMNED THING DOES NOT MELT! ;-) (and it looks like they are going to be melting pretty soon now).

      Seriously, the power processors dissipate determines how close together they can be packed, and that average distance DOES set an ultimate limit on the latency of communications between the processors (speed of light, you know? ;-) ), and, of course, latency limits efficiency, or how much a given task can be para
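
      A back-of-the-envelope sketch of the point above: even ignoring switches and protocol overhead, the physical distance between nodes puts a hard floor under communication latency. The distances below are illustrative, not taken from any real machine.

      ```python
      # Lower bound on one-way latency from the speed of light alone.
      # Signals in copper or fiber travel at roughly 2/3 c, so real
      # floors are even higher than these.
      C = 299_792_458  # speed of light in vacuum, m/s

      for distance_m in (0.3, 3, 30):  # within a board, within a rack, across a room
          latency_ns = distance_m / C * 1e9
          print(f"{distance_m:>5} m  ->  at least {latency_ns:6.1f} ns one way")
      ```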
  • For this specific application, scientific supercomputing with a blade architecture, would native VLIW offer any performance benefits?

    If it would only be a few percent, it wouldn't be worth it. Transmeta has picked the game they want to play, and it would be a big deal at this point to engineer a special version of their chips that makes it possible to run native VLIW code.

    I'm guessing that typical scientific processing involves a lot of loops that run many iterations, which is the ideal situation for the
  • by toddestan ( 632714 ) on Wednesday November 19, 2003 @11:53PM (#7517256)
    I was talking to a friend the other day about a bunch of lab computers that my school is getting rid of - a pile of old Pentium MMXs. He suggested turning them into a cluster. But after thinking about it, I realized that the group of about 10 old computers we had would consume more power - and would likely be considerably slower - than a single one of the 2.4GHz Dells that replaced them. "What's the point?" I said.

    Applying that here, the little VIA chips run at roughly the speed of a Celeron 500 or so; I'd say something like a 3GHz AMD Athlon would be just about as fast as about 6 of the VIA chips. So you are still saving some power, but not as much as it would seem at first, as you need many low-power chips to equal the speed of one faster chip. Not to mention the power consumed by having more motherboards, network cards, switches, and other associated hardware.

    Something to really look at is the cluster of G5's. The G5 chips use a lot less power than their x86 counterparts. I bet that cluster of G5's is right up there with this VIA supercomputer in terms of processing power per watt. And it's way more cool to boot.
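
    A rough sketch of the trade-off described above, with made-up performance and power figures (none of these numbers come from the article; they are placeholders): many slow, frugal chips only win on performance per watt if the per-node overhead (board, NIC, switch port) doesn't eat the savings.

    ```python
    # Illustrative comparison: N slow low-power nodes vs. one fast node.
    # All numbers are hypothetical placeholders, not benchmarks.
    def perf_per_watt(perf_units, cpu_w, overhead_w, nodes=1):
        """Aggregate performance divided by aggregate power (CPU + per-node overhead)."""
        return (perf_units * nodes) / ((cpu_w + overhead_w) * nodes)

    fast_single = perf_per_watt(perf_units=6.0, cpu_w=70, overhead_w=30)             # one hot chip
    slow_cluster = perf_per_watt(perf_units=1.0, cpu_w=10, overhead_w=30, nodes=6)   # six cool chips

    print(f"one fast node : {fast_single:.3f} perf/W")
    print(f"six slow nodes: {slow_cluster:.3f} perf/W")
    # With 30 W of board/network overhead per node, the six slow chips lose;
    # shrink the per-node overhead (blades, shared switches) and the picture flips.
    ```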
    • Well, this article was essentially trying to prove that the Transmetas do have high processing power for their heat output. It would seem you'd be right at first; that's what makes this idea interesting.
    • by Svartalf ( 2997 ) on Thursday November 20, 2003 @12:29AM (#7517432) Homepage
      You're comparing apples to oranges, not to mention that your info's a little off...

      1) A Nehemiah-core C3 runs really close to the performance of a comparably clocked Celeron, with the same general power consumption as a Samuel2 core (for those that don't know, part of how VIA's chip originally got its low power is that the FPU was underclocked by a factor of 1/2). It's a nice chip overall, but it's not really intended (nor are they USING it that way) for scientific or gaming applications, even though you can use it for that. The C3's winning usage is in something like a media PC, workgroup servers, and embedded systems where you need low power consumption, relatively low cost, and relatively high performance compared to other x86 embedded solutions.

      2) The Crusoe and similar chips are very fast executing VLIW CPUs (very much like the Itanium...) that have code morphing that translates x86-32 instructions into comparable sets of instructions for the VLIW CPU- in fact it's very good at doing this sort of thing. The reason it's less desirable with a desktop or gaming application is that you're exceeding the VLIW code cache regularly, meaning you have to keep recompiling the x86 instructions into the native VLIW ones. For a scientific application, the same task gets executed time and time again and usually ends up with most, if not all, of the code in the pre-morphed code cache. At that point, you're now in the high-performance domain with very little power consumption. The Crusoe in this application would consume less power than the G5 and run just as fast. (Check the article that you're commenting on...)

      Do some thinking outside of the box here, what's good or great on a desktop machine isn't always the optimal choice for supercomputing clusters or HA clusters. Depends on a bunch of factors, including what you're going to be running on the systems in question and what kind of environmental conditions you're going to be facing.
      • When you say "A Nehemiah core C3 runs really close to the same performance of a comparably clocked Celeron" you gloss over the fact that having "the FPU was underclocked by a factor of 1/2" ends up giving you far less floating-point performance than an equivalently-clocked Celeron; a difficult feat.

        Considering that FPU performance is particularly useful in scientific apps, this makes VIA chips nearly worthless there. Epia boards are cute & fit in toasters, but not really practical or cost effective.
    • But if you do the math (clipped from another post of mine):
      If you do the math with X (10,280 instead of 13,880 performance, 1,000 sq ft instead of 21,000 sq ft, and 800 kW instead of 3,000 kW) you get a 337-fold increase in performance per square foot, rather than 65, and an 832-fold increase in performance per Watt, rather than 300-fold, vs the Cray.

      And I don't know what the numbers for the Transmeta solution are.
    • G5 = HOT (Score:3, Interesting)

      by TubeSteak ( 669689 )
      Article [bbc.co.uk]

      Running 1,100 computers in a 3,000-square-foot (280-sq-metres) area sends the air temperature well over 100 degrees Fahrenheit (38 Celsius).

      The heat is so intense that ordinary air conditioning units would have resulted in 60-mph (95 km/h) winds. Specialised heat exchange cooling units were built that pipe chilled water into the facility.

      "There are two chillers for this project," explained Kevin Shinpaugh, Director of Cluster Computing.

      "They're rated 125 tonnes each in cooling capacity, and they

    • The IBM PowerPC 970 (aka G5) does not use "a lot less power" than similar x86 chips. They're well within the same order of magnitude, and in fact the G5 and the AMD Athlon64/Opteron consume basically the same amount of power at the same clock speed. The Pentium4 consumes a bit more, but not by much. IBM doesn't bother documenting the power consumption of their processors (actually they don't bother publicly providing much useful documentation at all), but they listed the "typical" power of the 1.8GHz G5 a
  • While the points that the author makes are true about the "frugal consumer", those aspects are not applicable to supercomputing.

    Overall performance is much more important than efficiency. While efficiency is commendable at all computing levels, if efficiency is a very important aspect, then a supercomputer is probably not for you.
  • Do the math (Score:5, Interesting)

    by InodoroPereyra ( 514794 ) on Wednesday November 19, 2003 @11:55PM (#7517262)
    This is very, very cool. For one thing, the bottleneck in supercomputers is in most cases the network. In this regard, dropping some per-node performance might not affect the overall performance for applications that need intensive interprocess communication.

    The other point is: how expensive is it to support a cluster? Not only the energy consumption, but also the infrastructure. It is pretty darn difficult to keep a thousand processors cold. You may need a special building, a special power supply for it, etc.

    A final point: as far as I know, the rule of thumb is that the floating-point performance of these energy-efficient processors is of the same order of magnitude as a regular processor's, maybe a factor of 2 difference.

    You do the math ... :-)

    • Re:Do the math (Score:3, Interesting)

      by Jeff DeMaagd ( 2015 )
      My question is whether this is more efficient per TFLOP than IBM's PPC unit, which is IIRC smaller than a rack and houses 1024 PPC chips.

      I really can't tell now that the site is slashdotted. The CPU in this case can't be that much of a burden if they run around five watts. I am curious if 80% of the heat generated here is simply networking.
    • Can't swear to it, but I have heard here on Slashdot from others that Green Destiny sits in one rack and requires no special climate control. Since these chips morph x86 code and retain that translation, they are very efficient when they run the same loops over and over, as in scientific computing.

    • For one thing, a bottleneck in supercomputers is in most cases the network. In this regard, dropping some per/node performance might not affect the overall performance for applications that need intensive interprocess communication.


      OTOH, with faster nodes you need fewer nodes and thus also less network traffic.

      IIRC, Green Destiny uses plain Ethernet for networking, so it won't be able to compete with higher-end cluster interconnects anyway (Ethernet latency kills performance for many applications).
    • It is pretty darn difficult to keep a thousand processors cold

      You said it. At the (undisclosed southwestern desert location) high-performance computing center on campus, they have problems during the summer. On some of the hottest days, they have to stand outside with a garden hose and spray down the cooling coils on the main AC unit, or else the whole system goes down. The system can handle the load; it just can't get the heat out when it's 100F outside.
  • If your supercomputer cluster nodes are cheaper/more energy efficient, then given a fixed budget, your supercomputer can be bigger!
  • This makes me wonder if such a configuration might find its way into ordinary extreme performance desktop/laptop computers.

    Especially with the new wave of Media Center based PCs... small machines that are very powerful... is THIS the future of servers? Perhaps in a few years my web pages will all be served up from something like a handheld PC, with several processors and always-on WiFi? The possibilities are endless, but I see this DEFINITELY making it into laptops of some creed... those ultra-high-p
  • by WiPEOUT ( 20036 ) on Thursday November 20, 2003 @12:02AM (#7517294)
    Why are supercomputers primarily benchmarked by their speed? The answer comes when you consider that almost all labour-saving devices are measured in the work they perform in a given period of time.

    Time is the only truly finite resource from a human perspective. As technology has progressed, distances have been conquered, vast energies harnessed, but old Father Time is still inescapable.

    As a result, we place great value on just how much time is taken to accomplish anything.
    • Time is the only truly finite resource from a human perspective. As technology has progressed, distances have been conquered, vast energies harnessed, but old Father Time is still inescapable.

      With advances in quantum physics, who knows how long this will be the case?

    • by kimbly ( 26131 )
      Money is usually finite, too. Especially in research. Power costs money. Cooling also costs money.
      • But time is money, Mr. Redundant.
        moving on...
      • Commercial, Industrial & Institutional entities usually get a nice big fat discount on power and water. That said, your comments dovetail nicely with this article [salon.com] I just read.

        At Penn State University, electrical consumption in October was 33 million kilowatt hours, up from 27 million in October 1996. The school's electric bill is about $1 million a month. Paul Ruskin, with the university's physical-plant office, said power use by the 13,000 student residents contributed to the increase. Some officials

    • Time is the only truly finite resource from a human perspective. As technology has progressed, distances have been conquered, vast energies harnessed, but old Father Time is still inescapable.

      It seems that at least one person [johntitor.com] claims to have conquered time.

  • Nano-ITX (Score:3, Interesting)

    by PureFiction ( 10256 ) on Thursday November 20, 2003 @12:05AM (#7517310)
    with the Centaur C5P processor core. It draws about 8W for the chip @ 1GHz. Let's assume 12W total for a network-booted board.

    [ see image here: peertech.org/hardware/viarng/image/nano-itx-c5p.jpg [peertech.org] ]

    With 5,200 Watts for Green Destiny, you could run 433 of these boards for the same power consumption.

    The on-chip AES is clocked at 12.5Gbps, entropy at 10Mbps (whitened). Thus you would have:

    ~433GHz of C5 processor power
    ~5.4Tbit/s of AES throughput
    ~4.3Gbit/s of true random number generation.

    Yeah, these are really rough estimates, but that is a lot of bang for your kilowatt buck no matter how you slice it.

    With a cutting edge P4 approaching 100W the efficiency of these less powerful but fully capable systems will become increasingly attractive.

    I would not be surprised to find bleeding edge processors relegated to gamers and workstations as most computing tasks start migrating towards small, silent, low power systems that simply *work* without eating up desk space, filling a room with fan noise and driving the electricity bill higher with a continuous draw of hundreds of watts.
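
    Redoing the arithmetic above as a quick sketch, under the stated assumptions (12 W per board, 1 GHz C5P, 12.5 Gbit/s of on-chip AES and 10 Mbit/s of whitened entropy per chip); nothing here is measured, it just multiplies the estimates out.

    ```python
    # How many 12 W Nano-ITX boards fit in Green Destiny's ~5,200 W budget,
    # and what the aggregate throughput would be. Per-board figures are the
    # post's own estimates, not benchmarks.
    budget_w = 5200
    board_w = 12
    boards = budget_w // board_w   # 433 boards

    cpu_ghz = 1.0
    aes_gbps = 12.5
    rng_mbps = 10

    print(f"boards        : {boards}")
    print(f"aggregate CPU : {boards * cpu_ghz:.0f} GHz")
    print(f"aggregate AES : {boards * aes_gbps / 1000:.2f} Tbit/s")
    print(f"aggregate RNG : {boards * rng_mbps / 1000:.2f} Gbit/s")
    # -> 433 boards, ~433 GHz, ~5.4 Tbit/s of AES, ~4.3 Gbit/s of entropy
    ```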
  • might there be other metrics that are important to supercomputing, rather than relying solely on processing speed?

    Yes, people often consider flops/watt to operate, and flops/dollar to buy.

    Speed alone means nothing. All these atoms in my apartment can do billions of operations per second, but they can't even play mp3s.
  • Remember when computers were big enough to fill warehouses? And 10 years ago, people theorized that computers would be small enough to fit in your watch or hand? That was just theory and considered fiction.

    Now we have Palm Pilots and watches that can store data (see the USB wrist watch).

    So, really, a supercomputer that doesn't use that much energy isn't impossible.
    Anything's possible; one just has to break through the barriers technology has set. If no one did that, we would still be sitting around
  • Times change (Score:3, Insightful)

    by m8te ( 725552 ) on Thursday November 20, 2003 @12:16AM (#7517366)
    Lotsa years ago I used to maintain a CDC 7600. Not only did it need full refrigeration, but its original design spec was for an MTBF of 15 hours! The designers reckoned that it was so fast that the biggest job imaginable could be run in that time. Of course it did better than that in the end, but it was a bugger of a job to fix, and the backplane was 6 inches deep in twisted-pair wires. Just imagine making wiring changes.
  • by Tom7 ( 102298 ) on Thursday November 20, 2003 @12:19AM (#7517385) Homepage Journal
    The article offers up this question: might there be other metrics that are important to supercomputing, rather than relying solely on processing speed?

    Um, yes?

  • by 2nd Post! ( 213333 ) <gundbear@pacbe l l .net> on Thursday November 20, 2003 @12:24AM (#7517412) Homepage
    If you do the math with X (10,280 instead of 13,880 performance, 1000sq instead of 21,000sw, and 800kw instead of 3,000kw) you get a 337 fold increase in performance per square foot, rather than 65, and an 832 fold increase in performance per Watt, rather than 300 fold, vs the Cray.

    Of course I dunno the numbers for the Transmeta solution yet!
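
    A sketch of the arithmetic being described, reproducing (approximately) the performance-per-watt figure above. The baseline numbers are the article's ASCI Q figures and its stated 300-fold improvement over the 1991 Cray C90; the "X" numbers are the parent's, and everything is relative, so no new data is assumed.

    ```python
    # Reproduce the roughly 832-fold perf/W improvement quoted above.
    # Baseline: the article's ASCI Q numbers (13,880 perf units, 3,000 kW),
    # said to be ~300-fold better per watt than the 1991 Cray C90.
    asci_q_perf, asci_q_kw, asci_q_gain_vs_cray = 13_880, 3_000, 300
    x_perf, x_kw = 10_280, 800   # the parent's numbers for machine "X"

    perf_per_watt_ratio = (x_perf / x_kw) / (asci_q_perf / asci_q_kw)
    x_gain_vs_cray = perf_per_watt_ratio * asci_q_gain_vs_cray

    print(f"X is {perf_per_watt_ratio:.2f}x ASCI Q per watt")
    print(f"-> roughly {x_gain_vs_cray:.0f}-fold over the Cray C90 (vs. ASCI Q's 300-fold)")
    ```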
  • . . . might there be other metrics that are important to supercomputing, rather than relying solely on processing speed?

    No. I have a rock that can sit and do nothing, consuming considerably less than even 5.2kW. You can talk efficiency and bang-for-buck all you like, but if you don't benchmark faster than (roughly) 100 common desktop machines, you don't get to call yourself a supercomputer.


    • That only shows how time-dependent the definition of a supercomputer is. Today's 100 common desktop machines will be uncommon and obsolete 3 years from now.

      I think energy efficiency (MOPS/Watt) is a very relevant metric. The reason my PDA cannot do wideband software radio, or anything else that needs lots of GOPS, is energy efficiency. If the same PDA could carry 100 XScale processors instead of 1 with the same battery lifetime, I'm sure we'd have applications for it in no time.
      • That only shows how time-dependent the definition of a supercomputer is. Today's 100 common desktop machines will be uncommon and obsolete 3 years from now.

        Right. I recall Apple making a big stink a few years back about being a desktop supercomputer when they hit 1 GFLOPS, or whatever the benchmark was that initially established the first supercomputers. What makes a computer "super" while Moore's law is still being met will definitely change over time.

        I think energy efficiency (MOPS/Watt) is a very relevant m

  • The article offers up this question: might there be other metrics that are important to supercomputing, rather than relying solely on processing speed?

    Yeah. Does the supercomputer do what the customer needed it to do? Nobody in the world lays down money for a "supercomputer" these days so that they can be the fastest kid on the block ... or at least they shouldn't. Ostensibly, there are massive amounts of computing work that they need done, and they need something that can do it in a reasonable amo

  • by Anonymous Coward
    Anyone interested in this sort of thing should check out the Beowulf mailing list - go to www.beowulf.org, and read through the (recent) archives. There's been some talk lately on different metrics.
  • Memory Speed (Score:3, Interesting)

    by rf0 ( 159958 ) <rghf@fsck.me.uk> on Thursday November 20, 2003 @12:32AM (#7517444) Homepage
    It's not CPU speed that is important in supercomputers/clusters; it is the speed at which you can get data from one node to another, especially memory access. If you have a 512-node system and node 3 needs a copy of node 40's memory, it has to copy it over.

    Even if it's just 512MB over Gigabit Ethernet, and assuming 100% performance, it would still take around 5 seconds, which is many orders of magnitude slower than local memory access. Just look at SGI machines, which use NUMA, and their Cray-Linux at 3.2 TeraBytes (bytes, not bits). Now that's how you want to shift data.

    Rus
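
    A quick sanity check on the transfer-time claim above, assuming 512 MB of remote memory and an idealized 1 Gbit/s link with no protocol overhead (both assumptions, chosen to match the post):

    ```python
    # Best-case time to copy one node's memory image over gigabit Ethernet.
    data_bytes = 512 * 1024**2          # 512 MB of remote memory (assumed)
    link_bits_per_s = 1_000_000_000     # idealized 1 Gbit/s, no overhead

    seconds = data_bytes * 8 / link_bits_per_s
    print(f"{seconds:.1f} s just to move the data")   # ~4.3 s before any overhead
    # Local DRAM access is on the order of 100 ns, so bulk copies over plain
    # Ethernet really are many orders of magnitude slower than local memory.
    ```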
  • Is it an oxymoron to have an efficient supercomputer?

    I thought heat was the real poison of "ultimate" computing...
    So it seems likely computers will move towards those limits and become 'greener' with respect to how much energy they use...

    To do otherwise would be counterproductive in terms of both efficiency and ecology.
    That's needed given how much energy the US is using [cornell.edu]

    Not a bad thing - but I wonder when green will move towards a technology that means less polluting in terms of hardware that gets
  • by shaneb11716 ( 451351 ) on Thursday November 20, 2003 @12:40AM (#7517483)
    Especially when simulating nuclear weapons.

    -Shane
  • by TheSHAD0W ( 258774 ) on Thursday November 20, 2003 @12:52AM (#7517524) Homepage
    It's still valuable to have one or a few really friggin' fast processors versus a whole lot of smaller processors if you're running tasks that can't easily be subdivided. This is why people are still buying single processor PCs rather than multiprocessor boxen. If you're buying the setup for a specific purpose and multiple slower CPUs will do the job for you, then that's great; but you'll get more flexibility with speedy processors.
  • If one can pack the processors more densely, it would cut down on the wiring etc, or allow much shorter paths between nodes (better still, one might be able to stuff many processors on the same board or something), thereby increasing bandwidths (when you try to increase bus speed, path length and related current leakages etc do pose problems). This in turn means computations that require more 'random' communication between nodes can speed up. I suppose that's definitely worth pursuing for the more fine-grai
  • Wu-Chun Feng (Los Alamos National Laboratory) doesn't believe so - Green Destiny and its children are Transmeta-based supercomputers that Wu thinks are fast enough, at a fraction of the heat/energy/cost, according to ACM Queue.

    Yes, yes, those numbers are impressive but can it be used to destroy other weapons and conquer the Chinese underworld in the hands of a rebellious Manchurian girl? (Reference: Crouching Tiger Hidden Dragon)

  • by hoof ( 448202 ) on Thursday November 20, 2003 @01:00AM (#7517563)
    That is the only advantage of using a Transmeta CPU. Wouldn't it be more efficient to just use a regular VLIW CPU without all the x86 code morphing stuff?
  • No.

    Faster, ever faster!
  • Full Formatted Text (Score:5, Informative)

    by TubeSteak ( 669689 ) on Thursday November 20, 2003 @01:38AM (#7517717) Journal
    Sorry, no Tables and no Pictures

    Making a case for Efficient Supercomputing
    From Power [acmqueue.com]
    Vol. 1, No. 7 - October 2003
    by Wu-Chun Feng, Los Alamos National Laboratory

    It's time for the computing community to use alternative metrics for evaluating performance.

    A supercomputer evokes images of big iron and speed; it is the Formula 1 racecar of computing. As we venture forth into the new millennium, however, I argue that efficiency, reliability, and availability will become the dominant issues by the end of this decade, not only for supercomputing, but also for computing in general.

    Over the past few decades, the supercomputing industry has focused on and continues to focus on performance in terms of speed and horsepower, as evidenced by the annual Gordon Bell Awards for performance at Supercomputing (SC). Such a view is akin to deciding to purchase an automobile based primarily on its top speed and horsepower. Although this narrow view is useful in the context of achieving performance at any cost, it is not necessarily the view that one should use to purchase a vehicle. The frugal consumer might consider fuel efficiency, reliability, and acquisition cost. Translation: Buy a Honda Civic, not a Formula 1 racecar. The outdoor adventurer would likely consider off-road prowess (or off-road efficiency). Translation: Buy a Ford Explorer sport-utility vehicle, not a Formula 1 racecar. Correspondingly, I believe that the supercomputing (or more generally, computing) community ought to have alternative metrics to evaluate supercomputers - specifically metrics that relate to efficiency, reliability, and availability, such as the total cost of ownership (TCO), performance/power ratio, performance/space ratio, failure rate, and uptime.

    Motivation

    In 1991, a Cray C90 vector supercomputer occupied about 600 square feet (sf) and required 500 kilowatts (kW) of power. The ASCI Q supercomputer at Los Alamos National Laboratory will ultimately occupy more than 21,000 sf and require 3,000 kW. Although the performance between these two systems has increased by nearly a factor of 2,000, the performance per watt has increased only 300-fold, and the performance per square foot has increased by a paltry factor of 65. This latter number implies that supercomputers are making less efficient use of the space that they occupy, which often results in the design and construction of new machine rooms, as shown in figure 1, and in some cases, requires the construction of entirely new buildings. The primary reason for this less efficient use of space is the exponentially increasing power requirements of compute nodes, a phenomenon I refer to as Moore's law for power consumption (see figure 2) - that is, the power consumption of compute nodes doubles every 18 months. This is a corollary to Moore's law, which states that the number of transistors per square inch on a processor doubles every 18 months [1]. When nodes consume and dissipate more power, they must be spaced out and aggressively cooled.

    Figure 1

    Without the exotic housing facilities in figure 1, traditional (inefficient) supercomputers would be so unreliable (due to overheating) that they would never be available for use by the application scientist. In fact, unpublished empirical data from two leading vendors corroborates that the failure rate of a compute node doubles with every 10-degree C (18-degree F) increase in temperature, as per Arrhenius' equation when applied to microelectronics; and temperature is proportional to power consumption.

    We can then extend this argument to the more general computing community. For example, for e-businesses such as Amazon.com that use multiple compute systems to process online orders, the cost of downtime resulting from the unreliability and unavailability of computer systems can be astronomical, as shown in table 1 - millions of dollars per hour for brokerages an
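
    A small sketch of the failure-rate rule quoted in the article text above (rate doubles per 10 degrees C); only the scaling comes from the article, while the baseline temperature and rate are arbitrary placeholders.

    ```python
    # Arrhenius-style rule of thumb from the article: node failure rate
    # doubles for every 10 degree C rise in temperature.
    def relative_failure_rate(temp_c, baseline_temp_c=20.0, baseline_rate=1.0):
        """Failure rate relative to an arbitrary baseline (placeholder units)."""
        return baseline_rate * 2 ** ((temp_c - baseline_temp_c) / 10.0)

    for t in (20, 30, 40, 50):
        print(f"{t} C -> {relative_failure_rate(t):.1f}x the baseline failure rate")
    ```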


  • yes, cost...

  • by zymano ( 581466 ) on Thursday November 20, 2003 @02:23AM (#7517872)
    Has anyone else noticed that vector processing is gaining momentum? Some array processing links.
  • It would seem to make more sense to shut down and start up individual nodes for power saving. A supercomputer has relatively little downtime, and since jobs mostly come in large batches, you can shut down or spin up additional nodes as needed. It should be relatively easy to implement this kind of power saving inside single "computers" with many processors, and of course absolutely trivial to do it with a cluster. Especially if you use WoL NICs.

  • My desktop machine is faster than a Cray 1 [ed-thelen.org], and it'll never be labelled "Supercomputer" by any rational being.

    Unless their architecture actually hits the Top Ten [top500.org], I'm not going to be impressed that it's overcoming its handicap. Unless you're running a Special Olympics [specialolympics.org] for computers and "everyone's a winner."
  • If MIPS/Watt is the focus, why not use Intel's StrongARM, XScale or other ARM [arm.com] based cores rather than Transmeta's stuff? After all, ARM was designed specifically with the MIPS/Watt ratio as an objective, starting a whole new architecture from scratch, whereas Transmeta has focussed on efficient x86 "emulation".

    --
    Real computer scientists despise the idea of actual hardware.
    Hardware has limitations, software doesn't.
    It's a real shame that Turing machines are so poor at I/O.
  • This is a metric that could be used. How much performance can you get per watt?

    For supercomputing, I would imagine that something like SPECfp/watt or SPECrate/watt would be a decent metric.

    If your limitation is a finite power budget, then you pick the highest perf/watt CPUs.

    P4 3.2 EE = 18.44 SPECfp/Watt (80 watts)
    Crusoe = ?? No performance numbers published, but I'll bet you it's lower

    Building larger caches (which can be made low power) is a good way to achieve high power/performance efficiency.
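
    A minimal sketch of the metric being proposed. The SPECfp score below is back-calculated from the parent's own figure of 18.44 SPECfp/W at 80 W; the Crusoe entry is deliberately left out because, as noted above, no numbers were published, and the 5,200 W budget is just Green Destiny's figure reused for illustration.

    ```python
    # perf/W as a selection metric under a fixed power budget.
    # P4 figures are derived from the parent post (18.44 SPECfp/W at 80 W);
    # no published Crusoe SPECfp numbers, so it is omitted.
    chips = {
        "P4 3.2 EE": {"specfp": 18.44 * 80, "watts": 80},   # ~1475 SPECfp
        # "Crusoe": {"specfp": ???, "watts": ~7},  # unknown, per the parent
    }

    power_budget_w = 5200
    for name, c in chips.items():
        perf_per_watt = c["specfp"] / c["watts"]
        n = power_budget_w // c["watts"]
        print(f"{name}: {perf_per_watt:.2f} SPECfp/W, "
              f"{n} chips in a {power_budget_w} W budget -> {n * c['specfp']:.0f} aggregate SPECfp")
    ```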
  • The article offers up this question: might there be other metrics that are important to supercomputing, rather than relying solely on processing speed?

    Not to sound flippant, but... duh. Okay, I know I sound flippant. But seriously, why has it taken so long to realize that processing speed is not of the utmost importance? It's like saying one car is better than another because it has a top speed of 180MPH and the other 174MPH, ignoring that the "slower" car gets 30% better mileage. There's such a thing
  • Yeah, just wait until Li Mu Bai finds out, he is going to be pissed.
  • Although the Transmeta processor is significantly more reliable than a conventional mobile processor, its Achilles' heel is its floating-point performance. Consequently, we modified the CMS to create a "high-performance CMS" that improves floating-point performance by nearly 50 percent and ultimately matches the performance of the conventional mobile processor on a clock-cycle-by-clock-cycle basis.

    That's not exactly trivial. Does anyone know anything more about this? What are the trade-offs?
    And, will thi
