Supercomputing Upgrades

Jaguar, World's Most Powerful Supercomputer

Protoclown writes "The National Center for Computational Sciences (NCCS), located at Oak Ridge National Laboratory (ORNL) in Tennessee, has upgraded the Jaguar supercomputer to 1.64 petaflops for use by scientists and engineers working in areas such as climate modeling, renewable energy, materials science, fusion and combustion. The current upgrade is the result of an addition of 200 cabinets of the Cray XT5 to the existing 84 cabinets of the XT4 Jaguar system. Jaguar is now the world's most powerful supercomputer available for open scientific research."
  • Economics? (Score:4, Interesting)

    by thedonger ( 1317951 ) on Friday November 14, 2008 @06:47PM (#25766471)
    How about economic modeling?
    • Re: (Score:1, Redundant)

      From the minimum-requirements-for-crysis dept.

      How about running Crysis...on VISTA!

    • How about economic modeling?

      Nothing beats a pair of dice.

    • They tried, but the computer crashed.

    • How do you model PILLAGE and THEFT and CONFIDENCE GAMES?

  • by Gizzmonic ( 412910 ) on Friday November 14, 2008 @06:49PM (#25766487) Homepage Journal

    But I really got it to play Tempest 2000.

  • So, like, how do so many people use the computer effectively? Do they have a sign-in sheet? I bet there's a long line. :p~

    • by Entropius ( 188861 ) on Friday November 14, 2008 @07:05PM (#25766613)

      There is a queueing system. If you want to run a job on a machine like this, you log into the control node (which is just a Linux box) and submit your job to the queue, specifying how many CPUs you need and how much time you need on them.

      A scheduling algorithm then determines when the various jobs waiting in the queue get to run, and sends mail to their owners when they start and stop.

      On many machines there is a debug queue with low limits on the number of CPUs and the runtime, and thus fast turnover; this is used to run little jobs to make sure everything is working before you submit the big job to the main queue.

      Each project has an allocation of CPU time.
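
      As a concrete illustration, a submission might look something like the sketch below. This is a hedged example of my own, assuming a PBS-style batch system with Cray's aprun launcher; the job name, CPU count, and binary are invented, and exact directives vary from site to site.

          # sketch: build and submit a batch job from the control node
          import subprocess
          import textwrap

          job_script = textwrap.dedent("""\
              #!/bin/bash
              #PBS -N lattice_test         # job name
              #PBS -l size=1024            # number of CPUs requested
              #PBS -l walltime=02:00:00    # wall-clock time limit
              #PBS -m be                   # mail the owner on begin and end
              cd $PBS_O_WORKDIR
              aprun -n 1024 ./my_simulation input.dat
          """)

          with open("job.pbs", "w") as f:
              f.write(job_script)

          # hand the script to the scheduler; it runs when resources free up
          subprocess.run(["qsub", "job.pbs"], check=True)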

      • Wow, I was actually joking, but ... cool. I actually thought it worked more like a mainframe/*nix terminal server.

      • Do you need special permission to request ALL of the processors for your job?
        • Re: (Score:1, Informative)

          by Anonymous Coward

          NCCS is a capability site, so no. You just need to be willing to wait for your job to bubble up to the top of the queue. In fact, as a capability site, the whole point is to develop codes that can run on the entire machine. Now, once your job runs, you will have to wait a while to get another opportunity, as the queues are set up to provide an 8-week moving average of "fairness".
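
          For what it's worth, here is a guess at how such a fairness window might work (the actual scheduler policy isn't spelled out here; this sketch only illustrates the idea of recent usage driving queue priority):

              # hypothetical fair-share ordering: projects that burned more
              # CPU-hours over the trailing 8 weeks sink toward the back
              from collections import deque

              WINDOW_WEEKS = 8

              class Project:
                  def __init__(self, name):
                      self.name = name
                      # CPU-hours per week; oldest entries fall off automatically
                      self.usage = deque(maxlen=WINDOW_WEEKS)

                  def record_week(self, cpu_hours):
                      self.usage.append(cpu_hours)

                  def moving_average(self):
                      return sum(self.usage) / max(len(self.usage), 1)

              def queue_order(projects):
                  # lower recent usage -> earlier position in the queue
                  return sorted(projects, key=lambda p: p.moving_average())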

  • Probably not; it's not designed for that sort of thing, so it's a stupid question to ask, and it probably uses Linux anyway.

    Well, that got that one out of the way.
    • It can't, because Crysis is not multithreaded. If you can figure out a way to parallelize it, then you certainly could run Crysis on it.

      • Hell, you could *simulate* an x86_64 CPU on that thing, and it would not even hiccup on Crysis on Vista with every setting set to "Extreme".
        I wonder if someone could replace the engine with a full global-illumination raytracer first. :D

        • I don't think you have a clue how such a machine works. To simulate a non-trivially-parallelizable system like an x86_64 on it, you could run your simulation on at most as many CPUs as you have cores in your simulated CPU, or fewer (for a much easier-to-program simulation).

          Clusters such as these are *built* using standard hardware, in this case a (rather large) bunch of AMD Opterons.

          If you can show me how to run 'Crysis on Vista' faster on a rack with 10 PCs without rewriting either (and I did not s
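
          The disagreement here is essentially Amdahl's law. A quick illustration (my numbers, not either poster's): the serial fraction of a job caps its speedup no matter how many CPUs you add, which is why a mostly serial task like emulating a single CPU gains almost nothing from a cluster.

              # Amdahl's law: speedup on n CPUs when a fraction p of the
              # work parallelizes; the serial remainder (1 - p) caps it
              def speedup(p: float, n: int) -> float:
                  return 1.0 / ((1.0 - p) + p / n)

              for p in (0.5, 0.9, 0.99):
                  print(f"p={p}: 10 CPUs -> {speedup(p, 10):.2f}x, "
                        f"45000 CPUs -> {speedup(p, 45000):.2f}x")
              # p=0.5 never reaches 2x, and even p=0.99 tops out below 100x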

  • by account_deleted ( 4530225 ) on Friday November 14, 2008 @06:54PM (#25766527)
    Comment removed based on user account deletion
    • by Z80a ( 971949 )
      And the hardware itself is quite buggy, as you need to align the jumps with 64-bit words. But at least there is Tempest for it.
    • So, I'm standing in front of my new Jag with my WOW CD in my trembling hands. Where do I plug in my Game Keyboard and Mouse? There's nothing in the owner's manual about where the plug is to connect to my Cable Box!? How much was this thing again?

      And another thing, where is the Cup Holder?

  • by rgo ( 986711 ) on Friday November 14, 2008 @06:57PM (#25766563)
    I always knew you could pull it off!!
    • Re: (Score:2, Funny)

      by ari wins ( 1016630 )
      Dude, Altered Beast on this thing is SICK!
    • I always knew you could pull it off!!

      That's nothing. I'll have the world's most powerful supercomputer in a month. This is what I do... See.. they took 200 cabinets and added that to the already present 84....

      This is how I win. I add 400 cabinets!

      AHAHAHHAHAa!!! I GOT THE MOST POWERFUL ONE NOW! MUAHAHAHAHAH 200 ain't shit on my 400! ...... I just hope no one else can imagine a number greater than 400 and beat me....

  • good upgrade path (Score:3, Insightful)

    by Jodka ( 520060 ) on Friday November 14, 2008 @07:02PM (#25766589)

    The current upgrade is the result of an addition of 200 cabinets of the Cray XT5 to the existing 84 cabinets of the XT4 Jaguar system.

    That sounds like Cray engineered this to aggregate components across product generations. For short product life cycles that seems like a great idea: instead of throwing out the old system when you get the new one, you combine the two. Though obviously for long product life cycles it would be a losing proposition; the space and power costs of keeping inefficient older components around would eventually exceed the expense of upgrading to the latest model.

    • by bmwm3nut ( 556681 ) on Friday November 14, 2008 @07:19PM (#25766713)
      Then when you get new cabinets you just decommission the oldest ones. Keep rotating the old ones out once they fall below some flops-per-dollar threshold.
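
      A toy version of that rotation rule (all cabinet figures below are invented for illustration; a real decision would also weigh floor space, interconnect ports, and maintenance contracts):

          # retire any cabinet whose delivered teraflops per dollar of
          # yearly operating cost falls below a cutoff (numbers invented)
          cabinets = [
              {"gen": "XT4", "tflops": 3.0, "yearly_cost": 60_000},
              {"gen": "XT5", "tflops": 7.0, "yearly_cost": 70_000},
          ]
          CUTOFF = 6.0e-5  # teraflops per dollar-year

          keep = [c for c in cabinets if c["tflops"] / c["yearly_cost"] >= CUTOFF]
          retire = [c for c in cabinets if c["tflops"] / c["yearly_cost"] < CUTOFF]
          print("keep:", [c["gen"] for c in keep], "retire:", [c["gen"] for c in retire])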
    • These systems are not as tightly integrated as you may imagine. True, many sites size a full-speed fabric just right, since every bit of it costs a ton. However, at scale you commonly have full-speed fabric only within large subsections anyway, and oversubscribe between the subsections. Jobs tend to be scheduled within subsections as they fit, though the inter-subsection links are no slouch.

      This is particularly popular as the authoritative Top500 benchmark is not too badly impacted by such a network topology, and re

      • I can easily imagine the hardware people making the interconnect lanes backward compatible, but I believe there have been some endpoint capability changes between the two generations.

        What's difficult to believe is that they actually managed to incorporate the necessary changes in the resource allocator.

  • Can someone please translate the performance into people-with-handheld-calculators time, and the space into number of Libraries of Congress? I am not sure what the numbers they're talking about mean.
    Thanks.

  • Atari [wikipedia.org] would make a comeback!!

    Tm

  • Will environmentalists ever stop trying to reverse the second law of thermodynamics?

    "Insufficient data for a meaningful answer."

    Damn.

  • There are much bigger computers at Los Alamos and Lawrence Livermore that are used to model the thermonuclear weapons sitting around in storage, to see if they'll still go pop when we push the Big Red Button.

    The scientific community would like to use these machines for something useful, and in fact the scientists at Los Alamos have allowed some folks from my lattice QCD group to use a bit of spare time on it. Unfortunately the UNIX security features aren't enough; they weren't allowed to ftp our data out,

    • Re: (Score:2, Informative)

      You, sir, are an idiot.

      LANL, LLNL, and SNL are all weapons labs. ORNL is primarily a science lab.

      I myself have worked at three of these labs and held an account on an earlier iteration of Jaguar as well as some of LANL's other supercomputing clusters, so I ought to know.

      ORNL's Jaguar cluster, although parts of it are I think "controlled" rather than open so that it can run export-controlled code, is not at all classified. It's used for biology, astronomy, physics, CFD, etc.

      Also, if you knew the first thing

      • by caerwyn ( 38056 )

        You, sir, can't read.

        He was specifically talking about LANL and LLNL rather than ORNL; that was the entire point.

        Granted, yes, his description of disallowing classified/non-classified connectivity as "ludicrous" is a little off-base, although writing things down by hand really is stupid; there are plenty of procedures in place for putting data on transportable media and then arranging to declassify that media once it has been verified, so that it can be used elsewhere.

        That doesn't change his fundamental point.

        • In that case, pardon my misunderstanding in thinking that his post was at all related to the posted article.

          His title, "Used for open science...", was a quotation from the summary specifically about the ORNL computer. His rant about "much bigger computers around" was plausibly interpreted as the biggest one, the new ORNL cluster. I certainly must have been misled.

          • So, let me get this straight: if someone writes something you think is not true, he or she is an 'idiot', whereas if you write something that you admit wasn't true, you 'have been misled'?

  • They should really upgrade, Jaguar is ancient!

  • 64-bit Jaguar (Score:1, Informative)

    by Anonymous Coward

    Remember that it's not a true 64-bit multimedia system; it's two 32-bit systems connected together. XD

  • by CodeBuster ( 516420 ) on Friday November 14, 2008 @07:59PM (#25766991)
    It would be nice on these sorts of systems to have recurring, perhaps low priority, jobs issued by worthy outside distributed computing projects. Depending upon how busy the system is with other jobs it could make regular contributions to drug research and especially to AIDS research. To have complete and accurate pre-computed models of all steps in the protein folding process for all possible mutations of the AIDS virus, for example, would be a technological triumph and of potentially great benefit to humanity in the development of new drugs and possibly even an effective vaccine.
    • To have complete and accurate pre-computed models of all steps in the protein folding process for all possible mutations of the AIDS virus

      1. Each trajectory would be several terabytes (possibly verging on petabytes).

      2. The largest simulation I know of is this one: http://www.ks.uiuc.edu/Research/STMV/ [uiuc.edu]. They simulated for 50 ns, and the system is 10 times smaller than HIV. Protein folding takes milliseconds, not nanoseconds... it's not really tractable right now. I don't know how much CPU time the simulation took, but it would have been a lot.

      3. Clusters like these are rarely idle; jobs are queued up to run as soon as the CPUs become available.
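
      To put rough numbers on the first point (all values are order-of-magnitude guesses, scaled up from the roughly million-atom STMV simulation linked above):

          # back-of-envelope size of one folding trajectory
          atoms = 10_000_000         # ~10x the STMV system mentioned above
          bytes_per_atom = 12        # three single-precision coordinates
          snapshots = 1_000_000      # one frame per ns over ~1 ms of folding
          total_tb = atoms * bytes_per_atom * snapshots / 1e12
          print(f"~{total_tb:.0f} TB per trajectory")   # ~120 TB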

    • There's no reason whatsoever to use a highly connected, high-bandwidth HPC machine like Jaguar on distributed-computing jobs. There are other very worthy jobs that can be run on such a system that can't be run on a pile of desktops scattered across the internet. Use the real supercomputers for real supercomputer jobs. There are plenty of idle Xboxes in the world for distributed computing.

  • And I downloaded the summary and all the comments too! wtf?

    • Awww shit dude, your day is totally ruined! Don't worry, I think Crysis and Beowulf made it in. Phew! I thought /. was losing its edge!

  • Love the paint job! (Score:3, Informative)

    by neowolf ( 173735 ) on Friday November 14, 2008 @08:40PM (#25767271)

    Check out the gallery if you haven't.

    I've always wanted to get some custom graphics like that on my server racks. Maybe a penguin, a butterfly, and a can of Raid. :)

    Supercomputers definitely don't look as exciting as they did in the "old days".

    • Yea man, that thing is fucking awesome. I love the graphics.

      Who said super computing had to be boring?

      That's the kind of work I wish I could be doing.. building and configuring those monsters.

      That being said, look at the size of that room! If half of it is the computer, then that's one big open space left doing nothing.

      I'm sure they'll use it for something eventually.
  • It's been offline a lot [ornl.gov] this month...

  • I wonder how much they paid for all those Opterons. I wonder what kind of volume discount is typical for these kinds of supercomputers.

    • The real trickery is in the interconnects and the cooling; the CPUs are probably discounted quite a bit, but I doubt they're the biggest item on the tab.

  • If we can get fusion energy working cheaply, we won't need the climate modeling. Not only that, we can build a hundred of these things cheaper with the technology advances.

    Climate change is gradual... the need for new drugs and fusion energy is more pressing.

    • Re: (Score:3, Interesting)

      Climate change is gradual, but the emissions we put into the atmosphere today will last for centuries. Even if we switched over to all fusion power tomorrow, we'd still see more climate change, and the longer we wait to replace fossil fuels, the more we will see. Realistically, it takes a long time to widely deploy a new energy technology. Fusion isn't even feasible in the lab, let alone ready for deployment, let alone widely deployed.

      Also, even if fusion were widely deployed, that doesn't mean we'd nece

  • I noticed this a short time ago, but have yet to see the 'Rmax' performance. They speak to Rpeak, which does beat the current Rpeak by 23%, though Rpeak by itself is even less informative than Rmax, which is already quite synthetic. Assuming the current #1 hasn't managed tuning or upgrades, this will have to beat 65% efficiency to technically win. 65% is likely an achievable goal, though the larger the run, the more difficult it is to extract a reasonable efficiency number, so it's not certain. I w
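
    Running the numbers from the summary and the June 2008 list (a quick check, taking 1.026 PF as the reigning Rmax and 1.64 PF as the new Rpeak):

        # Linpack efficiency Jaguar must hit to take the Rmax crown
        roadrunner_rmax = 1.026   # petaflops, current #1 Linpack score
        jaguar_rpeak = 1.64       # petaflops, theoretical peak per the summary
        needed = roadrunner_rmax / jaguar_rpeak
        print(f"{needed:.0%}")    # ~63%, in line with the ~65% cited above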

  • Hmmm...the ORNL web site lists the phone number for the Help Line. I think someone should call them up and ask them to reboot the server because the Internet is running slow.
  • SETI (Score:2, Funny)

    by madcat2c ( 1292296 )
    That is a lot of SETI@home power!
  • I think I'll wait for Leopard.

  • I bet the guys at... (Score:2, Informative)

    by glitch23 ( 557124 )
    Los Alamos are jealous, since earlier this year they got a 1.026-petaflop supercomputer from IBM called Roadrunner. It was featured in last month's Linux Journal.
  • In the June 2008 Top 500 list, the Cray XT Jaguar was number 5 with 205 teraflop/s. By comparison, number 1 was IBM's BladeCenter-based Roadrunner, with a mix of 6,562 dual-core Opterons and 12,240 PowerXCell 8i Cell processors housed in 278 cabinets. That got up to 1.026 petaflop/s.

    In June, Jaguar had 30,000 quad-core Opterons, and now it has 45,000. The previous machine was an XT4, but the most recent update shows that 200 XT5 cabinets have been added to it. I have been unable to find how many cabinets
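
    A quick sanity check on those figures (assuming, as the post says, that the 45,000 count refers to quad-core chips rather than cores):

        # implied per-core peak from the chip count and the 1.64 PF summary
        chips = 45_000
        cores = chips * 4
        rpeak_gflops = 1.64e6             # 1.64 petaflops in gigaflops
        print(f"{rpeak_gflops / cores:.1f} GF/core")
        # ~9.1 GF/core, consistent with a ~2.3 GHz Opteron core
        # retiring 4 floating-point operations per cycle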

  • Beep Beep (Score:3, Interesting)

    by PingPongBoy ( 303994 ) on Saturday November 15, 2008 @04:17AM (#25769277)

    In the not too distant future, we shall see a new Top 500 list. It seems like just yesterday that Roadrunner cracked the petaflops barrier, and the whole world seems to have fallen on its ass in the interim. Banking failures, government bailouts, people losing their retirement portfolios. The irony is too much. Even as the computers get better, the answers that people need don't come fast enough.

    Then the light turned on for me. People in general, the people you see on the street going on their busy way to whatever, are mostly relying on "someone else" to come up with the answers. Most people have little confidence in their own ability to answer hard questions.

    Well, maybe things will turn around because of the power of supercomputers. It would be about time, wouldn't it? Here's how it may play out. Supercomputers so far, good as they are, serve up expensive results, so they are applied to difficult problems that are useful but far removed from everyday life.

    As supercomputer clock cycles become more abundant, researchers can apply them to more mundane things that the unwashed can relate to. The result could be revolutionary. People who have always aspired to some inconsequential achievement that requires expertise or training may suddenly have access to highly instructive supercomputer-generated procedures that explain both how and why. Not only will people become more expert do-it-yourselfers, but robots will become far more versatile, with amazing repertoires.

    Crossing the petaflops barrier may be sufficient psychological incentive for people to request that governments begin to make supercomputing infrastructure available for public consumption, like roads and other services. Certainly, exciting times are coming.

  • Will it run Cybermorph?
