SGI to Scale Linux Across 1024 CPUs 360
im333mfg writes "ComputerWorld has an article up about an upcoming SGI Machine, being built for the National Center for Supercomputing Applications, "that will run a single Linux operating system image across 1,024 Intel Corp. Itanium 2 processors and 3TB of shared memory.""
Press Release (Score:4, Informative)
CC.
Re:Why gaming? (Score:2, Informative)
Re:What happened to RISC? (Score:3, Informative)
Re:What happened to RISC? (Score:3, Informative)
Re:Solaris (Score:5, Informative)
http://top500.org/list/2004/06/
There's no "stronghold" for Sun to lose.
Sun != scientific computing (Score:5, Informative)
Scientific computing means data crunching (floating point). Complex, powerful processors are needed. The "stupider, but more" tradeoff doesn't work anymore. Sun processors have fallen behind in this respect.
It became obsolete (Score:4, Informative)
Re:In other news... (Score:5, Informative)
Re:What happened to RISC? (Score:2, Informative)
Sun and/or IBM zseries hardware (Score:4, Informative)
The Sun hardware is more difficult to deal with, since there isn't a virtual machine abstraction. You can't do everything below the OS. Still, Linux 2.6 has hot-plug CPU support that will do the job without help from a virtual machine. Hot-plug memory patches were posted a day or two ago. Again, this is NOT required for hot-plug on the zSeries. IBM whips Sun.
I'd trust the zSeries hardware far more than Sun's junk. A zSeries CPU has two pipelines running the exact same operations. Results get compared at the end, before committing them to memory. If the results differ, the CPU is taken down without corrupting memory as it dies. This lets the OS continue that app on another CPU without having the app crash.
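The fail-over behaviour described here can be sketched in miniature. This toy Python model (the `faulty` flag and the `lockstep_execute` helper are invented for illustration, not zSeries internals) runs each operation twice, commits the result only when the two "pipelines" agree, and retires a disagreeing CPU so the work retries elsewhere:

```python
def lockstep_execute(op, args, cpus):
    """Run op twice per CPU ('two pipelines'); commit only if results match."""
    for cpu in list(cpus):
        a = op(*args)                 # pipeline A
        b = op(*args)                 # pipeline B
        if cpu.get("faulty"):
            b += 1                    # inject a divergence on this CPU
        if a == b:
            return a, cpu["id"]       # results agree: safe to commit to memory
        cpus.remove(cpu)              # results differ: take the CPU offline
    raise RuntimeError("no healthy CPU left")

cpus = [{"id": 0, "faulty": True}, {"id": 1}]
result, ran_on = lockstep_execute(lambda x, y: x + y, (2, 3), cpus)
# the faulty CPU 0 is retired; the work completes on CPU 1 without corruption
```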
it would make 2d place (Score:1, Informative)
Re:Similar software available? (Score:5, Informative)
Impressive... (Score:2, Informative)
Re:Advantages...? (Score:5, Informative)
The purpose of that computer is to solve complex scientific problems such as weather simulations, high-energy particle simulations, protein folding, etc. Many of these simulations involve iterated systems of equations that can take decades to solve on the fastest CPUs we have today.
The only way to get meaningful results in a meaningful amount of time is to break the problem apart into smaller problems and solve them in parallel.
Some projects, such as Folding@Home [stanford.edu] and Find-A-Drug [find-a-drug.org] go the distributed computing route -- use many disconnected systems to solve the problem.
The downside to that approach is that not all problems can be easily broken apart -- some classes of problems can be run without tight coupling, but they lose efficiency. The impressive thing about this particular supercomputer is that it has a single, unified memory image.
This is very useful for some classes of simulation problems when the entire simulation must be present for each iteration.
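As a rough sketch of the decomposition idea (the `simulate_chunk` kernel below is a hypothetical stand-in for real physics, and a thread pool stands in for real compute nodes):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(cells):
    # hypothetical stand-in for one sub-domain's physics step
    return sum(c * c for c in cells)

def parallel_step(domain, n_workers=4):
    # split the global domain into roughly equal sub-domains
    size = (len(domain) + n_workers - 1) // n_workers
    chunks = [domain[i:i + size] for i in range(0, len(domain), size)]
    # solve the sub-problems in parallel, then combine the partial results
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(simulate_chunk, chunks))
    return sum(partials)

domain = list(range(1000))
total = parallel_step(domain)
```

The combine step is trivial here; the hard part on real problems is exactly the coupling between sub-domains that the comment describes.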
RISC overrated (Score:4, Informative)
It's like having on-the-fly instruction decompression: CISC programs tend to be smaller in main memory and cache, and they travel in CISC/"compressed" form, taking up less bandwidth over the memory/cache buses to the CPU instruction decoder, where they are "decompressed" into RISC micro-ops to be executed.
Look at the mainstream desktop/workstation/server CPUs. Only the SPARC is RISC. IBM POWER/PowerPC is barely RISC[1], some people think it's more CISC than RISC. Itanium isn't RISC. x86 isn't. The rest (Alpha, MIPS, PA-RISC) are either out of the market or on their way out.
As long as CPUs are fast and much faster than RAM (and cache remaining expensive), it's often worth doing the compression/decompression thing.
[1] I believe IBM's POWER chips actually decode their "RISC" instructions to simpler instructions, some of their "RISC" instructions are pretty complex- kinda oxymoronic... But as I mentioned, that may not be such a bad thing.
Actually.. (Score:2, Informative)
Let me clue you in on a few things (Score:5, Informative)
The UNIX made by SGI (the company making the machine referenced in the article) is more scalable than Solaris. Remember, IRIX was the first OS to scale a single Unix OS image across 512 CPUs. And now they've eclipsed that, with Linux.
None of that is unique to Sun.
Better than what? And says who? They've never decisively convinced the market that they're better at this than HP, SGI, IBM or Compaq.
In addition to ignoring the other good Unix architectures out there in a dumb way with this comparison, you're also totally missing the point of the article. Linux supercomputing isn't just about cheap clusters anymore. Expensive UNIX machines on one side and cheap Linux clusters on the other is a false dichotomy.
Re:It became obsolete (Score:5, Informative)
A CISC instruction could do things like: take the value in register BP, add 4, get the value from the memory at the address you just computed, add the value in the register AX, and put the result back at the same memory location. Execution would take several clock-ticks.
To do the same in RISC, you would need several instructions (add 4, get from memory, add ax, store to memory). The execution of the individual instructions would take one tick each, so the sequence would take several. But on average RISC was a bit faster.
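A toy sketch of that decomposition, with Python dictionaries standing in for registers and memory (the function names are invented for illustration):

```python
def cisc_add_mem(regs, mem):
    """One CISC instruction: mem[BP+4] = mem[BP+4] + AX, several ticks."""
    mem[regs["BP"] + 4] += regs["AX"]

def risc_equivalent(regs, mem):
    """The same work as a RISC-style sequence of simple one-tick steps."""
    addr = regs["BP"] + 4        # add 4 to BP           (1 tick)
    tmp = mem[addr]              # load from memory      (1 tick)
    tmp = tmp + regs["AX"]       # add AX                (1 tick)
    mem[addr] = tmp              # store back            (1 tick)

regs = {"BP": 10, "AX": 7}
mem1 = {14: 100}
mem2 = {14: 100}
cisc_add_mem(regs, mem1)       # one fat instruction
risc_equivalent(regs, mem2)    # four thin ones, same final state
```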
CISC was invented at a time when memory was small; the CISC way let you store larger programs in the same amount of memory.
RISC was invented when memory size was no longer the limit, and looked set to displace CISC in the long run.
CISC was still around when memory bandwidth became a limiting factor, and since fewer instructions needed to be fetched from memory, more bandwidth was left for other data traffic. RISC lost some of its speed advantage.
Modern CISC processors get CISC instructions from memory, chop them up into smaller instructions, and execute those smaller instructions really fast. So in fact they can be seen as RISC processors posing as CISC processors, i.e. the best of both worlds.
So CISC is a way of compressing RISC instructions so they take up less memory/bandwidth.
Re:It became obsolete (Score:4, Informative)
Technically, RISC chips were supposed to execute all instructions in ONE cycle. This simplified the chip architecture, allowing it to scale up much farther. The downside was that it put the onus on the compiler writer to produce efficient code. (MIPS is a perfect example of this architecture.) All he had to do was make sure that fewer instructions were executed per task, and the code would run faster.
That is, until the chip designers started introducing superscalar and out-of-order execution. You see, simplifying the chip design provided chip designers with a way to add new optimizations in how instructions were loaded and executed. Unfortunately, this again meant more work for the compiler writer. Now he not only had to optimize the number of instructions, but he also had to optimize the ordering so that multiple instructions could be executed simultaneously or out of order.
Re:from MPI to multithreaded ? (Score:5, Informative)
Does this mean that the applications running on the "old" clusters, presumably using some flavor of MPI to communicate between nodes, will have to be ported somehow to become multithreaded applications ?
NCSA still has plenty of "old" style clusters around. Two of the more aging clusters, Platinum [uiuc.edu] and Titan [uiuc.edu], are being retired to make room for newer systems like Cobalt. Indeed, the official notice [uiuc.edu] was made just recently--they're going down tomorrow. However, as the retirement notice points out, we still have Tungsten [uiuc.edu], Copper [uiuc.edu], and Mercury (Terragrid) [uiuc.edu]. Indeed, Tungsten is number 5 on the Top 500 [top500.org], so it should provide more than enough cycles for any message-passing jobs people require.
So, anyone has any insights as to why/how this matters for the programmers ?
What it means is that programming big jobs is easier. You no longer need to learn MPI, or figure out how to structure your job so that individual nodes are relatively loosely coupled. Also, jobs that have more tightly coupled parallelism are now possible. The older clusters used high-speed interconnects like Myrinet or Infiniband (NCSA doesn't own any Infiniband AFAIK, but we're looking at it for the next cluster supercomputer). Although they provided really good latency and bandwidth, they aren't as high-performing as shared memory. Also, Myrinet's ability to scale to huge numbers of nodes isn't all that great--Tungsten may have 1280 compute nodes, but a job that uses all 1280 nodes isn't practical. Indeed, until recently the Myrinet didn't work at all, even after partitioning the cluster into smaller subclusters.
This new shared-memory machine will be more powerful, more convenient, and easier to maintain than the cluster-style supercomputers. Hopefully it will allow better scheduling algorithms than on the clusters too--an appalling number of cycles get thrown away because cluster scheduling is non-preemptive.
I'd also like to point out some errors in the Computerworld article. NCSA is *currently* storing 940 TB in near-line storage (Legato DiskXtender running on an obscenely big tape library), and growing at 2TB a week. The DiskXtender is licensed for up to 2 petabytes--we're coming close to half of that now. The article therefore vastly understates our storage capacity. On the other hand, I'd like to know where we're hiding all those teraflops of compute--35 TFLOPS after getting 6 TFLOPS from Cobalt sounds more than just a little high. That number smells of the most optimistic peak performance values of all currently connected compute nodes. I.e., how many single-precision operations could the nodes do if they didn't have to communicate, everything was in L1 cache, we managed to schedule something on all of them, and they were all actually functioning. Realistically, I'd guess that we can clear maybe a quarter of that figure, given machines being down, jobs being non-ideal, etc. etc. etc.
As a disclaimer, I do work at NCSA, but in Security Research, not High-Performance Computing.
Re:Scalability of applications (Score:5, Informative)
With a single memory image system the computation can easily repartition dynamically as the computation proceeds. It's very costly (never say impossible!) to do this on a cluster because you have to physically move memory segments from one machine to another. On the NUMA system you just change a pointer. The hardware is good enough that you don't really have to worry about memory latency.
And let's not forget I/O. Folks seem to forget that you can dump any interesting section of the computation to or from the file system with a single I/O command. On these systems the I/O bandwidth is limited only by the number of parallel disk channels - a system like the one mentioned in the article can probably sustain many GBytes/sec to the file system.
Let's not forget page size. The only way you can traverse a few TB of memory without TLB-faulting to death is to have multi-MByte-size pages (because TLB size is limited). SGI allowed a process to map regions of main memory with different page sizes (up to 64 MB, I think) at least 10 years ago, in order to support large image database and compute apps.
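A back-of-the-envelope sketch of why page size matters so much at this scale; the 128-entry TLB below is an assumed, illustrative figure, not an Itanium 2 spec:

```python
def tlb_coverage_bytes(entries, page_size):
    # memory reachable without taking a TLB miss
    return entries * page_size

TLB_ENTRIES = 128                 # assumed TLB size, for illustration only
KB, MB, GB = 1024, 1024**2, 1024**3

small = tlb_coverage_bytes(TLB_ENTRIES, 16 * KB)   # small pages
large = tlb_coverage_bytes(TLB_ENTRIES, 64 * MB)   # 64 MB pages, as above

# 128 entries x 16 KB cover only 2 MB; with 64 MB pages the same TLB covers
# 8 GB -- still far short of 3 TB, but a 4096x reduction in how often you
# fault while walking a huge data set.
```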
When I used to work at SGI (5 years ago) the memory bandwidth at one cpu node was about 800 MBytes/s. My understanding is that the Altix compute nodes now deliver 12 GBytes/s at each memory controller. Although I haven't had a chance to test drive one of these new systems, it sounds like they have gradually been porting well-seasoned Irix algorithms to Linux. It is unlikely that a commodity computer really needs all of this stuff, but I'm looking at a 4-cpu Opteron that could really use many of the memory management improvements.
Re:it would make 2d place (Score:3, Informative)
Re:Sun and/or IBM zseries hardware (Score:3, Informative)
sort of error correction. The cache generally has ECC for this. Since L1 is innermost and small, it may well be duplicated along with the pipelines, but I think they use ECC for that as well. This is full-path protection. Cables have ECC and/or a protocol with checksums. Disks are RAID. Methods of error correction vary by component, but nowhere are they missing.
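For the curious, the single-bit correction that ECC performs can be sketched with a textbook Hamming(7,4) code; this is an illustration of the principle, not the actual code used by any of the hardware above:

```python
def hamming74_encode(d):
    """4 data bits -> 7-bit codeword with parity bits at positions 1, 2, 4."""
    c = [0] * 8                     # index 0 unused; positions 1..7
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(code):
    """Recompute parity; the syndrome is the position of a single flipped bit."""
    c = [0] + list(code)
    s = ((c[1] ^ c[3] ^ c[5] ^ c[7]) * 1 +
         (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2 +
         (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4)
    if s:
        c[s] ^= 1                   # flip the bit the syndrome points at
    return [c[3], c[5], c[6], c[7]], s

data = [1, 0, 1, 1]
cw = hamming74_encode(data)
flipped = list(cw)
flipped[4] ^= 1                     # a single-bit "soft error" at position 5
decoded, syndrome = hamming74_correct(flipped)
# decoded recovers the original data; a 2-bit flip would mis-correct,
# which is why real memory uses SECDED variants with an extra parity bit
```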
Re:from MPI to multithreaded ? (Score:4, Informative)
It's a tradeoff. MPI is "preferred" because a properly written MPI program will run on both clusters and shared-memory equally fast, because all communication is explicit. It's also much harder to program, because all communication must be made explicit.
Shared-memory (e.g. pthreads) is easier to program in the first place (since you don't have to think about as many sharing issues) and more portable. However, it is very error-prone - get a little bit off on the cache alignment or contend too much for a lock, and you've lost much of the performance gain. And you can't run it on a cluster without horrible performance loss.
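A minimal sketch of the contention problem in the shared-memory style, with Python's threading standing in for pthreads: four threads incrementing one counter all serialize on a single lock, so correctness is easy to get but the parallel speedup evaporates at the hotspot.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # every increment contends for the same lock --
            counter += 1      # exactly the hotspot that eats the speedup

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter ends at 40_000, but the four threads ran one-at-a-time through
# the critical section; an MPI version would make that cost explicit
```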
If it's the difference between spending two months writing the shared-memory sim and four months writing the message-passing sim that runs two times faster on cheaper hardware, well, which would you choose? Is the savings in CPU time worth the investment in programmer time?
Alas, the latencies on a 1024-way machine are pretty bad anyway. If they use the same interconnect as the SGI Origin, it's 300-3000 cycles for each interconnect transaction (depending on distance and number of hops). Technically that's low-latency... but at 32 processors or fewer the interconnect is essentially a bus with 100-cycle latencies, so all those extra processors cost a lot of lost cycles.
Re:What happened to RISC? (Score:3, Informative)
No. This is exactly wrong. G5s are a good example of this. They easily outperform P4s at the same clock speed, and it's the P4 which must run at the higher speed to compensate.
The overhead of supporting all the various instructions and addressing modes, as well as being able to fit the whole CPU on one die, were what made RISC a good choice in the past. Now, that overhead is dwarfed by other parts of the chip, and they're all running weird u-ops internally, so it makes little difference.
"RISC is easier to program? Depends on the person. A compiler can take advantage of large instructions very well which are hardware optimized."
Compilers are notorious for not utilizing esoteric opcodes. And when they do, there's almost never a significant performance advantage in doing so.
For example, none of the code I've ever tested with icc (one of the only compilers that can use weird opcodes on i386) has been more than about 5% faster than "gcc -Os -msse2", and a lot of it has been slower.
"RISC is physically smaller? No. RISC needs a higher clock frequency because many more instructions need to be executed. The result of this is that a much larger instruction cache is needed on chip.
RISC does generally need a larger cache, but it does not need a higher frequency.
"I don't remember every comparison but it pretty much comes out that neither is better than the other. That being said RISC is better than x86. Everything is better than x86. However CISC vs RISC is much harder to judge. Having done x86, 68k, and MIPS I must say that RISC is a pleasure."
Just use a compiler. Anything with a proper MMU will be good enough.
Re:Sun and/or IBM zseries hardware (Score:3, Informative)
A few questions:
Re:What happened to RISC? (Score:3, Informative)
That was my point. A shitty compiler with moderate optimization settings is very close in performance to one of the top compilers out there.
"The top compiler is infact the Intel compiler in part because it knows about unpublished instructions. Have fun reading the code it generates."
Yes, this was the example I used. The vectorized loops are a bitch to read.
"On the subject of G5s being faster, there are a whole host of differences between G5's and P4's. You can't just pick one difference and claim that's the reason."
That's true. However, I never gave a reason for the performance difference, so I'm not sure why you're saying this.
You said that RISC CPUs needed to run at a higher frequency to get the same performance as a CISC CPU. Since you're wrong, I gave an example to prove you wrong.
There is basically only one RISC CPU architecture that has the benefit of a really large R&D effort these days, and that's POWER/PowerPC. Itanium is not strictly RISC, and nothing else has the benefit of such a huge R&D effort.
Thus, the only RISC CPUs that can be fairly compared to x86 are the POWER/PowerPC chips from IBM. The only two x86 CPUs that have a really huge R&D effort behind them are the Athlons from AMD and the Pentiums from Intel.
They all have relatively similar performance (with advantages going to one or the other in a few niches). PowerPC chips are shipped at similar clock speeds to the Athlons and much lower clock speeds than the Pentiums.
Therefore, your statement that RISC CPUs need higher clock speeds to get the same performance has been demonstrated to be false in a comparison between the only 3 large chip makers in operation.
Further comparisons, such as those between Sparc and the VIA C3, which are smaller but significant efforts, show the RISC CPU getting more done per clock cycle, again demonstrating your statement to be false.
Not sure if you're serious, but let's explain (Score:4, Informative)
I will avoid the tech terms (partly because they would confuse you, partly because I don't know them all, but mostly because they aren't needed).
A single-CPU computer can execute ONE instruction at a time, meaning one program thread running at a time. But wait, you say, my OS can run multiple programs at the same time. WRONG. It can't. It is a trick: it runs one program at a time but switches between them really fast. There is, however, a problem with this. When it has switched to a program, all the other programs are effectively at the mercy of the program now running, INCLUDING the OS. Which is why DOS and Windows and Linux and Mac OS and all the others had "hangups". With an extremely well-written OS these hangups (when a program doesn't switch back to the OS) can be avoided, but it remains the case that all the programs and the OS are fighting for time on one single CPU.
So what happens when you add a CPU? Well, a lot less switching, PLUS if a program for whatever reason does not switch properly, the OS can still run on the other processor. Just making a Windows box dual-CPU instantly makes it far more robust. I encountered this myself with an old Dell P3 that had a dual board but no second CPU installed. Before I added a second CPU it was the usual Windows crap of hangs and reboots and BSoDs. Afterwards it ran as stable as a Unix machine. Simple things like opening a complex folder in Exploder no longer "froze" the desktop, as it could simply run Exploder on one CPU and, say, Word or my MP3 player on the other.
Don't forget, too, that things like ATA hard drives and CD-ROMs need the CPU to drive them. This takes a lot of long cycles and a lot of waiting: not so much CPU power as just time on the CPU. With a second one to handle all the other tasks, everything runs far smoother.
So what is better: running one 2 GHz CPU or two 1 GHz CPUs? Depends. If you are running one program thread, go with the single CPU. It will take all the CPU time but will not need to share it. If, however, you are running countless small threads, go with two or more. Threads will get CPU access faster and you will lose less CPU time executing switches.
Oh yeah, that is another problem: switching between programs takes CPU time as well. It is not unknown for single-CPU systems to spend so much time on switching that they don't have time to run anything anymore. The old too-many-running-programs problem, known from Windows but which affects every OS.
Lastly, there is a simple problem. Say you want real power: do you go for a quad 2 GHz or a single 8 GHz? Answer: it's a trick question; there is no such thing as an 8 GHz CPU.
If you get the chance, buy a second-hand dual P3, install Windows 2000+ or Linux on it, and be amazed. That old system will respond a lot faster under load than your 3 GHz monster.
Re:Similar software available? (Score:2, Informative)
Scalability of sorts (Score:4, Informative)
Scalability is a complex issue. SGI has put a whole lot of processors together and put a single Linux image on it (so that a single program can use all memory), but this says nothing about how that setup will actually perform for general purpose use. Just because the hardware allows threads on hundreds of processors to make calls into a single Linux kernel, does not mean that there will not be major performance issues if this actually happens.
There are performance issues with memory even on single processor systems with nominally a single large address space, and a developer may need to put a lot of work into ensuring that data is arranged to make best use of the various levels of cache.
Many of the multi-processor architectures require even greater care to ensure that the processors are actually used effectively.
The fact that a single Linux image has been attached to hundreds of processors is no indication of scalability. A certain program may scale well, or not.
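Amdahl's law makes this point concrete: the serial fraction of a program caps its speedup no matter how many processors the hardware offers. The 95% figure below is purely illustrative:

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    # the serial part (1 - p) limits speedup regardless of CPU count
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

# a hypothetical job that is 95% parallelizable, run on 1024 CPUs:
s = amdahl_speedup(0.95, 1024)
# s is roughly 20x -- nowhere near 1024x, despite all that hardware
```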
Correctable RAM and L2 errors? (Score:3, Informative)
UPDATE: I looked. Itanium 2's L2 cache is ECC. It'll correct a 1-bit failure, and detect and die on a 2-bit failure. Believe it or not, on a large number of CPUs running over a long period of time, this happens more often than you'd think. It also says it has an L3; no idea on the L3 cache protection method used. Because they don't say, I'd also guess ECC. Wheee! Lots of high-speed RAM around the CPU with ECC protection. Well, nobody called this an enterprise solution, so I guess it's okay.
Also, you're going to have regular issues with soft ECC errors on that many TB of RAM. And then your eventual outright failures that'll bring down the whole image of the OS. (An OS could potentially handle it 'gracefully' by seeing if there is a userspace process on that page and killing/segfaulting it, but that's more of an advanced OS feature.)
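A back-of-the-envelope sketch of that soft-error claim; the FIT rate below is an assumed, illustrative number, not a measured figure for this machine:

```python
# FIT = failures per 1e9 device-hours. The per-Mbit rate here is an
# assumption for illustration only.
FIT_PER_MBIT = 100

ram_tb = 3
mbits = ram_tb * 1024**4 * 8 / 1e6            # RAM size in megabits
errors_per_hour = mbits * FIT_PER_MBIT / 1e9
errors_per_day = errors_per_hour * 24
# with 3 TB of RAM even this modest assumed rate yields on the order of
# 60 correctable ECC events per day -- routine, not exceptional
```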
Boy, I'd really hate to be the guy in charge of hardware maintenance on THAT platform.
Current State of the Art - 2 TB mem and 256 cpus (Score:2, Informative)
http://www.ccs.ornl.gov/Ram/Ram.html [ornl.gov]
A few notes:
Linux kernel: 2.4.21-sgi240rp04051808_10074
From df, a 1 TB ram disk:
none 1023700704 0 1023700704 0%
From
Red Hat Linux Advanced Server release 2.1AS (Derry)
The machine is actually not nice to work on. It is prone to frequent short freezes (2-15 seconds long; about one every 2-3 minutes, although not evenly spaced out).
Re:Another thing Sun does well.... (Score:3, Informative)
Read the Sun Blueprints (http://www.sun.com/blueprints/browsesubject.html#cluster) for how a real cluster works - actually caring about data integrity. That is the crux with clustered systems: What happens if one node "goes mad" even though it's no longer a "valid" part of the cluster?
Look into Sun's dealing with failure-fencing; it's drastic (PANIC a node if it can't be sure it's a cluster member) but it works.
By contrast, Linux clustering seems to be at the level of "let's share an IP address, we can balance the load". Great for DNS (but -oh, DNS has that built-in) or Apache read-only servers (assuming no session-management, static-only pages).
Digital had an excellent cluster package last decade; Sun seem to be getting to that level now. Linux, sorry to say, is years behind.
Re:Not sure if you're serious, but let's explain (Score:3, Informative)
Incorrect: a modern superscalar CPU can potentially execute several instructions at the same time. The Pentium was the first Intel CPU able (very crudely) to do this; the P6 was 3-way superscalar (IIRC - there was an article linked to on
Meaning one program thread running at a time.
It does not mean that at all.
Most CPUs only support a single context of execution, however some CPUs support multiple execution contexts, intel "HyperThreading" would be one example. So a superscalar CPU with multiple execution contexts could have many instructions in several stages of execution from multiple programme contexts at any given point in time.
When it has switched to a program, all the other programs are effectively at the mercy of the program now running, INCLUDING the OS. Which is why DOS and Windows and Linux and Mac OS and all the others had "hangups".
You're describing cooperatively multi-tasking operating systems, which Linux is not - i.e. systems like Windows 3 and MacOS 9 and earlier.
Linux is a preemptive multi-tasking OS, as are MacOS X, WinNT/2k and (partly) Win9x. Under such a system programmes are given only limited periods of time to run; a programme which does not yield control of the system by itself will eventually be suspended. Typically this is done by the OS setting a hardware timer, upon the expiration of which the hardware forcibly returns control to the OS, whereupon it can elect to give control of the system to another process (setting that timer again, if need be). On most hardware this is done with a timer interrupt, e.g. IRQ 0 on PC-class machines, which fires at a preprogrammed interval (100 Hz on older Linux, 1000 Hz on 2.6, or 1024 Hz for Linux or Digital Unix on Alpha). When the interrupt goes off, the CPU saves the state of the running process (as it always does to handle interrupts) and runs the appropriate interrupt vector as installed by the operating system, which can then elect to run another process (usually after saving whatever state of the current process the CPU hadn't saved, or which is OS-dependent, and then restoring the state of the other process).
Anyway, a process on Linux can *NOT* "hang" the system by refusing to yield control. The OS (with help from hardware) will intervene.
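A small demonstration of the preemption argument, with the Python runtime's time-slicing standing in for the OS timer interrupt (a sketch of the principle, not an OS-internals demo): a busy loop that never voluntarily yields still cannot stop the main thread from running.

```python
import threading
import time

stop = False

def hog():
    # a busy loop that never yields -- under cooperative multitasking
    # this would hang everything else on the machine
    x = 0
    while not stop:
        x += 1

t = threading.Thread(target=hog, daemon=True)
t.start()
time.sleep(0.1)        # the main thread keeps running: the hog gets preempted
stop = True            # ask the hog to exit
t.join(timeout=2)
preempted_ok = not t.is_alive()
```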
still remains a case that all the programs and the OS are fighting for time on 1 single cpu
This isn't really a good mental image to have of a modern OS. The OS does not "fight for time". The OS only ever runs because:
1. A process calls the OS to perform some service on that process's behalf.
E.g., to do work on the process's behalf such as I/O (read/write from disk/network/whatever, or IPC I/O delivered to a process/destination), or to set up the OS abstractions needed for I/O (typically file handles on Unix), or to interact with OS abstractions, e.g. to list a directory or the running processes, send a signal to a process, etc.
1a. A subset of 1, where a process calls the OS to voluntarily yield the CPU it is executing on. The OS can potentially do some housekeeping here before restoring the state of another process and allowing it to run.
2. The hardware directly intervenes and executes OS installed functions, typically in response to an interrupt generated by a timer or other hardware or else some exceptional event (typically a memory fault where a memory address is referenced that does not "exist").
Operating systems will typically try to do as little work as possible in the latter case and will try to defer as much as possible.
Re:Another thing Sun does well.... (Score:3, Informative)
Now sure, some careful planning can take an OLTP system and make it more cleanly distributed, but at that point it isn't OLTP, because all the nasty bits that made it a hard workload are washed out. Running a constantly-changing database (e.g. financial market?) on a cluster is hard; running a mostly static database (e.g. shopping cart?) is easy.
However, I agree with your point. Very few people need the 32-cpu monster (although there are a few!). Handling transaction volume can be done two ways: buy a big general-purpose machine that can handle the volume, or buy a cheaper cluster that more closely matches the workload. And today, the cluster is the right answer.
I think the difference between then and now is that before, we didn't know what the workload was supposed to be. In that case, a big general-purpose monster server is the most flexible solution. But now, we know what workload we want, and it's cheaper to design a cluster for that workload.