SGI to Scale Linux Across 1024 CPUs
im333mfg writes "ComputerWorld has an article up about an upcoming SGI machine, being built for the National Center for Supercomputing Applications, 'that will run a single Linux operating system image across 1,024 Intel Corp. Itanium 2 processors and 3TB of shared memory.'"
Whoa! (Score:5, Funny)
Ok (Score:5, Funny)
Damn you SGI!
Fine... (Score:2)
Re:Ok (Score:2)
Re:Ok (Score:2, Funny)
ha!
Re:Ok (Score:5, Funny)
Re:Ok (Score:2, Interesting)
Alright, pass around the hat.
Longhorn (Score:3, Funny)
Re:Longhorn (Score:2, Funny)
No, but it will be used to [cross-]compile it!
Re:Longhorn (Score:4, Funny)
Re:Longhorn (Score:2)
In other news... (Score:5, Funny)
HP Overstock (Score:2, Funny)
Solaris (Score:3, Insightful)
Re:Solaris (Score:5, Informative)
http://top500.org/list/2004/06/
There's no "stronghold" for Sun to lose.
Sun != scientific computing (Score:5, Informative)
Scientific computing means data crunching (floating point). Complex, powerful processors are needed. The "stupider, but more" tradeoff doesn't work anymore. Sun processors have fallen behind in this respect.
Re:Solaris (Score:5, Interesting)
Solaris scales to hundreds of processors out-of-the-box. Until the vanilla Linux kernel accepts these changes and scales, Solaris still has a big edge in this area.
Lame analogy: many people have demonstrated that they can hack their Honda Civic to outperform a Corvette, however I can walk into a dealership and purchase the latter which performs quite well without mods.
Re:Solaris (Score:5, Interesting)
I wouldn't be surprised to see these changes in the 2.8 kernel. And what will people do until then, I hear some ask. I can tell you that right now very few people actually need to scale to 1024 CPUs, and that will probably still be true by the time Linux 2.8.0 is released. AFAIK Linux 2.6 does scale well to 128 CPUs, but I don't have the hardware to test it, and neither do any of my friends. So I'd say there is no need to rush this into mainstream; the few people who need it can patch their kernels. My guess is that between now and the 2.8.0 release we will see fewer than 1000 such machines worldwide.
Re:Solaris (Score:4, Funny)
640 CPUs are enough for anyone? :)
Re:Solaris (Score:3, Interesting)
A better retort would be the IBM dude's "There's a world market for maybe 5 computers."
Claims like that are very difficult to make, and impossible to prove. Putting a time limit on a claim, however, is easy: 2.8.0 will be released in '05 or '06. Maybe we'll all have 1024-CPU boxes in 20 years, but in 20 months?
Re:Solaris (Score:3, Insightful)
Re:Solaris (Score:3, Interesting)
If someone buys one of these clusters from SGI, then it does scale "out of the box" as far as they're concerned.
Sun does more than that (Score:5, Insightful)
I don't work for Sun, I'm just an SA who deals with both Solaris and Linux boxes. You don't pick Sun just for "lots of CPUs"; you pick it for a very scalable OS and amazing hardware that allows for a very, very solid datacenter. If downtime costs a lot (i.e. you lose a lot of money for being down), you should have Sun and/or IBM zSeries hardware. Unfortunately those features cost a lot, and most of the time you can use Linux clustering instead for a fraction of the cost and a high percentage of the availability.
Sun and/or IBM zseries hardware (Score:4, Informative)
The Sun hardware is more difficult to deal with, since there isn't a virtual machine abstraction. You can't do everything below the OS. Still, Linux 2.6 has hot-plug CPU support that will do the job without help from a virtual machine. Hot-plug memory patches were posted a day or two ago. Again, this is NOT required for hot-plug on the zSeries. IBM whips Sun.
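For reference, the hot-plug CPU support mentioned above is driven through sysfs on Linux. Here's a minimal sketch of taking a CPU offline from userspace - the CPU number is just an example, and it assumes a 2.6+ kernel built with CONFIG_HOTPLUG_CPU and root privileges:

/* Sketch: take CPU 3 offline via the Linux hot-plug sysfs interface.
 * Assumes a 2.6+ kernel built with CONFIG_HOTPLUG_CPU and root
 * privileges; the CPU number is illustrative. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/cpu3/online";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror("fopen");       /* e.g. CPU 0, or hotplug not supported */
        return 1;
    }
    fputs("0", f);             /* write "1" instead to bring it back online */
    fclose(f);
    printf("requested offline of cpu3\n");
    return 0;
}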
I'd trust the zSeries hardware far more than Sun's junk. A zSeries CPU has two pipelines running the exact same operations. Results get compared at the end, before committing them to memory. If the results differ, the CPU is taken down without corrupting memory as it dies. This lets the OS continue that app on another CPU without having the app crash.
Re:Sun and/or IBM zseries hardware (Score:2)
Du-uh (Score:2, Interesting)
Re:Du-uh (Score:2)
I think it's funny how either Sun is far too expensive and we're being told to run everything on a few old 486s from the back of the office cupboard, or Linux on a mainframe is the way to go.
Re:Sun and/or IBM zseries hardware (Score:3, Informative)
A few questions:
Re:Sun and/or IBM zseries hardware (Score:3, Informative)
sort of error correction. The cache generally has ECC for this. Since L1 is innermost and small, it may well be duplicated along with the pipelines, but I think they use ECC for that as well. This is full-path protection. Cables have ECC and/or a protocol with checksums. Disks are RAID. Methods of error correction vary by component, but nowhere are they missing.
Another thing Sun does well.... (Score:5, Insightful)
I have replaced Sun hardware/software combos in the core datacenter for many of our customers, and I can tell you that yes - Sun brings some amazing features to the table - most of which are there to serve old technology. Linux on simple CPUs delivers amazing price/performance (depending on the job, we see an average of a 3x to 4x performance increase for 25% of the cost). That means that if I were to spend the same, lifecycle-wise, on a Linux cluster as I would on a big Sun box like the 10k or 15k, I'd end up with 12x to 16x the performance of the Sun solution.
The same functionality in terms of CPU and RAM (and other hardware) failure is available on the Linux cluster, albeit in less graceful form - the magic spell to invoke goes like this: if I have 300 machines crunching my data, I can afford to lose a couple, and can afford to have a few hot standbys.
Of course, the massively parallel architecture does not work for all applications, and in those cases you would look to use either OpenMOSIX [openmosix.org] or of course the (relatively expensive) SGI box mentioned in this article.
Re:Another thing Sun does well.... (Score:3, Informative)
Read the Sun Blueprints (http://www.sun.com/blueprints/browsesubject.html#cluster) for how a real cluster works - actually caring about data integrity. That is the crux with clustered systems: what happens if one node "goes mad" even though it's no longer a "valid" part of the cluster?
Look into Sun's dealing with failure-fencing; it's drastic (PANIC a node if it can't be sure it's a cluster member) but
Re:Another thing Sun does well.... (Score:3, Informative)
Now sure, some careful planning can take an OLTP system and make it more cleanly distributed, but at that point it isn't OLTP, because all the nasty bits that made it a hard workload are washed out. Running a constantly-changing database (e.g. financial marke
Re:Sun does more than that (Score:2)
Does that happen in real life?
Hot swapping components sounds great, but what if the screwdriver slips out of the engineer's fingers and causes a short?
Who has seen that a memory chip or a cpu was hot-swapped in a pro
Re:Sun does more than that (Score:4, Interesting)
The systems I've seen that have hot-swap PCI cards have plastic partitions between the slots to prevent the cards from touching each other when hot swapping them.
I'm not sure why the hypothetical screwdriver would be in such a tech's hands in the first place. Many systems have non-screw means of retaining memory, PCI cards, CPUs and such.
Re:Sun does more than that (Score:2)
Let me clue you in on a few things (Score:5, Informative)
The UNIX made by SGI (the company making the machine referenced in the article) is more scalable than Solaris. Remember, IRIX was the first OS to scale a single Unix OS image across 512 CPUs. And now they've eclipsed that, with Linux.
None of that is unique to Sun.
Better than what? And says who? They've never decisively convinced the market that they're better at this than HP, SGI, IBM or Compaq.
This comparison ignores the other good Unix architectures out there, and you're also totally missing the point of the article. Linux supercomputing isn't just about cheap clusters anymore. Expensive UNIX machines on one side and cheap Linux clusters on the other is a false dichotomy.
Scalability of sorts (Score:4, Informative)
Scalability is a complex issue. SGI has put a whole lot of processors together and put a single Linux image on them (so that a single program can use all the memory), but this says nothing about how that setup will actually perform for general-purpose use. Just because the hardware allows threads on hundreds of processors to make calls into a single Linux kernel does not mean that there will not be major performance issues if this actually happens.
There are performance issues with memory even on single processor systems with nominally a single large address space, and a developer may need to put a lot of work into ensuring that data is arranged to make best use of the various levels of cache.
Many of the multi-processor architectures require even greater care to ensure that the processors are actually used effectively.
The fact that a single Linux image has been attached to hundreds of processors is no indication of scalability. A certain program may scale well, or not.
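To make the cache point concrete, here's a tiny sketch (plain C, illustrative sizes): the same array summed in two traversal orders. The row-by-row walk matches C's memory layout and streams through cache lines; the column-by-column walk strides across them and is typically several times slower.

/* Sketch: same work, two traversal orders. C arrays are laid out row
 * by row, so the first loop streams through cache lines in order while
 * the second strides a whole row's worth of bytes between accesses. */
#include <stdio.h>

#define N 2048

static double a[N][N];

int main(void)
{
    double sum = 0.0;
    int i, j;

    /* cache-friendly: inner loop walks contiguous memory */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];

    /* cache-hostile: inner loop jumps a whole row per access */
    for (j = 0; j < N; j++)
        for (i = 0; i < N; i++)
            sum += a[i][j];

    printf("%f\n", sum);
    return 0;
}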
Re:Sun does more than that, but SGI always has (Score:2)
The flip-side of this is that SGI has been in decline for several years longer than Sun and ma
Re:Sun does more than that (Score:3, Interesting)
Re:Solaris (Score:2)
Why gaming? (Score:2, Funny)
Re:Why gaming? (Score:2, Informative)
Re:Why gaming? (Score:2)
In about the same way that a Boeing 747 is overkill as a suburban/city commuting vehicle.
Re:Why gaming? (Score:2)
It's used by marketing types who don't understand that 24/7 already means every day of the year.
The same marketdroids that need to be reminded that free is always 100%.
BTW, it's 365.2422
Press Release (Score:4, Informative)
The big question is... (Score:4, Funny)
Re:The big question is... (Score:5, Funny)
"Windows has detected 1024 new sound cards and is installing them..."
and then the inevitable..
"Windows needs to restart your computer. Click OK to restart"
and then on system restart
1024 sound control apps in the system tray! =)
In other news... (Score:5, Funny)
When shown the report about Linux running on 1024 processors, Gates purportedly responded, "32 processors ought to be enough for anybody."
Re:In other news... (Score:5, Informative)
Re:In other news... (Score:5, Funny)
Please stop letting facts get in the way of a good MS bashing session.
Minister for Dis-Information.
Re:In other news... (Score:3, Insightful)
Still, though, the fact that Linux can scale to 1024 processors while Windows can only scale to 64 is enough reason to bash Windows, isn't it? I mean, wasn't Bill Gates recently bashing l
Similar software available? (Score:2, Interesting)
Re:Similar software available? (Score:5, Informative)
Re:Similar software available? (Score:3, Insightful)
Any SGI customer can then contribute the changes back to the kernel long before a year is up.
from MPI to multithreaded ? (Score:4, Interesting)
Re:from MPI to multithreaded ? (Score:2)
This article is news to me. My impression was that HPC programmers preferred mpi over shared memory multi-threading because they found
Re:from MPI to multithreaded ? (Score:4, Informative)
It's a tradeoff. MPI is "preferred" because a properly written MPI program will run on both clusters and shared-memory equally fast, because all communication is explicit. It's also much harder to program, because all communication must be made explicit.
Shared-memory (e.g. pthreads) is easier to program in the first place (since you don't have to think about as many sharing issues) and more portable. However, it is very error-prone - get a little bit off on the cache alignment or contend too much for a lock, and you've lost much of the performance gain. And it can't run on a cluster without a horrible performance loss.
If it's the difference between spending two months writing the shared-memory sim and four months writing the message-passing sim that runs two times faster on cheaper hardware, well, which would you choose? Is the savings in CPU time worth the investment in programmer time?
Alas, the latencies on a 1024-way machine are pretty bad anyway. If they use the same interconnect as the SGI Origin, it's 300-3000 cycles for each interconnect transaction (depending on distance and number of hops in the transaction). Technically that's low-latency... but drop below 32 processors or so, and the interconnect is a bus with 100 cycle latencies, so those extra processors cause a lot of lost cycles.
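To illustrate the "all communication is explicit" point, here's a minimal MPI sketch (a sum reduction across ranks; the loop is just a stand-in workload). The equivalent shared-memory version would simply add into a common variable under a lock, with no visible communication step at all.

/* Sketch: explicit communication in MPI. Each rank computes a partial
 * sum over its own slice, then MPI_Reduce gathers the total on rank 0.
 * Build with mpicc, run with mpirun -np <ranks> ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank handles every size-th element of the workload */
    for (i = rank; i < 1000000; i += size)
        local += 1.0 / (i + 1);

    /* the communication step has to be spelled out explicitly */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}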
Re:from MPI to multithreaded ? (Score:2)
Sharing memory between processes running on genuinely separate machines is not that easy; it often requires fancy hardware and software.
While the SGI solution also involves fancy hardware and software, I believe a single process gets to have terabytes of memory, which is rather different from the common cluster ar
Re:from MPI to multithreaded ? (Score:5, Informative)
Does this mean that the applications running on the "old" clusters, presumably using some flavor of MPI to communicate between nodes, will have to be ported somehow to become multithreaded applications?
NCSA still has plenty of "old" style clusters around. Two of the older clusters, Platinum [uiuc.edu] and Titan [uiuc.edu], are being retired to make room for newer systems like Cobalt. Indeed, the official notice [uiuc.edu] was made just recently--they're going down tomorrow. However, as the retirement notice points out, we still have Tungsten [uiuc.edu], Copper [uiuc.edu], and Mercury (TeraGrid) [uiuc.edu]. Indeed, Tungsten is number 5 on the Top 500 [top500.org], so it should provide more than enough cycles for any message-passing jobs people require.
So, anyone have any insights as to why/how this matters for the programmers?
What it means is that programming big jobs is easier. You no longer need to learn MPI, or figure out how to structure your job so that individual nodes are relatively loosely coupled. Also, jobs that have more tightly-coupled parallelism are now possible. The older clusters used high-speed interconnects like Myrinet or InfiniBand (NCSA doesn't own any InfiniBand AFAIK, but we're looking at it for the next cluster supercomputer). Although those provide really good latency and bandwidth, they aren't as high-performing as shared memory. Also, Myrinet's ability to scale to huge numbers of nodes isn't all that great--Tungsten may have 1280 compute nodes, but a job that uses all 1280 nodes isn't practical. Indeed, until recently the Myrinet didn't work at all, even after partitioning the cluster into smaller subclusters.
This new shared-memory machine will be more powerful, more convenient, and easier to maintain than the cluster-style supercomputers. Hopefully it will allow better scheduling algorithms than on the clusters too--an appalling number of cycles get thrown away because cluster scheduling is non-preemptive.
I'd also like to point out some errors in the Computerworld article. NCSA is *currently* storing 940 TB in near-line storage (Legato DiskXtender running on an obscenely big tape library), and growing at 2TB a week. The DiskXtender is licensed for up to 2 petabytes--we're coming close to half of that now. The article therefore vastly understates our storage capacity. On the other hand, I'd like to know where we're hiding all those teraflops of compute--35 TFLOPS after getting 6 TFLOPS from Cobalt sounds more than just a little high. That number smells of the most optimistic peak performance values of all currently connected compute nodes, i.e. how many single-precision operations the nodes could do if they didn't have to communicate, everything was in L1 cache, we managed to schedule something on all of them, and they were all actually functioning. Realistically, I'd guess that we can clear maybe a quarter of that figure, given machines being down, jobs being non-ideal, etc. etc. etc.
As a disclaimer, I do work at NCSA, but in Security Research, not High-Performance Computing.
It's an experiment..... (Score:2)
3TB of memory? (Score:3, Funny)
Coincidence? (Score:2, Redundant)
The solution! (Score:5, Funny)
"The National Center for Supercomputing Applications will use it for research"
1. Make a system that generates more heat than a supernova.
2. Research a solution to global warming.
3. Profit!
In other Headlines (Score:5, Funny)
Wow (Score:2, Funny)
Uh oh (Score:2)
Impressive... (Score:2, Informative)
Scalability of applications (Score:3, Insightful)
Re:Scalability of applications (Score:5, Informative)
With a single-memory-image system, the computation can easily be repartitioned dynamically as it proceeds. It's very costly (never say impossible!) to do this on a cluster, because you have to physically move memory segments from one machine to another. On a NUMA system you just change a pointer. The hardware is good enough that you don't really have to worry about memory latency.
And let's not forget I/O. Folks seem to forget that you can dump any interesting section of the computation to/from the file system with a single I/O command. On these systems the I/O bandwidth is limited only by the number of parallel disk channels - a system like the one mentioned in the article can probably sustain many GBytes/sec to the file system.
Let's not forget page size either. The only way you can traverse a few TB of memory without TLB-faulting to death is to have multi-MByte pages (because TLB size is limited). SGI allowed a process to map regions of main memory with different page sizes (up to 64 MB, I think) at least 10 years ago, in order to support large image database and compute apps.
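On current Linux the rough equivalent of those large-page mappings is a hugetlb mapping. A hedged sketch (assumes a recent kernel with MAP_HUGETLB and that the administrator has reserved huge pages beforehand, e.g. via /proc/sys/vm/nr_hugepages):

/* Sketch: back a large allocation with huge pages so it uses far fewer
 * TLB entries. Assumes a recent Linux kernel (MAP_HUGETLB) and that
 * huge pages were reserved first, e.g.:
 *     echo 512 > /proc/sys/vm/nr_hugepages                          */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define LEN (512UL * 1024 * 1024)   /* 512 MB, illustrative */

int main(void)
{
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");   /* likely: no huge pages reserved */
        return 1;
    }
    /* ... use p as one big array ... */
    munmap(p, LEN);
    return 0;
}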
When I used to work at SGI (5 years ago) the memory bandwidth at one cpu node was about 800 MBytes/s. My understanding is that the Altix compute nodes now deliver 12 GBytes/s at each memory controller. Although I haven't had a chance to test drive one of these new systems, it sounds like they have gradually been porting well-seasoned Irix algorithms to Linux. It is unlikely that a commodity computer really needs all of this stuff, but I'm looking at a 4-cpu Opteron that could really use many of the memory management improvements.
The real test (Score:5, Funny)
1024 cpus and 3 TB memory (Score:4, Funny)
So, when will Jeff Dike have UML ported to this? (Score:2, Funny)
In my garage ... (Score:3, Funny)
Correctable RAM and L2 errors? (Score:3, Informative)
UPDATE: I looked. Itanium 2's L2 cache is ECC. It'll correct a 1-bit failure, and detect and die on a 2-bit failure. Believe it or not, across a large number of CPUs running over a long period of time, that happens more often than you'd think. It also says it has an L3; no idea on the L3 cache protection method used. Because they don't say, I'd also guess ECC. Wheee! Lots of high-speed RAM around the CPU with ECC protection. Well, nobody called this an enterprise solution, so I guess it's okay.
Also, you're going to have regular issues with soft ECC errors on that many TB of RAM. And then there are the eventual outright failures that'll bring down the whole OS image. (An OS could potentially handle it 'gracefully' by seeing if there is a userspace process on that page and killing/segfaulting it, but that's more of an advanced OS feature.)
Boy, I'd really hate to be the guy in charge of hardware maintenance on THAT platform.
Re:Will it be done in time for Quake 3? (Score:3, Funny)
Hmm, quite possibly.
Re:Will it be done in time for Quake 3? (Score:2, Funny)
Re:really fast? (Score:5, Funny)
No, you're going to need quantum computing for that.
Re:really fast? (Score:4, Funny)
Re:really fast? (Score:2, Funny)
Re:What happened to RISC? (Score:4, Funny)
Re:What happened to RISC? (Score:3, Informative)
Re:What happened to RISC? (Score:3, Informative)
Re:What happened to RISC? (Score:2, Informative)
It became obsolete (Score:4, Informative)
Actually.. (Score:2, Informative)
Re:It became obsolete (Score:5, Informative)
A CISC instruction could do things like: take the value in register BP, add 4, get the value from memory at the address you just computed, add the value in register AX, and put the result back at the same memory location. Execution would take several clock ticks.
To do the same in RISC, you would need several instructions (add 4, get from memory, add ax, store to memory). The execution of the individual instructions would take one tick each, so the sequence would take several. But on average RISC was a bit faster.
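Roughly, in C terms, the operation being described is the one below; the comments sketch how a CISC and a RISC instruction set might encode it (the mnemonics and register names are illustrative, not exact).

/* The memory update the post describes, written in C, with rough
 * instruction sequences in the comments (names are illustrative). */
int update(int *bp, int ax)
{
    bp[1] += ax;     /* i.e. value at (address in bp, plus 4 bytes) += ax */
                     /* CISC (x86-ish): one instruction                   */
                     /*     add [bp+4], ax                                */
                     /* RISC (MIPS-ish): several simple instructions      */
                     /*     addiu t0, bp, 4    ; compute the address      */
                     /*     lw    t1, 0(t0)    ; load from memory         */
                     /*     addu  t1, t1, ax   ; add the register         */
                     /*     sw    t1, 0(t0)    ; store back               */
    return bp[1];
}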
CISC was invented at a time when memory was small; the CISC way let you store larger programs in the same amount of memory.
RISC was invented when memory size was no longer so limited, and looked set to displace CISC in the long run.
CISC was still around when memory bandwidth became a limiting factor. And since fewer instructions needed to be fetched from memory, more bandwidth was left for other data traffic. RISC lost some of its speed advantage.
Modern CISC processors get CISC instructions from memory, chop them up into smaller instructions, and execute those smaller instructions really fast. So in fact they can be seen as RISC processors posing as CISC processors, i.e. the best of both worlds.
So CISC is a way of compressing RISC instructions so they take up less memory/bandwidth.
Re:It became obsolete (Score:4, Informative)
Technically, RISC chips were supposed to execute all instructions in ONE cycle. This simplified the chip architecture, allowing it to scale up much farther. The downside was that it put the onus on the compiler writer to produce efficient code. (MIPS is a perfect example of this architecture.) All he had to do was make sure that fewer instructions were executed per task, and the code would run faster.
That is, until the chip designers started introducing superscalar and out-of-order execution. You see, simplifying the chip design provided chip designers with a way to add new optimizations in how instructions were loaded and executed. Unfortunately, this again meant more work for the compiler writer. Now he not only had to optimize the number of instructions, but he also had to optimize the ordering so that multiple instructions could be executed simultaneously or out of order.
Re:What happened to RISC? (Score:5, Interesting)
Quick examples: RISC uses less power because it has less logic? No, it needs to run at a higher frequency to match the speed of a slower-clocked CISC.
RISC is easier to program? Depends on the person. A compiler can take very good advantage of large, hardware-optimized instructions.
RISC easier to develop/manage? I'll say yes for RISC on this one. There's simply less logic on the chip, so fewer logic errors are possible. There's plenty more cache, which can break, but broken parts can be fused off.
RISC is physically smaller? No. RISC needs a higher clock frequency because many more instructions need to be executed. The result of this is that a much larger instruction cache is needed on chip.
I don't remember every comparison but it pretty much comes out that neither is better than the other. That being said RISC is better than x86. Everything is better than x86. However CISC vs RISC is much harder to judge. Having done x86, 68k, and MIPS I must say that RISC is a pleasure.
Re:What happened to RISC? (Score:3, Informative)
No. This is exactly wrong. G5s are a good example of this. They easily outperform P4s at the same clock speed, and it's the P4 which must run at the higher speed to compensate.
The overhead of supporting all the various instructions and addressing modes, as well as being able to fit the whole CPU on one die, were what made RISC a good choice in the past. Now,
Re:What happened to RISC? (Score:3, Informative)
That was my point. A shitty compiler with moderate optimization settings is very close in performance to one of the top compilers out there.
"The top compiler is infact the Intel compiler in part because it knows about unpublished instructions. Have fun reading the code it generates."
Yes, this was the example I used. The vectorized loops are a bitch to read.
"On the subject of G5s being faste
Re:What happened to RISC? (Score:2)
RISC overrated (Score:4, Informative)
It's like having on-the-fly instruction decompression. E.g., CISC programs tend to be smaller in main memory and cache, and they travel in CISC/"compressed" form, taking up less bandwidth over the memory/cache buses, to the CPU instruction decoder, where they are "decompressed" into RISC micro-ops to be executed.
Look at the mainstream desktop/workstation/server CPUs. Only the SPARC is RISC. IBM POWER/PowerPC is barely RISC[1], some people think it's more CISC than RISC. Itanium isn't RISC. x86 isn't. The rest (Alpha, MIPS, PA-RISC) are either out of the market or on their way out.
As long as CPUs are much faster than RAM (and cache remains expensive), it's often worth doing the compression/decompression thing.
[1] I believe IBM's POWER chips actually decode their "RISC" instructions to simpler instructions, some of their "RISC" instructions are pretty complex- kinda oxymoronic... But as I mentioned, that may not be such a bad thing.
Re:What happened to RISC? (Score:2)
The Itanium 2 is based on what Intel calls the EPIC architecture: Explicitly Parallel Instruction Computing. Basically the CPU fetches 128-bit, 3-instruction bundles. The instructions themselves are somewhat simple, and all three are executed simultaneously by parallel execution units.
Like some RISC architectures the Itanium 2 instruction set includes predication, register windows (lots of them -
Well it turned out that RISC.... (Score:2)
Re:in time for.... (Score:2)
Re:Advantages...? (Score:5, Informative)
The purpose of that computer is to solve complex scientific problems such as weather simulations, high-energy particle simulations, protein folding, etc. Many of these simulations involve iterated systems of equations that can take decades to solve on the fastest CPUs we have today.
The only way to get meaningful results in a meaningful amount of time is to break the problem apart into smaller problems and solve them in parallel.
Some projects, such as Folding@Home [stanford.edu] and Find-A-Drug [find-a-drug.org] go the distributed computing route -- use many disconnected systems to solve the problem.
The downside to that approach is that not all problems can be easily broken apart -- and some classes of problems can survive without tight coupling but lose efficiency. The impressive thing about this particular supercomputer is that it has a single, unified memory image.
This is very useful for some classes of simulation problems when the entire simulation must be present for each iteration.
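For contrast with the message-passing approach, here's a minimal shared-memory sketch (pthreads, illustrative sizes): every thread works directly on its own slice of one big array in a single address space, which is the style of programming a unified-memory machine like this enables at scale.

/* Sketch: shared-memory parallelism. All threads see the same array;
 * nothing is copied or sent, each thread just sums its own slice.
 * Compile with -pthread. */
#include <stdio.h>
#include <pthread.h>

#define N        (1 << 22)
#define NTHREADS 4          /* illustrative; a big Altix would use far more */

static double data[N];
static double partial[NTHREADS];

static void *worker(void *arg)
{
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS, i;
    double s = 0.0;

    for (i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    double total = 0.0;
    long i;

    for (i = 0; i < N; i++)
        data[i] = 1.0;
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        total += partial[i];
    }
    printf("total = %.0f\n", total);   /* expect N */
    return 0;
}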
Not sure if you're serious, but let's explain. (Score:4, Informative)
I will avoid the tech terms (partly because they would confuse you, partly because I don't know them all, but mostly because they ain't needed).
A single-CPU computer can execute ONE instruction at a time, meaning one program thread running at a time. But wait, you say, my OS can run multiple programs at the same time. WRONG. It can't. It is a trick: it is running one program at a time, but it is switching which program it is running really fast. There is, however, a problem with this. When it has switched to a program, all the other programs are effectively at the mercy of the program now running, INCLUDING the OS. Which is why DOS and Windows and Linux and Mac OS and all the others had "hangups". With an extremely well written OS these hangups (when a program doesn't switch back to the OS) can be avoided, but it remains the case that all the programs and the OS are fighting for time on one single CPU.
So what happens when you add a CPU? Well, a lot less switching, PLUS if a program for whatever reason does not switch properly, the OS can still run on the other processor. Just making a Windows box dual-CPU instantly makes it far more robust. I encountered this myself with an old Dell P3 that had a dual board but no second CPU installed. Before I added a second CPU it was the usual Windows crap of hangs and reboots and BSoDs. Afterwards it ran as stable as a Unix machine. Simple things like opening a complex folder in Exploder no longer "froze" the desktop, since it could simply run Exploder on one CPU and, say, Word or my MP3 player on the other.
Don't forget, too, that things like ATA hard drives and CD-ROMs need the CPU to drive them. This takes a lot of long cycles and a lot of waiting - not so much CPU power as just time on the CPU. With a second one to handle all the other tasks, everything runs far smoother.
So which is better: one 2GHz CPU or two 1GHz CPUs? It depends. If you are running one program thread, go with the single CPU; it will take all the CPU time but will not need to share it. If, however, you are running countless small threads, go with two or more. Threads will get access faster and you will lose less CPU time on context switches.
Oh yeah, that is another problem: switching between programs takes CPU time as well. It is not unknown for single-CPU systems to spend so much time on switching that they don't have time to run anything anymore - the old too-many-running-programs problem known from Windows, but which affects every OS.
Lastly, there is a simple question. Say you want real power: do you go for a quad 2GHz or a single 8GHz? Answer: it's a trick question - there is no such thing as an 8GHz CPU.
If you get the chance, buy a second-hand dual P3, install Windows 2000+ or Linux on it, and be amazed. That old system will respond a lot faster under load than your 3GHz monster.
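If you want to see what the OS actually has to schedule onto, here's a small sketch (Linux/glibc; sysconf is portable, the affinity call is Linux-specific, and pinning to CPU 0 is purely illustrative):

/* Sketch: report how many CPUs are online, then pin this process to
 * CPU 0 so everything else is free to run on the other CPU(s).
 * sched_setaffinity() is Linux-specific; CPU 0 is just an example. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sched.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    cpu_set_t set;

    printf("online CPUs: %ld\n", ncpus);

    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");
    else
        printf("pinned to CPU 0\n");
    return 0;
}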
Re:Not sure if you're serious, but let's explain. (Score:3, Informative)
Incorrect: a modern superscalar CPU can potentially execute several instructions at the same time. The Pentium was the first Intel CPU able (very crudely) to do this; the P6 was 3-way superscalar (IIRC - there was an article linked to on
Re:it would make 2d place (Score:2)
Re:it would make 2d place (Score:3, Informative)