Building A High End Quadro FX Workstation 89
An anonymous reader writes "FiringSquad has an article detailing some of the differences between building a high-end workstation and a high-end gaming system. They go into things like ECC memory, and the difference between professional and gaming 3D cards. The Quadro FX 2000 coverage is particularly interesting -- the system with the Quadro FX 2000 was never louder than 55 dB!"
interesting (Score:2, Interesting)
ECC Memory? (Score:3, Insightful)
I Am Not A Memory Expert though.
Re:ECC Memory? (Score:3, Interesting)
Either you just haven't recognized when it happened, you don't work with any significant number of computers, or you've been INCREDIBLY lucky.
Memory isn't perfect. If your uptime is important, you need ECC.
Re:ECC Memory? (Score:1)
Re:ECC Memory? (Score:5, Informative)
Re:ECC Memory? (Score:3, Insightful)
Re:ECC Memory? (Score:2)
If someone is dealing with critical numbers, I would hope that they have a lot more redundancy and comparison/verification in place than just trusting the hardware of a single machine.
Re:ECC Memory? (Score:5, Informative)
RTFA - Read The F**king Article!
"Two to twelve times each year, a bit in memory gets inappropriately flipped. This can be caused by cosmic rays flying through your RAM or a decay of the minute radioactive isotopes found in your RAM - the impurity need only be a single atom. Most of the time, this flipped bit is unimportant. Maybe it's a flipped bit in unallocated memory, or maybe it just altered the position of a pixel for a fraction of a second. If you're unlucky though, this flipped bit can alter critical data and cause your system to crash. In our situation, a flipped bit could potentially alter our results significantly."
Quoted from the second paragraph of the fourth page.
Re:ECC Memory? (Score:1)
With ECC RAM, you're just eliminating that *known* unlikely event. What about other *unknown* unlikely events? Those may have just as high a likelihood as a flipped memory bit.
Running the tests twice should eliminate the vast majority of these kinds of known and unknown rare glitches, no?
Re:ECC Memory? (Score:5, Insightful)
Large simulations (such as this, or car crash simulations, etc.) take days, if not weeks, to run. Since ECC RAM isn't anywhere near 100% slower (i.e. the runtime with fast non-ECC memory, doubled, is more than the runtime with ECC memory), there is no need to run it twice.
Anyhow, if the two simulations differ, you'll have to do it a third time to check if you get a match, and even then you only know that you are *likely* to have gotten it right. With ECC the chance of getting it right increases.
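The run-it-twice-and-compare scheme being debated here can be sketched in a few lines (`verify_by_rerun` and the lambda are hypothetical stand-ins for a real days-long job, purely for illustration):

```python
def verify_by_rerun(run_simulation, max_runs=3):
    """Re-run a job until two results agree; without ECC this only gives
    statistical confidence, never a guarantee."""
    results = []
    for _ in range(max_runs):
        results.append(run_simulation())
        for earlier in results[:-1]:
            if earlier == results[-1]:            # two runs match: accept
                return results[-1]
    raise RuntimeError("no two runs agreed; results untrustworthy")

# Toy stand-in for a days-long simulation (deterministic, so run 2 matches run 1).
print(verify_by_rerun(lambda: 42))                # -> 42
```

Note the cost structure: in the best case you pay 2x runtime, in the mismatch case 3x or more, which is exactly why ECC's few-percent overhead wins for long jobs.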
Re:ECC Memory? (Score:2, Interesting)
Also, it's been a while, but doesn't most non-ECC memory use parity bits? So a single flipped bit will be noticed... hence the isolated blue screens of death/kernel panics on very rare occasions. Or is a parity bit what passes for ECC these days?
Re:ECC Memory? (Score:1, Informative)
I believe SRAM cells are less likely to have bits flip than DRAM cells (but don't take my word for it). That said, AMD's Hammers will have extensive error checking for cache. The L1 data cache is ECC protected, and the L1 instruction cache is parity protected. The unified L2 cache is fully ECC protected, including separate ECC for the L2 tags. The integrated memory controller supports Chipkill ECC RAM.
Re:ECC Memory? (Score:2, Insightful)
So, if it takes 4-6 DAYS for a test to run, you want to run it again to verify the results? They don't have the time to do it again. Take this from someone who manages a 190-node Linux cluster. We use it for seismic data processing. Our processing run times are 3 to 4 days each, and there are multiple runs for each job. We have project schedules that we need to meet, and running each step in the processing schedule twice is not an option.
Depending on what you are doing, the money is better spent on the front end for quality hardware than on doubling the time for a project to process the data. You could double the initial cost of the hardware, have two clusters, run the tests in parallel and compare the results. That may be the best thing to do, depending on what you are modeling/processing, but it's much cheaper to invest in the quality hardware up front.
Kent
Re:ECC Memory? (Score:1)
--This is pretty much what Mainframes do... Only they do it WHILE the test / application is running.
fourth page!?!? (Score:1)
can someone read it aloud and email me the mp3?
Re:ECC Memory? (Score:2)
Why have crashes? Even my Win2K machine stays up for months at a time.
Re:ECC Memory? (Score:2)
ECC is unnecessary if you use your computer to listen to MP3s, download porn and play Counter-Strike. If you're using your computer for important tasks, however, ECC corrects single-bit errors, which occur more often than you realize (most of your bits aren't very important, so you don't usually notice), and it also detects multi-bit errors, thus preventing data corruption that could otherwise go unnoticed.
Multi-bit errors with ECC will generate a non-maskable interrupt, which will purposefully take your machine down, rather than allowing you to continue with unreliable memory.
On a high-end server, every single data path runs ECC, so no data can be accidentally modified, ever. On PCs, it's generally considered acceptable just to ECC the memory, since PCs rarely are engaged in ultra-critical applications.
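The correct-one-flip, detect-two-flips behaviour described above can be sketched with a toy SECDED code (Hamming(7,4) plus an overall parity bit) over a single 4-bit value. This is an illustrative sketch only, not how a real memory controller lays out its check bits:

```python
def encode(nibble):
    """Hamming(7,4) plus an overall parity bit: 8-bit SECDED word for a 4-bit value."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7
    p0 = 0
    for b in bits:
        p0 ^= b                                   # overall parity, position 0
    return [p0] + bits

def decode(word):
    """Return (nibble, status): fixes any single flip, flags double flips."""
    syndrome = 0
    for pos in range(1, 8):
        if word[pos]:
            syndrome ^= pos                       # XOR of set positions = error position
    overall = 0
    for b in word:
        overall ^= b
    if syndrome and overall:                      # one bit flipped: correct it
        word = word[:]
        word[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                                # two bits flipped: detect only
        return None, "uncorrectable"              # a real system raises an NMI here
    else:
        status = "ok"
    nibble = word[3] | (word[5] << 1) | (word[6] << 2) | (word[7] << 3)
    return nibble, status

word = encode(0b1011)
word[5] ^= 1                                      # a cosmic ray flips one bit
print(decode(word))                               # -> (11, 'corrected')
```

The "uncorrectable" branch is the analogue of the non-maskable interrupt mentioned above: the code knows the data is bad but cannot say which bits to fix, so the only safe move is to stop.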
P.S. You're a fucking retard.
Easy (Score:1, Interesting)
1. workstation == better processors
2. gaming system == better graphic cards
Re:Easy (Score:2, Interesting)
You may like to read the article. This is a scientific visualization workstation being built with a seriously nice Quadro FX graphics card.
The author even benchmarks UT2k3 on it, and the scores are.. umm.. impressive.
Re:Easy (Score:2)
Better processors how? Faster? Better multiprocessing? Vector?
Better graphics cards how? Cunning filtering? Double-whammy pipelines with 8x anti-aliasing? Fast 2D? Accurate 3D?
Certain workstations require graphics cards which would make your Nvidia blahblah cry for mercy *in specific operations*. This makes it a BETTER graphics card - FOR ITS INTENDED USE. Yes, it'll be a crap gaming card. Likewise, it's possible a workstation will have a processor that's useless when it comes to running Windows - and that therefore people would say their gaming machine had a *better* processor. But for the specific applications of that workstation it would be fine.
So there are lots of workstations with better graphics cards and worse processors and vice versa.
hohum
Troc
Re:Easy (Score:4, Insightful)
2. gaming system == better graphic cards
Not as simple as that. A games card will trade precision for speed, because precision is less important if you are updating the scene dozens of times a second anyway. If two walls don't meet perfectly for 1/60th of a second, who will even notice? A workstation card will trade speed for precision - you cannot risk a mechanical engineer missing an improperly aligned assembly because of an artifact created by the graphics card, or worse, breaking an existing design because an artifact shows a problem that doesn't exist in the underlying model.
Re:Easy (Score:2, Informative)
I just can't agree with that statement - it's more a case of 'drivers written to function better in games' than a better graphics card. The one in the article uses a Quadro FX, and I know lots of other people who use a 3Dlabs Wildcat series - both of those cards wipe the floor with 'gaming' cards in 3D rendering for things like CAD/3D Studio/Maya.
Not entirely surprising (Score:5, Informative)
But "honest-to-goodness computation" (numerical analysis,
However, most if not all of the points in this article are quite informative - did YOU know the difference between Athlon XP and MP? I thought I mostly did.
And his choice of ECC RAM - Two to twelve times each year, a bit in memory gets inappropriately flipped
We come to the video card - a hacked GeForce isn't the same thing as a Quadro - bet some of the FPS freaks might be a little surprised, but the GeForces and Radeons aren't made for this sort of stuff. No real surprise, if you think about it. But, as he says, why not a FireGL? Everything comes back to the lesson of the day: know your task. And boy, he certainly does.
Anyway, enough of regurgitating some of the finer points of this great article. Read it for yourself. And don't post comments about how 1337 your Radeon 9700 Pro or Ti4800 is. Know your task.
Re:Not entirely surprising (Score:1)
Uh. (Score:1)
The Unreal engine has never been multi-threaded(there was a RUMOR that a future build of UT2k3 would have it in for laughs, this has not happened yet). For Quake3, you can use the "r_smp" variable in a Q3 engine game, but this is more of a testament to Carmack than anything else(stability problems, here we come).
Speaking as an owner of a dual-Athlon system, buying a SMP machine entirely for gaming is a shootable offense--there's no viable reason. Most games really aren't bound by the CPU, they're very fill-rate and TnL dependent, and more likely to run into your video card, RAM or bus speed barriers first. More CPU helps if you're running a server or for some reason want to play with a ridiculous amount of bots, but a bus speedup or better video card will aid the client much more.
Where it DOES come in handy is if you do development work; you can launch the client without having to quit out of your editing environment, compile a level in the background, or encode MP3s without a single loss of frames...
Re:Not entirely surprising (Score:1)
My task: Running a console on the rare occasion that a monitor is plugged into my server at home.
My card: An S3 Trio32
Ph33r my 1337ness.
--saint
Re:Not entirely surprising (Score:2)
oh wait, it blew its gain circuitry or something to bits........
Re:Not entirely surprising (Score:2)
wait a minute...
SGI should be put out of its misery (Score:4, Funny)
"These systems were around $40,000 when first released. Each R12000 400MHz has a SpecFP2000 of around 350-360 and so it's approximately equal to an Athlon 1.2GHz. The caveat is that the SpecFP2000 benchmark is actually made up of a bunch of other, smaller, tests. For computational fluid dynamics or neural network image recognition, the 400MHz SGI CPU is 2.5 to 5 times faster than the Athlon!"
WOW! 2.5 times faster than a 1.2GHz Athlon!? Man, you'd almost need a $168 2.4GHz Athlon [pricewatch.com] to keep up! I wish they made them!
P.S. The 3.06GHz P4 is just under 1000 on the SpecFP benchmark [specbench.org].
Re:SGI should be put out of its misery (Score:2)
(2) Spreading the overhead and costs of R+D (which can be *huge*)
If everybody went with SGI instead of IBM, we'd all be buying R12K boxes (from clone manufacturers, no less
Shop eBay... best UNIX for your dollar.
Re:SGI should be put out of its misery (Score:2, Interesting)
"These systems were around $40,000 when first released. Each R12000 400MHz has a SpecFP2000 of around 350-360 and so it's approximately equal to an Athlon 1.2GHz. The caveat is that the SpecFP2000 benchmark is actually made up of a bunch of other, smaller, tests. For computational fluid dynamics or neural network image recognition, the 400MHz SGI CPU is 2.5 to 5 times faster than the Athlon!"
WOW! 2.5 times faster than a 1.2GHz Athlon!? Man, you'd almost need a $168 2.4GHz Athlon [pricewatch.com] to keep up! I wish they made them!
P.S. The 3.06GHz P4 is just under 1000 on the SpecFP benchmark [specbench.org].
Let's see, the last generation that we have SPEC numbers on for SGI is the 600MHz R14K. It clocks in at 529 peak FP, compared to 656 peak FP for the 2.4GHz MP that was used in the benchmark. That's about a 20% difference in speed. The original CPUs that he was dealing with, the R12K 400 and the 1.2GHz K7, are 407 and 352 respectively. That actually gives the SGI a lead of about 15%. Now if the 2.5x increase in an application holds true, I'd say the SGI is still a good deal if you can afford it.
Now granted, I don't have $40,000 to spend on a workstation, but there are plenty of companies who are willing to spend the extra $30,000 once to get double the performance out of their $60,000-a-year engineers for the next two or three years. Also, as is pointed out in the article, the P4 is insanely optimized for SPEC. Its numbers have no real meaning for most real-world applications. If you want to get right down to it, SGI can give you 512 CPUs run through a single InfiniteReality module. No one would actually do this, but it's nice to dream about it once in a while
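For what it's worth, the percentages above can be checked with a little arithmetic. The inputs are the SpecFP figures quoted in this thread, and the exact percentage depends on which chip you take as the baseline (relative to the faster chip the gaps come out closer to the rounded ~20%/~15% figures):

```python
def pct_lead(a, b):
    """How much faster score a is than score b, relative to b."""
    return (a - b) / b * 100

print(round(pct_lead(656, 529)))   # 2.4GHz Athlon MP over 600MHz R14K: 24
print(round(pct_lead(407, 352)))   # 400MHz R12K over 1.2GHz Athlon: 16
```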
Re:SGI should be put out of its misery (Score:1, Informative)
Biased? (Score:4, Interesting)
The article carefully explains the choices made. However, we find the following line at the end of it:
Special thanks to AMD, NVIDIA, TYAN, and Ryan Ku at Rage3D.com for helping me with this project.
Well, maybe they had no influence at all, but then how come that most of the chosen products match this 'special thanks' line?
Re:Biased? (Score:3, Insightful)
You don't order food or car parts without knowing what is there and what you want/need, do you?
Oh, and if you also notice that the rest of the site is based on new hardware reviews and performance, you'd think that they would have good experiences with what works and what doesn't.
If you went out and researched companies or people for a project you were doing, would you not include them in a `special thanks to' section of the paper?
Re:Biased? (Score:3, Insightful)
ISV Certification (Score:5, Informative)
If it's not ISV certified it doesn't do you much good, as far as a workstation goes.
From Ace's Hardware:
When you look at the typical price ($4000-$6000) of a workstation built by one of the big OEMs, you might ask yourself why you or anyone would pay such a premium for a workstation.
In fact, if you take a sneak peek at the benchmarks further on, you will see that a high-end PC, based upon a 1400MHz Athlon, can beat these expensive beasts in several very popular workstation applications like AutoCAD (2D) and MicroStation.
Yes, it is possible that you are better served by a high-end PC assembled by a good local reseller. Still, there are good reasons to consider an OEM workstation.
Most of the time, a workstation is purchased for one particular task, and sometimes to run one particular application. Compaq, Dell and Fujitsu Siemens have special partnerships with the ISVs (Independent Software Vendors) who develop the most important workstation applications. In close co-operation with these ISVs, they verify that the workstation is capable of running each application stably and fast. In other words, you can ask the OEM whether he and the ISV can guarantee that your favorite application runs perfectly on the OEM's workstation. ISV certification is indeed one of the most critical factors that distinguishes a workstation from a high-end desktop.
Secondly, it is harder to assemble a good workstation than a high-end PC. Typically, a PC is built for the highest price/performance. A lot of hardware with an excellent price/performance ratio comes with drivers which do not adhere strictly to certain standards such as the PCI and AGP standards. Even if this kind of hardware compromises stability only in very rare cases, that is unacceptable for a workstation.
Last but not least, workstations come with high-end SCSI hard disks and OpenGL video cards which are seldom found in high-end PCs. Workstations are shipped with ECC (Error Checking and Correction) memory and can contain 2GB to 4GB of memory. High-end PCs typically ship with non-ECC memory and are - in practice - limited to 512MB (i815 chipset) - 2GB (AMD760).
Re:ISV Certification (Score:2)
Xeons have more L2 cache? (Score:2)
Perhaps the author felt that it goes without saying, but I'll say it. Regardless of theory, the choice of CPU would ideally be left until after some domain-specific benchmarks.
Only 55db (Score:1)
Re:Only 55db RTA? (Score:2)
The GeForce is clocked @ 500MHz. The Quadro is clocked @ 400MHz and doesn't need the hoover for cooling.
didja RT*A? From the horse's mouth:
I've run benchmarks at high resolutions when possible to minimize the influence of the CPU. By default the Quadro FX 2000 operates at 300/600MHz in 2D mode, and 400/800MHz in 3D performance mode. The new Detonators allow "auto-detection" of the optimal overclocking speed. This was determined to be 468/937. The GeForce FX 5800 Ultra runs at 500/1000. Here are the results we obtained with the card overclocked to 468/937:
I'm getting solid performance with a GPU that never runs past 63C and enters into the "high fan speed mode."
Hmmm. So. You were... wrong. OK. Bye,
Re:MP3 Playback? (Score:1, Interesting)
And as far as "Is there something magical about MP3s?" goes, I think he's talking about standard wave output support in Linux instead of enabling 5.1 surround, MIDI, game port, etc. - minimal make-the-Linux-user-happy driver support.
Re:MP3 Playback? (Score:3, Interesting)
Confirmed in my experiences with an AWE64 and a dual 533MHz Celeron setup. I moved to a Turtle Beach Santa Cruz - no problems.
And as far as "Is there something magical about MP3s?," I think he's talking about standard wave output support...
Many card/driver combinations are supposed to be able to recognise the kind of data put through them. The Santa Cruz, for example, had a 'Hardware MP3 accelerator' option in the control panel. I really don't know how they recognise it though - by instinct I'd agree that surely the waveform has been decoded by the main CPU anyway? Be interested to hear from anyone who knows more about this point.
Cheers,
Ian
Re:MP3 Playback? (Score:2)
(A fellow BP6er)
Re:MP3 Playback? (Score:2)
Re:MP3 Playback? (Score:1)
Yep - my first BP6 board died due to that. The replacement was always fine however.
Cheers,
Ian
Re:MP3 Playback? (Score:2)
Sadly no longer, as I've moved over to a Shuttle SB51G. But yes, it was the BP6 I ran - best value board for a long time. Excellent device in my opinion.
Cheers,
Ian
Re:MP3 Accelerator (Score:1)
What Changes for a Linux Math Machine? (Score:2, Interesting)
The information on GPUs was great if you're running Windows and doing visualizations, but most of science doesn't use Windows. They started their projects on big iron Unix and are now moving to Linux.
Our current spec out looks like this:
2 Athlon MP 2400
Tyan Tiger MPX
We were using the Thunder, but found we didn't need the onboard SCSI so moved to the Tiger. After the fits I've been having with gigabit cards and the AMD MP chipset, though, I'm considering going back to the Thunder for its built-in gigabit.
2Gig Kingston ValueRAM ECC RAM (it's what Tyan suggests)
120GB WD Spc. Ed. 8M cache HD
Additional Promise IDE controllers for new HD's when needed.
Generic TNT2 or GeForce2 video. (they are just math boxes)
Plextor ide CDRW
Still looking for the perfect tower.
Extra case fans.
The CPUs have been changing over the last year or so as the MPs get faster, and we have moved from 1 to 2GB of RAM.
Biggest problem I'm still having is that the system sounds like a 747 taking off, and I've had official AMD CPU fans burn out on me. I would still love to get a bit more oomph out of this, though, if there are any suggestions.
Re:What Changes for a Linux Math Machine? (Score:1, Interesting)
Fortran 90 is still the main scientific programming language (along with C and MATLAB). Intel makes a very good P4/Xeon compiler. It'd be interesting to compare it to, say, NAG's compiler or the Portland Group one that runs on my office machine, on both Intel and AMD.
With MATLAB it depends on what you are doing (FP vs INT and memory bandwidth again)
Re:What Changes for a Linux Math Machine? (Score:1)
You're right, most of the apps my users run/write are Fortran, but they generally use GCC for compiling, or precompiled binaries from Fermilab. The experiments they are working on have some pre-prepared software sets used throughout that they are loath to change or recompile for fear of adding any additional factors (or so it was explained to me - I'm not a researcher, just the SysAdmin).
I guess I should find out what Compiler they are using "upstream".
Re:What Changes for a Linux Math Machine? (Score:3, Informative)
If they had higher-end NVIDIA graphics cards, they could also be very good OpenGL development/visualization stations, using Linux. Port all that SGI code with very little effort...
Biggest problem I'm still having is that the system sounds like a 747 taking off, and I've had official AMD CPU fans burn out on me. I would still love to get a bit more oomph out of this, though, if there are any suggestions.
I'd use aftermarket fans, I thought AMD's fans were cheesy (to use a technical term;). If you want a good product, I recommend the PC Power and Cooling [pcpowerandcooling.com] Athlon CPU cooler. PCP&C generally has top-quality products (great choice for power supplies as well).
You should probably start going for DVD/RAM drives also, lots more capacity for backups...
One final thought on numerics - you might want to compare some of the commercial compilers with gcc. For instance, Microway resells [microway.com] a strong line of commercial compilers. The Portland Group compilers, in particular, look promising.
Re:What Changes for a Linux Math Machine? (Score:3, Informative)
See 2CoolTek [2cooltek.com] for this gear. I've been buying from them for years and highly recommend them.
You could go with one of those Vantec fan speed adjusters (handles 4 fans) instead of variable-speed fans... might be a better choice in your case.
Perfect tower: one of the Lian-Li aluminum cases, probably an extended length model (extra 10cm of space). See NewEgg [newegg.com], etc. Actually, they've got the cooling gear too.
"two sticks of RAM instead of one for Redundancy" (Score:3, Insightful)
Even worse, his choice of drive was a single WD 80GB IDE drive? WTF? There's a reason the warranties on those things just dropped to a year!
Re:"two sticks of RAM instead of one for Redundanc (Score:3, Interesting)
Also, many SCSI drives are less reliable than IDEs. Huh? This is because SCSI drives typically spin at higher revolutions, so they tend to fail more. Higher-capacity drives are more prone to defects and data corruption; the lower capacities typically are more reliable. Ask any admin how often they replace SCSI drives on various RAIDs. The fastest and biggest ones, from what I read here on Slashdot, fail every 2-6 months! Quantums, I heard, fail on a weekly basis on some of the more questionable units. The newer ones seem to be the worst.
I have been doing computers since 1991 and I have never seen a hard drive fail. I only use IDE. I believe part of the reason is that I used to upgrade my drives every 2 years and until recently did not run my systems 24x7 like servers do. For the last 2 years I have been running 24x7 without any problems. Like you, I would still select SCSI assuming it's for critical-level work and money isn't an issue. I would pick IDE if RAID was not needed, since SCSI is not more reliable unless it's in a RAID-5 configuration. Most workstations use a lot of graphics and CPU power; server applications tend to bottleneck at the hard drive. So hard disk performance is not really a factor unless the application runs out of memory and swaps to the drive. SCSI vs IDE benchmarks show that they are almost identical in speed unless lots of I/O requests go to the drive in parallel. Most CAD apps today easily stay within 2 gigs of RAM. I know exceptions exist, but they are rare.
However, I would try to stay within 7200 rpm and not go above 10,000 for the drive. You're asking for trouble with the higher speeds, which don't really provide an increase in performance of more than single percentage points in a lot of benchmarks anyway. Another benefit of going with slower-rpm drives is that they are a lot quieter.
SCSI is nice because it offloads a lot of I/O processing to the SCSI card. For any database or critical application where RAID is needed, it's the only way. For a graphical workstation for non-critical use (artist or grunt-level engineer), price and huge storage might be a bigger factor, as well as reliability. SCSI without RAID is not more reliable. I know a few RAID workstations exist, but RAID is almost exclusively used in servers and is expensive for a desktop. Most engineers save their work on a network share. I guess you have to take in the cost of a hard drive failure. Yes, engineers are sometimes expensive, but not more than any guy in sales or marketing in a big corporation; you might as well give everyone RAID.
55 dB not loud? (Score:1, Troll)
Computers should be silent. Any noise at all is too much, and 55 dB is way too much.
Re:55 dB not loud? (Score:1)
At the rate we're going, this is the type of hardware [usgr.com] we'll need to dissipate heat in 5 years!
You'd think that at the rate the latest and greatest silicon is being churned out and running hotter and hotter, one of the brilliant minds of today could figure out a way to make quiet "stealth" cooling fans. Yep, I know there's liquid cooling for PCs, but even though it's "safe", the idea of liquid and 500 watts flowing side by side is not appealing to me! Not to mention, are you gonna liquid-cool your power supply too?
It's incredible that at all the web sites you see "ultra quiet" CPU cooling fans for sale... their decibel ratings start at 30! Of course, lots of them drop down to 30 only when their speed-limiting systems kick in with the system idling. There is nothing quiet about that!!! You'd think there'd be some scientific solution to move air with a fan and not make such a racket!!! (If so, someone PLEASE point me in the direction of the CPU and case fans that do this!!!)
Price? (Score:1)
high end workstation? (Score:3, Interesting)
Western Digital 80GB Caviar with 8MB Cache
Why would you use a single IDE HD when you have SCSI built into the motherboard? In my experience storage upgrades have always provided tremendous speed improvements. Disk access is always a big bottleneck. If you're going to have a "high-end" workstation, you need at least SCSI, preferably SCSI RAID. If you want to go barebones, at least have IDE RAID with a really good backup plan.
And WTF do Quake 3 benchmarks have to do with a workstation?
Re:high end workstation? (Score:3, Insightful)
SCSI is not faster or more reliable than IDE unless it's in RAID. So if you're going to do SCSI, then you might as well buy not 1 but 4 drives for RAID. That adds up. If you're doing a lot of I/O requests in parallel, then SCSI is faster because it can offload the tasks and queue them from the controller. A single app will not do this unless it's a database or other server-oriented application. I notice a bigger increase in performance from a faster processor, but this is because I do not run a server. A workstation with lots of RAM has its bottleneck in memory, CPU, and graphics card. A server, on the other hand, is different.
More emphasis should be on the processor and video card for any workstation purchase.
I agree with IDE RAID if the job cannot be interrupted because of a failed drive, but 4 drives are expensive - though still a lot cheaper than SCSI. Also worth mentioning is getting bigger storage capacities with IDE from the amount of money saved. Keeping critical jobs locally is not as important as it used to be, because engineers, like their other white-collar associates, never store finished jobs on their own drives. They'd rather use a network share when they are done. You would be a fool to store your work on your own drive, since the file server backs it up on tape. Workstations typically run Win2K today rather than Unix, so this means they can use NT and Novell file servers.
Re:high end workstation? (Score:3, Informative)
A RAID with 4 drives might be more useful: 4*30 = 120MB/sec, which begins to approach the ATA limit in EIDE. Newer drives coming out will probably hit the ATA limit soon in RAID, and only SCSI can keep up. For a single drive, SCSI is not worth it.
Its strength will not show unless you run very heavy I/O-bound applications. I agree that SCSI is superior. I can't picture an engineer swapping out his hard drive while rendering a scene, so hot-swap support is important only in the server arena.
Your post just repeated mine in saying the emphasis on a workstation is not I/O-bound and SCSI is not worth it unless it's in RAID. Price is important, and in this day and age of shrinking IT budgets the SCSI myth is being exposed. A single SCSI drive is not that much faster or more reliable than an IDE.
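The 4*30 arithmetic above can be written out as a rough sketch (assuming ~30MB/sec sustained per drive and an ATA/133 bus cap; a real striped array loses some of this to controller and filesystem overhead):

```python
def raid0_throughput(drives, per_drive_mb_s, bus_limit_mb_s):
    """Aggregate streaming throughput of a striped array, capped by the shared bus."""
    return min(drives * per_drive_mb_s, bus_limit_mb_s)

print(raid0_throughput(4, 30, 133))   # 4 drives -> 120, just under the ATA/133 cap
print(raid0_throughput(8, 30, 133))   # more drives just saturate the bus: 133
```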
Well written, but weak article (Score:4, Interesting)
However, he pretty much dumps his chosen hardware in our laps by the end of the article without much explanation. It almost feels rushed.
There is way more out there than Tyan; who cares what Google uses? What about dual-channel DDR? What about the fact that Xeons and newer P4s have HyperThreading?
He starts slow, then in a few paragraphs blurts out some mystery hardware he decided to go with, then babbles about GeForce vs. Quadro for the rest of the article.
Oh well, he's a good writer. Better luck next time.
Re:Well written, but weak article (Score:2)
Re:Well written, but weak article (Score:2)
His use of multiple CPUs implies multi-threaded applications, so more CPUs are better than fewer. He might have found at LEAST a 20% increase in performance simply by using a P4 with HT.
Where are the unices? (Score:1)