The Almighty Buck Technology Hardware

Building A High End Quadro FX Workstation 89

Posted by Hemos
from the making-it-faster-and-better dept.
An anonymous reader writes "FiringSquad has an article detailing some of the differences between building a high-end workstation and a high-end gaming system. They go into things like ECC memory, and the difference between professional and gaming 3D cards. The Quadro FX 2000 coverage is particularly interesting -- the system with the Quadro FX 2000 was never louder than 55 dB!"


Comments Filter:
  • interesting (Score:2, Interesting)

    by Sh0t (607838)
    Compared to the 75 dB GFFX, that's a whisper
  • ECC Memory? (Score:3, Insightful)

    by Proc6 (518858) on Monday February 03, 2003 @08:57AM (#5214712)
    Can someone tell me why ECC memory is a good idea? I don't think I can remember, in all my years of computing, a machine crashing due to a memory error, or a machine not crashing because ECC memory saved it. Maybe I wouldn't know if it did, but I've always felt like ECC memory was slow, more expensive, and about as necessary as UFO insurance. Personally I'd rather have regular memory that tacos the machine completely when there's a problem, so I know there's a problem, than have ECC constantly correcting memory errors without my knowing, until I go to leave on vacation and the whole DIMM gives out.

    I Am Not A Memory Expert though.

    • Re:ECC Memory? (Score:3, Interesting)

      by evilviper (135110)
      I don't think I can remember in all my years of computing a machine crashing due to a memory error

      Either you just haven't recognized when it happened, you don't work with any significant number of computers, or you've been INCREDIBLY lucky.

      Memory isn't perfect. If your uptime is important, you need ECC.
      • you know, I always specify ECC for our Win2K workstations, but it rather strikes me that Winders will let the side down LONG before memory errors stop the show. Am I right or what?
        • Re:ECC Memory? (Score:5, Informative)

          by larien (5608) on Monday February 03, 2003 @09:33AM (#5214883) Homepage Journal
          OK, two points:
          1. If you're aiming for stability, you try to remove all such possible causes; even if Windows will crash once a week, there's no point making it worse by risking memory failure.
          2. Even if your machine doesn't crash, a flipped memory bit could invalidate your data results by altering a crucial figure. In some cases, it's not important, but a flipped bit at the higher range could alter a conclusion significantly and you wouldn't notice.
          Depending on your target audience, the latter may be more important than the former.
      • Re:ECC Memory? (Score:3, Insightful)

        by rtaylor (70602)
        It's subtle corruptions most people worry about. If you're doing financial transactions, you do everything you can to ensure that 4 doesn't turn into an 8 accidentally.
        • Yes, subtle problems are bad, but it's just a matter of percentages. It is far more likely that an instruction or binary data will be corrupted, causing a crash or visible corruption. I've seen it happen with memory, I've seen it happen with CPUs, and I've even seen it happen with the system bus.

          If someone is dealing with critical numbers, I would hope that they have a lot more redundancy and comparison/verification in place, than just trusting the hardware of a single machine.
    • Re:ECC Memory? (Score:5, Informative)

      by e8johan (605347) on Monday February 03, 2003 @09:09AM (#5214766) Homepage Journal

      RTFA - Read The F**king Article!

      "Two to twelve times each year, a bit in memory gets inappropriately flipped. This can be caused by cosmic rays flying through your RAM or a decay of the minute radioactive isotopes found in your RAM - the impurity need only be a single atom. Most of the time, this flipped bit is unimportant. Maybe it's a flipped bit in unallocated memory, or maybe it just altered the position of a pixel for a fraction of a second. If you're unlucky though, this flipped bit can alter critical data and cause your system to crash. In our situation, a flipped bit could potentially alter our results significantly."

      Quoted from the second paragraph of the fourth page.
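      The single-error correction that ECC modules perform can be sketched in miniature with a Hamming(7,4) code. This toy Python example (an illustration, not anything from the article) protects 4 data bits with 3 parity bits, locates a single flipped bit from the parity "syndrome", and flips it back:

```python
# Illustrative Hamming(7,4) single-error correction -- the same idea, at
# much larger word sizes, underlies ECC DIMMs. Codeword positions are
# 1-indexed; parity bits sit at positions 1, 2 and 4.

def encode(d):                        # d = 4 data bits, e.g. [1, 0, 1, 1]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                 # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                 # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):                       # returns (codeword, error position or 0)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3        # syndrome = 1-indexed error position
    if pos:
        c[pos - 1] ^= 1               # flip the bad bit back
    return c, pos

word = encode([1, 0, 1, 1])
word[5] ^= 1                          # a "cosmic ray" flips bit 6
fixed, where = correct(word)
print(where)                          # -> 6
print(fixed == encode([1, 0, 1, 1]))  # -> True
```

A clean codeword yields syndrome 0, so the correction step is a no-op in the common case.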

      • Wouldn't running the tests twice be a better way to ensure this kind of thing doesn't happen?

        With ECC RAM, you're just eliminating that *known* unlikely event. What about other *unknown* unlikely events? Those may have just as high a likelihood as a flipped memory bit.

        Running the tests twice should eliminate the vast majority of these kinds of known and unknown rare glitches, no?
        • Re:ECC Memory? (Score:5, Insightful)

          by e8johan (605347) on Monday February 03, 2003 @10:46AM (#5215263) Homepage Journal

          Large simulations (such as this, or car crash simulations, etc.) take days, if not weeks, to run. Since ECC RAM is nowhere near 100% slower (i.e. two runs on fast non-ECC memory take longer than one run on ECC memory), there is no need to run it twice.

          Anyhow, if the two simulations differ, you'll have to run a third to check whether you get a match, and even then you only know that you are *likely* to have gotten it right. With ECC the chance of getting it right increases.
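          To put rough numbers on that argument, here is a back-of-the-envelope sketch. The 3% ECC overhead and the per-run corruption probability are assumed figures for illustration, not measurements from the article:

```python
# Expected runtime: one run on ECC memory vs. run-twice-and-compare on
# non-ECC memory. T = days per run, p = assumed chance that a non-ECC run
# is silently corrupted, ecc_overhead = assumed ECC slowdown.
T, p, ecc_overhead = 7.0, 0.05, 0.03

ecc_run = T * (1 + ecc_overhead)      # one trusted run

# Run-twice always costs two runs; if the pair disagrees (either run was
# corrupted), a third tie-breaking run is needed. Two corrupted runs
# agreeing on the same wrong answer is ignored as negligible.
mismatch = 1 - (1 - p) ** 2
run_twice = 2 * T + mismatch * T

print(f"ECC, one run:      {ecc_run:.2f} days")
print(f"no ECC, run twice: {run_twice:.2f} days")
```

Even with a generous ECC penalty, the single trusted run finishes in roughly half the time of the compare-and-rerun scheme.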

          • Re:ECC Memory? (Score:2, Interesting)

            by still_nfi (197732)
            Um... since a large number of memory accesses come from cache, wouldn't it be more important to have an ECC cache than ECC main memory? Certainly, that is where a flipped bit is most likely to cause a problem. I doubt that any of the processors use ECC in the L1 or L2 caches.

            Also, it's been a while, but doesn't most non-ECC memory use parity bits? So a single flipped bit will be noticed... hence the isolated blue screens of death/kernel panics on very rare occasions. Or is a parity bit what passes for ECC these days?
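            For what it's worth, plain parity is detect-only. A hypothetical one-byte sketch (again illustrative, not from the article) shows both the catch and the blind spot:

```python
# Even parity over one byte: detects any odd number of flipped bits, but
# corrects nothing and misses double flips entirely. ECC adds enough extra
# bits to also pinpoint, and therefore fix, a single flip.

def parity(bits):
    return sum(bits) % 2              # 0 = parity checks out

data = [1, 0, 1, 1, 0, 0, 1, 0]
stored = data + [parity(data)]        # the byte plus its even-parity bit

stored[3] ^= 1                        # one flipped bit
print(parity(stored))                 # -> 1: violation detected (not located)

stored[1] ^= 1                        # a second flip cancels the first...
print(parity(stored))                 # -> 0: two errors slip through
```

This detect-but-don't-correct behavior is exactly why a parity error ends in a halt or blue screen rather than a silent fix.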
            • Re:ECC Memory? (Score:1, Informative)

              by Anonymous Coward
              since a large number of memory accesses come from cache, wouldn't it be more important to have an ECC cache than main memory? Certainly, that is where it is most likely that a flipped bit is going to cause a problem. I have doubts that any of the processors use ECC code in the L1 or L2 caches?

              I believe SRAM cells are less likely to have bits flipping than DRAM cells (but don't take my word for it). That said, AMD's Hammers will have extensive error checking for cache. The L1 data cache is ECC protected, the L1 instruction cache is parity protected. The unified L2 cache is fully ECC protected, including separate ECC for the L2 tags. The integrated memory controller supports Chipkill ECC RAM.

        • Re:ECC Memory? (Score:2, Insightful)

          by kperrier (115199)
          Wouldn't running the tests twice be a better way to ensure this kind of thing doesn't happen?

          So, if it takes 4-6 DAYS for a test to run, you want to run it again to verify the results? They don't have the time to do it again. Take this from someone who manages a 190-node Linux cluster. We use it for seismic data processing. Our processing run times are 3 to 4 days each, and there are multiple runs for each job. We have project schedules that we need to meet, and running each step in the processing schedule twice is not an option.

          Depending on what you are doing, the money is better spent up front on quality hardware than on doubling the time it takes a project to process the data. You could double the initial hardware cost, have two clusters, run the tests in parallel and compare the results. That may be the best thing to do, depending on what you are modeling/processing, but it's much cheaper to invest in quality hardware up front.

          Kent
        • >> Wouldn't running the tests twice be a better way to ensure this kind of thing doesn't happen?

          --This is pretty much what Mainframes do... Only they do it WHILE the test / application is running.
      • Can I get this in "Articles on Tape" format?

        can someone read it aloud and email me the mp3?
    • Of course you get ECC memory. I've had ECC memory on all my machines for years. The price differential at the manufacturer level is small (go to Crucial and check).

      Why have crashes? Even my Win2K machine stays up for months at a time.

    • I Am Not A Memory Expert though.
      That's pretty damned obvious, but since your post was moderated as insightful, I'll reply.

      ECC is unnecessary if you use your computer to listen to MP3s, download porn and play Counter-Strike. If you're using your computer for important tasks, however, ECC corrects single-bit errors, which occur more often than you realize (most of your bits aren't very important, so you don't usually notice), and it also detects multi-bit errors, thus preventing data corruption that could otherwise go unnoticed.

      Multi-bit errors with ECC will generate a non-maskable interrupt, which will purposefully take your machine down rather than allowing you to continue with unreliable memory.

      On a high-end server, every single data path runs ECC, so no data can be accidentally modified, ever. On PCs, it's generally considered acceptable to ECC just the memory, since PCs are rarely engaged in ultra-critical applications.

      P.S. You're a fucking retard.

  • Easy (Score:1, Interesting)

    "differences between building a high-end workstation and a high-end gaming system."

    1. workstation == better processors
    2. gaming system == better graphic cards
    • Re:Easy (Score:2, Interesting)

      by Molt (116343)

      You may like to read the article. This is a scientific visualization workstation being built, with a seriously nice Quadro FX graphics card.


      The author even benchmarks UT2k3 on it, and the scores are.. umm.. impressive.

    • by troc (3606)
      define *better*

      Better processors how? Faster? Better multiprocessing? Vector?

      Better graphics cards how? Cunning filtering? Double-whammy pipelines with 8x anti-aliasing? Fast 2D? Accurate 3D?

      Certain workstations require graphics cards which would make your Nvidia blahblah cry for mercy *in specific operations*. This makes it a BETTER graphics card - FOR ITS INTENDED USE. Yes, it'll be a crap gaming card. Likewise, it's possible a workstation will have a processor that's useless when it comes to running Windows - and that therefore people would say their gaming machine had a *better* processor. But for the specific applications of that workstation it would be fine.

      So there are lots of workstations with better graphics cards and worse processors and vice versa.

      hohum

      Troc
    • Re:Easy (Score:4, Insightful)

      by sql*kitten (1359) on Monday February 03, 2003 @10:09AM (#5215019)
      1. workstation == better processors
      2. gaming system == better graphic cards


      Not as simple as that. A games card will trade precision for speed, because precision is less important if you are updating the scene dozens of times a second anyway. If two walls don't meet perfectly for 1/60th of a second, who will even notice? A workstation card will trade speed for precision - you cannot risk a mechanical engineer missing an improperly aligned assembly because of an artifact created by the graphics card, or worse, breaking an existing design because an artifact shows a problem that doesn't exist in the underlying model.
    • Re:Easy (Score:2, Informative)

      by clarkc3 (574410)
      2. gaming system == better graphic cards

      I just can't agree with that statement - it's more 'drivers written to function better in games' than a better graphics card. The one in the article uses a Quadro FX, and I know lots of other people who use the 3Dlabs Wildcat series - both of those cards wipe the floor with 'gaming' cards in 3D rendering for things like CAD/3D Studio/Maya.

  • by kruetz (642175) on Monday February 03, 2003 @09:10AM (#5214775) Journal
    Let's face it - the main focus in a games PC is a blindingly fast GPU that can do umpteen hundred frames/sec at 1600x1200x32 or whatever, so you also need your system to be able to give the data to your video card as fast as possible. (Sound is another consideration, but not quite so major).

    But "honest-to-goodness computation" (numerical analysis, ...) doesn't use a GPU too intensively, except for displaying graphical data, for which the high-end OpenGL cards are ideal. The main focus here is CPU's performance in doing complex numerical tasks, not just passing data to the AGP slot. And let's face it, multiple-CPU PCs don't necessarily do anything for gaming, but they're great for this sort of stuff.

    However, most if not all of the points in this article are quite informative - did YOU know the difference between Athlon XP and MP? I thought I mostly did.

    And his choice of ECC RAM - Two to twelve times each year, a bit in memory gets inappropriately flipped ... If you're unlucky though, this flipped bit can alter critical data and cause your system to crash. In our situation, a flipped bit could potentially alter our results significantly. Geez.

    We come to the video card - a hacked GeForce isn't the same thing as a Quadro - bet some of the FPS freaks might be a little surprised, but GeForces and Radeons aren't made for this sort of stuff. No real surprise, if you think about it. But, as he says, why not a FireGL? Everything comes back to the lesson of the day: know your task. And boy, he certainly does.

    Anyway, enough of regurgitating some of the finer points of this great article. Read it for yourself. And don't post comments about how 1337 your Radeon 9700 Pro or Ti4800 is. Know your task.
    • Hehehe, actually, multiple CPUs in XP or 2K Pro help in gaming quite a bit. When you don't wanna shell out 500 bucks for a top-of-the-line CPU, a dual Athlon system is smokin' fast for gaming, and lots of games like Unreal Tournament and Quake 3 are either already set up to detect and use multiple CPUs, or you can tweak a config file to use them.
      • No, they don't.
        The Unreal engine has never been multi-threaded (there was a RUMOR that a future build of UT2k3 would have it in for laughs; this has not happened yet). For Quake 3, you can use the "r_smp" variable in a Q3-engine game, but this is more a testament to Carmack than anything else (stability problems, here we come).
        Speaking as an owner of a dual-Athlon system, buying an SMP machine entirely for gaming is a shootable offense - there's no viable reason. Most games really aren't bound by the CPU; they're very fill-rate and T&L dependent, and more likely to run into your video card, RAM or bus-speed barriers first. More CPU helps if you're running a server or for some reason want to play with a ridiculous number of bots, but a bus speedup or a better video card will aid the client much more.
        Where it DOES come in handy is if you do development work; you can launch the client without having to quit out of your editing environment, compile a level in the background, or encode MP3s without a single loss of frames...
    • And don't post comments about how 1337 your Radeon 9700 Pro or Ti4800 is. Know your task.

      My task: Running a console on the rare occasion that a monitor is plugged into my server at home.

      My card: An S3 Trio32

      Ph33r my 1337ness.

      --saint
      • you should be running the Trident SVGA card we used to have.. it was sure groovy to get snow on the screen every time the palette was altered!

        oh wait, it blew its gain circuitry or something to bits........
      • Personally, I have great affection for my onboard S3 Trio 64V+. Sure, it isn't accelerated, and with only 2MB of VRAM you're not gonna get much performance... and there was that whole issue of pre-4.1.x X corrupting the display if you switched back and forth from a virtual console, not to mention that XFree86 didn't even support the card until around 4.0.2...

        wait a minute...
  • by Proc6 (518858) on Monday February 03, 2003 @09:10AM (#5214776)
    Not intending to start a Holy War, I realize the 64 CPU monsters have their place but their workstations are just ignorant (this is coming from a previous SGI only owner)...

    "These systems were around $40,000 when first released. Each R12000 400MHz has a SpecFP2000 of around 350-360 and so it's approximately equal to an Athlon 1.2GHz. The caveat is that the SpecFP2000 benchmark is actually made up of a bunch of other, smaller, tests. For computational fluid dynamics or neural network image recognition, the 400MHz SGI CPU is 2.5 to 5 times faster than the Athlon!"

    WOW! 2.5 times faster than a 1.2Ghz Athlon!? Man, you'd almost need a $168 2.4 Ghz Athlon [pricewatch.com] to keep up! I wish they made them!

    P.S. The 3.06 Ghz P4 is just under 1000 on the SpecFP benchmark [specbench.org].

    • (1) Economies of scale (esp. with chip manufacture!)
      (2) Spreading the overhead and costs of R+D (which can be *huge*)
      If everybody went with SGI instead of IBM, we'd all be buying R12K boxes (from clone manufacturers, no less :) for $1500 apiece now.
      Shop eBay... best UNIX for your dollar.
    • Not intending to start a Holy War, I realize the 64 CPU monsters have their place but their workstations are just ignorant (this is coming from a previous SGI only owner)...
      "These systems were around $40,000 when first released. Each R12000 400MHz has a SpecFP2000 of around 350-360 and so it's approximately equal to an Athlon 1.2GHz. The caveat is that the SpecFP2000 benchmark is actually made up of a bunch of other, smaller, tests. For computational fluid dynamics or neural network image recognition, the 400MHz SGI CPU is 2.5 to 5 times faster than the Athlon!"

      WOW! 2.5 times faster than a 1.2Ghz Athlon!? Man, you'd almost need a $168 2.4 Ghz Athlon [pricewatch.com] to keep up! I wish they made them!

      P.S. The 3.06 Ghz P4 is just under 1000 on the SpecFP benchmark [specbench.org].


      Let's see, the last generation we have SPEC numbers on for SGI is the 600MHz R14K. It clocks in at 529 peak FP, compared to 656 peak FP for the Athlon MP 2400+ that was used in the benchmark. That's about a 20% difference in speed. The original CPUs he was dealing with, the R12K 400 and the 1.2GHz K7, are 407 and 352 respectively. That actually gives the SGI a lead of about 15%. Now, if the 2.5x increase in an application holds true, I'd say the SGI is still a good deal if you can afford it. Granted, I don't have $40,000 to spend on a workstation, but there are plenty of companies willing to spend the extra $30,000 once to get double the performance out of their $60,000-a-year engineers for the next two or three years. Also, as is pointed out in the article, the P4 is insanely optimized for SPEC; its numbers have no real meaning for most real-world applications. If you want to get right down to it, SGI can give you 512 CPUs run through a single InfiniteReality module. No one would actually do this, but it's nice to dream about it once in a while :)
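      Working the quoted peak figures through directly (a quick check using only the numbers above):

```python
# SPECfp peak figures quoted in the comment above.
r14k_600, athlon_mp_2400 = 529, 656   # R14K 600MHz vs Athlon MP 2400+
r12k_400, athlon_1200 = 407, 352      # R12K 400MHz vs 1.2GHz K7

# How far the R14K trails the newer Athlon, and how far the older R12K
# led the Athlon of its day.
print(f"R14K trails Athlon MP by {(1 - r14k_600 / athlon_mp_2400) * 100:.0f}%")
print(f"R12K leads 1.2GHz K7 by {(r12k_400 / athlon_1200 - 1) * 100:.0f}%")
```

That works out to roughly a 19% deficit and a 16% lead, consistent with the "about 20%" and "about 15%" figures in the comment.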
    • by Anonymous Coward
      Your Pricewatch link points to an Athlon XP 2400+, which does not run at 2.4GHz, but at 1.93GHz.
  • Biased? (Score:4, Interesting)

    by Gheesh (191858) on Monday February 03, 2003 @09:11AM (#5214778) Homepage Journal

    The article carefully explains the choices made. However, we find the following line at the end of it:

    Special thanks to AMD, NVIDIA, TYAN, and Ryan Ku at Rage3D.com for helping me with this project.

    Well, maybe they had no influence at all, but then how come most of the chosen products match this 'special thanks' line?

    • Re:Biased? (Score:3, Insightful)

      by sweede (563231)
      Perhaps the author of the article did his research and picked out the components of the system BEFORE contacting vendors and buying them.

      You don't order food or car parts without knowing what's there and what you want/need, do you??

      Oh, and if you notice that the rest of the site is based on new hardware reviews and performance, you'd think they would have good experience with what works and what doesn't.

      If you went out and researched companies or people for a project you were doing, would you not include them in a `special thanks to' section of the paper?

    • Re:Biased? (Score:3, Insightful)

      by vivIsel (450550)
      Welcome to the world of "hardware review" sites. Bias is their collective middle name.
  • ISV Certification (Score:5, Informative)

    by vasqzr (619165) <vasqzr@nets c a p e .net> on Monday February 03, 2003 @10:00AM (#5214985)

    If it's not ISV certified it doesn't do you much good, as far as a workstation goes.

    From Ace's Hardware:

    When you look at the typical price ($4000-$6000) of a workstation built by one of the big OEMs, you might ask yourself why you or anyone would pay such a premium for a workstation.

    In fact, if you take a sneak peek at the benchmarks further on, you will see that a high-end PC based upon a 1400MHz Athlon can beat these expensive beasts in several very popular workstation applications like AutoCAD (2D) and MicroStation.

    Yes, it is possible that you are better served by a high-end PC, assembled by a good local reseller. Still, there are good reasons to consider an OEM workstation.

    Most of the time, a workstation is purchased for one particular task, and sometimes to run one particular application. Compaq, Dell and Fujitsu Siemens have special partnerships with the ISVs (Independent Software Vendors) who develop the most important workstation applications. In close co-operation with these ISVs, they verify that the workstation is capable of running each application stably and fast. In other words, you can ask the OEM whether he and the ISV can guarantee that your favorite application runs perfectly on the OEM's workstation. ISV certification is indeed one of the most critical factors that distinguishes a workstation from a high-end desktop.

    Secondly, it is harder to assemble a good workstation than a high-end PC. Typically, a PC is built for the highest price/performance. A lot of hardware with an excellent price/performance ratio comes with drivers which do not adhere strictly to certain standards such as the PCI and AGP standards. Even if this kind of hardware might compromise stability only in very rare cases, that is unacceptable for a workstation.

    Last but not least, workstations come with high-end SCSI hard disks and OpenGL video cards which are seldom found in high-end PCs. Workstations are shipped with ECC (Error Checking and Correction) memory and can contain 2GB to 4GB of memory. High-end PCs typically ship with non-ECC memory and are - in practice - limited to 512MB (i815 chipset) to 2GB (AMD760).
    • While ISV certification is often very important, in the case of this article, where they were talking about a machine for a specific task, I don't believe certification is too important. MATLAB doesn't require certification (I don't think they even give certification to anyone), and I'm pretty sure the Quadro viewer program mentioned also doesn't require anything other than that the machine have a proper Quadro.
  • In the discussion of AMD vs. Intel, I was surprised to read the following:
    While both the P4 and Xeon are based upon a similar core, the Xeon offers multiprocessor support and larger L2 caches.
    The Pentium III Xeon had a larger L2 cache, but not the Pentium 4 Xeon. I just checked intel.com [intel.com]; there is a Xeon MP with a large L3 cache, but that only goes to 2GHz, so I doubt that was under consideration.

    Perhaps the author felt that it goes without saying, but I'll say it. Regardless of theory, the choice of CPU would ideally be left until after some domain-specific benchmarks.

  • The GeForce is clocked @ 500MHz. The Quadro is clocked @ 400MHz and doesn't need the hoover for cooling.

    • Only 55db (Score:1) by scotay (195240) on Monday February 03, @06:20AM (#5215096)
      The GeForce is clocked @ 500MHz. The Quadro is clocked @ 400MHz and doesn't need the hoover for cooling.


      didja RT*A? From the horse's mouth:
      I've run benchmarks at high resolutions when possible to minimize the influence of the CPU. By default the Quadro FX 2000 operates at 300/600MHz in 2D mode, and 400/800MHz in 3D performance mode. The new Detonators allow "auto-detection" of the optimal overclocking speed. This was determined to be 468/937. The GeForce FX 5800 Ultra runs at 500/1000. Here are the results we obtained with the card overclocked to 468/937:

      I'm getting solid performance with a GPU that never runs past 63C and enters into the "high fan speed mode."


      Hmmm. So. You were... wrong. OK. Bye,
  • Quite a nice article, and useful to me since I'm constantly building workstations for use in physics research, but what changes would be made for a Linux-based system?

    The information on GPUs was great if you're running Windows and doing visualizations, but most of science doesn't use Windows. They started their projects on big-iron Unix and are now moving to Linux.

    Our current spec out looks like this:
    2 Athlon MP 2400
    Tyan Tiger MPX
    We were using the Thunder, but found we didn't need the onboard SCSI so moved to the Tiger. After the fits I've been having with Gigabit cards and the AMD MP chipset, though, I'm considering going back to the Thunder for its built-in Gigabit.
    2GB Kingston ValueRAM ECC RAM (it's what Tyan suggests)
    120GB WD Spc. Ed. 8M cache HD
    Additional Promise IDE controllers for new HDs when needed.
    Generic TNT2 or GeForce2 video. (they are just math boxes)
    Plextor IDE CDRW
    Still looking for the perfect tower.
    Extra case fans.

    The CPUs have been changing over the last year or so as the MPs get faster, and we have moved from 1 to 2GB of RAM.

    The biggest problem I'm still having is that the system sounds like a 747 taking off, and I've had official AMD CPU fans burn out on me. I would still love to get a bit more oomph out of this, though, if there are any suggestions.
    • by Anonymous Coward
      One thing you will need to look at if you are doing physics research is the availability of compilers and optimisation routines.

      Fortran 90 is still the main scientific programming language (along with C and MATLAB). Intel makes a very good P4/Xeon compiler. It would be interesting to compare it to, say, NAG's compiler or the Portland Group one that runs on my office machine, on both Intel and AMD.

      With MATLAB it depends on what you are doing (FP vs INT, and memory bandwidth again).

      • Xeon is something else I've been considering, but the serious price jump per workstation has rather curtailed my chances to experiment.

        You're right, most of the apps my users run/write are Fortran, but they generally use GCC for compiling, or precompiled binaries from Fermilab. The experiments they are working on have some pre-prepared software sets used throughout that they are loath to change or recompile for fear of adding any additional factors (or so it was explained to me - I'm not a researcher, just the sysadmin).

        I guess I should find out what Compiler they are using "upstream".

    • (they are just math boxes)

      If they had higher-end NVIDIA graphics cards, they could also be very good OpenGL development/visualization stations, using Linux. Port all that SGI code with very little effort...

      Biggest problem I'm still having is the system sounds like a 747 taking off and I've had official AMD CPU fans burn out on me. I would still love to get a bit more oomph out of this though if there are any suggestions.

      I'd use aftermarket fans, I thought AMD's fans were cheesy (to use a technical term;). If you want a good product, I recommend the PC Power and Cooling [pcpowerandcooling.com] Athlon CPU cooler. PCP&C generally has top-quality products (great choice for power supplies as well).

      You should probably start going for DVD/RAM drives also, lots more capacity for backups...

      One final thought on numerics - you might want to compare some of the commercial compilers with gcc. For instance, Microway resells [microway.com] a strong line of commercial compilers. The Portland Group compilers, in particular, look promising.

    • Replace the AMD heatsink/fan kits with Thermalright SLK800s, YS Tech 80mm adjustable fans, and Arctic Silver 3 thermal compound. The catch is that the pink crap AMD uses instead of proper thermal compound may be permanently attached at this point, though the right chemicals (Goof Off cleaner followed by rubbing alcohol) can probably remove it. I'm using SLK800s on my dual 2400+ ASUS A7M266-D board, and with the fans adjusted to 2000RPM the system is very quiet; the most annoying noise is from the fan on the Ti4200 card, and there's no room for one of those neato Zalman heatpipe GPU coolers. With this setup I'm getting lower CPU temps than I was with 1800+ chips and the retail-box heatsink/fan kits (using AS3, having scraped off the pink stuff).

      See 2CoolTek [2cooltek.com] for this gear. I've been buying from them for years and highly recommend them.

      You could go with one of those Vantec fan speed adjusters (handles 4 fans) instead of variable-speed fans... might be a better choice in your case.

      Perfect tower: one of the Lian-Li aluminum cases, probably an extended length model (extra 10cm of space). See NewEgg [newegg.com], etc. Actually, they've got the cooling gear too.
  • by Zak3056 (69287) on Monday February 03, 2003 @12:06PM (#5215682) Journal
    Did anyone else see a logical disconnect between his assertion that two sticks of RAM were better than one - because if one failed, the machine could still operate while they waited for a replacement stick - and his choice NOT to use RAID?

    Even worse, his choice of drive was a single WD 80GB IDE drive? WTF? There's a reason the warranties on those things just dropped to a year!
    • Actually the warranty dropped on their SCSI units as well. Something tells me a defect might be in there, especially with the larger-capacity drives.

      Also, many SCSI drives are less reliable than IDEs. Huh?? This is because SCSI drives typically spin at higher revolutions, so they tend to fail more. Higher-capacity drives are more prone to defects and data corruption; the lower capacities typically are more reliable. Ask any admin how often they replace SCSI drives in various RAIDs. The fastest and biggest ones, from what I read here on Slashdot, fail every 2-6 months! Quantums, I heard, fail on a weekly basis in some of the more questionable units. The newer ones seem to be the worst.

      I have been doing computers since 1991 and I have never seen a hard drive fail. I only use IDE. I believe part of the reason is that I used to upgrade my drives every 2 years and, until recently, did not run my systems 24x7 like servers do. For the last 2 years I have been running 24x7 without any problems. Like you, I would still select SCSI assuming it's for critical-level work and money isn't an issue. I would pick IDE if RAID was not needed, since SCSI is not more reliable unless it's in a RAID-5 configuration. Most workstations use a lot of graphics and CPU power, while server applications tend to bottleneck at the hard drive, so hard disk performance is not really a factor unless the application runs out of memory and swaps to the drive. SCSI vs IDE benchmarks show that they are almost identical in speed unless lots of I/O requests go to the drive in parallel. Most CAD apps today easily stay within 2 gigs of RAM. I know exceptions exist, but they are rare.

      However, I would try to stay at 7,200 rpm and not go above 10,000 rpm for the drive. You're asking for trouble with the higher speeds, which in a lot of benchmarks don't provide more than single-percentage-point performance gains anyway. Another benefit of going with slower-rpm drives is that they are a lot quieter.

      SCSI is nice because it offloads a lot of I/O processing to the SCSI card. For any database or critical application where RAID is needed, it's the only way. For a graphical workstation in non-critical use (an artist or grunt-level engineer), price and huge storage might be a bigger factor, as well as reliability. SCSI without RAID is not more reliable. I know a few RAID workstations exist, but RAID is almost exclusively used in servers and is expensive for a desktop. Most engineers save their work on a network share. I guess you have to take into account the cost of a hard drive failure. Yes, engineers are sometimes expensive, but not more than any guy in sales or marketing in a big corporation. You might as well give everyone RAID.
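      For what it's worth, the reason RAID-5 (and not plain SCSI) buys you reliability is just XOR parity. A minimal sketch, assuming a hypothetical 3-data-plus-1-parity stripe (no real controller works byte-wise on tiny blocks like this, but the math is the same):

```python
# RAID-5 parity is the XOR of the data blocks: any single lost block
# can be rebuilt by XOR-ing the surviving blocks with the parity.
from functools import reduce

def xor_blocks(blocks):
    """XOR byte-wise across equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three hypothetical data blocks
parity = xor_blocks(data)            # what the fourth drive would hold

# Simulate losing the second drive, then rebuild it from the rest:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

      Lose any single drive and the XOR of the survivors rebuilds it; lose two before the rebuild finishes and you're done, which is why hot spares exist.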

  • Computers should be silent. Any noise at all is too much, and 55 dB is way too much.

    • At the rate we're going, this is the type of hardware [usgr.com] we'll need to dissipate heat in 5 years!

      You'd think that at the rate the latest and greatest silicon is being churned out, running hotter and hotter, one of the brilliant minds of today could figure out a way to make quiet "stealth" cooling fans. Yep, I know there's liquid cooling for PCs, but even though it's "safe", the idea of liquid and 500 watts flowing side by side is not appealing to me! Not to mention, are you gonna liquid-cool your power supply too?

      It's incredible that at all the web sites selling "ultra quiet" CPU cooling fans, the decibel ratings start at 30! Of course, lots of them drop down to 30 only when their speed-limiting systems kick in with the system idling. There is nothing quiet about that!!! You'd think there'd be some scientific solution to move air with a fan and not make such a racket!!! (If so, someone PLEASE point me in the direction of the CPU and case fans that do this!!!)
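      For perspective on those numbers, remember decibels are logarithmic: every +10 dB is 10x the sound power. A quick back-of-the-envelope (nothing vendor-specific, just the log scale, using the 75 dB and 55 dB figures from this thread):

```python
# dB is a log scale: +10 dB means 10x the sound power.
def power_ratio(db_high, db_low):
    return 10 ** ((db_high - db_low) / 10)

print(power_ratio(75, 55))  # 75 dB GFFX vs. the 55 dB Quadro box: 100x the power
print(power_ratio(55, 30))  # 55 dB system vs. a 30 dB "quiet" fan: ~316x
```

      So a 30 dB fan really is in a different league from the system as a whole; the catch is that every fan, drive, and power supply in the case adds up.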

  • Maybe I missed it, but I didn't see a final price. For all his talk about cost vs. performance at the beginning, you'd think we'd see a final overall price for this thing...
  • by asv108 (141455) <alex@@@phataudio...org> on Monday February 03, 2003 @01:09PM (#5216007) Homepage Journal
    Hard Drive:
    Western Digital 80GB Caviar with 8MB Cache

    Why would you use a single IDE HD when you have SCSI built into the motherboard? In my experience, storage upgrades have always provided tremendous speed improvements; disk access is always a big bottleneck. If you're going to have a "high-end" workstation, you need at least SCSI, preferably SCSI RAID. If you want to go barebones, at least have IDE RAID with a really good backup plan.

    And WTF do Quake 3 benchmarks have to do with a workstation?

    • He mentioned his budget in the article. Have you looked at the price of SCSI drives? Tiny 20-gig units at $400 each! Ouch.

      SCSI is not faster or more reliable than IDE unless it's in RAID. So if you're going to do SCSI, you might as well buy not 1 but 4 drives for RAID, and that adds up. If you're doing a lot of I/O requests in parallel, then SCSI is faster, because it can offload the tasks and queue them from the controller. A single app will not do this unless it's a database or other server-oriented application. I notice a bigger increase in performance from a faster processor, but that's because I do not run a server. A workstation with lots of RAM has its bottlenecks in memory, CPU, and graphics card; a server, on the other hand, is different.

      More emphasis should be on the processor and video card for any workstation purchase.

      I agree with IDE-RAID if the job can not be interrupted by a failed drive, but 4 drives are expensive, though still a lot cheaper than SCSI. Also worth mentioning is the bigger storage capacity you can buy with IDE from the amount of money saved. Keeping critical jobs locally is not as important as it used to be, because engineers, like their other white-collar associates, never store finished jobs on their own drives. They'd rather use a network share when they are done. You would be a fool to store your work on your own drive, since the file server backs it up on tape. Workstations typically run Win2K today rather than Unix, so this means they can use NT and Novell file servers.
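      To put rough numbers on the "a lot cheaper" claim, using the $400-per-20GB SCSI figure quoted above (the ~$120-per-80GB IDE price is my own ballpark, not from the article):

```python
# Cost per gigabyte: SCSI figure from the thread, IDE price assumed.
scsi_per_gb = 400 / 20    # $/GB for a 20GB SCSI drive at $400
ide_per_gb = 120 / 80     # $/GB for an 80GB IDE drive (assumed ~$120)
print(scsi_per_gb / ide_per_gb)  # SCSI is ~13x the cost per gigabyte
```

      At those prices, a 4-drive IDE array gives you roughly 16x the capacity of a single 20GB SCSI drive for about the same money.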

  • by zaqattack911 (532040) on Monday February 03, 2003 @03:25PM (#5216895) Journal
    I think he starts off well, talking about the decision-making process, the move to x86, what ECC means.

    However, he pretty much dumps his chosen hardware in our laps by the end of the article without much explanation. It almost feels rushed.

    There is way more out there than Tyan; who cares what Google uses? What about dual-channel DDR? What about the fact that Xeons and newer P4s have HyperThreading?

    He starts slow, then in a few paragraphs blurts out some mystery hardware he decided to go with, then babbles about GeForce vs. Quadro for the rest of the article.

    Oh well, he's a good writer. Better luck next time.
    • If you'd read more closely, you'd realize that he specialized his system for raw FPU performance, and that means Athlon. HyperThreading is totally not an issue. He was working within a constrained budget, and high-speed ECC DDR and a SCSI hard drive were both cut out.
      • uuuh, of course hyperthreading is an issue.

        His using multiple CPUs implies multi-threaded applications, so more CPUs are better than fewer. He might have found at LEAST a 20% increase in performance simply by using a P4 with HT.
  • For x86, ATI and Nvidia have Linux drivers. How about an RS6000 with a gtx3000 GPU? (Or Sun & CGI.) Where can I find more info about production workstations running in environments with thousands of clients (automotive engineering, e.g.)?
