Affordable Workstation Graphics Card Shoot-Out

MojoKid writes "While workstation graphics cards are generally much more expensive than their gaming-class brethren, it's absolutely possible to build a budget-minded system with a workstation-class graphics card to match. Both NVIDIA and ATI have workstation-class cards that scale down to below $500, a fraction of the price of most high-end workstation cards. This round-up looks at three affordable workstation cards, two new FireGL cards from AMD/ATI and a QuadroFX card from NVIDIA, and offers an evaluation of their relative performance in applications like Cinema 4D, 3D Studio Max, and SPECviewperf, as well as their respective price points."
  • All I can say is... (Score:5, Informative)

    by snl2587 ( 1177409 ) on Wednesday February 06, 2008 @03:51AM (#22318322)

    ...if you're planning on using a Linux workstation, don't buy an ATI card. I don't mean this as flamebait, just practical advice. Even with the new proprietary drivers, or the open source drivers, there are still many, many problems. Of course, I prefer ATI on Windows, so it all depends on what you want to do.

  • Re:Difference? (Score:5, Informative)

    by kcbanner ( 929309 ) * on Wednesday February 06, 2008 @03:54AM (#22318330) Homepage Journal
    The workstation cards tend to have very low error tolerance, while the real-time graphics cards allow for quite a bit of error in the name of speed. This is fine unless you're rendering something.
  • Re:Difference? (Score:4, Informative)

    by TheSpengo ( 1148351 ) on Wednesday February 06, 2008 @04:02AM (#22318368)
    The basic answer is that high-end gaming cards specialize in pure speed, while high-end workstation cards specialize in extreme accuracy and precision. They are incredibly accurate with FSAA and sub-pixel precision. Workstation graphics cards also have other features, such as overlay plane support, which really helps in things like 3dsmax.
  • Re:Difference? (Score:2, Informative)

    by acidream ( 785987 ) on Wednesday February 06, 2008 @04:04AM (#22318378) Homepage
    Workstation cards typically have certain features enabled that their gaming counterparts do not. Some are just driver features, others are in silicon. Hardware overlay planes are a common example; they are required by some 3D applications, like Maya, in order to display parts of the GUI properly.
  • by SynapseLapse ( 644398 ) on Wednesday February 06, 2008 @04:11AM (#22318416)
    Not exactly.
    Gaming-grade video cards tend to be very fast at special types of pixel shaders and excel at polishing the image to look better. Where they tend to be inaccurate is in how they clamp textures, and even then the fuzzy estimates only ever become an issue at extreme angles.
    This only affects how the data is displayed, and wouldn't cause a CAD program to "fall over."

    Workstation cards are primarily high polygon crunchers. Games are rendered entirely in triangles, whereas rendering programs use triangles, quadrangles, and honest-to-goodness polygons (5+ sides).
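    As a rough sketch of that distinction: a modeler can keep quads and n-gons around for editing, but before anything is rasterized they get split into triangles. The helper below is a hypothetical, minimal fan triangulation of a convex polygon in Python, just to illustrate the idea, not something any particular package actually ships.

    # Minimal sketch: fan-triangulating a convex n-gon into the triangles
    # a GPU actually rasterizes. Assumes the polygon is convex and its
    # vertices are given in winding order.
    def fan_triangulate(vertices):
        if len(vertices) < 3:
            raise ValueError("a polygon needs at least 3 vertices")
        triangles = []
        for i in range(1, len(vertices) - 1):
            # Every triangle shares vertex 0 with its neighbours.
            triangles.append((vertices[0], vertices[i], vertices[i + 1]))
        return triangles

    # A pentagon (5 vertex indices) becomes three triangles.
    print(fan_triangulate([0, 1, 2, 3, 4]))
    # -> [(0, 1, 2), (0, 2, 3), (0, 3, 4)]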
  • by doombringerltx ( 1109389 ) on Wednesday February 06, 2008 @04:35AM (#22318504)
    It depends. I bought a new ATI card after they opened up the 2D driver specs. When booted into Linux I haven't had any problems with my day-to-day activities. It's only when it tries to render anything in 3D that it shits bricks. To be fair, there may be a problem besides the driver that I haven't found yet, but right now all signs point to driver/card problems. Honestly it's not a big deal to me. I just don't use any fancy compositing manager and I never played games in Linux anyway. While I'm on the subject, I know when they released the 2D specs they said the 3D specs were on their way, but I never heard anything about that again. Does anyone know if or when that will happen, if it hasn't already?
  • Re:Difference? (Score:4, Informative)

    by Psychotria ( 953670 ) on Wednesday February 06, 2008 @04:41AM (#22318538)
    I am not sure "error tolerance" is the correct term; there is no "tolerance" (you are correct, though; I am just debating the term), it's just that the high-end workstation cards sacrifice speed for accuracy. To say "error tolerance" implies that both types of card have errors (which they may or may not have, and may or may not compensate for), and that one tolerates them more than the other. This, strictly, isn't true. A better analogy would be something like: high-end gaming cards have (for example... making the figures up) 24-bit precision and the high-end workstation cards have 64-bit precision. There is no "tolerance" involved; one just does the math better for accuracy and the other does the math better for speed.
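    To make the (made-up) precision figures above concrete, here is a small Python sketch of how a lower-precision accumulator drifts away from a higher-precision one over repeated additions. The iteration count and the use of NumPy's float32 type are just for illustration.

    # Toy illustration of the precision trade-off: the same running sum
    # kept in 32-bit and 64-bit floats drifts apart as error accumulates.
    import numpy as np

    acc32 = np.float32(0.0)
    acc64 = 0.0                      # Python floats are 64-bit doubles
    for _ in range(100_000):
        acc32 = np.float32(acc32 + np.float32(0.1))
        acc64 += 0.1

    print(float(acc32), acc64)       # the 32-bit total is visibly off
    print("difference:", abs(float(acc32) - acc64))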
  • by nano2nd ( 205661 ) on Wednesday February 06, 2008 @05:44AM (#22318782) Homepage
    Have a look at this site - it is possible to flash an 8800 GTX to Quadro FX 5600:

    http://aquamac.proboards106.com/index.cgi?board=hack2&action=display&thread=1178562617 [proboards106.com]
  • Re:Difference? (Score:2, Informative)

    by 0xygen ( 595606 ) on Wednesday February 06, 2008 @06:00AM (#22318824)
    Error tolerance refers to pixel errors in the output image compared to a reference rendering.

    E.g., the fast texture sampling methods on gaming cards lead to aliasing errors, where a pixel is wrong compared to the reference rendering.

    There are also a lot more factors to this than just floating-point precision, for example how the edges of polys are treated, how partly transparent textures are treated, and how textures are sampled and blended.
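    One way to picture that: render the same frame on the card under test and with a trusted reference renderer, then count the pixels that land outside some tolerance. The Python sketch below assumes Pillow and NumPy are available; the file names and the tolerance value are placeholders.

    # Compare a card's output frame against a reference rendering and
    # report how many pixels fall outside a per-channel tolerance.
    import numpy as np
    from PIL import Image

    def pixel_error(test_path, reference_path, tolerance=2):
        test = np.asarray(Image.open(test_path).convert("RGB"), dtype=np.int16)
        ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.int16)
        diff = np.abs(test - ref).max(axis=-1)   # worst channel per pixel
        frac_bad = (diff > tolerance).mean()     # fraction of out-of-tolerance pixels
        return frac_bad, diff.mean()

    # Hypothetical file names, purely for illustration.
    frac_bad, mean_err = pixel_error("card_output.png", "reference.png")
    print(f"{frac_bad:.2%} of pixels outside tolerance, mean error {mean_err:.2f}")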
  • by Animaether ( 411575 ) on Wednesday February 06, 2008 @06:38AM (#22318968) Journal
    Sorry, but a graphics card does not speed up your rendering unless your renderer can take advantage of the graphics card; hint: not many can, and those that do only do so for very limited tasks.

    The only reason you should have for upgrading your graphics card within the 'consumer' market is if your viewport redraws are sluggish; this will still allow you to play games properly* as well.
    The only reason to upgrade to e.g. a FireGL or a QuadroFX is if you're pushing really massive amounts of polys and want a dedicated support line; e.g. for 3ds Max, there are the MAXtreme drivers for the QuadroFX line - you don't get those for a consumer card.

    * on the other hand, do *not* expect to play games with a QuadroFX properly. Do not expect frequent driver upgrades just to fix a glitch with some game. Do not expect the performance in games to be similar to, let alone better than, that of the consumer cards.

    For 3D artists dealing with rendering, the CPU should always be the primary concern (faster CPU / more cores = faster rendering**), followed by more RAM (more fits in a single render; consider a 64-bit OS and 3D application), followed by a faster bus (tends to come with the CPU) and faster RAM, followed by a faster drive (if you -are- going to swap, or read in lots of data, or write out lots of data, you don't want to be doing that on a 4200 RPM drive with little to no cache), followed by another machine to take over half the frames or half the image being rendered (** 'more cores' only scales up to a limited point; a second machine overtakes this limit in a snap), as long as you don't have something slow like a 10 Mbit network for the data transfer.
  • Re:Difference? (Score:3, Informative)

    by White Flame ( 1074973 ) on Wednesday February 06, 2008 @06:42AM (#22318978)
    Another difference that at least existed in the past, and probably still holds true today, is that workstation cards have more geometry pipelines, whereas gaming cards have more pixel pipelines. The gamer stuff puts out very pretty polygons, but fewer of them, whereas workstations often just use kajillions of tiny untextured polygons. It's a trade-off that affects silicon size and internal chip bandwidth, and it explains why games and workstation apps run slowly on the wrong 'type' of card, given their different demands.
  • Re:Difference? (Score:4, Informative)

    by Molt ( 116343 ) on Wednesday February 06, 2008 @07:22AM (#22319148)
    This doesn't hold true any more; the latest generation of hardware all uses the Unified Shader Model [wikipedia.org]. This removes the distinction between a pixel pipeline and a vertex pipeline: a unified pipeline is used, which can be switched between pixel and vertex processing as the scene demands.
  • by bytesex ( 112972 ) on Wednesday February 06, 2008 @07:32AM (#22319198) Homepage
    And I daresay he isn't the only one. The write-up is confusing at best; it had me going for a bit, anyway. To ordinary people, even 'ordinary' Slashdot readers, a 'workstation' is some 'station' (a desk with a computer) that you do your 'work' on. That machine will usually contain an on-board graphics controller these days, whose cost is folded into the price of the board, and it certainly isn't so expensive that a gamer's graphics card would be a 'fraction' of its cost. Chagrin or no chagrin about lay (non-graphics) people reading topics that aren't meant for them, acting as if this usage is logical, implicit or otherwise self-explanatory is disingenuous, and not much different from those Slashdot write-ups that describe some event in Second Life as if it happened in real life and pretend everybody knows what they're talking about. Clarity is king, no man is an island, and that sort of thing.
