Pushing The 512MB Barrier On Video Cards 525
Hack Jandy writes "Remember your ancient TNT graphics card that had 16MB of memory? ATI is pushing the texture barrier by incorporating 512MB in their newest X850 video card lineup. The catch? Even ATI acknowledges there will probably be no performance benefit to bumping the memory from 256MB to 512MB, as the cards are 'intended to demonstrate the next-generation capability to gamers.'" An anonymous reader points out that Gainward (which sells NVIDIA-based graphics cards) will shortly introduce its own 512MB card, according to Hexus.net.
But what about Doom 3 (Score:1, Informative)
No performance benefits? (Score:5, Informative)
There certainly will be if you want to run Doom 3 (or Half-Life 2, I think?) with totally maxed-out texture quality. From all the hoopla I remember surrounding the Doom 3 launch, even 256MB of memory isn't as much as Doom 3 in its maximum-quality mode will want to use.
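The usual culprit is that "max" quality modes skip texture compression (if memory serves, that's what Doom 3's top setting did). A rough Python sketch of why that blows past 256MB; the per-level texture count here is invented purely for illustration:

```python
# Compressed vs. uncompressed texture footprints, with mip chains (+1/3).
MIB = 1024 * 1024

def tex_mib(width, height, bytes_per_texel):
    return width * height * bytes_per_texel * 4 / 3 / MIB

uncompressed = tex_mib(1024, 1024, 4)   # 32-bit RGBA: ~5.3 MiB each
dxt5 = tex_mib(1024, 1024, 1)           # DXT5 is 1 byte/texel (4:1 ratio)

# Hypothetical level using 80 1024x1024 material textures:
print(f"compressed:   {80 * dxt5:.0f} MiB")          # ~107 MiB
print(f"uncompressed: {80 * uncompressed:.0f} MiB")  # ~427 MiB
```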
Re:Well make it useful in a creative way (Score:5, Informative)
Also seen here [linuxnews.pl].
I can't believe I'm going to admit this, but... (Score:3, Informative)
In the game, I have the option of clicking an "Extreme performance" tab that will tax the hell out of my video card (if it can handle it).
Sony's software has a warning that says "...to be used on video cards with a minimum of 512MB video memory..."
I have a GeForce 6800 with 256MB of GDDR3 memory and dual 400MHz RAMDACs. This "Extreme performance" option taxes the hell out of the card. I'm getting one frame per second in this mode!
Is it really how much memory you have, or should they just add more processing power to the cards? Perhaps a quad RAMDAC?
Shoes to fill out (Score:4, Informative)
I think this is great. And there is already software to fill out these new specs too.
There is a next generation of engines that make the gap smaller and smaller between real-time graphics and rendered animated films. Take a look at this Unreal Engine 3 page [unrealtechnology.com] for example.
What makes these new engines exciting is not just the fancy graphics. Increasing the resources on the hardware ultimately allows for a much more streamlined art pipeline, easier engine development, and overall faster, simpler product creation.
Re:What's the limit? (Score:1, Informative)
Nothing new here... 512MB is common... (Score:3, Informative)
3Dlabs Wildcat VP990 Pro 512MB
Quadro FX 4400 PCI Express SLI 512MB.
I think Dome makes the third card I'm thinking of - 512MB there too (or maybe we asked them to; I can't remember).
The 512MB barrier has already been broken (Score:4, Informative)
640MB GDDR3 total memory
512MB GDDR3 unified memory with 512-bit-wide interface bus
128 MB GDDR3 DirectBurst memory with 128-bit-wide interface bus
Full Specs Here [3dlabs.com]
Re:Fast and Big mem (Score:3, Informative)
And I'm waiting for the day that a processor has more cache than my first computer had disk space.
*sigh* you HAD to mention that, some of us are already there
The 8-inch floppies (pre hard drives) on some of the kits held around 200K per disk (180K, 160K, I forget), and the Pentium processors (300 MHz & 333 MHz) I just tossed out had 512K of cache, so I'm sure the ones running now (1+ GHz & 3+ GHz) exceed that drive size.
I'm not sure what size floppy drives the C64 and VIC-20 had; I think ~180K for the 1541 model. I THINK the CBM 8032 had a 5MB hard drive option. The old Apple floppies were small too; not sure about the hard drive option when it finally arrived. So yeah, I've passed the 'first drive' level and am now waiting for the 'first hard drive' one.
GL based window managers (Score:5, Informative)
Assume you were to use an OpenGL-based window manager, wherein each window on your screen is little more than a polygon with a texture applied to it.
Assume you are working at 1600x1200 resolution with 24-bit color depth (padded to 32 bits for a possible alpha channel).
Your frame buffer alone takes 7.3 MiBytes.
If you have a 32 bit Z buffer, add another 7.3 MiBytes.
Each 2D window in use will consume texture memory, so if we treat the remaining 497.4 MiBytes of memory on the card as window memory, that lets us open roughly 68 full-screen windows before consuming all texture memory on the card.
If some of the windows are 3D windows themselves, you are going to want them to have their own Z buffers - so double the memory usage for them.
While 68 windows may sound like a lot, most GL compositing schemes I've heard of want to keep ALL windows resident, even if they are not mapped, to avoid sending expose events to the apps and to speed up window open and close events. I could see you getting to 30 windows pretty easily, and allowing double that for headroom doesn't seem like so bad an idea to me.
And I've ignored the XVideo overlay needs.
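For anyone who wants to check the arithmetic, here's the same budget as a few lines of Python. The 512 MiB card, screen dimensions, and pixel format are the ones assumed above; everything else follows from them.

```python
# Memory budget for a GL-composited desktop, per the figures above:
# 1600x1200 screen, 24-bit color padded to 32 bits, 512 MiB card.
WIDTH, HEIGHT = 1600, 1200
BYTES_PER_PIXEL = 4            # RGB padded to 32 bits for alpha
MIB = 1024 * 1024

frame_buffer = WIDTH * HEIGHT * BYTES_PER_PIXEL / MIB  # ~7.3 MiB
z_buffer = frame_buffer        # a 32-bit Z buffer is the same size

window_memory = 512 - frame_buffer - z_buffer          # ~497.4 MiB

# Each full-screen 2D window is one screen-sized texture; a 3D window
# would also need its own Z buffer, doubling its cost.
windows = window_memory / frame_buffer                 # ~68

print(f"frame buffer:  {frame_buffer:.1f} MiB")
print(f"window memory: {window_memory:.1f} MiB")
print(f"full-screen windows: {windows:.0f}")
```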
This is happening... (Score:1, Informative)
Re:translation (Score:3, Informative)
Your current games likely won't use 512 MB of video RAM, so you're right that it isn't practical* to buy one of these just now for gaming (and gamers will probably buy it anyway). But future games will benefit with more realistic graphics, and other 3D applications (3D modeling, visualization, offline rendering, vector processing, accelerating next-gen GUIs like Xorg/Longhorn) could definitely use the extra room immediately.
* Well, it could have performance benefits if you're already using more textures than your video RAM can hold and thus spilling over into main RAM. But games go to great pains to avoid this, so you likely aren't.
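To put a toy number on that footnote, here's a quick Python sketch of the spill check. The texture sizes and counts are invented for illustration; real games budget this far more carefully.

```python
# Does a texture set fit in VRAM, or does it spill into main RAM?
MIB = 1024 * 1024

def texture_bytes(width, height, bytes_per_texel=4, mipmaps=True):
    """One texture's footprint; a full mip chain adds roughly a third."""
    base = width * height * bytes_per_texel
    return base * 4 // 3 if mipmaps else base

# Hypothetical scene: 100 uncompressed 1024x1024 32-bit textures.
scene = 100 * texture_bytes(1024, 1024)
vram = 256 * MIB
spill = max(0, scene - vram)

print(f"texture set: {scene / MIB:.0f} MiB")  # ~533 MiB
print(f"spill:       {spill / MIB:.0f} MiB")  # ~277 MiB over a 256MB card
```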
Re:But what about Doom 3 (Score:2, Informative)
Re:I can't believe I'm going to admit this, but... (Score:3, Informative)
Re:This is why sound cards are no big deal! (Score:5, Informative)
Ever wonder why GPUs are such a big deal and sound cards are such an afterthought?
I think the reason soundcards don't change very much is that the fundamental methods of generating sound aren't compute-intensive.
With 3D video, you're computing the display output: ray tracing, shading, whatever it is. Algorithms, not samples, define the visuals. Certainly there are "samples" (i.e., texture maps), but these themselves need to be rendered through computation. At the same time, display resolutions are increasing, requiring more computational horsepower. Hence the need for progressively faster GPUs to drive larger, more detailed, faster-framerate visuals.
With audio, a lot of the audio world is still sample-based--there usually aren't algorithms generating sounds from fundamental principles. Where there are, it's either a highly specific use (e.g., virtual instruments in something like Cubase, which run on the main CPU) or some sort of environmental processing--DSP effects, positioning, etc.--which doesn't require much performance beyond existing products that already have integrated DSPs. On top of that, audio resolution in general isn't increasing--not at a rate comparable to someone going from an 800x600 to a 1920x1200 display. Even adding extra channels doesn't seem to drive the requirement much further.
As a result, I guess you just don't see the requirement to have "more powerful sound cards".
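A quick back-of-the-envelope comparison makes the gap concrete. The display and audio figures below are typical examples, not numbers from the article:

```python
# Raw output rates: pixels/sec for a high-end display vs. samples/sec
# for multichannel audio. Every pixel needs shading math; most audio
# samples are just played back from storage.
pixels_per_sec = 1920 * 1200 * 60     # resolution x 60 Hz refresh
samples_per_sec = 48_000 * 6          # 48 kHz, 5.1 channels

print(f"pixels/sec:  {pixels_per_sec:,}")    # 138,240,000
print(f"samples/sec: {samples_per_sec:,}")   # 288,000
print(f"ratio: ~{pixels_per_sec // samples_per_sec}x")  # ~480x
```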