ATI R300 and RV250
Chuu writes "The ATI R300 (Radeon 9700) and RV250 (Radeon 9000/Radeon 9000 Pro) reviews are out at all the usual suspects, but the one you want to pay attention to is over at anandtech.com, since somehow Anand got permission to publish his benchmark results for the R300 while the other sites were stuck with whitepapers. The results? The RV250 is a GF4MX killer, which is not saying much. On the other hand, the R300 absolutely trounces the GeForce4 Ti4600, running 54% faster in Unreal Tournament 2003 and 37% faster in Quake 3 at 1600x1200x32 on a Pentium 4 2.4 GHz."
Anand's benchmarks (Score:5, Insightful)
Of course, as he points out, the GF4 numbers are available, and it only takes some simple math to extrapolate from there.
The card looks very impressive. It's out 4 months before the NV30. Maybe by then ATI will have drivers worth a crap too.
Re:Anand's benchmarks (Score:2, Informative)
Re:Anand's benchmarks (Score:2)
Re:Anand's benchmarks (Score:2)
I will note that the Linux drivers for nVidia cards are often faster than the Windows ones. They use the same driver model.
I've been pretty happy with the PowerVR Kyro 2 drivers in Linux. They are only beta drivers right now, but they seem to be faster than when I used Windows 2000. Return to Castle Wolfenstein flies, and looks gorgeous.
Personally, I don't care if they are closed or open, as long as I have something that works well. Not everything can be open source. There need to be some exceptions at times, which is why I fronted $35 to www.opensound.com for proper CS4630 (for the Santa Cruz) drivers.
Re:Anand's benchmarks (Score:2)
After installation, be sure to set your X-Server to 16 or 24 bit color. On the Kyro cards, it seems as though 24 bpp really acts as 32 bpp. This may be due to the 32 bit internal true-color rendering.
Re:Anand's benchmarks (Score:2)
Uhhh... No offense, but you obviously don't know much about the ATI linux driver situation.
ATI has written linux drivers for the Radeon 8500 and FireGL line of video cards.
Dinivin
Re:Anand's benchmarks (Score:2)
Matrox and ATI both need to make top-notch linux drivers like nVIDIA, but open-source. Matrox needs to implement occlusion culling, Z-compression, fast Z-clear, and a better memory controller.
If, however, NVIDIA open-sourced their current driver code for their graphics cards and the nForce, I'd probably ignore other companies' solutions completely.
Re:Anand's benchmarks (Score:3, Informative)
All the rumors for the R300 indicated a GPU clocked at 300+ MHz, with 315, 325, and 350 being the most bandied about numbers.
Re:Anand's benchmarks (Score:2)
I think a crack pipe is calling your name.
the only one (Score:1)
I'm still holding off on my excitement (Score:2)
But they've been known to, um, "help" their drivers along with specific applications. When I see one plugged into my PowerMac while I'm playing Medal of Honor or Warcraft III and I see better performance, then I'll believe it.
Now, see what happens to the boy who cried "framerates", kids?
Re:I'm still holding off on my excitement (Score:2)
Apple needs to keep up with the high end graphics cards if they want to keep attracting gamers (and games) to their platform.
-Spyky
It should! (Score:1, Troll)
That said, congrats to ATI - I love competition in the marketplace. Now if only they could write some decent drivers for once in their lives.
Re:It should! (Score:3, Insightful)
If you don't compare it to current cards you don't have a frame of reference for how powerful the new cards are.
Re:It should! (Score:2)
secondly, about your sig: the "Linux is only free if your time is worth nothing" quote is NOT anonymous. It's from jwz [jwz.org], a rather famous (among geeks) Linux user.
Re:It should! (Score:2)
Why not, is it any different from nVidia and their "Detonator" drivers?
And the current Catalyst drivers have been having some issues with Neverwinter Nights, often requiring that they be uninstalled and the user revert to an older version of the drivers.
My main contention is that, since they seem to have dropped the pretense that their drivers for the 8500 and up would be binary compatible with newer cards, they will orphan the 8500-series drivers. Yes, it can happen; ask anyone who has a Rage Maxx and Windows 2000.
Re:It should! (Score:2)
That was a somewhat different issue, as it had as much to do with how Win2k handled the two GPUs on the card together as with how ATI designed the Rage Maxx. If you check now, you might find it surprising how many of ATI's legacy cards have had newer drivers released. I've even found W2K drivers for my old AIW Pro card, which most people thought ATI abandoned close to a decade ago.
ATI still has a ways to go, but their current level of driver support has gone from non-existent to visible, which is an unbelievable improvement from the viewpoint of most of their users. If this keeps up, they might even be able to claim a consistent release schedule some time soon.
This is also less of a break than it was when ATI went from the Rage 128 series to the Radeon, since the R300 is still based on the same basic design as the original Radeon. They may never be able to claim a complete unified driver (UD) model for all their cards, but there is still some uniformity that can carry through all generations of the Radeon.
Re:It should! (Score:3, Insightful)
All that matters is who has the best cost/performance ratio (right now), and who has the best performance come Christmas time when people really start spending money.
Re:It should! (Score:3, Insightful)
Nice logic. What are you, some sort of Nvidia fanboy?
So when the NV30 comes out (it's more like 5 months away), is it OK for the ATI people to say "yawn, big deal, put it up against our next card in 3 or 4 months"?
Disrespect ATI all you want, but don't act like a card that will crush Nvidia's top of the line for several months to come is just something to take for granted.
Excellent! (Score:1)
Holy Mother of Carmack!!! (Score:4, Informative)
Though the framerates at 1600x1200 in UT2003 are not exactly playable (there go my hopes of running Doom III at 1600x1200 on this baby), ATI has finally produced a card worthy of its name.
Nvidia has at least six months to go before they have something to show. And running the 927 leaked build of UT2003 on a GF4 Ti4600, you don't get playable framerates beyond 1024x768 with every detail notched up.
Re:Holy Mother of Carmack!!! (Score:3, Funny)
2.51 is 251% of 1. 1 + 1.51 (which is 151% of 1) = 2.51. Here endeth the lesson.
You could on the other hand say, "2.5 times faster than the nVidia card."
Re:Holy Mother of Carmack!!! (Score:2)
100% faster == 200% of the speed == 2x faster
150% faster == 250% of the speed == 2.5x faster
Nope, he's right.
Same speed = 100% of the speed = 0x faster, ie no faster, ie the same speed. If it is any "x" faster, then it is not the same speed.
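To make the arithmetic concrete, here's a tiny C program (the fps numbers are made up, not from the review) that prints the three ways of saying the same thing:

#include <stdio.h>

int main(void)
{
    /* Made-up example numbers, not from the review. */
    double old_fps = 100.0, new_fps = 250.0;

    double ratio          = new_fps / old_fps;      /* 2.5 -> "250% of the speed" */
    double percent_faster = (ratio - 1.0) * 100.0;  /* 150 -> "150% faster"       */

    printf("%.0f%% of the speed = %.0f%% faster = %.1f times the speed\n",
           ratio * 100.0, percent_faster, ratio);
    return 0;
}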
The only problem (Score:3, Informative)
The Raven.
XFree drivers (Score:4, Funny)
If by "drivers" you mean "closed source drivers for the FireGL card based on this chip that support all the card's rendering features, but none of the video capture or tuner functions of the inevitable AIW version", then I would guess about 8 months.
If you mean "closed source drivers that support all the rendering, video capture, tuner, etc. functions of this card" from ATI, then I suggest you monitor Mr. Andy Krist's credit cards for purchases of cold weather gear - this will happen about the same time the MBA selects Dr. Hawking as a star player.
If you mean "Open source drivers that support some of the rendering, none of the video capture, and none of the tuner", then I would guess about 18 months.
Sad but true. A pity - were there to be good drivers for this card (good = open source, all features supported by the standard APIs (Xv, Video4Linux2, DRI)) then I would pay up to $500 for one.
Now, the question is, what about all the Mac owners?
Re:XFree drivers (Score:2)
On the bright side, though, Linux actually has working tuner and capture drivers for a lot of ATI hardware here at the gatos project. [sf.net]
Not for ATI7500AIW (Score:2)
Re:XFree drivers (Score:2)
Although, with his new exoskeleton [theonion.com], I'd think Hawking could take anybody on a little pick-n'-roll...
Re:XFree drivers (Score:2)
MBA? What is that? Someone with a Master's degree in Business Administration? Wacky.
Oh, maybe you meant NBA, the National Basketball Association. In that case, your presumption isn't too far-fetched. Dr. Hawking already has a powered exoskeleton [theonion.com] he can use to fight crime AND play basketball. So I guess those ATI drivers are just around the corner!
Re:XFree drivers (Score:4, Insightful)
Because I can compile the drivers for my CPU, and screw compatibility with other CPUs I don't have.
Because I will know it will work with the kernel I am running, which may be some mutant patched-up version that the vendor has never seen.
Because when the card is discontinued, I will STILL have it, and will still be USING it, and will still want UPDATES to the drivers.
Because my brethren who run *BSD also deserve to have good drivers.
Because I am an embedded software developer, and damn it I NEED to be able to tweak the drivers if I am using it in my designs.
Re:XFree drivers (Score:2, Insightful)
Point #2 will give you a 0.01% improvement. Using open-source drivers that support half the card's features will give you a 40% disadvantage.
What the hell does the kernel have to do with video drivers? These are in XFree, not the kernel. There are some hooks in the kernel, but nobody says you can't give away the source for the module. That's what nvidia does.
When the card is discontinued, you'll probably throw it away. And if you will be the only person left using that card, drivers won't fix themselves even if they're open source. And I seriously doubt that you can maintain them, given that you probably don't even know how they work. Who in the world uses video cards so old that there aren't any drivers for them? For what?
The bsd people can very well run the drivers if the company makes them for bsd. Given that there are very few games or 3d apps on bsd, I don't think there's a market there. It's mostly used for servers, anyway.
If you're an embedded developer, I don't think you'll be integrating a PCI card into your "designs," much less tweaking drivers for it.
I'd rather have fast, stable, closed-source drivers that work than piece-of-shit reverse-engineered open source drivers developed and supported by amateurs. No company in their right mind will give away the complete specs, anyway. So we will be stuck with crappy, slow drivers that can only take advantage of half the card's features. Just compare the Nvidia drivers on Linux (fast, stable, compatible with all cards) with the alpha drivers for the ATI Radeon 8500 (that the WEATHER CHANNEL paid to develop, no less).
Re:XFree drivers (Score:2)
Yes, because THAT bug is important to ME, whereas it may NOT be important to the developers, so they may not work on it.
Point #2 will give you a 0.01% improvement.
I suppose you have some actual EVIDENCE to back that number up, other than having freshly extracted it from your nether orifice? Most drivers are compiled for the lowest common denominator, and so cannot most effectively use features specific to one processor. Additionally, by compiling the code inline rather than accessing it via a run-time conditional, you can greatly speed up the code. You see, child, I do this for a living, and probably have been since before you were able to wake up with dry sheets.
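As a toy illustration of the inline-versus-runtime-conditional point (invented names, not real driver code): build with -DTUNED_BUILD and the tuned routine is chosen at compile time with no branch in the hot path; build without it and every call pays a capability check.

#include <stdio.h>

/* Toy example only - invented names, not from any actual driver. */

static void copy_scalar(int *dst, const int *src, int n)
{
    int i;
    for (i = 0; i < n; i++) dst[i] = src[i];
}

static void copy_tuned(int *dst, const int *src, int n)
{
    int i;
    /* stand-in for an MMX/SSE routine compiled with -march=<your cpu> */
    for (i = 0; i + 4 <= n; i += 4) {
        dst[i]   = src[i];   dst[i+1] = src[i+1];
        dst[i+2] = src[i+2]; dst[i+3] = src[i+3];
    }
    for (; i < n; i++) dst[i] = src[i];
}

#ifdef TUNED_BUILD                 /* decided once, when you compile for your own box */
#define copy_pixels copy_tuned
#else                              /* generic binary: decide at run time, on every call */
static int cpu_supports_tuned = 0; /* would be filled in by a CPUID-style probe */
static void copy_pixels(int *dst, const int *src, int n)
{
    if (cpu_supports_tuned) copy_tuned(dst, src, n);
    else                    copy_scalar(dst, src, n);
}
#endif

int main(void)
{
    int a[8] = {0}, b[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    copy_pixels(a, b, 8);
    printf("%d %d\n", a[0], a[7]);
    return 0;
}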
What the hell does the kernel have to do with video drivers?
All the 3D drivers have a component in the kernel to provide for security - a fact you would be aware of if you actually followed the DRI mailing list.
When the card is discontinued, you'll probably throw it away.
Since my first message clearly stated that wasn't the case, your point was void before you made it. Especially in my work, where I design systems with a projected service lifetime of a decade.
The bsd people can very well run the drivers if the company makes them for bsd.
And if they don't? Consider nVidia - they DON'T make BSD drivers. Again, had you done some homework you would have known my point was valid, but you lost this one before you started.
If you're an embedded developer, I don't think you'll be integrating a PCI card into your "designs," much less tweaking drivers for it.
BZZT! Wrong. I do. Again, since this point was made in my previous message, you lost this one before you began as well.
Let me make a suggestion: Next semester, see if your high school has a Debate class. Perhaps when you've actually studied how to analyze an argument, identify the points made, research them, and then formulate a response your ability to post intelligent discourse will be improved.
Re:XFree drivers (Score:2)
Re:XFree drivers (Score:2)
Believe it or not, patches and bugfixes *do* get sent in to open-source software, even if there's an existing paid maintenance team.
The ability to fix a bug doesn't say that you're more experienced and competent than someone else that missed it. A while back, I corrected a bug in a systems programming book that my professor had written. Does that make me more experienced and competent than him, a CS PhD? Nope.
Point #2 will give you a 0.01% improvement. Using open-source drivers that support half the card's features will give you a 40% disadvantage
So *all* of the features should be supported in the open source driver. Furthermore, compiling snes9x myself and setting the compiler flags I wanted sped it up by about 2-3x.
When the card is discontinued, you'll probably throw it away. And if you will be the only person left using that card, drivers won't fix themselves even if they're open source. And I seriously doubt that you can maintain them, given that you probably don't even know how they work.
Open source drivers get maintained. People that modify the kernel and break stuff have to take care of it. No care is given to breaking closed source drivers. Look at the driver list sometime in make menuconfig. There's some *old*, discontinued stuff in there. How are you going to explain this?
ATI will not fund BSD drivers. There isn't enough demand to make it worthwhile to pay the people.
Re:XFree drivers (Score:2)
And one of the single biggest problems holding back DRI development is that hardware vendors refuse to provide documentation for the hardware (e.g. no specs on the chips), forcing the DRI team to reverse engineer everything. And surprise surprise, they sometimes miss things (oh, you have to CLEAR the DMA fifo before starting a new operation!). nVidia's internal engineers have access to all the documentation on the card - OF COURSE they can implement all the features.
And even for companies like ATI, the chipset docs are only available to the XFree developers under NDA; ordinary folks cannot get access to them. Many people who COULD contribute CANNOT because they cannot get at the chip docs, which reduces the number of people contributing.
And as for X being "huge" for embedded work - there's embedded and there's embedded. The systems I design have 64M of RAM on each of the 4 DSPs, before we EVEN start talking about the main processor. X is a drop in the bucket to me.
This is why I keep beating the drum on this.
CHIPSET MANUFACTURERS - PAY HEED!
The magic is IN YOUR CHIP, not in the interface to your chip. Telling me about the settings of the frobnicate register does NOT tell me how to implement the frobnicate function in silicon - and if it DOES, then you are SCREWED anyway, as your competitors WILL reverse engineer it.
Provide US, the developers, with the chip docs. We in turn will provide YOU with quality drivers, and we ALL profit!
Linux drivers? (Score:2, Interesting)
If so, count me in. Otherwise, I'll stick to NVidia.
Re:Linux drivers? (Score:2, Insightful)
It really isn't a question of will _ATI_ release linux drivers, but will they release enough documentation so folks like GATOS can implement a driver in a reasonable amount of time.
It'll be MORE interesting by end of the year (Score:3, Informative)
Slow it definitely won't be.
Re:It'll be MORE interesting by end of the year (Score:3, Interesting)
The only product scheduled to come out of ATI by Q4 is the 9500, which is a slower, stripped down R300 for less money.
And by that time it'll have to compete against the NV30, which is allegedly going to blow the R300 away (as it should given the time differences involved).
Re:It'll be MORE interesting by end of the year (Score:2)
I can imagine the nVidia NV30 (née GeForce5) chip is probably going to need the floppy-drive power connection, given its even higher on-die transistor count than the ATI R300 chip.
By the way, the Intel Northwood Pentium 4s are well liked because the switch to the 0.13 micron process let Intel run a much cooler CPU, which in turn let them crank the clock speed up to 2.53 GHz.
Re:It'll be MORE interesting by end of the year (Score:2)
Odds are they're stuck with
The NV30 probably will need additional power as well - I don't expect a 0.02 um change to reduce power consumption enough to eliminate the need while at the same time adding 10-15% more transistors.
I'd expect the next revision of AGP to seriously bolster the power available from the bus, though. 3Dfx hit the wall 2 years ago, and now even the non-monstrous die sizes are hitting it too.
Re:It'll be MORE interesting by end of the year (Score:2)
Given the fact that the 0.13 micron R300 variant isn't going to need huge production capacity, I think there is fab capacity around that could make the chip.
They used “Intel-like” approach to design?!? HA! (Score:2, Informative)
What we need to clue into is that, for marketing reasons, ATI wanted to get the chip running at 300 MHz. They didn't care about the possible performance loss; all the marketing assholes want is a high MHz number, so they had to take a "different approach" (meaning, a shittier design) to reach the 300 MHz mark.
The sooner the average joe can accept that MHZ no longer equals performance... the better off chip design will be.
The Pentium 4 is basically less efficient per clock than a Pentium III; however, 2 GHz makes morons happy. So 2 GHz, whatever the cost!
Noodle.
Re:They used “Intel-like” approach to design?!? HA (Score:3, Informative)
The Parhelia gets beaten in DX8 benchmarks by the GF4Ti because of the difference in clock speeds. If the 9700/10000 wants to compete with the NV30 at the top, then raw MHz is needed: the DX9 specs state that cards need 8 pixel pipelines for compliance (and both ATI and Nvidia say their next-gen cards have them), which means whoever has the highest clock rate will have the highest pixel fill rate. We need to wait and see whether the NV30 has more than one texture unit per pipeline; the R300 only has one (for a total of 8 texture units), so if the NV30 has two or more per pipeline, it will beat the R300 in texel fill rate by architecture alone (though more than one texture unit per pipeline would need a HUGE amount of memory bandwidth, which is not likely to happen until DDR-II is utilized).
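Rough numbers, using the rumoured clock above purely as a placeholder: fill rate is just clock times pipelines, and texel rate multiplies in the texture units per pipe.

#include <stdio.h>

int main(void)
{
    /* Rumoured/announced figures, used only for illustration. */
    double core_mhz      = 325.0;  /* one of the rumoured R300 clocks          */
    int    pipelines     = 8;      /* the DX9 pipeline count mentioned above   */
    int    tmus_per_pipe = 1;      /* R300: one texture unit per pipeline      */

    double pixel_rate = core_mhz * 1e6 * pipelines;   /* pixels per second */
    double texel_rate = pixel_rate * tmus_per_pipe;   /* texels per second */

    printf("pixel fill: %.1f Gpixels/s, texel fill: %.1f Gtexels/s\n",
           pixel_rate / 1e9, texel_rate / 1e9);
    return 0;
}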
So MHz does equal performance... which can mean a marketing success or failure. The Parhelia is not great for gamers (other than TripleHead) due to its low clock speed. Nvidia has delayed the NV30 to make use of the 0.13u process to get higher clock speeds (rumoured to be 400+), and ATI plans on releasing the Radeon 10000 next year on a 0.13u process as well, to try and beat the NV30 if it proves to outperform the 9700... which is expected to happen.
- HeXa
Re:They used “Intel-like” approach to design?!? HA (Score:2, Insightful)
Intel had to REALLY stretch to get the PIII core up to 1.1GHz on a 180nm fab process, including having to recall their first attempt entirely. With the P4, they had little trouble releasing a 2.0GHz core on the exact same 180nm fab process.
Which do you think is faster, a 1.1GHz PIII or a 2.0GHz P4? Intel's design strategy for the P4 wasn't all about marketing; being a little bit less efficient but clocking a LOT higher isn't entirely a bad thing.
Of course, I don't know just how well this correlates to ATI's newest video card, we'll just have to wait and see.
Re:They used “Intel-like” approach to design?!? HA (Score:3, Insightful)
Now, reread the section you misquote so heavily. The reviewer was specifically discussing an Intel-like design in that they hand picked transistors to go in specific places, rather than letting a VLSI design program layout the chip according to how it thinks is best. Intel and others have shown that this strategy - hand tuning critical junctures - can pay off in performance and manufacturing.
Intel's chip designs have been pretty damn amazing for the past two decades. They've frequently been the ones pushing Moore's law (yeah, go ahead and take the obvious whine - "because they needed to, their chips are so inefficient"), and they've eked a helluva lot more features and performance out of designs than anyone in their right mind expected. Their fabbing is second to none and their processes are emulated industry wide. A 35% yield for a first-run Intel design is godawful, but considered spectacular for other companies.
Have they made missteps? Yup. And I largely attribute those to upper management sticking its head deeply up its ass rather than to the engineers. Intel's brass stopped listening to their engineers 4 or 5 years ago. And it's been biting them since. Remains to be seen if they've figured this one out yet.
that MHZ no longer equals performance
No, it doesn't. But it's often a damn good indicator of performance, particularly in the GPU world. Frankly, the only people who know what the clock speeds on the chips are are the geeks who are into this thing. They're not advertised on the packaging.
Re:They used “Intel-like” approach to design?!? HA (Score:2)
extra power connection? (Score:2)
That's very interesting. For one thing, I don't know of many cases which come with two floppy power connectors any more. Other than that, it sounds like a good idea. Finally, use the legacy floppy crap for a modern purpose...
Re:extra power connection? (Score:3, Informative)
Honestly though, the past few power supplies I've bought did have a 2nd floppy connector on them. Never figured out what the hell they'd be used for until now though.
Re:extra power connection? (Score:2)
Re:extra power connection? (Score:2, Informative)
My older computer power supplies don't have two floppy power connectors, but my newer machine does.
If I change... (Score:2, Funny)
Re:If I change... (Score:2)
calculated frame rate (Score:2, Informative)
Unreal Tournament 2003 (DM-Antalus) 1024x768x32 High Detail Settings
Radeon 9700: 130.4
GF4 Ti4600: 94.5
Parhelia: 54.4
Radeon 8500: 57.6
Unreal Tournament 2003 (DM-Antalus) 1280x1024x32 High Detail Settings
Radeon 9700: 87.8
GF4 Ti4600: 59.3
Parhelia: 35.1
Radeon 8500: 37.9
Unreal Tournament 2003 (DM-Antalus) 1600x1200x32 High Detail Settings
Radeon 9700: 63.3
GF4 Ti4600: 41.1
Parhelia: 24.6
Radeon 8500: 25.2
Unreal Tournament 2003 (DM-Asbestos) 1024x768x32 High Detail Settings
Radeon 9700: 210.3
GF4 Ti4600: 178.2
Parhelia: 100.4
Radeon 8500: 91.1
Unreal Tournament 2003 (DM-Asbestos) 1280x1024 High Detail Settings
Radeon 9700: 144.3
GF4 Ti4600: 115.4
Parhelia: 65.5
Radeon 8500: 58.9
Unreal Tournament 2003 (DM-Asbestos) 1600x1200 High Detail Settings
Radeon 9700: 104.1
GF4 Ti4600: 82.0
Parhelia: 46.9
Radeon 8500: 42.0
Jedi Knight 2 'demo jk2ffa' @ 1600x1200
Radeon 9700: 124.3
GF4 Ti4600: 113.0
Parhelia: 65.9
Radeon 8500: 93.2
Serious Sam 2: The Second Encounter 'Little Trouble' 1024x768x32
Radeon 9700: 115.2
GF4 Ti4600: 100.2
Parhelia: 67.4
Radeon 8500: 58.2
Serious Sam 2: The Second Encounter 'Little Trouble' 1280x1024x32
Radeon 9700: 102.6
GF4 Ti4600: 72.9
Parhelia: 49.5
Radeon 8500: 45.3
Serious Sam 2: The Second Encounter 'Little Trouble' 1600x1200x32
Radeon 9700: 77.6
GF4 Ti4600: 51.7
Parhelia: 37.3
Radeon 8500: 32.1
Whew... glad I bought ATI stock two days ago... (Score:2)
If the stock triples, I might be able to afford a 1987 Honda Civic with only 200,000 kilometres on the engine.
Wow - 24 page review (Score:2)
Real Not Unreal (Score:2)
Who Cares, This is a good thing (Score:3, Insightful)
Personally I don't, and you know why? Competition. Good clean, healthy, product innovating competition.
Something that is sadly lacking in the desktop OS market. Not to name any names *cough*microsoft*cough*, but that is a very good example of what happens when there is one and only one player in the field: a poor-quality product with so many bugs that we've become desensitized. Really, who falls over stunned when...
So ATI is ahead today, so nVidia will be ahead tomorrow, so what?
Be glad that there is more than one dog fighting over the bone
Phoenix
ATI priorities (Score:2)
Anand and roundabout (Score:2)
Anand states: We didn't have much time with the Radeon 9700 so we couldn't run a full suite of AA tests nor take screen shots
Come on, this has got to be bullshit. All it takes in most games to take a screenshot is to push PRINT-SCREEN on the keyboard. All these tests, different games, different settings, it must have taken at least half a day to complete, and not once did they have the time to push print-screen?
Just say 'ATI wouldn't let us publish screenshots' instead of lying about it. Oh, maybe you weren't allowed to say that either? Bah.
I think it's great we'll soon have some competition in the arena, but these are really previews of things to come, previews tightly controlled by ATI, I'll wager.
One beeeeellion FPS (Score:2)
Does anyone else get the feeling there's a director of marketing at the GPU companies who just pulls a Dr. Evil inspired number out of the air and declares to the engineers that that's the FPS for Q3 they need for the next card?
"I want our next card to not just give me a frame per second but TEN frames per second in Quake. We'll see how they like that, huh?!"
"Uh, Sir? We already do one hundred and ninety two frames per second at maximum resolution."
"I see. Then I want..." finger curls against his bottom lip, "....One Beeeeeeeellion frames per second."
"But Sir, that's insane! No one can tell after about 50fps anyway!"
"Mini-Marketing-Me, keeeeeell him!"
So much for 'unified' drivers... (Score:2)
Now, if there's somebody here who knows more: what exactly does 'unified' mean in technical terms (not marketspeak)? How can a 'unified' driver work for vastly different cards like the GeForce1 (basic T&L), GeForce3/4 (shaders and so on), and NV30 (FP pipeline)?
I don't see much that can be 'unified' in those architectures, and even less between the DX_8.1_and_lower and DX_9_and_higher parts, given the jump from integer to FP pipelines.
So, is the claim of 'unified' drivers purely marketspeak (and maybe it's just a collection of 'workarounds' for specific game problems) or is there a technical case to be made for them?
I honestly couldn't care about ATI cards any more (Score:2)
Guess I've just been burned too many times by crappy drivers that don't do everything that was promised. And I'm talking Windows drivers here. Not quite complete OpenGL support, games not working correctly when they come out, games still not working correctly months after they come out because they are not big enough titles to get ATI's attention...
Now I'm not the world's greatest NVidia fan either. I'll complain about their lack of innovation, the way they seem to just want to throw more hardware at the problem rather than find a more elegant solution, whatever. But as long as their drivers manage to play the new games, and they keep new drivers coming out so that I can play the new games when they come out, I'll take their cards, even if they are slower.
A faster video card that doesn't play what I want it to, when I want it to, is of no use to me...
And to bring this a bit back on topic... this is basically a warning to prospective buyers: check out ATI's track record for drivers before making a purchase. I haven't heard of too many problems with the Radeon card, so they may have finally turned around their policy of not caring about you after they have your money. But let's be honest here: current, correctly working drivers are more important than the gap from 120 fps to 150 fps...
To Chuu (Score:2)
Honestly, if they'd used an fps base and you'd had to do this with Gnome's calc, I could see you screwing up. But screwing up (1.38 - 1) * 100? Ouch. You should be able to do that in your head.
Re:To Chuu (Score:2)
We're both dumb. (Score:2)
My fault for that.
But what about the drivers? (Score:2)
Remember how long it took them to get the drivers for their other Radeon right? (Forgive me if I can't remember exactly which one.)
Competition is great, but I'm not buying another ATI card until they have worthwhile Windows and Linux drivers.
I had an All-in-Wonder Pro card, and I could never get it to run anything 3D correctly. Yes, I tried the newest drivers. I tried the experimental drivers. I tried the drivers for non-AIW cards. Nothing made it not crash. I couldn't run Quake III for more than a couple of minutes without my computer completely locking up.
I'm curiousity (Score:2)
No (Score:1, Funny)
Insert seven comments below this comment which all do the same thing.
Re:does it matter? (Score:2)
If you still play Quake 2, stick with your Voodoo 2. Otherwise, upgrade if you're interested in next-gen gaming.
Re:does it matter? (Score:2)
Playing current games at 120fps isn't any gain over 60fps. It's the next generation of games that will push these cards to perform at 60fps, and last generation's cards will start to show their age.
-Spyky
Re:does it matter? (Score:2)
At 60fps each frame lasts for 16.7 ms
At 120fps each frame lasts for 8.3 ms
Human response time is on the order of 100 to 200 ms [google.com]
Therefore, at best (the best-case scenario for improvement: a fast human and the longest delay to screen update), the response time goes from 116.7 ms to 108.3 ms, a mere 7% improvement.
I'm not arguing against your point. Clearly there is a difference; however, the improvement is very small. It's not like it halves your response time, as some people may mistakenly believe.
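The arithmetic, spelled out (the 100 ms is the assumed "fast human" reaction time from the link above, and the worst case is waiting a full frame for the screen to update):

#include <stdio.h>

int main(void)
{
    double reaction_ms = 100.0;          /* assumed "fast human" reaction time */
    double frame_60    = 1000.0 / 60.0;  /* 16.7 ms per frame                  */
    double frame_120   = 1000.0 / 120.0; /*  8.3 ms per frame                  */

    /* Worst case: the event lands just after a refresh, so you wait a whole frame. */
    double total_60  = reaction_ms + frame_60;
    double total_120 = reaction_ms + frame_120;

    printf("60 fps: %.1f ms, 120 fps: %.1f ms, improvement: %.1f%%\n",
           total_60, total_120, (1.0 - total_120 / total_60) * 100.0);
    return 0;
}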
-Spyky
Re:Great... (Score:4, Insightful)
Sure you can. Hell, you've been able to for years now. Just drop down to the lowest resolution available, turn off all the effects, textures on lowest quality, and look at the not-so-pretty pictures.
What? You want all that? You want fog, bump mapping, realistic lighting, high-quality textures, and anti-aliasing to boot? At 1280x1024? Well, then keep upgrading. Because while the R300 comes closer to that than any card to date, I doubt it'll be able to handle it for long. Real-time graphics still aren't even approaching the level of Toy Story, much less that of Final Fantasy (the movie), or (*gasp*) photorealism.
When we can do realtime 3D effects that are indistinguishable from reality, we might be there. We aren't even close yet.
Oh, and you contradict yourself - you ask for disposable form factors and then ask for an open laptop standard. Hint - if it's disposable, it's not going to be open. Unless you're talking about something as silly and trivial as alkaline batteries.
Re:Great... (Score:4, Interesting)
I find it interesting that untouched realism is frequently just not fun. There are aspects to games that require tweaking or simplifying the environment so it isn't frustrating or impossible to make progress.
Masters of game-making understand that fun isn't derived purely from realism and that some unrealistic elements are the only way to make a game interesting and playable. For example, do you really want a football to get lost in the sun, so the receiver screws up and you lose the game? Or do you want clues in a mystery game to be so well hidden that you have to take the hours of a real forensic investigation to find that triply-ricocheted bullet embedded in the neighbor's compost pile?
I really think that super-realism in games is a pipe-dream, and the only way to achieve it is in a Star Trek-style immersive holodeck...or, perhaps, just going outside.
It also seems to be harder to find the basic time-waster games, since, I guess, it is a waste to put classics like Tetris or Solitaire on gigaflop-class consoles. In a way, this really is not progress at all.
He's not really talking about games. (Score:2, Insightful)
Games will never need to be photorealistic to be fun; the two things are totally unrelated. I think that most graphics in games today are wasted on the player anyway - I never realized how good-looking many games were until I could watch instead of play them. GT3 is a good example: amazing to watch, fun to play, but you really can't enjoy the graphics while you're playing it. Same with many games for the X-Box. Or in MGS2: all the time, I'm checking out the little radar in the corner of the screen to see where guys are, not seeing any of the awesome special effects they spent so much time on.
Re:Great... (Score:2)
You didn't read that it was a fraction of the movie's size with a subset of the functions they performed to generate the movie.
And I've read a good bit of Carmack et al. - and sorry, even the next generation of cards (R300/NV30) isn't up to cinematography-level effects in real time. Digital effects are done at resolutions of roughly 4000x2000 with 48-bit or 64-bit color depths. The top-end effects houses render at a sub-pixel level so that when things are super-sampled they actually start looking realistic.
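Back-of-the-envelope, with my own example numbers: a single film-resolution colour buffer at 64 bits per pixel already dwarfs a game frame, before you count Z, stencil, or any super-sampling.

#include <stdio.h>

int main(void)
{
    /* Example numbers only: one film-resolution colour buffer vs. one game frame. */
    double film = 4000.0 * 2000.0 * (64 / 8);   /* 4000x2000 at 64 bits per pixel */
    double game = 1600.0 * 1200.0 * (32 / 8);   /* 1600x1200 at 32 bits per pixel */

    printf("film frame: %.0f MB, game frame: %.1f MB\n",
           film / (1024.0 * 1024.0), game / (1024.0 * 1024.0));
    return 0;
}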
They're getting better, but have a long way to go yet.
Re:Great... (Score:2, Interesting)
And, yet, the original post always gets to "5, Insightful".
It's almost as if the hardware review sites need to explain the situation in the intro of their review, so that the masses could understand what's going on.
Re:Great... (Score:5, Insightful)
Yes, an excellent troll; you've sounded passionate enough and invoked "cheaper!" enough to confuse some moderators into giving you points. Let me go over your completely misleading rants:
Now we can all play games at 3x the refresh rate of the monitor.
We don't PLAY games at that rate; we benchmark them there for comparison with other cards. I have a 15" LCD that I game with. I run my refresh at 60Hz, which I am imagining you are translating to "a maximum displayable 60 frames per second." I'll let someone more technical than myself debunk that one.
I just got a very nice MSI GF4 Ti4200 (for $145 from GameVE.com, free shipping, only because newegg doesn't carry them and they are extremely overclockable cards with a great software bundle). If I ran this card in my LCD's native resolution of 1024x768, with the basic graphics settings, I pull approximately 180 frames per second in Quake 3. If, however, I go to the driver settings, crank up Aniso filtering, and turn on 4x FSAA (anti-aliasing is beautiful, btw), and set all settings to max quality in the game, I get about 85 frames per second in Quake 3. That is what my GF4 MX440 card was pulling with no options on.
We need cheaper and more integrated. Get rid of the DIMM and PCI slots and all the legacy hardware. Put the memory on the motherboard and create a disposable form factor and an open laptop standard.
Again, very nice karma troll. Cheaper is nice, and integrated has its place, but we do not need it. We don't want to put memory, cpu, and all peripherals on the board, for a variety of reasons. The two big ones are 1) repair/replacement after failure, and 2) CHOICE. If you want to buy a $20 video card to put in that AGP slot, you can! If you want to buy a $400 Matrox Parhelia to run 3 monitors in Quake 3, you can! Anything and everything between, as well! Everybody has a different budget and a different set of needs. Let the consumer decide.
Disposable form factor? Is that tongue in cheek? Do we want disposable PC cases? Or just good standards like ATX? I know plenty of people who have had ATX cases for 5 years and have housed 4 different generations of hardware in them.
Open laptop standard? Good idea, but many OEMs already work from something similar. The problem is the high cost of development on miniaturized, highly integrated systems like laptops, especially when they need extremely tight cooling systems. Someday there will be a standard, where you can go into a store, buy a chassis (for 12, 14, 15, or 17 inch LCDs), assemble the mobile parts, and walk out the door... but why bother? There are tons of cost-effective, vendor-serviced laptops available in any conceivable configuration RIGHT NOW. Just because you can't get it for $1.99 at 7-Eleven does NOT mean the market is broken.
So my summary is that we don't need more integrated, and cheaper is good, but we have cheap already. You were trolling, and I was feeding you. Any questions?
Re:Great... (Score:2)
LCD Gaming performance (offtopic but answering Q.) (Score:2)
I have zero complaints. The cost was good, I got zero dead pixels, the fit and finish of the unit is very tight, and the refresh runs up to 85Hz, depending on the video card you have it hooked up to. The brightness and contrast are superior; I have hooked up a 17" Trinitron and a 19" tube to test at 1600x1200 to see what I am missing, and I REALLY miss the flicker-free brightness of the LCD. (PS - No magnetic interference from things like fans, either! Just solid picture.)
The only nags that I have are it's not big enough pixel-wise (because I am too cheap to buy a nice 17-19" unit), it only has the standard 15-pin VGA connector (most newer ATI/Matrox/Nvidia/SIS cards have DVI connectors), and when I am in a fast round of capture the flag in Q3:TA with Scout powerup, I can see a little bit of clipping during power running and jumping. Just a hint of it, but enough that the purists would complain.
There really isn't any "blurring" as you might have seen on dual-scan LCDs, but sometimes you can see some pieces clip as the screen does fast color-changes during colorful terrain areas. Granted I have a GF4 Ti4200 cranked up to 300 core and 600 RAM, which is capable of running at many more frames per sec than an LCD monitor is capable of displaying cleanly (because of the alluded-to activation/deactivation time on the pixels), but it is a good value for $370. At least it is for me, and I have been dealing with monitors for a long time. I plan on staying with my 15" until the 17s are available for under $300... in other words, a long time.
Re:Great... (Score:2)
If the video card is generating 90 frames per second, then during that second, the 60 Hz monitor has displayed bits and pieces of the majority of those 90 frames, depending on the gun position when the frame was displayed.
If the video card is generating 35 frames per second, then during that second, the 60 Hz monitor has displayed (probably) all 35 of those frames, sometimes two or three times as the gun scanned the screen several times before another frame was pushed.
At least I hope I'm thinking this out right, or MAN am I going to look like an idiot.
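A quick sanity check of that (toy C, vsync off, my own numbers): divide the render rate by the refresh rate. Below 1.0, some refreshes just repeat the previous frame; above 1.0, most refreshes contain pieces of more than one frame.

#include <stdio.h>

int main(void)
{
    double refresh_hz   = 60.0;
    double render_fps[] = { 35.0, 90.0 };   /* the two cases above */
    int i;

    for (i = 0; i < 2; i++)
        printf("%.0f fps on a %.0f Hz monitor: %.2f new frames per refresh\n",
               render_fps[i], refresh_hz, render_fps[i] / refresh_hz);
    return 0;
}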
Re:Great... (Score:2)
I smell a know-it-all that needs whackin'.
Modern graphics APIs use time, not frames, to determine the speed of a game. The graphics rendering is completely independent of actual gameplay. It's the only way a game can be expected to run 'at the same speed' on a wide range of hardware.
Ever use a boot disk to load DOS and then play an old game? The original, unadapted Wing Commander (1989 version) is completely unplayable on modern hardware because hardware speeds were so close back then that the programmers could get away with using frame rate to regulate gameplay.
A modern game doesn't care if the rocket impact is rendered. The game registers 'impact' by the vertex's position, which is computed separately. When the graphics card does the vertex handling, the game still keeps a (much smaller) set to calculate object positions. In other words:
The graphics card computes thousands of vertices, and renders the entire scene once.
The CPU will compute a few hundred vertices (the collision boundaries, which are generally a bunch of cubes the model fits inside). There is all kinds of time for the CPU to compute a few hundred intermediate steps before the graphics card asks for the next 'snapshot' to render.
No, you don't miss the frames at all. What is this so-called 'need'? First, there is a very big difference between keeping track of the objects (poly boundaries/collisions, positioning the vertices, etc.) and actually rendering them. Vertex calculation (including physics and animation) is much less computationally intensive. That's why the first 3D cards really only handled rendering: the CPU still did all the vertex operations, while the 3D card did the vastly more intensive rendering of the frame.
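A stripped-down sketch of that split (a toy loop I made up, not from any real engine): the simulation always advances by elapsed time, and frames get drawn whenever the "card" is ready, so the impact lands on schedule whether or not any particular frame shows it.

#include <stdio.h>

/* Toy fixed-timestep loop, invented for illustration. The game state always
 * advances by elapsed time; frames are drawn only when the "card" is free. */

static double now_ms   = 0.0;   /* stand-in for a real clock */
static double rocket_x = 0.0;

static void simulate(double dt_ms)   /* CPU side: positions, collisions */
{
    rocket_x += 0.5 * dt_ms;         /* 0.5 units per millisecond */
}

static void render(void)             /* GPU side: draw whatever state exists now */
{
    printf("t=%6.1f ms  rocket at x=%.1f\n", now_ms, rocket_x);
}

int main(void)
{
    const double tick_ms  = 10.0;    /* simulation step */
    const double frame_ms = 33.3;    /* pretend the card manages ~30 fps */
    double next_render = 0.0;

    while (now_ms < 100.0) {
        simulate(tick_ms);           /* always happens, every tick */
        now_ms += tick_ms;
        if (now_ms >= next_render) { /* rendering lags behind, and that's fine */
            render();
            next_render += frame_ms;
        }
    }
    return 0;
}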
The way it usually works is as follows:
Frame Buffer A is displayed on screen
Graphics card renders to Frame Buffer B
Graphics Card renders to Frame Buffer C
When all of Buffer A has been displayed, flip display pages (or use a blit) to Buffer B.
Frame Buffer B is displayed on screen
Graphics Card renders to Frame Buffer A
IF Frame Buffer A finishes rendering before B finishes drawing, flip pages (or blit) to C.
Begin rendering B
If A is being displayed, render C.
If C is being displayed, render A.
If the buffer isn't being displayed, render the next frame. Show frames in order, but drop frames when a more recent one is available.
And so on. This is 'triple buffering', which not all games support (although it is becoming much more common). Double-buffering is almost always used, where there are only 'A' and 'B' buffers.
Which means that even with vsync enabled, the card is capable of rendering 120 (double-buffered) or 180 (triple-buffered) frames per second. (And that's at an eye-straining 60 Hz. With a better monitor that refreshes at, say, 85 Hz, the card renders 170 to 255 frames per second.)
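Here's the same bookkeeping as a toy C program (indices only, invented names, no real GPU calls): one buffer is scanned out, the card renders into whichever other buffer is free, and at each vsync we flip to the newest completed frame and drop the rest.

#include <stdio.h>

/* Toy triple-buffer bookkeeping (indices only): never render into the buffer
 * being displayed, and at each vsync flip to the newest completed frame. */

int main(void)
{
    int displayed   = 0;    /* buffer currently scanned out ("A")    */
    int latest_done = -1;   /* most recently finished render, if any */
    int frame = 0, vsync, r;

    for (vsync = 0; vsync < 4; vsync++) {
        for (r = 0; r < 2; r++) {   /* pretend the card is 2x faster than refresh */
            int target = (latest_done < 0 || latest_done == displayed)
                         ? (displayed + 1) % 3          /* any free buffer      */
                         : 3 - displayed - latest_done; /* the remaining buffer */
            printf("render frame %d into buffer %c\n", frame++, 'A' + target);
            latest_done = target;   /* older completed frames get dropped */
        }
        displayed = latest_done;    /* flip to the newest frame at vsync */
        printf("vsync %d: display buffer %c\n", vsync, 'A' + displayed);
    }
    return 0;
}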
Re:Great... (Score:2)
It has been shown that, past a certain point, it's impossible to tell the difference in frame rate. However, in real life (as in games) there are things that happen too fast to see the motion at all.
Games are full of explosions, etc. Very high-speed motion. Most people have watched too many movies; they're used to 'slow' explosions where debris and affected objects are visible on screen. Movie makers know we like eye candy, so they give it to us.
Reality is quite different. A TOW missile explodes before it hits its target. The expanding gas forms a 2-4" hole in the target's armor in microseconds. A person watching it can't see the transition. Bullet wounds take 2-3 frames to fully appear in a movie; reality is more like 1e-6 frames. Explosives can lift an entire car feet into the air so fast that a human thinks it's instantaneous.
Video games use fairly real physics, as it both makes animating easier, as well as having a more realistic 'feel'. The frames rendered follow the model. With even moderately real physics, an object can move large distances in between frames.
And, of course, there's the ultimate trump: online gaming. Where the object boundaries (often simplified/compressed) must be transmitted over a low-bandwidth link, with a latency of hundreds of milliseconds. It doesn't matter how fast the graphics card renders, or how well the game keeps track of positions internally.
Updates of 30/sec is pretty optimistic, with 10-20/sec more typical. Other players can 'pop' locations in between frames simply because, in between location updates, the opponent's 'actual' location(s) end up being different than the one the CPU guessed it would be.
Which can mean 4-5 frames were rendered with incorrect locations, the update is received, and the 'real' frame is rendered. Next the game guesses where the opponent will be by the next update, and renders the frames necessary to make things look smooth.
The guessing is an imperfect way to make up for the large difference between frame rates and multiplayer location updates. However, there simply isn't any option; there are 4-5 frames that must be rendered before the next update. Simply 'sitting still' looks awful, and lends itself to the perception of a lower framerate than actually exists.
Programmers try to close the gap by making an educated guess. Since they use a realistic motion model (inertia, gravity, etc.), nearly all the possibilities for the 'next frame' can be eliminated immediately. Then it just chooses a 'middle road' that is close enough that us humans don't notice.
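A tiny sketch of that educated guess - classic dead reckoning with made-up numbers, not any particular engine's netcode: between updates the client extrapolates from the last known position and velocity, then snaps when the real update arrives.

#include <stdio.h>

/* Toy dead reckoning: the server sends (position, velocity) every 100 ms,
 * the client draws at ~60 fps and extrapolates in between. */

typedef struct { double x, vx, t; } Update;   /* last known state + timestamp (ms) */

static double predict(const Update *u, double now)
{
    return u->x + u->vx * (now - u->t);       /* assume constant velocity */
}

int main(void)
{
    Update last = { 10.0, 0.02, 0.0 };        /* x = 10, moving 0.02 units/ms */
    double t;

    for (t = 0.0; t < 100.0; t += 16.7)       /* render frames between updates */
        printf("t=%5.1f ms  predicted x=%.2f\n", t, predict(&last, t));

    /* the real update arrives; if the opponent actually dodged,
     * the next frame simply snaps to the corrected position */
    last.x = 11.5; last.vx = -0.01; last.t = 100.0;
    printf("t=100.0 ms  corrected x=%.2f\n", predict(&last, 100.0));
    return 0;
}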
Any high-speed, unexpected changes (such as an explosion) can foil the system:
The player thinks they've killed someone (that's what was rendered/displayed on their screen, after all)
But the estimate was wrong. The 'someone' was actually in a safe place when the explosion changed things.
There is no 'backing up', so the next frame shows the person alive and well, and in a completely different place.
The gamer gets upset because they want a perfectly synchronized game
The much lower frequency of positional updates is unacceptably 'choppy' when such synchronization is used.
The programmers use a 'physics' trick to try and smooth out the picture, but the trick sacrifices accuracy.
Re:Great... (Score:2)
Re:Great... (Score:2)
Re:Great... (Score:5, Insightful)
> We don't need faster anymore.
Nonsense.
A *lot* of people want movie quality graphics in real-time. Imagine playing a game with the visual quality of "The Matrix", but completely interactive!
There are a ton of physical effects that still can't be done in real-time (yet), due to the memory bandwidth and geometry complexity.
e.g.
High resolution (4Kx2K) color/z/stencil buffers used for ray-tracing, radiosity, and displaying thousands of models each with a million+ vertices (used by CAD/Medical/Games), etc, come to mind.
Then when you add in compositing / blending multiple alpha layers you just burnt all your left-over speed (if you had any). DOH!
There is a reason that Pixar and other CG studios render scenes at the *sub*-pixel level @ 64 bits/pixel. We're talking about 100+ triangles PER pixel. Because detail, such as hair which is less than a pixel wide, needs to be "super-sampled". Right now, games show hair by approximating its surface, which makes it look "blocky". UGH.
So if you want graphics to stagnate and never look more "realistic", then sure, stick with your GeForce4 (or below). It will continue to be useful for years to come.
The rest of us will be trying to dazzle the world with new visual FX making people go "Wow!".
Re:Great... (Score:2)
I wasn't sure whether I should just mod you down, or reply. Obviously I'm replying.
First, most benchmarks measure average framerate. That's fine, but what's really more important is the LOWEST framerate in a benchmark. A card might average 80fps in a certain game, but dip down into the 30's, 40's, or lower during really intense action... which is when you need high framerates the most.
Second, PC games evolve. Sure, nobody needs to run Quake3 at 250fps, but future (and even current games... Q3 is several years old) games will be much more demanding. So when people drool over ridiculously-high Q3 benchmarks, they're really drooling over the raw power of the card, and how it's going to run games a year or two from now. Some people like to purchase for the future.
Third, while I don't know of any open laptop standards, there are plenty of motherboards with integrated video and other components. A typical one still has SDRAM slots and a PCI slot, but builds in the NIC, modem, video, and sound. And I don't see a downside to non-integrated memory... why would you want it soldered onto the motherboard?
Re:Congrats ATI (sarcasm) (Score:2)
I've never seen GOOD drivers from ATI. I've had drivers that worked, but always made me nervous. I've had nothing but grief with them if pushed in any way. (And no, I don't mean by overclocking them!)
I replaced my ATI card with an nVidia, and you know what, NO crashing issues I can blame on the video card since. None. Zero. Zilch.
ATI has to get their drivers in order. I don't care if their video cards are 3 times faster than anyone else's; if it isn't stable, it's useless.
Re:Congrats ATI (sarcasm) (Score:2)
Upgrade your drivers to the most current ones, and see what happens to your system. Then try to go back to the old ones.
Granted, the card was stable up until the point I installed the new drivers (mine were about a year old), but installing new, improved drivers from the manufacturer should not do that. I don't consider the instability unlucky, I consider the stability lucky. Oh, it was on an AMD900, 256RAM, Win98SE.
Re:Neither! Here's why... (Score:2)
Re:Neither! Here's why... (Score:2, Informative)
The big difference is the memory your card has, 64MB cards won't be great performers for high texture settings due to a statement about DOOM 3 having about 80MB worth of textures to be loaded onto the card at any one time.... 128MB cards will be needed for maximum eye candy.
- HeXa
Re:ati drivers ...again (Score:2)
As much as I like playing GTA3 slowly with extra fog, I found a patch to fix the fog problem on ATI's site. I reported the other issues I was having to ATI's technical support; however, since ATI doesn't even have a checkbox for reporting application or video related issues (I'm not kidding, see for yourself on their support page!), I have my doubts about the issues being resolved. My only recourse is to go back to the old drivers and get a PlayStation 2 if I want to play GTA3... Of course, there's also Nvidia.
Re:its not just linux (Score:2)
Re:slashdot payola? (Score:2)
Re:Whatever it is..Its good (Score:4, Informative)
No need to describe; I guess 3dfx owners with a clue understood what kind of a company they are... the hard way...
Oh me? When it ships (or shipped already), I am buying it... I won't buy from a company which left me in "digital cold" just because they bought my card/chip maker...
mod me as you wish, I couldn't stand not saying this stuff...
You've not got the slightest idea what you're talking about. Nvidia did not buy 3dfx. They bought the intellectual property of 3dfx. They bought most of the 3dfx design work, technology, patents, etc. They didn't buy any of the office space, manufacturing plants or employees. They bought the IP because they thought that there was something in it that would be useful in their future chip designs.
3dfx Interactive [nasdaq.com] is still a company and is still in business, in a manner of speaking. If you want more info on the nVidia purchase of 3dfx IP, you can read about it here [nvidia.com], here [nvidia.com], or here [nvidia.com]. But don't go blaming nVidia because your favorite graphics card company stopped producing and supporting your product.
Re:Whatever it is..Its good (Score:2)
While the 3dfx web site is no longer available, the official 3dfx drivers (including Windows XP) are available from other sources [voodoofiles.com]. Just look for them, you'll find them. It took me all of 5 seconds with Google.
Fortunately we still have Linux drivers because they were GPLed.
The wonderful thing about the GPLed 3dfx Linux drivers is that you could use them (if you had a mind) to write your own drivers for other operating systems. Like so [voodoofiles.com]. Again, nVidia bought the IP from 3dfx because that's all that they were interested in. If they were interested in supporting 3dfx hardware they would have just bought the company lock stock and barrel.
Re:Whatever it is..Its good (Score:3, Insightful)
Huh?
What crack are you smoking man? You're claiming that 3dfx had quality? What a load of horsecrap.
3dfx supported only 16-bit color long after nVidia, ATI, Matrox, and PowerVR had moved on to 32-bit color. Their 2D output was even worse than nVidia's, and they had no features other than pure fps.
I had an original 4 MB Voodoo, and I bought a pair of 6 MB V2's. I still saw the writing on the wall when 3dfx ignored the rest of the industry and continued pushing 16-bit and Glide while 32-bit and DirectX/OpenGL were ascendant. Their anti-aliasing sacrificed too much performance for too little advantage, their cards were overpriced, and the chips were designed with such monstrously large traces that they created too much heat and used too much power (yeah - ATI and probably nVidia will now require additional power too, but these are 0.15 and 0.13 um designs as opposed to the 0.25 um behemoth that 3dfx had).
Also, I know the Aureal story; what a sad thing, in fact. Just by "law" you can "kill" your rival.
Although I wish Aureal was still around (and still think A3D is far better than EAX), you still don't get it.
Aureal and 3dfx were already dead. Bankruptcy, selling of goods, and so forth. Competitors bought the intellectual property (read: patents) because it meant they could utilize some of the niftier features in future products. It sure as hell doesn't mean they have to support the old products - they didn't buy the product lines, plants, existing inventory, etc. And it doesn't mean that someone else couldn't buy all of that and support it.
Did Creative kill Aureal? Essentially. Aureal was a couple years ahead of itself in order to make a break in Creative's stranglehold on the audio market. But Aureal failed to market themselves properly and fell to a much larger, much more market savvy, highly entrenched competitor.
Did nVidia kill 3dfx? Again, essentially. nVidia produced superior products with superior support, pricing, availability, and features. And unseated the 800 lb gorilla of the graphics market. Whining about the fact that 3dfx died only proves that you have no clue just how incompetent a company they really were.
Re:Whatever it is..Its good (Score:2)
I also remember them supporting Linux and having a decent product. Also, the 32-bit "image quality" thing you mentioned in your last post is, sorry to tell you, marketing hype. Why? Because at the time, almost nobody used 32 bits in real life (i.e. gaming): performance was too low, and I really mean slow (TNT card).
3Dfx made a lot of mistakes, but they lost mostly to crappy RIVA cards, not to a GeForce or anything decent. The TNT was the first thing to rival 3Dfx hardware, and that had more to do with 2D + 3D integration than anything else.
But I also remember one more thing: buying an NV1 Edge 3D which I still own. What happened? Well, the Nvidia folks NEVER supported it. No games (except the bundled ones), no Direct3D drivers, no Win98 drivers. Not even DirectDraw drivers (except a buggy 2.0 version).
NVidia is something that I will be avoiding. They didn't support a product they sold, and they HAD the resources. 3Dfx made big mistakes, but they NEVER left a product unsupported while in business. They had flawed products, but support was always there trying to fix all the hardware.
Re:Is it again the case of game specific drivers? (Score:2)
I'd have liked to see non-game tests too; their absence disappointed me. Something to test raw video horsepower without invoking memory or CPU limitations from non-video processes (sound, AI, etc.).