Tile Based Rendering and Accelerated 3D
ChickenHead writes "AnandTech has put together a review
of the Hercules 3D Prophet 4500 based on the new Kyro II chip from STMicro.
What's unique about this particular chip is that it uses a Tile-based Rendering
Architecture which results in a much greater rendering efficiency than conventional
3D rendering techniques. It is so efficient in fact, that the $149 Kyro II card
clocked at 175MHz is able to outperform a GeForce2 Ultra with considerably more
power and around 3X the cost of the Kyro II card. With games not able to take
advantage of the recently announced GeForce3's feature set, the Kyro II may
be a cheap solution to tide you over until the programmable GeForce3 GPU becomes
a necessity." A very readable and interesting summary and an interesting technology and a potentially extremely cool video card.
Re:Too much power? (Score:1)
Re:sounds impressive.... (Score:1)
Re:DRI support any time soon? (Score:1)
Re:Too much power? (Score:1)
--
Re:Tile-based rendering (Score:2)
I don't think it's a hard wall by any means.
Re:DRI support any time soon? (Score:1)
Good for competition (Score:2)
Re:Tile-based rendering/alpha transparency (Score:1)
The PowerVR2 chips empirically choke on large transparent textures (House of the Dead 2 on the Naomi arcade hardware, which is PowerVR2-based, is a good example), so you can draw your own conclusions as to whether or not they implemented that optimization.
Nice price. (Score:1)
This sort of thing could really scare nVidia if it takes off; it'd be interesting to see if they come out with a Geforce3 Lite, or something, in order to compete with it.
Re:Too much power? (Score:1)
Funny you should ask... (Score:2)
Jury's still out on this design, but it looks promising to say the least. There are several developers trying to sweet-talk STMicroelectronics or Imagination out of register info to make Linux drivers right now, because of the potential of the cards.
Re:Similar to the NEC PowerVR and PowerVR2 (Score:2)
Hercules, eh... (Score:2)
Have they really been making cards under that brand all this time?
Re:sounds impressive.... (Score:2)
---
Re:sounds impressive.... (Score:2)
Re:This card running on a 4 year old architecture! (Score:2)
Re:A little bit hyped maybe? (Score:1)
Not necessarily... The Kyro II retails for $149.99; I just bought an eVGA GeForce2 GTS Pro from a local wholesaler for under $170 and I've seen Radeon DDR cards for as low as $130 locally.
The Radeon performs slightly worse than the Kyro II, the GF2-Pro slightly better. Thus, I'd say that the Kyro II is right in line with other cards in its price range.
Re:Nice card and 1st damn it (Score:2)
explaining "poor" Quake3 performance? (Score:1)
I had the exact same feeling while reading through AnandTech's write-up.
And seeing such a huge performance difference between Q3 and Serious Sam, I wondered if it didn't come from the fact that Serious Sam's 3D engine "wastes" more bandwidth. Which would again explain the huge difference in the fill rate measured with Serious Sam.
Quake3's engine, meanwhile, would be more efficient, sending fewer hidden polys to the card.
I remember the days I fooled around making a couple of Q2 levels: a well-designed level (mostly id's) was very optimised in a BSP-tree kind of way (I sure hope what I'm saying makes any sense, because I'm really far from a 3D guru).
I for one would sure LOVE to hear John Carmack's point of view on such a technique, as he is probably the most thorough graphics card analyst I've ever read. And his points are from the other side of the fence, on the consumer side.
Murphy(c)
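The overdraw speculation above can be put into rough numbers. This is a sketch only: the overdraw factors and frame size below are illustrative assumptions, not measurements from AnandTech's article.

```python
def bandwidth_per_frame(width, height, bytes_per_pixel, overdraw):
    """Bytes written to the framebuffer for one frame, given an average
    overdraw factor (how many times each pixel gets drawn)."""
    return width * height * bytes_per_pixel * overdraw

# A tightly BSP-culled engine (like Quake 3's) might average ~1.5x overdraw,
# while a less aggressive engine might hit ~3x. Both numbers are guesses.
tight = bandwidth_per_frame(1024, 768, 4, 1.5)
loose = bandwidth_per_frame(1024, 768, 4, 3.0)
print(f"tight engine: {tight / 2**20:.1f} MiB/frame")  # 4.5 MiB/frame
print(f"loose engine: {loose / 2**20:.1f} MiB/frame")  # 9.0 MiB/frame
```

The point is only that halving overdraw halves framebuffer write traffic, which is exactly the kind of gap a BSP-culled engine could show against a more wasteful one on a conventional card.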
Re: 4 year old...!!! It's new. (Score:1)
Saying that this is just a "4 year old architecture" simply because PowerVR has been implementing tile based rendering for some number of years would be like saying that the Geforce3 is nothing more than an overclocked TNT!
The Kyros (i.e. the series 3 PowerVR chips) contain many new features, and so can't be considered to be "sped up" versions of their parents.
Simon
insert standard employee disclaimer
A lot hyped, definitely. (Score:1)
Quake III Arena Performance
'Normal' Settings - 640x480x32
'Normal' Settings - 1024x768x32
'Normal' Settings - 1600x1200x32
MDK2 Performance
Default Settings (T&L enabled) - 640x480x32
Default Settings (T&L enabled) - 1024x768x32
Default Settings (T&L enabled) - 1600x1200x32
UnrealTournament Performance
Minimum Frame Rate - 640x480x32 ***
Average Frame Rate - 640x480x32 ***
Minimum Frame Rate - 1024x768x32
Average Frame Rate - 1024x768x32
Minimum Frame Rate - 1600x1200x16
Average Frame Rate - 1600x1200x16
Serious Sam Performance - Fill Rates
Serious Sam Test 2 Single Texture Fillrate
Serious Sam Test 2 Multitexture Fillrate
Serious Sam Performance - Game Play
Serious Sam Test 2 640x480x32
Serious Sam Test 2 1024x768x32 ***
Serious Sam Test 2 1600x1200x32 ***
Mercedes-Benz Truck Racing Performance
All options enabled - 640x480x32
All options enabled - 1024x768x32
All options enabled - 1600x1200x32
FSAA Image Quality and Performance
Serious Sam Test 2 640x480x32 (4 Sample FSAA) ***
Serious Sam Test 2 1024x768x32 (4 Sample FSAA) ***
You can draw your own conclusions, but I think I'll keep saving for that GeForce.
Re:Tile-Based Rendering (Score:1)
This is because its a phenomenally quick render method when designed for, but
(a) it takes a big hit to do stuff the way every other 3d card on the market does things (and guess which method is going to get used by a developer writing for a platform where either might be in place), and
(b) if you are used to doing things the 'normal' way its a pain in the rear to try and re-jig your code into a tile-based format. You might as well rewrite the engine from the ground up.
Of course, if (as with the Dreamcast) you're writing explicitly for a tile-based platform then it kicks arse for the money.
Re:Hercules, eh... (Score:1)
Re:Wait for GeForce 3... (Score:2)
And if you'd read the article, you'd see that this card does achieve FSAA at a decent resolution with very good performance, and that the quality of the memory architecture is what really makes it compare well, by massively reducing the number of memory accesses.
Re:Too much power? (Score:2)
Re:Tile-based rendering (Score:2)
So you should still see a significant benefit - not as much as for opaque areas, though, as it can't just throw away the partially obscured pixels as it can with the totally hidden ones.
Re:Just Curious (Score:1)
--
(OT) Re:More memory speed? (Score:1)
This is the Voodoo3 2000, right? I thought the Voodoo was highly dependent on the CPU speed. This will be going into an old PPro 200. I will look into it, thanks.
Of Course (Score:3)
Re:More memory speed? (Score:2)
On the flip side of this, could tile-based rendering be implemented for the very lowest segment of the video card market: PCI cards for legacy desktops? Wouldn't tile-based rendering at least partially minimize the performance hit from using PCI as opposed to AGP?
I'd like to find an inexpensive PCI card to replace the 2MB Mystique in my old PPro200... I guess there wouldn't be much of a profit margin, however.
Doh (Score:1)
This design is very similar (if not the same) as the NEC's PowerVR and PowerVR2 chipsets.
Kyro IS a PowerVR chip. Read before you comment.
Re:More memory speed? (Score:3)
Nah - people have designed graphics chips that hit 'perfect' fill rates before - I know I did one (for the Mac, 7-8 years back) that hit 1.2Gb/sec into VRAM (then state-of-the-art DRAM) exactly as it was designed to.
Graphics chips have a relatively long history that is at least in part driven by the commodity memory technologies available to them. These days we're particularly troubled - system costs are going down, and DRAM speeds haven't kept pace with CPU/GPU speed increases (CPUs have gone from maybe 100MHz to 1GHz in the time that memory has gone from 66MHz to 266MHz [transfer rate - latencies have only halved]).
'Tricks' like ISS (aka tiled frame buffers) work because they basically cache the problem - at the expense of keeping an ordered polygon list (which means you are more sensitive to scene complexity - too many more polys than pixels and you might be in big trouble) and latency (because you have to finish the poly sort stage before you can start rendering - so you render a complete screen at once, while maybe buffering the next scene's polys in parallel). Note I'm oversimplifying the problems here to explain some of the issues - there's lots of scope for smart people to do smart things in a space like this (before all the patents are granted - then, without competition, innovation will probably cease :-( )
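The "ordered polygon list" step above can be sketched in a few lines. This is a toy model with an assumed 32-pixel tile size and invented triangles: every triangle is binned to the screen tiles its bounding box touches before any pixel is rendered, which is where both the extra latency and the sensitivity to scene complexity come from.

```python
TILE = 32  # pixels per tile edge (an assumption; real chips vary)

def bin_triangles(triangles, width, height):
    """Map each tile coordinate (tx, ty) to the triangles overlapping it.

    triangles: list of ((x0,y0), (x1,y1), (x2,y2)) in screen space.
    """
    tiles = {}
    for tri in triangles:
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        # Bounding-box overlap is conservative; real hardware tests edges too.
        tx0, tx1 = min(xs) // TILE, max(xs) // TILE
        ty0, ty1 = min(ys) // TILE, max(ys) // TILE
        for ty in range(max(ty0, 0), min(ty1, (height - 1) // TILE) + 1):
            for tx in range(max(tx0, 0), min(tx1, (width - 1) // TILE) + 1):
                tiles.setdefault((tx, ty), []).append(tri)
    return tiles

tris = [((0, 0), (40, 0), (0, 40)),            # spans 4 tiles
        ((100, 100), (110, 100), (100, 110))]  # fits in 1 tile
bins = bin_triangles(tris, 640, 480)
print(len(bins))  # 5 tile bins: four for the big triangle, one for the small
```

Note that memory for these per-tile lists grows with triangle count, not pixel count - the "too many more polys than pixels" trouble mentioned above.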
Department of Redundancy Department (Score:2)
Since tile based rendering eliminates overdraw, the effective fill rate of a tile based renderer can actually surpass the effective fill rate.
Wow! They can make the effective fill rate surpass the effective fill rate?! Maybe they can make my bank account balance surpass my bank account balance!
Re:Linux Support? (Score:1)
This + T&L + high bandwidth (Score:2)
Of course The Carmack has spoken and does not agree with tile-based rendering right now; at its core it is kind of a kludge.. hrm..
I wonder what he thinks of that Anandtech article.
Oh great and powerful Carmack, we ask that you can grace us with your knowledge and wisdom in this time of confusion and shed light on the validity of tile based rendering. Hear us!
Re:Tile-based rendering (Score:3)
This is an instance of the old ATM vs IP or CISC vs RISC debates. It's the old engineering tradeoff: work smart but slow, or work quick and dirty. Tile-based rendering is an instance of smart and slow, i.e. they do no more work than they have to, and thus get away with slower clocks and memory. The NVIDIA card is quick and dirty.
Historically, it is almost always the case that quick and dirty is the cheaper way to go, as it allows economies of scale to come into play. However, it is seeming more and more like the memory bandwidth bottleneck is here to stay, so the smart and slow approach is looking pretty good. Likewise as we run into physical limitations for network bandwidth, IP is going to have a harder and harder time to provide acceptable QoS and multicast solutions and ATM-like technologies will start becoming more prevalent.
Re:What's a "video card"? (Score:1)
I mean you get ripped off with non-upgradeable junk unless you build it yourself.
And unless you build it yourself, when it breaks you usually have to take it somewhere to fix it.
Build it yourself: buy the cards, mb, cpu, ram, hd, and enjoy.
Re:More memory speed? (Score:2)
The benchmarks show 350M pixels/s rendered on a 175MHz chip with two pipelines. I don't think anyone in the PC graphics industry has ever accomplished that. (I believe the Voodoo and other really early cards were held back by the time needed to set up all the polys on the CPU.)
Second, the point stands that this is quite new to the scene and that more bandwidth won't help.
BTW, thanks for the info.
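The fill-rate claim above is straightforward arithmetic, assuming the ideal case of one pixel per pipeline per clock:

```python
# Kyro II: 175MHz chip with two pixel pipelines, assuming one pixel per
# pipeline per clock (the ideal case the benchmarks reportedly approached).
clock_mhz = 175
pipelines = 2
peak_fill_mpixels = clock_mhz * pipelines
print(peak_fill_mpixels)  # 350 (Mpixels/s)
```

What made the result notable is that conventional cards rarely reach their theoretical peak, because overdraw and memory stalls eat into it.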
Re:Hercules, eh... (Score:1)
I remember seeing ads for high-end Hercules boards in CAD magazines in the mid-90s, also.
zsazs
Re:Of Course (Score:1)
Re:More memory speed? (Score:1)
(I get a solid 60fps in UT, on a Duron 750 machine)
--
Re:Tile-based rendering (Score:1)
Integrated/Laptop support (Score:1)
Re:You can play... (Score:1)
Re:sounds impressive.... (Score:1)
T&L might be coming soon (Score:1)
Re:sounds impressive.... (Score:2)
QIII Arena 1024x768 @32bpp
GeForce2 GTS 64MB: 95.6fps
Radeon DDR 64MB: 80.6fps
That's a quite significant 15fps.
Q3 at 16x12 is unplayable on everything except the Ultra, but the GTS2 still wins.
MDK 1024x768 @32bpp
GeForce2 GTS 64MB: 105.9fps
Radeon DDR 64MB: 86.8fps
Again, about 19 more fps at this res.
MDK 1600x1200 @32bpp
GeForce2 GTS 64MB 43.3fps
Radeon DDR 64MB: 38.2fps
Only 5fps faster, but that's around 13% faster.
Unreal Tournament 1024x768 @32bpp (avg)
GeForce2 GTS 64MB: 84.5fps
Radeon DDR 64MB: 87.8fps.
Here the DDR wins, but only by 3fps.
Unreal Tournament 1600x1200 @32bpp (min)
GeForce2 GTS 64MB: 34.3fps
Radeon DDR 64MB: 18.8fps
Ouch. What were you saying about high resolutions?
The GTS is playable, the Radeon is not.
Unreal Tournament 1600x1200 @32bpp (avg)
GeForce2 GTS 64MB: 68.9fps
Radeon DDR 64MB: 56.9fps
The GTS is 12fps faster here.
Serious Sam 1024x768 @32bpp
GeForce2 GTS 64MB: 47.2fps
Radeon DDR 64MB: 50.1fps
The Radeon wins, but it's only 3fps faster.
Serious Sam 1600x1200 @32bpp
GeForce2 GTS 64MB: 22.5fps
Radeon DDR 64MB: 24.7fps
A hair over 2fps faster.
Mercedes-Benz 1600x1200 @32bpp
GeForce2 GTS 64MB: 20.9fps
Radeon DDR 64MB: 24.2fps
The only decisive victory for the Radeon. Still, at the only playable resolution (640x480) the GTS wins 64.7 to 57.8.
So overall, the Radeon is a good card, but NVIDIA still has a significant speed advantage, and for only a little bit more it's worth it, in my opinion. (Not to mention the fact that they have better drivers and pro-caliber OpenGL!)
Re:A lot hyped, definitely. (Score:3)
You can play... (Score:3)
--
Nice card and 1st damn it (Score:1)
Tile-Based Rendering (Score:1)
Dreamcast's PowerVR chip is/was also tile-based (Score:3)
By the way, did you know you can use the Dreamcast Broadband Adapter to connect to your PC for some do-it-yourself development [julesdcdev.com]? Very cool...
/. using new clock technology. (Score:1)
A little bit hyped maybe? (Score:4)
More memory speed? (Score:2)
Isn't this why the GeForce2 Ultras even exist? Some people always want the fastest cards, and are willing to pay premiums to be on the bleeding edge... my guess is that the "bleeding edgers" will reap a higher percentage profit on each unit...
sounds impressive.... (Score:2)
Interesting FSAA performance. (Score:2)
My next upgrade will be the video card. I've been interested in AA ever since I heard it was available on a video card. If you check out the article, this new card has better AA performance than the GeForce2 Ultra.
Very interesting.
Good thing I have to wait a few months anyways.
Later
ErikZ
Re:More memory speed? (Score:2)
Probably, because with a first product release they want to enter a space that they could dominate (much lower price, much better performance) rather than one where they would have less of a p/p ratio. The kind of gamers who spend for a top-of-the-line video card will just stick to a brand for "loyalty" sake, or will skew benchmarks to make their choice look better.
This kind of thing goes on less in the middle range, IMO.
Of course, this assumes that this card delivers, and has not skewed its own benchmark too far.
--
Evan
Re:More memory speed? (Score:3)
Tile-based rendering (Score:4)
Tile-based rendering's big benefit is that it reduces overdraw to 0; that is, each opaque pixel on the screen is drawn exactly once. Performance for certain types of scenes is spectacular.
Dreamcast uses this, as well as many of Sega's arcade systems (HOTD2, for instance), which use the same PowerVR2 rendering system.
Where tile-based rendering falls down, however, is for scenes that contain a large amount of alpha-blended areas. Alpha-blended areas in today's hardware are necessarily drawn multiple times, from back-to-front, to accomplish transparency effects. Having to draw the pixel several times nullifies the zero-overdraw benefit of tile rendering. Since most tile-rendering systems trade fill-rate for zero overdraw, cards with insufficient fill rate for large alpha areas (read: all of them) fall down on large, alpha blended polygons. You can see this in House of the Dead 2 when fighting the Hierophant; if you get enough water splash effects on the screen, the frame rate chokes.
Tile rendering works extremely well for areas that are opaque, or use only small alpha-blended areas. It's getting better; it's just not perfect yet.
Mumbly Joe
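The zero-overdraw claim above can be sketched in a few lines. This toy model (the fragment format and scene are invented for illustration) defers shading until after a per-pixel depth pass within the tile, so each visible opaque pixel is shaded exactly once no matter how many fragments land on it.

```python
def shade_tile(fragments):
    """fragments: list of (x, y, depth, color) landing in one tile.
    Returns the shaded pixels and how many shading operations were spent."""
    nearest = {}
    # Pass 1: depth test only - no texturing or lighting work yet.
    for x, y, depth, color in fragments:
        if (x, y) not in nearest or depth < nearest[(x, y)][0]:
            nearest[(x, y)] = (depth, color)
    # Pass 2: one shading operation per visible pixel, regardless of overdraw.
    shaded = {pos: color for pos, (depth, color) in nearest.items()}
    return shaded, len(shaded)

frags = [(0, 0, 0.9, "wall"), (0, 0, 0.2, "monster"), (1, 0, 0.5, "wall")]
pixels, work = shade_tile(frags)
print(work)            # 2 pixels shaded, even though 3 fragments arrived
print(pixels[(0, 0)])  # monster (the nearest fragment wins)
```

An immediate-mode rasterizer would have shaded all three fragments; the deferral is the whole trick - and, as the comment notes, it breaks down once alpha-blended fragments must all contribute to the final pixel.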
You can also... (Score:1)
I know this is a shameless plug, but I spent all weekend working on ethernet, and I sent my friends a couple of e-mails via a telnet session (under a BusyBox filled initrd) from my Dreamcast :). But seriously, we need more kernel hackers in there so we can spit out more drivers....
Back on topic: the LinuxDC framebuffer writes from CPU RAM directly to PVR2 RAM, which is about as slow as you can get. I ran a simple SDL parallax scrolling example, and the results were, shall we say, CRAP :). I've started thinking about how to accelerate the FB using the PVR2's Tile Accelerator, but I'm not that keen on its internals or how tile-based rendering would work (yet). If anyone can point me to some general TA-based resources, that would help - there are a few good docs linked from julesdcdev, but I was thinking of more general TA docs (i.e. not Dreamcast-specific).
We *need* interested developers, testers, and authors, to stop by LinuxDC (we're also in the process of restructuring our site), as we're finally starting to get the ball rolling...
M. R.
Too much power? (Score:1)
Personally, I would like to see an emphasis on increasing any given video adapter's efficiency and decreasing its price before increasing its power.
Re:Tile-based rendering/alpha transparency (Score:2)
While it may be that the PowerVR2 did not implement it correctly, there is nothing that prevents performance much better than immediate mode style rasterizers. Consider it this way:
A game needs to draw 5 opaque polygons, with 3 alpha polygons on top.
An immediate mode rasterizer would have to write all five polygons to memory, including all of the associated texture lookups and lighting calculations. Then, for each alpha polygon, it would have to reread bits from the framebuffer and combine it with the shaded textured alpha polygon. This is a lot of memory traffic.
A tile based renderer, otoh, would not need to do all of this. Obviously it would be able to eliminate all of the overdraw on the opaque polygons, but it would also be able to do the blending in the ON CHIP 24bit tile framebuffer, which is much much much faster than going to off chip memory. This means that instead of having to do read-modify-write off chip memory cycles for each of those alpha blended polygons, it stays on chip.
Now like I said before, I am not familiar with the PowerVR2 chip, and it may be that they do not implement this obvious optimization... I would assume their newer chip would.
My big question is "why not a T&L unit?" It seems like a severe handicap to an otherwise stellar chip. Although it's somewhat addressed in the article, they didn't really justify it well, and the benchmarks prove it would be handy. Maybe the 175MHz clock is what prevents an effective T&L unit from being added...
-Chris [nondot.org]
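A back-of-the-envelope version of the memory-traffic argument above, using the same 5-opaque-plus-3-alpha example for a single covered pixel. The access counts are a deliberate simplification (texture reads and Z traffic are ignored):

```python
def immediate_mode_accesses(opaque, alpha):
    # Each opaque polygon writes the pixel off-chip; each alpha polygon
    # must read-modify-write it (1 read + 1 write against the framebuffer).
    return opaque + alpha * 2

def tile_based_accesses(opaque, alpha):
    # All depth resolution and blending happens in the on-chip tile buffer;
    # only the final resolved pixel goes off-chip once. Parameters are kept
    # for symmetry - the count doesn't depend on polygon counts at all.
    return 1

print(immediate_mode_accesses(5, 3))  # 11 off-chip framebuffer accesses
print(tile_based_accesses(5, 3))      # 1 off-chip framebuffer access
```

Even if a particular chip (the PowerVR2, say) didn't implement the on-chip blending path, nothing in the architecture forbids it, which is the comment's point.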
Bad news for ATI (Score:1)
All in all, this is bad news for ATI. They're losing their OEM business to nVidia not only in low cost PC's but in Macs as well. They decided to reinvent themselves with the Radeon's swank environmental bump-mapping and stuff, a high-end 2d card for graphic designers who fired up Quake on the office LAN after hours. This would (they hoped) put them in the #2 spot and help ATI move into the 3d gamer market. But looking at the benchmarks for the Kyro II, the new chip beats the DDR Radeon in several benchmarks, impressive considering the newcomer's lack of T&L rendering. Unless the Kyro has horrible image quality, I would guess ATI is not pleased.
* I realize that Power VR et al have been around for years making chips for consoles and arcade games. So was nVidia before the riva 128; I'm talking about entry into the PC graphics card market.
tile based rendering (Score:1)
Re:sounds impressive.... (Score:1)
Same for ATI. I've got the impression that both of those manufacturers made cards which are more suited for their A/V capabilities than their 3d acceleration capabilities..
-since when did 'MTV' stand for Real World Television instead of MUSIC television?
Re:Too much power? (Score:1)
In fact, I used my voodoo3 up until a month ago when I bought a new geforce2 gts. Yeah, I get double (or more) fps.. but it's almost overkill for what I'm using it for at the moment.. Frontline Force
But seriously, if it's an FPS and it came out sometime in the past 4 years, I've run it on a Voodoo2 or a 3 (2k).
-since when did 'MTV' stand for Real World Television instead of MUSIC television?
Re:sounds impressive.... (Score:1)
Re:Tile-based rendering, strengths and weaknesses (Score:1)
So what are you saying? Tile-based rendering is the work of Satan?
Re:A little bit hyped maybe? (Score:1)
slow server; try freenet (Score:1)
Check out
freenet:CHK@qANifG8baVSFWd-ZsW5kvFVjcwcOAwE,ZXRUspPkxMFRzwRsJdrpqg
Wait for GeForce 3... (Score:1)
Forget about games. (Score:1)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ the real world is much simpler ~~
Re:Just Curious (Score:1)
--
Re:Dreamcast's PowerVR chip is/was also tile-based (Score:1)
Apocalypse 5D by Videologic. Nice card for its time; however, there was ZERO support for the 3D part of the card under Linux, and the Windows driver hasn't been updated in over two years. What good is the "better" solution if you can't use it, or use it effectively?
My GeForce at least has drivers under Linux and updated drivers under windows. That's more than NEC and Videologic can ever claim.
And yes I'm bitter...
Re:A lot hyped, definitely. (Score:1)
I don't call slightly faster results in a distinct minority of the benchmarks vs. much slower results in the rest 'outperforming'.
Re:sounds impressive.... (Score:1)
Although 3D - nah, they suck.
Re:Good for competition (Score:2)
Re:Bad news for ATI (Score:1)
Re:Similar to the NEC PowerVR and PowerVR2 (Score:1)
Similar to the NEC PowerVR and PowerVR2 (Score:3)
Here's how it works:
Anyway, because the system uses ZERO memory bandwidth for Z-buffer calculations, the system is far more efficient, even though it is essentially traversing the scene dozens of times for each frame.
This is why the Sega Dreamcast is often able to have better performance than the Playstation 2.
Cryptnotic
Tile-based rendering, strengths and weaknesses (Score:5)
There is a good article on it, as applied to the powervr (which is using the same kind of architecture) at http://www.ping.be/powervr/PVRSGRendMain.htm [www.ping.be]. As others already said, you can see the results on the Dreamcast, or on the arcade version, the Naomi.
The strengths are obvious:
The weaknesses are a little less obvious:
As a result, these cards are nice, but mostly represent another set of tradeoffs, not necessarily a revolution.
OG.
Tiles and Benchmarks and FUD, oh my! (Score:1)
"Also included in the Kyro II is 8-layer multisampling that allows for up to 8 textures to be applied in a single pass. Other cards are forced to re-send triangle data for the scene being rendered when multitexturing, eating up precious memory bandwidth. Since the Kyro II features 8-layer multisampling, the chip can process the textures without having to re-send the triangle information."
Guys, if the chip is all that, let it stand out on its virtues alone. Your competition has been multitexturing since the Voodoo II.
And of course:
"Missing from the Kyro II feature set is a T&L engine. Claiming that the current generation of CPUs are far superior at T&L calculations than any graphics part can be, STMicroelectronics choose to leave T&L off the Kyro II."
I could sneeze at this point and mutter the appropriate profanity under my breath. However, I'd much rather see the chip succeed or fail because of its feature set, instead of the ability of Imagination/STMicroelectronics at slinging mud at the competition.
Those benchmarks are really interesting. It would be fantastic to have a successor to 3Dfx, if only to keep Nvidia and ATI on their toes. My chief worry towards their commercial acceptance would be how much of DirectX 8 do these guys support? It's not a fair worry, but I think it's a realistic one. I wish them the best of luck.
Wasn't GigaPixel Tile Based ? (Score:2)
Of course GigaPixel was acquired by 3dfx for approx. 300 million US$ after initially winning the Xbox graphics contract and then having it pulled from beneath them. And of course 3dfx was in turn acquired (though for only 150-160 million US$?) by nVidia. So if tile-based rendering has a future (and GigaPixel's is good), perhaps we can expect to see it from nVidia too before long.
Re:sounds impressive.... (Score:2)
Hmm... if the OpenGL support was a little bit better, I might be able to discern that it was actually Matrox out there, instead of the 4fps which made me miss the billboard as I rode by on my snail.
Note to moderators: Have you actually checked on the OpenGL driver from Matrox to see its performance? It really _IS_ that bad. Go ahead. Mod me down, its still the truth.
---
Re:This card running on a 4 year old architecture! (Score:2)
That said, adding four of these inline and jumping to DDR would be decidedly sweet. The chips are fairly small, which would facilitate this, but I'm not sure if they are capable of that... since they just work on tiles, I can't see why you couldn't assign each a section of the scene, but...
It will be quite a while before hardware T&L comes out on these, I think, considering that this iteration is only just being released.
---
This card running on a 4 year old architecture! (Score:3)
What's a "video card"? (Score:1)
Re:Nice card and 1st damn it (Score:1)
Similar, Nay, Identical to the NEC PowerVR... (Score:2)
This design is very similar (if not the same) as the NEC's PowerVR and PowerVR2 chipsets.
That's because the Kyro/Kyro II use the PowerVR3 architecture. NEC used to partner with Imagination to produce those older chips.
-----
#o#
Re:Tile-based rendering, strengths and weaknesses (Score:1)
Re:Interesting FSAA performance. (Score:1)
Re:More memory speed? (Score:1)
Re:sounds impressive.... (Score:1)
This actually sounds pretty damn cool, and with a little luck will provide some nice competition for nVidia. Since 3Dfx went bye-bye, I have been a little worried that nVidia would be the only real gaming card supplier (well, I guess that depends on if you count ATi).
And why would you not count ATI?
-----
#o#
Re:More memory speed? (Score:1)
This thing is great, I want one (Score:1)
DRI support any time soon? (Score:2)
The article talks about the Windows drivers (complaining a bit about them - I assume they're still in development though). It does mention OpenGL support in the Windows drivers...
Does anyone know if there will be DRI support for this chipset any time soon? One of these days I'll have to upgrade from my old Voodoo Banshee card...
---
"They have strategic air commands, nuclear submarines, and John Wayne. We have this"
Linux Support? (Score:2)
Re:Similar to the NEC PowerVR and PowerVR2 (Score:2)
Of course, in the mean time, no specs -> no free/open XFree drivers -> no sale for me