Graphics Software

Sony's Monster Graphics Chip 148

GFD writes "EETimes has an article about a monster (462-mm2!!) graphics chip discussed in a paper at ISSCC. The numbers are astounding, such as 256 mbits of on-chip memory. Barely manufacturable, though..." I'd still love to see what that bugger can do... bet it still can't simulate realistic hair in real time ;)
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Can you imagine running Quake 3 at the highest detail level at the highest resolution? Spilling the blood of your opponents on a big monitor with no noticeable frame rate drops would be heaven for me. (Forgive the pun.)
  • anymore, as most porn starlets are shaven, or maybe have a thin, sleek landing strip.
  • To the power of the Force. Do not put too much faith in this technological terror. It would be nice to rest on our laurels for a minute and not have something 12e500 x better than what we bought this morning. Oh well, at least I can still play Alice...
  • Heh, it'd look about the same. Face it, for all we love Q3's eye candy, it doesn't really go up to the obscene levels where you would make use of something half this good. Hell, get an Athlon and a good GeForce and you're pretty close to the top anyways.
  • Get out! That just can't be! 462-mm2? 462-mm2??

    Just imagine a card with two of these. It'd be...carry the one....924-mm4!
    --
    MailOne [openone.com]
  • WARNING! This is a goatce.cx link! P.S. It said "sony won't say" whether it's for PS III.

    Ashes of Empires and bodies of kings,
  • Did anyone catch the number of transistors on that chip? It's close to the number you find on AMD or Intel CPUs. Either it's incredibly complex, or designed horribly. Think of the cooling system needed to cool the chip. I'd think it'd just about HAVE to have a heat sink and fan along the lines of what you put on a CPU. I could be wrong, but doesn't heat dissipation have something to do with the number of transistors? I'd still like to see some performance numbers.

    Khyron
  • Last I heard the GeForce2 Ultra was doing 31 million. Let's see here. If by next year the GeForce3 Ultra does 45 million, then by the time the PS3 comes out PC video cards will be pushing 100 million. Looks like the only thing this card will be good for is arcade machines.

  • Yep, that's about a square inch...
    sounds barely believable...
  • by Pxtl ( 151020 ) on Wednesday February 07, 2001 @11:28AM (#448360) Homepage
    PS2, for all its l33t hardware, doesn't seem too impressive. For all that neat stuff, it's designed for benchmarks; developers can't really use it all that well, and it's too hard to develop for... when they make use of this thing, will they have the same problems? E.g. "Hooray, it can render up to 65536x65536-res texture maps on over 4 billion polys... but it's only got 4 megs of video ram." Or something to that effect. For that matter, when you get to that level, how well can a human develop for a platform? Modelling gets tougher and tougher as the renderers get better... making more polys, better texture maps, multiple maps (bump, alpha, luminosity, reflection, etc.) for layers, blenders, better frame rates for animations.

    I'm all for this hardware, but ya gotta wonder: can we even properly use it.... then again, that's been said many times before.
  • by Christopher Thomas ( 11717 ) on Wednesday February 07, 2001 @11:28AM (#448361)
    I've just finished reading the article. A few thoughts spring to mind:

    First of all, this sounds like the Emotion Engine hype all over again. It might be an amazing chip, but it'll probably just be "decent" when it finally gets here.

    Secondly, don't expect to see this in quantity until 0.15/0.13 micron fabs get here. Remember the Emotion Engine. Fabbing a chip that big is a royal pain. It'll get much easier when finer linewidths shrink the die size.

    Thirdly, CMOS fabrication processes can be optimized for good quality DRAM, or for good quality logic. Not both (without throwing lots of money at it). The two types of circuit have contradictory requirements for transistor characteristics. In practice, this has meant that DRAM-plus-core chips have either had slow cores or bulky, slow, hot DRAM.

    The only saving grace is that most of the chip area will be DRAM. This means that most of it will be tolerant of manufacturing faults (you usually have more DRAM rows than you need, and cut out the faulty ones before packaging). This is the only thing that will let them fab a chip this size at all.

    The chip should provide interesting perspective when it arrives (much as the Emotion Engine did), but I don't expect it to take the world by storm.
  • by nomadic ( 141991 ) <`nomadicworld' `at' `gmail.com'> on Wednesday February 07, 2001 @11:28AM (#448362) Homepage
    Toshiba has expressed interest in offering the 128-bit processor for high-end routers and switches.

    For....graphics? "Hey, this is great!" "What are you talking about, we lost two whole subnets!?" "Yeah, but look at how beautifully those error messages are rendered"
    --
  • Why, pray tell, are you running at such a low resolution, and why are you satisfied with such a low framerate? I for one will not be happy with anything less than 1280x1024 and ~70-80 fps. So no, this chip is far from the final chip.
  • by Tiroth ( 95112 ) on Wednesday February 07, 2001 @11:29AM (#448364) Homepage
    While it is true that something like 50 million polys/s would be the upper limit for the number of renderable polys on a screen of that size, you are forgetting about all of the hidden polygons necessary to build a realistic scene.

    I've seen estimates that figure it would take about 50-200 million polygons to render a modest scene in photo-realism. Now multiply that by 60 frames/s. You are already talking about 3-12 billion polys/s here, and we haven't even started talking about extremely complex surfaces like hair/fur/grass/leaves.

    I think we will be building chips for some time before 3D reaches the same clarity that motion video currently achieves in 2D.
  • by djocyko ( 214429 )
    The numbers are astounding, such as 256 mbits of on-chip memory.

    woah... so they got 32 megs of RAM on a chip that happens to be a huge chip. woop!

    I suppose that's more impressive than 32 megs on a 7" chipset...

  • Actually, the story said 256 mbits. 256 Mbits would be equivalent to 32 megabytes. You can't do much of anything with 256 millibits.
  • I'd think it'd just about HAVE to have a heat sink and fan along the lines of what you put on a CPU.

    It's been done before. I have an Indigo2 R10K Extreme on my desk at work, and the fan on the video card keeps me warmer than the other 3 CPU fans from other machines combined.

    But oh MAN does it render so nicely... :)
  • What matters is all the other shit that gets rendered along the way. All the texturing, alpha blending and anti-aliasing bring those 50 million down considerably. Besides, you're assuming there is one polygon for each pixel. Suppose the scene has polygons behind polygons, ya know, like trees with semi-transparent leaves behind each other. Then the more polygons a card pushes, the further out toward infinity it can draw.

  • by furiousgeorge ( 30912 ) on Wednesday February 07, 2001 @11:31AM (#448369)
    ah kiss --- such nonsense.

    >Therefore, logically, when we reach 50 million
    >polygons/second in calculation for a graphics
    >chip, it is effectively impossible to make the
    >graphics quality any better without improving
    >the quality of the screen.

    Oh Bollocks. Just spitting a pixel to the screen has nothing to do with the overall quality of the image that is produced. Anti-aliasing. Motion blur. Depth of field. Programmable shading (no more of this Gouraud/Phong with badly mapped textures etc etc). Don't even get me started ---- TONS of effects that can be incorporated. Hair, fur, skin, particles, atmospheric effects, lens effects, volume rendering effects, etc etc etc.

    Until you can make a CG image indistinguishable from a live source at that resolution, there is TONS that can be improved.

    Have you worked in the graphics biz? I have......

    j
  • This argument gets made frequently. Here is why it doesn't work (a rough numeric sketch follows the list):
    • You are assuming perfect hidden surface removal before you hit the chip. In real life, some things will get drawn, then obscured by opaque objects in front of them.
    • Complicating this is transparency, where things drawn in front should not completely obscure things behind them.
    • Multisampling (e.g. FSAA), where pixels are actually calculated from sub-pixels.
    • Volumetric objects. This is an extreme case of the transparency item above. This is used in medicine, but can also be used for things like clouds.
    • Stereo (takes twice the frame rate).
    • Besides, who still runs in 1024x768 ;-)
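
A rough numeric sketch of the points in this list (Python; the depth-complexity and multisampling figures are assumed, typical values, not from the article):

    width, height, fps = 1024, 768, 60
    depth_complexity = 3     # average overdraw per pixel -- an assumed, typical figure
    aa_samples = 4           # 2x2 multisampling

    visible = width * height * fps
    rendered = visible * depth_complexity * aa_samples
    print(f"visible:  {visible / 1e6:.0f} Mpixels/s")    # ~47
    print(f"rendered: {rendered / 1e6:.0f} Mpixels/s")   # ~566, an order of magnitude more
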
  • Come on here. He was thinking that because Sony makes consoles, the chip will go in a console, not a PC. HDTV doesn't get better than 1024x768 at 60Hz.

  • 1600 X 1200 resolution at 60 fps is over 115 million pixels/second. IIRC, 60 fps is the limit our eyes can see. Also, I'm not sure your pixel to polygon comparison is valid.

    -B
  • by rw2 ( 17419 )
    bet it still can't simulate realistic hair in real time ;)

    Jesus taco, enough pr0n talk already...

    --

  • Using 0.18-micron design rules, the latest Graphics Synthesizer is an astounding 21.7 x 21.3-square-millimeters and contains 287.5 million transistors
    (snip)

    I assume they mean that the wafer is 21.7x21.3mm^2; this is a little under an inch to a side. At first read I thought they meant total package size, which isn't that impressive considering even the size of an old PII.

  • Just think about it. Microsoft has done its best to hype the X-Box. I saw the previews, it's not all that impressive. I saw actual jaggies on the X-Box demos. This is not a good thing!

    Let's hope Sony doesn't alienate its developers like they did for the PS2.

    fialar

  • For you non-metric people, 462 millimeters is 18.18 inches. Of course this thing will be near impossible to manufacture, it'll be impossible to shove it into any standard computer or game console! That leaves servers and insanely big workstations, and I doubt that Sony has those in mind for this.

    However, if this is another stupid die-shrinking example, I strongly advise you to go to your local Sony representative, and slap him or her in the face.

  • My eyes can tell the difference between 60Hz and 100Hz, I swear! And if I don't have 100fps in Quake 3, I swear the game sucks! Wahhhh! Wahhhh! Quake 3 at 100fps improves my game! Really it does! I can aim better at 100fps than 60! Wahhh! My hands are really that fast!

  • by Anonymous Coward on Wednesday February 07, 2001 @11:39AM (#448378)
    The current GS delivers the same triangle and fill rates: 75 million pps peak, 1.2/2.4 gigapixels/sec [the larger number is for untextured pixels]. The internal busses on the GS are 1024 bits, so at least they've doubled this.

    So this chip has the same fill rate, but 8x the RAM, only 2x the RAM ports, and 7x the complexity?

    It sounds to me like Sony have just made this an 8x multitexturing part at *huge* expense. And an 8x multitexturing part with only 2x the internal bus for texture cache reloading. Slow.

    And supersampled antialiasing will cost you 75% of your fillrate, since that isn't increased either.

    I just don't understand who this chip is for.

  • DEC tried this with their monster machines (e.g. the VAX 8650). They found out that you get better performance with distributed systems.

    In a smaller way, chip manufacturers found this out too. They were doing it for a different reason, though -- signal propagation delay slows things down. You want a short path from memory to CPU (like the in-package cache of the Pentium Pro), but manufacturing wasn't up to par (thus the excessive failures of the P-Pro). Apparently, the tech hasn't advanced enough to produce such huge chips without excessive loss. For instance, Slot 1 tech, with cache stored external to the CPU, allows you to match working components and toss only the failures.

    The PS2 lost a huge percentage of chips due to technical problems, and so will this if attempted without some new die manufacturing tech.
    by ilsie ( 227381 ) on Wednesday February 07, 2001 @11:40AM (#448380)
    "The numbers are astounding, such as 256 mbits of on-chip memory."

    Wow, two hundred fifty-six millibits of on-chip memory. That's like, what, about 1/30th of a byte?
  • In raw graphics performance, the chip can process 75 million polygons per second, has a pixel fill rate between 1.2 and 2.6 gigapixels/s and can draw 75 million polygons/s...

    So what? The NV20 will probably be close to that when it's released, and will eventually beat it. Not to mention the NV20 is only a matter of months away, and this thing probably won't even live to see mass production.

  • Indeed one would normally expect a chip of this size to suffer yield problems as dictated by Murphy's law (that's the REAL Murphy's law, where fractional yield is given by

    Y = ((1 - e**(-AD))/AD)**2, where A is area, D is defect density

    rather than the other Murphy's law which affects all our lives). However, there are remedial measures, especially with DRAMs, which can keep Murphy at bay. These largely have to do with redundancy, which is to say one designs the RAM array with many more rows than can actually be addressed, then one detects dead or malfunctioning rows at device test and substitutes in the spares. This is a relatively easy thing to do with a nice regular structure like a RAM.

    Also, one wonders: with that many polygons squirting past the eyeballs, is it acceptable to ignore a modest number of defects? After all, human vision is reasonably fault-tolerant compared to many computing applications.

    Even so, I take my hat off to anyone who gets acceptable yield from a device more than 2cm on a side. RESPECT!

    Robert
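
For the curious, the yield model above is easy to play with (Python; the defect densities are illustrative assumptions, since the article gives none):

    import math

    def murphy_yield(area_cm2, defects_per_cm2):
        """Murphy's yield model: Y = ((1 - e^(-A*D)) / (A*D))^2."""
        ad = area_cm2 * defects_per_cm2
        return ((1 - math.exp(-ad)) / ad) ** 2

    area = 4.62  # the 462 mm^2 die, in cm^2
    for d in (0.2, 0.5, 1.0):  # defects per cm^2 -- illustrative values
        print(f"D = {d:.1f}/cm^2 -> yield = {murphy_yield(area, d):.1%}")
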

    HDTV doesn't get better than 1024x768 at 60Hz

    I thought that HDTV (if it will ever exist :-) ) supports resolutions of up to 1080i which equals 1920x1080???
  • um, that was 462 SQUARE millimeters.

    Now, 462 Trapezoidal millimeters, that'd be an accomplishment.
  • You might be right about that. I think you may well be right about that, my mistake.

  • I checked myself, it's 21.7mm x 21.3mm. Also, that article misused the power of 2 and the term "square millimeters". I call upon the wrath of Le Système Internationale!
  • The X-box won't necessarily be the most powerful console. Nintendo's Game Cube [ign.com] is shaping up to be a pretty powerful machine (400MHz PowerPC Gekko (similar to G3) processor, 24MB 1T-SRAM main memory, 16MB graphics/sound memory, ArtX video chip, etc.). Also, Nintendo is well known for their video game characters (Mario, Luigi, Zelda, Link, and that damn Pikachu), so the machine won't be pure hype like the X-Box...
  • I don't need to, my 1ghz T-bird, GeForce 256, and 19" mag monitor do it for me :)
  • Since the chip is 18 inches on each side - which is about the same size as a flat panel display - they'll probably just build this video card *into* TFT screens. Imagine how fast and clean the picture will be with the graphics card so close to the display.
  • Yeah, but the size in the article was 462 *square* mms. That's not quite so big, and is under a square inch. 462 / (25.4 * 25.4) will give you the approximate size in square inches.
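
The conversion that several posters keep redoing, collected in one place (Python):

    import math

    area_mm2 = 462
    print(math.sqrt(area_mm2), "mm per side")        # ~21.5, matching the 21.7 x 21.3 die
    print(math.sqrt(area_mm2) / 25.4, "in per side") # ~0.85
    print(area_mm2 / 25.4**2, "sq in")               # ~0.72
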

  • Actually, the resolution of a standard TV is far less accurate than that of any PC monitor out there. It's just that you don't notice it, because you're too far away, the signal is analog instead of digital, and the pixels are auto-antialiased. I think somebody once told me the resolution of a TV corresponds to something like 625 lines, 50 Hz, 2:1 interlace, 4:3 aspect ratio. So if you play your game on a regular TV you're wasting an awful lot of detail (& computational power); still, all these powerful and expensive gaming consoles usually are connected to home TVs... strange, no?

  • You're assuming that each pixel only has one texture or lighting pass applied to it, which hasn't been true for years. Modern games can hit 6. Also, you're tossing out the possibility of anti-aliasing: 4x4 supersampling will multiply your fill-rate requirement by a factor of 16. Sticking with your 36 Mpixels/sec number, to get the same performance in an app with 4x4 anti-aliasing and 5 passes, you need 2.88 BILLION pixels/sec. That's quite a bit more.
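
Checking that arithmetic (Python; the 36 Mpixels/sec base figure is carried over from the post being replied to):

    base = 36e6          # pixels/sec for the parent's 1000x768 @ 50fps estimate
    subsamples = 16      # 4x4 supersampling
    passes = 5           # texture/lighting passes per pixel
    print(base * subsamples * passes / 1e9, "Gpixels/s needed")  # 2.88
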
  • Possibly, but the "i" means that you only have to draw half the lines per refresh, substantially reducing demand

    -Michael
  • Bzzzt. Wrong. That's 462 SQUARE mm, or 21.49mm on each side. That's a bit under an inch per side.
    --
  • One thing that always bugs me about the graphics chip industry is that they only talk about simple measures like polygons per second. Polygons per second of what? You can render a billion Phong-shaded polygons per second, and you're still going to have awful plastic-looking surfaces and unrealistic, difficult-to-illuminate building interiors. There's more to graphics rendering than polygons and OpenGL. We have a long way to go before chips replace software for high-quality photorealism. A chip that accelerated high-fidelity ray-traced radiosity solutions, now _that_ would be cool.

  • That's hilarious.. Dual P3-1GHz, GeForce2 Ultra, 512MB PC133, and I still look forward to the day when I can play Quake3 in 1024x768, or with FSAA enabled. Maybe the NV20 will let me pick one of those two..
  • "924-mm4!" eh? I don't think I have any slots that accomodate a four-dimensional video card.
  • I don't think that Lovers Arrival, The has this right at all -- YHBT.

    The comparison *is* invalid. Remember that FPS measurements are averages, not sustained performance. 60fps isn't that great if the card slows down to 20 or less in a critical moment of the game when things get heavy. There is much to be said for people who claim to see the diff. at higher framerates.

    --

  • moron - it's 462 mm^2, which is 0.716 sq inch, perfectly manufacturable. read before you shoot
  • by maraist ( 68387 ) <michael.maraistN ... m ['AMg' in gap]> on Wednesday February 07, 2001 @11:59AM (#448400) Homepage
    I think the real push should start moving away from higher polygon rates and more towards greater visualization enhancements for each polygon. We're already dealing with cool things such as environmental bump mapping. I'm still waiting for the fully featured ray-tracing engine. I'd be perfectly happy with a scene that was only 30fps, 800x600, average number of polygons if I could just feel the glimmer of living light.

    Anymore, I'm not impressed with making the numbers of yesterday's technology bigger. Perhaps with this on-board memory, Sony could venture into some realm of high-bandwidth calculations. Not being well enough versed in the industry, I can't venture to make guesses, though (voxels or better shading techniques, maybe?)

    -Michael
  • HDTV does 1920x1080 frames. They can be up to 30Hz progressive scanned or 60Hz interlace (that is, drawing half of the screen at a time) scanned. Or, you can bump the resolution down to 1280x720 60Hz progressive. 1080i, as they call the 1920x1080x60Hz Interlaced standard, is still a pretty good resolution and is nothing to sneeze at. In a few years, there will probably be HDTV sets with computer-friendly inputs that will do 1920x1080x75Hz progressive.
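
For comparison, the pixel throughput of those modes (Python; standard HDTV frame sizes, not figures from the article):

    # 1080i delivers 60 half-frames/sec, i.e. the same pixel rate as 1080p30
    print(1920 * 1080 * 30 / 1e6, "Mpixels/s for 1080i or 1080p30")  # ~62.2
    print(1280 * 720 * 60 / 1e6, "Mpixels/s for 720p60")             # ~55.3
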
  • Pish. Four-dimensional? Wow. Retro. ;)
    Oops... wasn't supposed to tell you about that quantum computer sitting in my garage....
  • Your eyes see much faster than 60fps. I'm not sure what they see at, but to prove this point: wave your finger in front of your monitor -- you see the strobe behind it. Now wave your finger in front of a constant light source, like a white piece of paper -- no strobe. The strobe is because you see faster than 60 fps (or 85 with my monitor).
  • Take the square root before converting to inches. It is large, but not *that* large.
  • Though I agree with you, just wanted to nit-pick:

    Hidden surface removal is only really a factor when the card can't handle the current volume. It scales with the complexity of the scene. Plus, there are technologies such as ATI's (and now nVidia's) that help reduce the effect considerably.

    FSAA is largely irrelevant when you achieve high enough resolutions. Results may vary, though.

    Stereo has never been a major factor, nor do I think it'll really catch on, especially on a console, unless you split the output signal.

    Still, there are plenty of other common sense arguments promoting continued bleeding-edge development. Not least of which is the fact that the intros are still rendered separately.

    -Michael
  • I just checked to see if it was still there, but unfortunately, my observation destroyed it.
  • If I remember correctly, our eyes see at ~430 FPS, so we are still talking about a long way to go to make a "realistic" image where the eyes' refresh is physically slower than the video's.

    I hardly think our realism barrier consists mainly of faster-than-TV refresh rates. If that were the case, I could get out my CGI Pong video game and run it up to 1,000fps.

    Better yet, NetHack!!!

    -Michael
  • Imagine a screen of 1000x768 pixels, at 50fps. That's 36 million pixels per second. Therefore, logically, when we reach 50 million polygons/second in calculation for a graphics chip, it is effectively impossible to make the graphics quality any better without improving the quality of the screen.

    As many others have replied, you're missing a lot about what pixel rates are about (hidden pixels, alpha blending, antialiasing, etc.).

    However, there is a grain of truth in what you're getting at that may eventually result in a whole new generation of hardware. Basically it's this: the number of visible pixels on the screen isn't really changing much, certainly not at the same rate as the ability of silicon to manipulate them... which means that rendering techniques whose cost is proportional to the number of pixels (rather than scene complexity) may become more interesting. For example, ray tracing: the number of rays is (to a first approximation) proportional to the number of pixels rather than the scene complexity (though the cost of processing each ray also goes up with scene complexity - not necessarily at the same rate, if you are careful)....
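
A sketch of that scaling argument (Python; the resolution and the logarithmic per-ray cost model are illustrative assumptions):

    import math

    width, height, fps = 1024, 768, 60
    rays = width * height * fps           # primary rays/sec: fixed by the screen
    print(f"{rays / 1e6:.1f} Mrays/s")    # ~47.2, regardless of scene size

    # per-ray cost grows with scene complexity, but only ~logarithmically
    # if a spatial index (BVH, octree, ...) is used
    for polys in (10**4, 10**6, 10**8):
        print(polys, "polys -> relative per-ray cost ~", round(math.log2(polys), 1))
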

  • No, modelling gets tougher if you try to do it all.

    Most things are done programmatically. Instead of adding each hair by hand, you set some settings and define where they should be. Or you define some control points and the NURBS engine renders the polys. Or you define how much snow is falling, from where, and how fast, and the system calculates all the collisions. And a million other effects that create massive polys that do NOT take a long time to create. (Relatively speaking.)
    FunOne

  • As a general trend, chip manufacturers claim all sorts of strange records in order to keep attention (and the "nr. 1" idea) focused on their company. Remember AMD and Intel fighting for first spot, Transmeta with its paperware, the XBOX selling future nVidia designs like they are here today, nVidia topping 3DFX with really fast (and big, and dense) processors and very ugly image quality? So you see them putting out new designs and new chips that in reality don't make much sense, and are even absurd in some cases, but that are put out nevertheless to play king of the hill with competitors. The public only gets to enjoy glimpses of that wealth about 6 or 12 months later. This chip is no different.

    For instance, the chip Sony is proposing can only be put to work in a dedicated hardware environment, like a PlayStation II. Even with this much memory on-chip, you still have bus issues, though they won't play as big a part. You should not forget that while this chip opens up new possibilities, games, by the time this chip arrives in full quantities (which will be in about 2 years, enough time to get revenue out of the PSII), will have evolved as well and will probably even be limited by this kind of design. My guess is DRAM on chip will help for games that exist today, but won't do for games that we'll play in 2 years' time (as the 75-million-polys-per-second rate suggests). I'm thinking FireWire and optical here. So clearly this chip is heading for dedicated and expensive platforms like the PlayStation III, and possibly PC video cards as well, but I expect nVidia to have a serious advantage in performance by that time, because they know their designs inside out and know where the additional gains can be found in PC architectures, not to mention the experience they are getting from the XBox design, which will gain considerable market share from the PS II if it should prove to be stable in gameplay. Let's hope all this designing and showing-off in the end does arrive where we want it, which is in our boxes!

  • Question:
    How is Sony expecting to get decent yields on such a big die? The probability of a chip flaw goes up with the surface area.

    Answer:

    256 Mbit = 32 MByte = roughly 256 Meg of HIGHLY symmetric transistors (one per DRAM cell). We're already building chips with 20, 30 and even 100 Meg of complex transistor layouts (granted, most is in symmetric caching or register sets). Additionally, single-ported memory is a lot simpler than multi-ported, LRU-tagged cache. So while Intel, AMD, Alpha, and SUN fuss over 4 Meg L2 cache sizes, we're all in a similar ball-park.

    Next, a little over a year ago, I read an article about a new DRAM memory architecture that was designed for extremely high yields. Basically you'd have dozens, hundreds or thousands of mostly independent memory blocks; then, after the testing stage, you marked which blocks were good, which allowed the memory to ignore bad chunks transparently. As long as you met a minimum memory size, you were golden. If a similar technology is used here, then they'll probably over-allocate a bit and let anything down to, say, 28 Meg of good memory be considered passing.

    However, I've read other interesting questions, such as: are they going to optimize this for power consumption / heat dissipation, or for performance?

    -Michael
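
A toy model of that mark-out-the-bad-blocks idea (Python; the row counts and defect rate are made-up illustrative numbers):

    import random
    random.seed(0)

    ROWS, SPARES = 1024, 32                  # addressable rows + spare rows
    physical = ROWS + SPARES
    bad = {r for r in range(physical) if random.random() < 0.01}  # 1% bad rows
    usable = [r for r in range(physical) if r not in bad]

    if len(usable) >= ROWS:
        row_map = usable[:ROWS]              # logical row -> physical row
        print(f"part passes; {len(bad)} bad rows remapped away")
    else:
        print("part fails: not enough spare rows")
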
  • Since when is 32 megabytes on a card impressive? Maybe last year or the year before, but 64 is sort of standard now for any sort of high-end pro-line gaming or rendering card.

    Why would they cite the memory in megaBITS? Sure, it sounds more impressive to people that don't know the difference between a megabyte and a megabit, but it's still ONLY 32 MEGABYTES!

    Girly.
  • I must disagree with you. When the N64 came out in 1996, it was going to wipe the floor with the Playstation. That never happened. With the XBOX entering the fray of consoles, with the top names as well as a shitload of other game developers working on titles, there will be a lot more reality than hype. What is Microsoft doing right? They are using a set of APIs for the XBOX that most Windows developers are familiar with: DirectX.

    It disturbs me that people just can't seem to get in their heads that Microsoft is a capitalist organization. What drives them? Money of course, and they see a good market in video games. Microsoft has been producing software for about as long as I have been around. I would love to really see if Nintendo is still in the game as far as consoles go (no pun intended)

    Yes I am expecting to get modded down for being pro-Microsoft, but if it weren't for their products I would not have a career.

  • This story puts everything in bits, it seems. As a result the numbers appear much better than their byte counterparts. Here are some things I noticed.

    462mm x 462mm? You want large? There are 25.4 mm per inch. This is saying it is almost 18.2 inches x 18.2 inches (can anyone say 1 1/2 feet by 1 1/2 feet?). I sure hope this is a mistype of the actual size of this beast.

    256mbit of memory? That comes out to 32MB of memory. I've got a GeForce2 GTS coming in the mail with 32MB of mem. Granted, the memory is embedded in the chip itself, but I think that would result in the price being a lot more, especially if you want 64MB of mem.

    75 million polys/sec. Sure, when the chip has nothing else going on, doesn't have to worry about lighting or textures, and the triangles it is drawing are all touching each other so there are fewer vertices to draw. Splitting the triangles up so there are 3 vertices being drawn per triangle will easily drop this number to 1/3 of it. Throw in some lighting and it drops more. Same with textures.

    "the chip can process 75 million polygons per second, has a pixel fill rate between 1.2 and 2.6 gigapixels/s and can draw 75 million polygons/s". Anyone like being redundant? I count 2 things in there and it seems they are searching for features.

    A 2,000-bit internal bus means a 250-byte internal bus. Why 250? Why not 256? Most chips have a maximum internal bus sized to the number of bits the chip can handle. If the chip is a 128-bit chip, then it appears to have a bus far wider than that, so it is feeding the chip faster than the chip can empty it. This could be good, but it can also be bad.

    With all said and done, the Sony graphics chip is 4x as big as nVidia's GeForce2 GTS and only 2x the power. Yep, let's slap a huge beast into a machine that probably sucks up the power supply and generates more heat than the CPUs :)
  • Per-Pixel shading. This is where we'll see a drastic improvement in the quality of interactive graphics. See those nice 3D renders from MAX? Those are likely Phong/Blinn shading models, which are per-pixel. As opposed to Gouraud shading, which calculates color values at each vertex and interpolates across the polygon. This latter method is what is used today in interactive 3D. This is why you can see the nasty aliasing across a low resolution mesh when using realtime lighting.

    Unfortunately, this technique doesn't rely on the enormous fill rates that this new Sony chip probably offers; rather, it requires an incredibly fast, integrated hardware lighting engine. Nothing in the article mentions this, and in the current PS2, rasterization and hardware T&L, while they work together, are completely separate entities. Could be a while...

    --Terrence
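
A minimal sketch of the difference (Python; the normals, light direction, and diffuse-only lighting are illustrative assumptions):

    def normalize(v):
        n = sum(x * x for x in v) ** 0.5
        return tuple(x / n for x in v)

    def diffuse(normal, light=(0.0, 0.0, 1.0)):
        return max(0.0, sum(a * b for a, b in zip(normalize(normal), light)))

    # two vertex normals angled away from the viewer/light
    n0, n1 = (1.0, 0.0, 0.2), (-1.0, 0.0, 0.2)
    c0, c1 = diffuse(n0), diffuse(n1)

    for t in (0.0, 0.5, 1.0):
        gouraud = (1 - t) * c0 + t * c1      # interpolate vertex *colors*
        n = tuple((1 - t) * a + t * b for a, b in zip(n0, n1))
        phong = diffuse(n)                   # interpolate the *normal*, then light
        print(f"t={t}: gouraud={gouraud:.2f}  phong={phong:.2f}")
    # at t=0.5, Phong finds the highlight (1.00) that Gouraud misses (0.20)
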
  • bet it still can't simulate realistic hair in real time ;)

    See Shenmue on Dreamcast, particularly on the Passport disc where you can zoom in close to the characters' faces. The game supports the best realtime skin and hair work I've seen (right now, beating the Playstation 2).

  • So this new Uber-chip does 75 million triangles, and has a fill rate of 1.2 - 2.6 Gpixels/s. Doesn't that seem familiar [arstechnica.com] to anybody? Those are the box specs for the PSX2. The extra memory will be quite helpful, but this isn't very impressive so far.
  • Last I heard, the alpha of NVidia's newest chip was already seven times as powerful as the GeForce2 Ultra. Matter'o'fact, I think I read this on /. some months back. On the other hand, I've been known to dream of these things. :)

    "// this is the most hacked, evil, bastardized thing I've ever seen. kjb"

  • by Beatlebum ( 213957 ) on Wednesday February 07, 2001 @12:39PM (#448419)
    That's 462 SQUARE MILLIMETRES, which is less than a chip measuring 1 inch x 1 inch.
  • Cheesewhiz,
    I think you fail to realize the fundamentals of the architecture here. Based on my experience with the PS2, the machine this thing eventually ends up in will have an incredibly fast bus. PCs are severely limited by the fact that any data the graphics card needs must be sent over the AGP/PCI bus to the card. This is slow. However, the PS2 has a very fast bus from RAM to the GS (DMA, no need to interrupt the CPU), and the bandwidth from the GS local memory to the GS is INSANELY fast.

    So while a PC must have a lot of local storage in the video card (which isn't even on die, which makes it much slower) because texture transfers are very expensive, a PS2 (and logically its successor) doesn't need as much, simply because texture transfers are so cheap. And having the memory on die makes writing to/reading from texture buffers or the frame buffer also incredibly fast, thus increasing fill rate.

    --Terrence
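
Rough numbers on why that works (Python; the 3.2 GB/s main-bus and 48 GB/s eDRAM figures are the commonly quoted PS2 specs, not from the article):

    main_bus = 3.2e9    # bytes/sec, CPU/RDRAM side
    gs_edram = 48e9     # bytes/sec, GS embedded DRAM
    fps = 60

    print(main_bus / fps / 2**20, "MB of textures streamable per frame")  # ~51
    print(gs_edram / main_bus, "x faster once the data is on-die")        # 15x
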
  • 256 Mbit = 32 MByte

    It's 8 bits to a byte in my world.
    FunOne
  • I don't think this is correct. An HDTV that does 1080i can't do 1080p, because that isn't part of the spec. The highest progressive rez is 720p. Lucas is using special cameras to do 1080p 24fps recording, but that's 'cause he has money. Perhaps this is where the confusion is coming in?
  • by ywwg ( 20925 )
    Nope. The Quake _1_ engine was designed so that there was zero overdraw. Every poly drawn was needed, and no extra pixels were covered up.
  • I'm really just not sure what planet you are on. Cause it ain't earth.

    Have you ever actually SEEN a ps2? Not just screenshots on the web? It's freaking beautiful looking. Play Madden 2001 with snow and fog cranked up and tell me it doesn't look kickass. And EA has even stated that this was just their first game so they just cranked it out. Their second generation games always look sooo much better. Madden 2002 will blow your freaking mind.

    And about the modelers not being able to keep up: dream on, buddy. We have to hold them down and threaten them with blunt objects to meet polygon counts. When you render something in 3D you don't do it with triangles. You use the software afterwards to tessellate the model into triangles. Getting high-poly models is as simple as increasing the resolution of the tessellations. You don't design animations using discrete frames either. You use keyed frames or skeletal animation systems, so you just tell it how to move and it breaks it down into as many distinct frames as it feels like, depending on your frame rate. For that matter, most textures are designed at higher resolutions and then scaled down to fit in 256x256 buffers.

    Now, once video cards can render CG stuff in realtime, which is estimated to be at least 10 years off (I don't remember offhand), then we are closer. But even CG doesn't look "real". I don't think graphics cards will EVER hit a point where people look at them and say, "It's got too much power, we can't use it." "Wadda we do with all them triangles."

    Justin Dubs
  • by Anonymous Coward
    No, no, no.

    256 millibytes = about a quarter of a byte.

    That would make it 2 bits.

    This chip contains a Shave and a Haircut.

    :)
  • by grahamwest ( 30174 ) on Wednesday February 07, 2001 @01:18PM (#448434) Homepage
    Ok, the story is light on details and nobody else here seems to have any understanding, so here is the real skinny. This is an expanded version of the GS (Graphic Synthesiser) chip in the PS2. I expect even the same clock speed, judging from the 75 million poly number. By the way, that number is a theoretical peak, based on it taking 2 cycles to do triangle setup of a flat-shaded, untextured polygon.

    Your comment about triangles having 3 verts thus cutting the performance to a third is wrong, though. Tristrips get you pretty close to 1 vert/poly. Each time you kick a vert you use the previous two kicked verts to form your poly. Thus, a 20-poly strip only needs 22 verts. You are correct that texturing and shading require more setup time, however. Generally you're at 5 cycles of setup time and thus 30 million polys/sec.

    2560 bit bus is because you have 16 functional units in parallel, thus 160 bits per unit. 32 bits framebuffer read, 32 bits framebuffer write, 32 bits Z buffer read, 32 bits z buffer write, 32 bits texture read giving 5x32 = 160 bits total. Note you need all these accesses to happen concurrently to fully render a pixel in 1 clock cycle. This is all internal to the chip, too. The external bus interface is 128 bits.

    The advantage of having 32 megabytes of on-die memory is that you can generate many full-screen buffers in 32 bit and use them as texture sources for high-quality image processing effects like motion blur or depth of field or environment mapping. Think of that 32 megabytes as a big cache. You could store many more megabytes of texture in system memory and DMA them up to the GS for rendering as needed.

    This latter fact is also true for PS2. I generally suggest that people think of the PS2's graphics chip (NOT the CPU core) as 16 Voodoo1s in SLI, overclocked to 150MHz, on a 32x AGP bus. To be sure, PS2 has some developer issues, but lack of texture memory is not that high on the list.

    The 'router' comment surely refers to the Emotion Engine itself. Sony developed that chip in a joint venture with Toshiba and it is manufactured in a fab owned by the Sony/Toshiba JV. It's essentially a 300MHz MIPS core with the ability to do lots of floating-point math in parallel.

    I am surprised that this chip is only news now; Sony demonstrated this concept at the last SIGGRAPH (the GSCube machine). Its intended purpose is to replace render farms. Put 16 of these chips together and you could do semi-close-to-Pixar rendering quality in semi-realtime. Good enough to preview animations and lighting and so on.
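
The arithmetic in the post above, spelled out (Python; the 150 MHz clock is the assumption stated there):

    CLOCK_HZ = 150e6

    def tristrip_verts(polys):
        # first triangle costs 3 verts; each additional poly costs 1
        return polys + 2

    print(tristrip_verts(20), "verts for a 20-poly strip")       # 22

    print(CLOCK_HZ / 2 / 1e6, "Mpolys/s at 2 setup cycles")      # 75.0 (peak)
    print(CLOCK_HZ / 5 / 1e6, "Mpolys/s at 5 setup cycles")      # 30.0 (textured/shaded)

    # 16 pixel units x 5 concurrent 32-bit accesses each
    print(16 * 5 * 32, "bit internal bus")                       # 2560
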
  • 462 mm^2 == (21.5 mm)^2, or about (.85 in)^2.
  • I hope to clear up a few misconceptions that people seem to have. I have read some of the replies, and it seems that most people are making valid points that don't take into account all of the factors involved.

    1. As many people have noted, raw polygons are only the underlying factor in the visual quality of a scene.

    2. There are things that are wasteful in both memory bandwidth and processing power, such as redundant pixel rendering against the z-buffer. PowerVR, ATI, and NVIDIA are all using techniques to avoid rendering more pixels than necessary. PowerVR's current card sits in the middle of the market because it lacks hardware T & L (transform and lighting), but it uses what is called tile-based rendering, which is a more elegant way to reduce load. ATI has a somewhat less pure technique called Hyper-Z which decreases memory bandwidth usage (as seen in benchmarks at very high resolutions), and NVIDIA is doing something similar with the NV20 but doesn't have anything built into their current GeForce cards.

    3. Yes, multi-pass rendering is a factor, but at the same time techniques are being used to render multi-textured polygons in one pass instead of many. PowerVR has this feature (at least when used with DirectX).

    4. Yes, anti-aliasing is a factor also, but 4x4 anti-aliasing doesn't have to require 16 times the rendering power. Only the pixels that have enough contrast to contribute to jaggedness in the first place need to be assessed.

    5. The limit on how many polygons actually need to be rendered is MUCH less than one per pixel if enough optimization tricks are used. When proper smoothing algorithms are used, nothing distinguishes a highly faceted sphere from a moderately faceted one except for the edges, which will be smoother with increased polygons. A low-polygon object's edges can be smoothed more efficiently with some 2D tricks. Objects far in the distance can be simplified so that they aren't taking up more polygons than necessary.

    6. More power can always always always be used. And not just for higher resolutions either. One of the things that is so cool about console systems, I think, is that they are made to run at 640x480, so they can use plenty of effects and in the end raise the visual quality quite a bit.

    7. This took me a while, and I didn't preview it.
  • by Christopher Thomas ( 11717 ) on Wednesday February 07, 2001 @01:44PM (#448440)
    I think the real push should start moving away from higher polygon rates and more towards greater visualization enhancements for each polygon. We're already dealing with cool things such as environmental bump mapping. I'm still waiting for the fully featured ray-tracing engine. I'd be perfectly happy with a scene that was only 30fps, 800x600, average number of polygons if I could just feel the glimmer of living light.

    If you have decent calculation engines on-chip, you can use a silly polygon throughput to emulate nicer features that might be difficult to implement directly. Tessellate large polygons to make NURBS surfaces. Add multiple semitransparent "halos" for fancy lighting effects. Use various sneaky tricks to emulate volume effects like smoke and Ye Canonical Plasma Field. Etc.

    You can do all of these in the main CPU, but it bogs down the CPU like crazy and saturates your system bus (sending all of those triangles to the chip). If you can get the chip to do it for you, then it'll look almost as good as real curved surfaces/lighting/etc, without hogging system resources (just rendering resources).

    While a true hardware implementation of nifty features would be more efficient, the brute force approach lets you use mainly well-understood designs, and lets you patch bugs in firmware instead of needing a new chip revision.

    No idea what Sony's actually going to do.
  • Well! I've had all sorts of things promised to me by vendors, but this one takes the cake.

    Look here [duhaime.org] (http://www.duhaime.org/dict-b.htm) man, and get that chip away from me!

    I can't believe he mentioned hair in the same sentence. Ewwww!

  • Money of course, and they see a good market in video games

    I agree with you to a point that M$ is in the game to make some money. I don't think that Microsoft would have entered if Sony had hyped the PS2 as just a game machine; instead Sony hyped it as a home entertainment center (i.e. USB & FireWire ports, theoretical internet capability, DVD, etc.), which would slice into Microsoft's business model of selling Windows (heh, and both WebTV units :-) ) and keeping IE as the dominant browser (which in turn allows them to sell IIS as a viable solution). I think Microsoft was partially threatened into creating a game console.
  • by Pxtl ( 151020 ) on Wednesday February 07, 2001 @02:36PM (#448447) Homepage
    Okay, I seem to have given the mistaken impression that I'm an idiot. Modelling is not that simple, just because the tools are more powerful. Yes, I actually do have experience in 3d modelling - and it is very easy to use tons of polys poorly. The problem is actually making non-crap with them. I mean, with better rendering tools you get higher standards to live up to. People don't expect to see eyelashes and animated blushing on a PS1. With that sort of hardware, such expectations would be quite reasonable.

    You know, like they say in Spider-Man, with great power comes great responsibility? If you took modern modeling and just made more rounded versions of old cheap-looking game sprites (like, say, the tank from Battlezone) then all you'd get is a lot of laughs. Better technology means that more detail is needed (and detail is not easier regardless of how many polys you have), as well as making sure that the polys deform properly.

    Having more polys does not make modelling easier - while it does give you more freedom, it also massively raises the bar. Look at Jurassic Park - look how long they took to make it, and look how shitty every other 3d rendered dinosaur looks in comparison. That's the problem with such powerful technology. Eventually, video cards will be good enough to produce things like Jurassic Park in realtime. Dealing with realistic skin, hair, and things like that is only as easy as you describe if you're working with heavy helpers, which, on one hand, are the only way to go, but on the other have the disadvantage that they limit your control of the environment. Imagine if all documents were made in the lobotomized windoze wizards.

    Yes, modellers tend to model with too many polys and then strip down - but as a 2d artist as well, I always work in at least 4x the res the final work will be in, then scale down. So modellers will probably have to work in even higher detail, then tear out the excess polys from that.

    Look at your face - look at Lara Croft's face. Lara's not that hard to put together; I can pretty well see how it works. Yours is much more complicated. When playing a realistic 3d game, people will expect to see something more like your face than Lara's, if the hardware exists that can do it. That sounds a lot harder to me.

    Oh, and I played a bunch of the 1st-gen PS2 games and found them to be about on par with the Dreamcast, really. A little better, but not the kind of performance they were boasting of. Armored Core, Tekken, and that snowboarding game all look about on par with their Dreamcast counterparts. I haven't seen Madden, though.
  • Comment removed based on user account deletion
  • Oh, I still think most gamers would prefer to look at Lara's face rather than mine. I'm not that pretty. But of course I would be flattered... (especially if it were female gamers).
  • Before any of you freak out about the chip being 1.5ft^2... the actual size is reported at 462 mm^2. Read more closely.
  • Yes, what's your point? That's a big honkin' die.

  • 462mm x 462mm? You want large? There are 25.4 mm per inch. This is saying it is almost 18.2 inches x 18.2 inches (can anyone say 1 1/2 feet by 1 1/2 feet?). I sure hope this is a mistype of the actual size of this beast.
    Ummm, I'm pretty sure he meant 462 mm^2, not 462mmx462mm.

    That's sqrt(462) mm per side, or ~21.5mm x 21.5mm. At a little less than an inch a side, that's reasonable.

    Well, at least until you figure in how many transistors have to be on the chip for it to have the logic and memory mentioned. But then, this is still vapor, so they're probably counting on a .13u process being readily available by then.

    -----
  • And psDooM [sourceforge.net] runs like a bat out of hell. I can't wait for psQuake 3: Arena for SysAdmins to be ported to my Layer 3 switches!

    --
    I assume they mean that the wafer is 21.7x21.3mm^2; this is a little under an inch to a side

    No, the wafer is probably eight inches or more in diameter. The 21.7x21.3mm^2 refers to the size of the silicon die. Many of these will be fabricated simultaneously on one wafer, which is how semiconductor manufacturers get economies of scale.

    The package size depends most strongly on the material used and the number of pins the chip requires (for I/O, power, ground, etc.)

  • Have you looked at a modern CPU lately? 400mm^2 is enormous: about 4x the size of current consumer-level chips. 400mm^2 is about the size of IBM's POWER4 CPU, and Sony won't have the margins IBM has to pay for it. They'd better hope the games sell like hotcakes!
  • But then, this is still vapor, so they're probably counting on a .13u process being readily available by then.

    No, it's not. The Solid-State Circuits Society requires all papers presented at ISSCC to be based on measurements of physical prototypes, not simulations. So the chip has been fabricated, and it does work, or it wouldn't be at the conference.

  • First off: the memory size was a typo, but it's too late for that now.

    Second, Mega means 1E6. M/Meg often refers to 2^20, but not always; it depends on what you're talking about (hard drive, system memory, etc.).
  • The bus runs at CPU speed. However, it is still DRAM, hence the latency is a lot higher than your L1 cache. But yes, you have the right idea: it's a lot faster than it being off-chip, but not so fast as your L1 (or L2, or even L3 (on a P4)) cache.

    And of course, for graphics use, latency is less of an issue than raw bandwidth, because you aren't jumping around looking at a lot of different places in memory (like a PC OS); you are trying to grab large sections of memory (textures) and keep 'em coming (bus speed).

    Spyky
  • You glossed over one point, and overemphasised another in my mind.

    The overemphasis was k vs. K. It's not an extreme mistake to use K instead of k, because K has no meaning of its own. It _is_ still wrong, but it's not as egregious a mistake as confusing b/B or m/M.

    The second point isn't one of SI notation, but strictly computer notation. When talking about computers, counting is ALWAYS done in powers of two! So...
    k is 2^10 = 1024 NOT 1000!
    M is 2^20 = 1024^2 = 1048576 NOT 1000000!

    The reason this confusion came about was that drive manufacturers found they could up the advertised size of their disks by nearly five percent, and sell more of an identically sized drive than the competition. The lie of k=1000, M=1000000 in computing was pure and sleazy marketing. No more.
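
For what it's worth, the gap between the two conventions is about 5% per prefix (Python):

    print(256 * 10**6 / 8 / 2**20)   # ~30.5 MB if "Mbit" means 10^6 bits
    print(256 * 2**20 / 8 / 2**20)   # exactly 32.0 MB if "Mbit" means 2^20 bits
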

  • The guy has a point, a tessellated mesh isn't a NURB :). And how would you calculate rendering in NURBS? JFYI, 3D programs convert subpatch/NURBS to polygons before rendering. The more detail you want, the more polygons (normally triangles) they will add to mimic the NURB curve.

    As you point out, a NURB is usually _implemented_ as a tessellated mesh (though not a flat one, so we may just be disagreeing over the definition of "tessellated mesh").

    For implementation, you'd either let the graphics card indiscriminately assume that all triangles are really curved surfaces, and interpolate and tessellate surfaces with normals and corner vertices matching the original triangle's corner vertices and normals, or define a GL extension for such curved patches (possibly specified by NURBS parameters, possibly not).

    The first approach gives you benefits for all models, though it can cause nasty artifacts on models that aren't constructed nicely, and the second approach makes all of your models look fine, but requires that the programmer know about the extension to take advantage of it.

    Implement the translation (if any) in GLU, and your Playstation III programmers don't even have to know about it. Distribute a modified GLU library with your SDK that probes for this feature and uses it to accelerate GLU's NURBS, and you might even get game designers using this in other games (paving the way for a PC graphics card based on this or a similar chip).

    I doubt Sony's actually _doing_ this, but that's one of the things I'd use a high poly-rate chip for if I was writing the firmware and SDK for it.
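
A minimal sketch of the tessellation idea under discussion (Python; a cubic Bezier curve stands in for a NURBS patch, and the control points are arbitrary examples):

    def bezier(points, t):
        # de Casteljau evaluation of a Bezier curve in 2D
        while len(points) > 1:
            points = [((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])
                      for a, b in zip(points, points[1:])]
        return points[0]

    ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]   # arbitrary control points
    segments = 8                              # more segments = smoother silhouette
    mesh = [bezier(ctrl, i / segments) for i in range(segments + 1)]
    print(mesh)
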
  • The second point isn't one of SI notation, but strictly computer notation. When talking about computers, counting is ALWAYS done in powers of two!

    When dealing with communication channels, k=1000 and M=1E6. Bytes, or more properly, octets, are a unit for storage devices.

    The usage of b=bit and B=byte is not universal. BPS, KBPS and MBPS refer to "bits per second", not "bytes per second". These were in widespread use long before bytes became a common unit.

  • by joss ( 1346 )
    Billion used to be 10^12, but now common UK English has billion at 10^9, and trillion at 10^12. I'm not sure when the change occurred; I vaguely think it changed about the same time a shilling became 5p instead of 12d.

    Whenever someone says a billion pounds they mean 10^9, not 10^12. You must find the financial news very confusing.
  • Oops! I should have said storage systems, rather than computing. Quite right about communication channels--I stand corrected.

    As for the BPS/KBPS/MBPS notation, they predate ASCII, and mixed case digital notation in general. I haven't seen them used except by the old guy shuffling off to retirement, for at least a decade. Ethernet, token ring, and modem communications all seem to use mixed case notation now.

  • By writing a driver for POV-Ray, and given enough processing power, could I use a Beowulf cluster as my graphics card? And play nicely rendered Counter-Strike? ...and get more girls?

    Yes, no (unless you like playing at 5 beautiful frames per second), and only if they're geeks, respectively.
  • Are you referring to things like walking and jumping (think of the crappy stop-motion Terminator at the end of T-1, or the lack of weight the jumping raptors had in Jurassic Park (*))?

    I had always assumed that physical models and evolution would be the way to go -- you model your Velociraptor and then assign a couple of centers of gravity. Then you drop it in your physical model and let it learn how to balance, run, and jump overnight. Basically, you evolve a dynamic control center for it (neural net probably, underneath a subsumption architecture).

    The main problem is that when you now want to constrain the movement -- for example you want it to turn its head halfway through a jump, you might have to go and retrain it for that scenario.

    Ok, that was all speculation: can anyone with experience in the field comment on how far in the future the above scenario is?

    (*) Mind you, Crouching Tiger went to great pains to get exactly that effect in its fight scenes...
