Sony's Monster Graphics Chip 148
GFD writes "EETimes has an article about a monster (462-mm2!!) graphics chip discussed in a paper at the ISSCC. The numbers are astounding, such as 256 Mbits of on-chip memory. Barely manufacturable though..." I'd still love to see what that bugger can do... bet it still can't simulate realistic hair in real time ;)
Can you imagine Quake 3 (Score:1)
You don't need to simulate real hair in real time (Score:2)
Does not compare (Score:1)
Re:Can you imagine Quake 3 (Score:1)
462-mm2!! (Score:1)
Just imagine a card with two of these. It'd be...carry the one....924-mm4!
--
MailOne [openone.com]
Re:Awesome! (Score:1)
Ashes of Empires and bodies of kings,
Holy Cow! (Score:1)
Khyron
only 75 million polygons? (Score:2)
Re:462-mm2!! (Score:1)
sounds barely believable...
I wonder if they'll have the same problems..... (Score:3)
I'm all for this hardware, but ya gotta wonder: can we even properly use it.... then again, that's been said many times before.
A few thoughts on this. (Score:5)
First of all, this sounds like the Emotion Engine hype all over again. It might be an amazing chip, but it'll probably just be "decent" when it finally gets here.
Secondly, don't expect to see this in quantity until 0.15/0.13 micron fabs get here. Remember the Emotion Engine. Fabbing a chip that big is a royal pain. It'll get much easier when finer linewidths shrink the die size.
Thirdly, CMOS fabrication processes can be optimized for good quality DRAM, or for good quality logic. Not both (without throwing lots of money at it). The two types of circuit have contradictory requirements for transistor characteristics. In practice, this has meant that DRAM-plus-core chips have either had slow cores or bulky, slow, hot DRAM.
The only saving grace is that most of the chip area will be DRAM. This means that most of it will be tolerant of manufacturing faults (you usually have more DRAM rows than you need, and cut out the faulty ones before packaging). This is the only thing that will let them fab a chip this size at all.
The chip should provide interesting perspective when it arrives (much as the Emotion Engine did), but I don't expect it to take the world by storm.
a great idea (Score:4)
For....graphics? "Hey, this is great!" "What are you talking about, we lost two whole subnets!?" "Yeah, but look at how beautifully those error messages are rendered"
--
Re:The final chip? (Score:2)
Nope (Score:3)
I've seen estimates that figure it would take about 50-200 million polygons to render a modest scene photo-realistically. Now multiply that by 60 frames/s. You are already talking about 3-12 billion polys/s here, and we haven't even started talking about extremely complex surfaces like hair/fur/grass/leaves.
I think we will be building chips for some time before we reach the same clarity with 3d that motion video currently does in 2d.
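The arithmetic behind those figures, as a quick sketch (the 50-200 million per-frame budget is the poster's estimate, not an established number):

```python
# Sustained polygon throughput implied by a per-frame budget at 60 fps.
def polys_per_second(polys_per_frame: int, fps: int = 60) -> int:
    """Polygons per second needed to hold a per-frame budget at a given fps."""
    return polys_per_frame * fps

low = polys_per_second(50_000_000)    # 3 billion polys/s
high = polys_per_second(200_000_000)  # 12 billion polys/s
print(f"{low:,} to {high:,} polys/s")
```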
uh (Score:1)
Whoa... so they got 32 megs of RAM on a chip that happens to be a huge chip. Whoop!
I suppose that's more impressive than 32 megs on a 7" chipset...
Re:256 megabits? That's 32 megabytes.. (Score:1)
Re:Holy Cow! (Score:1)
It's been done before. I have an Indigo2 R10K Extreme on my desk at work, and the fan on the video card keeps me warmer than the other 3 CPU fans from other machines combined.
But oh MAN does it render so nicely...
Re:The final chip? (Score:1)
Re:The final chip? (Score:5)
>Therefore, logically, when we reach 50 million
>polygons/second in calculation for a graphics
>chip, it is effectively impossible to make the
>graphics quality any better without improving
>the quality of the screen.
Oh Bollocks. Just spitting a pixel to the screen has nothing to do with the overall quality of the image that is produced. Anti-aliasing. Motion blur. Depth of field. Programmable shading (no more of this Gouraud/Phong with badly mapped textures etc etc). Don't even get me started ---- TONS of effects that can be incorporated. Hair, fur, skin, particles, atmospheric effects, lens effects, volume rendering effects, etc etc etc.
Until you can make a CG image indistinguishable from a live source at that resolution there is TONS that can be improved.
Have you worked in the graphics biz? I have......
j
Re:The final chip? (Score:1)
Re:The final chip? (Score:1)
Re:The final chip? (Score:1)
-B
Hair (Score:2)
Jesus taco, enough pr0n talk already...
--
Clarification... (Score:2)
(snip)
I assume they mean that the wafer is 21.7x21.3mm^2, which is a little under an inch to a side. At first read I thought they meant total package size, which isn't that impressive considering even the size of an old PII.
This will kill the X-Box (Score:1)
Let's hope Sony doesn't alienate its developers like they did for the PS2.
fialar
This thing is large! (Score:1)
However, if this is another stupid die-shrinking example, I strongly advise you to go to your local Sony representative, and slap him or her in the face.
NO wayyy!! (Score:1)
Render speed? (Score:4)
So this chip has the same fill rate, but 8x the RAM, only 2x the RAM ports, and 7x the complexity?
It sounds to me like Sony have just made this an 8x multitexturing part at *huge* expense. And an 8x multitexturing part with only 2x the internal bus for texture cache reloading. Slow.
And supersampled antialiasing will cost you 75% of your fillrate, since that isn't increased either.
I just don't understand who this chip is for.
Barely manufacturable? IBM tried this... (Score:1)
In a smaller way, chip manufacturers found this out, too. They were doing it for a different reason, though -- the finite speed of signals slows things down. You want a short path from memory to CPU (like the in-package cache of the Pentium Pro), but manufacturing wasn't up to par (thus the excessive failures of the P-Pro). Apparently, the tech hasn't advanced enough to produce such huge chips without excessive loss. For instance, slot-1 tech, with cache stored external to the CPU, lets you match working components and toss only the failures.
The PS2 lost huge percentage of chips due to technical problems, and so will this if attempted without some new die manufacturing tech.
SI is fun!!! (Score:4)
Wow, two hundred fifty six millibits of on chip memory. That's like, what, almost 1/20th of a byte?
This thing is already obsolete. (Score:1)
So what? The NV20 will probably be close to that when it's released, and will eventually beat it. Not to mention the NV20 is only a matter of months away, and this thing probably won't even live to see mass production.
Yield (Score:2)
Y = ((1 - e**(-AD))/AD)**2, where A is area, D is defect density
rather than the other Murphy's law which affects all our lives). However there are remedial measures, especially with DRAMs, which can keep Murphy at bay. These largely have to do with redundancy, which is to say one designs the RAM array with many more rows than can actually be addressed, then one detects dead or malfunctioning rows at device test and substitutes in the spares. This is a relatively easy thing to do with a nice regular structure like a RAM.
Also one wonders whether with that many polygons squirting past the eyeballs, is it acceptable to ignore a modest number of defects? After all, human vision is reasonably fault tolerant compared to many computing applications.
Even so, I take my hat off to anyone who gets acceptable yield from a device more than 2cm on a side. RESPECT!
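Plugging the formula above in directly (the defect density here is an assumed illustrative figure, not Sony's actual process data):

```python
import math

def murphy_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Murphy's yield model: Y = ((1 - e^(-A*D)) / (A*D))^2."""
    ad = area_cm2 * defects_per_cm2
    return ((1.0 - math.exp(-ad)) / ad) ** 2

# 462 mm^2 = 4.62 cm^2; 0.5 defects/cm^2 is a made-up but plausible density.
print(f"{murphy_yield(4.62, 0.5):.1%}")  # roughly 15% of dice survive
```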
Robert
Re:The final chip? (Score:2)
I thought that HDTV (if it will ever exist
Re:This thing is large! (Score:1)
Now, 462 Trapezoidal millimeters, that'd be an accomplishment.
Re:The final chip? (Score:1)
Re:Wait a minute. (Score:1)
Re:This will kill the X-Box (Score:2)
Re:Can you imagine Quake 3 (Score:1)
Re:This thing is large! (Score:1)
Not so big - Re:This thing is large! (Score:2)
Screens ? (Score:2)
Actually the resolution of a standard TV is far less accurate than that of any PC monitor out there. It's just that you don't notice it because you're too far away, the signal is analog instead of digital, and the pixels are auto-antialiased. I think somebody once told me the resolution of a TV corresponds to something like 625 lines, 50 Hz, 2:1 interlace, 4:3 aspect ratio. So if you play your game on a regular TV you're wasting an awful lot of detail (& computational power), yet all these powerful and expensive gaming consoles usually are connected to home TVs... strange thing, no?
Re:The final chip? (Score:1)
Re:The final chip? (Score:2)
-Michael
Re:This thing is large! (Score:1)
--
Re:The final chip? (Score:1)
Re:Can you imagine Quake 3 (Score:1)
Re:462-mm2!! (Score:1)
Re:The final chip? (Score:1)
The comparison *is* invalid. Remember that FPS measurements are averages, not sustained performance. 60fps isn't that great if the card slows down to 20 or less in a critical moment of the game when things get heavy. There is much to be said for people who claim to see the diff. at higher framerates.
--
Re:This thing is large! (Score:1)
Re:Nope (Score:3)
These days, I'm not impressed by making the numbers of yesterday's technology bigger. Perhaps with this on-board memory, Sony could venture into some realm of high-bandwidth calculations. Not being well enough versed in the industry, I can't venture to make guesses though (voxels or better shading techniques maybe?)
-Michael
Re:The final chip? (Score:1)
Re:462-mm2!! (Score:1)
Oops... wasn't supposed to tell you about that quantum computer sitting in my garage....
Re:The final chip? (Score:1)
Re:This thing is large! (Score:1)
Re:The final chip? (Score:2)
Hidden surface removal is only really a factor when the card can't handle the current volume. It scales with the complexity of the scene. Plus there are technologies such as ATI's (and now nVidia's) that help reduce the effect considerably.
FSAA is largely irrelevant when you achieve high enough resolutions. Results may vary though.
Stereo has never been a major factor, nor do I think it'll really catch on; especially on a console, unless you split the output signal.
Still, there are plenty of other common sense arguments promoting continued bleeding-edge development. Not least of which is the fact that the intros are still rendered separately.
-Michael
Re:462-mm2!! (Score:1)
Re:The final chip? (Score:2)
I hardly think our realism barrier consists mainly of faster-than-TV refresh rates. If that was the case, I could get out my CGI-pong video game and run it up to 1,000fps.
Better yet, net hack!!!
-Michael
Re:The final chip? (Score:2)
as many others have replied you're missing a lot about what pixel rates are about (hidden pixels, alpha blending, antialiasing etc etc)
However, there is a grain of truth in what you're getting at that may eventually result in a whole new generation of hardware. Basically it's this: the number of visible pixels on the screen isn't really changing much, certainly not at the same rate as silicon's ability to manipulate them. That means rendering techniques whose cost is proportional to the number of pixels (rather than scene complexity) may become more interesting. Ray tracing, for example: the number of rays is (to a first approximation) proportional to the number of pixels rather than the scene complexity. (The cost of processing each ray also goes up with scene complexity, but not necessarily at the same rate if you are careful.)
Re:I wonder if they'll have the same problems..... (Score:1)
Most things are done programmably. Instead of adding each hair by hand you set some settings and define where they should be. Or you define some control points and the NURBS engine renders the polys. Or you define how much snow is falling from where and how fast and the system calculates all the collisions. And a million other effects that create massive polys that do NOT take a long time to create. (Relatively speaking)
FunOne
Re:A few thoughts on this. (Score:2)
As a general trend, chip manufacturers are claiming all sorts of strange records in order to keep attention, and the "nr. 1" idea, focused on their company. Remember AMD and Intel fighting for first spot, Transmeta with its paperware, XBOX selling future nVidia designs like they are here today, nVidia topping off 3DFX with really fast (and big, and dense) processors and very ugly image quality? So you see them putting out new designs and new chips that in reality don't make much sense, are even absurd in some cases, but that are put out nevertheless to play king of the hill with competitors. The public can only enjoy glimpses of that wealth about 6 or 12 months later. This chip is no different.
For instance, the chip Sony is proposing can only be put to work in a dedicated hardware environment, like a PlayStation II for instance. Even with this much memory on-chip, you still have bus issues, though they won't play as big a part. You should not forget that while this chip opens up new possibilities, games will have evolved as well by the time this chip arrives in full quantities (which will be in about 2 years, enough time to get revenue out of the PSII), and will probably even be limited by this kind of a design. My guess is DRAM on chip will help for games that exist today, but won't do for games that we'll play in 2 years' time (as the 75 million polys per second rate suggests). I'm thinking FireWire and optical here. So clearly this chip is heading for dedicated and expensive platforms like the PlayStation III, and possibly PC video cards as well, but I expect nVidia to have a serious advantage in performance by that time, because they know their designs inside out and know where the additional gains can be found in PC architectures, not to mention the experience they are getting from the XBox design, which will gain considerable market share from the PS II if it should prove to be stable in gameplay. Let's hope all this designing and showing-off in the end does arrive where we want it, which is in our boxes!
Re:The final chip? (Score:2)
How is Sony expecting to get decent yields on such a big die? The probability of a chip flaw goes up with the surface area.
Answer:
256 Mbit = 32 MByte = roughly 256 million HIGHLY symmetric transistors (about one per DRAM cell). We're already building chips with 20, 30 and even 100 million transistors in complex layouts (granted, most is in symmetric caching or register sets). Additionally, single-ported memory is a lot simpler than multi-ported LRU-tagged cache. So while Intel, AMD, Alpha, and Sun fuss over 4 MB L2 cache sizes, we're all in a similar ball-park.
Next, a little over a year ago, I read an article about a new DRAM memory architecture that was designed for extremely high yields. Basically you'd have dozens, hundreds or thousands of mostly independent memory cells; then after the testing stage, you marked which cells were good, which allowed the memory to ignore bad chunks transparently. As long as you met a minimum memory size, you were golden. If a similar technology is used here, then they'll probably over-allocate a bit and allow somewhat less than the nominal capacity to be considered passing.
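A toy sketch of that mark-the-good-rows scheme, with made-up row counts (the function name and numbers are illustrative, not from any real part):

```python
# Fabricate more rows than the addressable capacity, test the die, then
# map logical rows around the dead ones. If too few rows work, scrap it.
def build_row_map(total_rows, needed_rows, bad_rows):
    good = [r for r in range(total_rows) if r not in bad_rows]
    if len(good) < needed_rows:
        return None  # not enough working rows: the die fails test
    return good[:needed_rows]  # logical row i -> physical row good[i]

row_map = build_row_map(total_rows=1056, needed_rows=1024, bad_rows={3, 17, 900})
print(row_map[:5])  # [0, 1, 2, 4, 5] -- rows 3 and 17 are skipped
```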
However, I've read other interesting questions such as "are they going to optimize this for power consumption / heat dissipation or performance?"
-Michael
32 Megabytes is for Girls (Score:1)
Why would they cite the memory in megaBITS? Sure it sounds more impressive to people who don't know the difference between a megabyte and a megabit, but it's still ONLY 32 MEGABYTES!
Girly.
Re:This will kill the X-Box (Score:1)
It disturbs me that people just can't seem to get it into their heads that Microsoft is a capitalist organization. What drives them? Money, of course, and they see a good market in video games. Microsoft has been producing software for about as long as I have been around. I'd really love to see whether Nintendo is still in the game as far as consoles go (no pun intended).
Yes I am expecting to get modded down for being pro-Microsoft, but if it weren't for their products I would not have a career.
Think time (Score:2)
462mm x 462mm? You want large? There are 25.4 mm per inch. This is saying it is almost 18.2 inches x 18.2 inches (can anyone say 1 1/2 feet by 1 1/2 feet?). I sure hope this is a mistype of the actual size of this beast.
256 Mbit of memory? That comes out to 32 MB of memory. I've got a GeForce2 GTS coming in the mail with 32 MB of mem. Granted, the memory is embedded in the chip itself, but I think that would result in the price being a lot more, especially if you want 64 MB of mem.
75 million polys/sec. Sure, when the chip has nothing else going on, doesn't have to worry about lighting or textures, and the triangles it is drawing are all touching each other so there are fewer vertices to draw. Splitting the triangles up so there are 3 vertices being drawn per triangle will easily drop this number to 1/3 of it. Throw in some lighting and it drops more. Same with textures.
"the chip can process 75 million polygons per second, has a pixel fill rate between 1.2 and 2.6 gigapixels/s and can draw 75 million polygons/s". Anyone like being redundant? I count 2 things in there and it seems they are searching for features.
A 2,000 bit internal bus means a 250 byte internal bus. Why 250? Why not 256? Most chips have a maximum internal bus sized to however many bits the chip can handle. If the chip is a 128-bit chip then it appears to have a bus double that, so it is feeding the chip faster than the chip can empty it. This could be good but it can also be bad.
With all said and done, the Sony graphics chip is 4x as big as nVidia's GeForce2 GTS and only 2x the power. Yep, let's slap a huge beast into a machine that probably sucks up the power supply and generates more heat than the CPUs.
The next evolutionary step in interactive 3D is... (Score:1)
Unfortunately, this technique doesn't rely on the enormous fill rates that this new Sony chip probably offers; rather, it requires an incredibly fast, integrated hardware lighting engine. Nothing in the article mentions this, and in the current PS2, rasterization and hardware T&L, while they work together, are completely separate entities. Could be a while...
--Terrence
Hair (Score:2)
See Shenmue on Dreamcast, particularly on the Passport disc where you can zoom in close to the characters' faces. The game supports the best realtime skin and hair work I've seen (right now, beating the Playstation 2).
Hmmm (Score:2)
Re:only 75 million polygons? (Score:1)
"// this is the most hacked, evil, bastardized thing I've ever seen. kjb"
Hey Morons... (Score:3)
Re:32 Megabytes is for Girls (Score:1)
I think you fail to realize the fundamentals of the architecture here. Based on my experience with the PS2, the machine this thing is eventually put in will have an incredibly fast bus. PCs are severely limited by the fact that any data the graphics card needs must be sent over the AGP/PCI bus to the card. This is slow. However, the PS2 has a very fast bus from RAM to the GS (DMA, no need to interrupt the CPU), and the bandwidth from the GS local memory to the GS is INSANELY fast.
So while a PC must have a lot of local storage in the video card (which isn't even on die, which makes it much slower) because texture transfers are very expensive, a PS2 (and logically its successor) doesn't need as much, simply because texture transfers are so cheap. And having the memory on die makes writing to/reading from texture buffers or the frame buffer also incredibly fast, thus increasing fill rate.
--Terrence
Re:The final chip? (Score:1)
Its 8 bits to a byte in my world.
FunOne
Re:The final chip? (Score:2)
Re:Nope (Score:2)
Re:I wonder if they'll have the same problems..... (Score:2)
Have you ever actually SEEN a ps2? Not just screenshots on the web? It's freaking beautiful looking. Play Madden 2001 with snow and fog cranked up and tell me it doesn't look kickass. And EA has even stated that this was just their first game so they just cranked it out. Their second generation games always look sooo much better. Madden 2002 will blow your freaking mind.
And about the modelers not being able to keep up. Dream on, buddy. We have to hold them down and threaten them with blunt objects to meet polygon counts. When you model something in 3D you don't do it with triangles. You use the software afterwards to tessellate the model into triangles. Getting high-poly models is as simple as increasing the resolution of the tessellation. You don't design animations using discrete frames either. You use keyframes or skeletal animation systems, so you just tell it how to move and it breaks it down into as many distinct frames as it feels like depending on your frame rate. For that matter, most textures are designed at higher resolutions and then scaled down to fit in 256x256 buffers.
Now, once the video cards can render CG-quality stuff in realtime, which is estimated to be at least 10 years off (I don't remember offhand), then we are closer. But even CG doesn't look "real". I don't think graphics cards will EVER hit a point where people look at them and say, "It's got too much power, we can't use it." "Wadda we do with all them triangles."
Justin Dubs
Re:SI is fun!!! (WRONG!) (Score:2)
256 millibytes = about a quarter of a byte.
That would make it 2 bits.
This chip contains a Shave and a Haircut.
:)
Re:Think time (Score:5)
Your comment about triangles having 3 verts thus cutting the performance in half is wrong though. Tristrips get you pretty close to 1 vert/poly. Each time you kick a vert you use the previous two kicked verts to form your poly. Thus, a 20 poly strip only needs 22 verts. You are correct that texturing and shading require more setup time, however. Generally you're at 5 cycles of setup time and thus 30 million polys/sec.
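The strip accounting above in two lines (the helper names are mine, purely for illustration):

```python
# A triangle strip reuses the previous two vertices for each new triangle,
# so an n-poly strip needs n + 2 vertices; independent triangles need 3n.
def strip_verts(polys: int) -> int:
    return polys + 2

def independent_verts(polys: int) -> int:
    return polys * 3

print(strip_verts(20), independent_verts(20))  # 22 vs 60
```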
2560 bit bus is because you have 16 functional units in parallel, thus 160 bits per unit. 32 bits framebuffer read, 32 bits framebuffer write, 32 bits Z buffer read, 32 bits z buffer write, 32 bits texture read giving 5x32 = 160 bits total. Note you need all these accesses to happen concurrently to fully render a pixel in 1 clock cycle. This is all internal to the chip, too. The external bus interface is 128 bits.
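Reconstructing the 2560-bit figure from that accounting:

```python
# Five concurrent 32-bit accesses per pixel pipeline, 16 pipelines in
# parallel, as described in the comment above.
accesses = {
    "framebuffer read": 32, "framebuffer write": 32,
    "Z read": 32, "Z write": 32, "texture read": 32,
}
bits_per_unit = sum(accesses.values())  # 160 bits per functional unit
units = 16
print(bits_per_unit * units)            # 2560-bit internal bus
```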
The advantage of having 32 megabytes of on-die memory is that you can generate many full-screen buffers in 32 bit and use them as texture sources for high-quality image processing effects like motion blur or depth of field or environment mapping. Think of that 32 megabytes as a big cache. You could store many more megabytes of texture in system memory and DMA them up to the GS for rendering as needed.
This latter fact is also true for PS2. I generally suggest that people think of the PS2's graphics chip (NOT the cpu core) as 16 Voodoo1s in SLI, overclocked to 150mhz, on a 32x AGP bus. To be sure, PS2 has some developer issues but lack of texture memory is not that high on the list.
The 'router' comment surely refers to the Emotion Engine itself. Sony developed that chip in a joint venture with Toshiba and it is manufactured in a fab owned by the Sony/Toshiba JV. It's essentially a 300MHz MIPS core with the ability to do lots of floating-point math in parallel.
I am surprised that this chip is only news now; Sony demonstrated this concept at the last SIGGRAPH (the GSCube machine). Its intended purpose is to replace render farms. Put 16 of these chips together and you could do semi-close-to-Pixar rendering quality in semi-realtime. Good enough to preview animations and lighting and so on.
462 mm^2 != (462 mm)^2 (Score:2)
Re:The final chip? (Score:2)
1. As many people have noted, raw polygons are only the underlying factor in the visual quality of a scene.
2. There are things that are wasteful of both memory bandwidth and processing power, such as redundant pixel rendering against the z-buffer. PowerVR, ATI, and NVIDIA are all using techniques to avoid rendering more pixels than necessary. PowerVR uses what is called tile-based rendering, which is a more elegant way to reduce load (its current card sits in the mid-range because it lacks hardware T&L, i.e. transform and lighting). ATI has a somewhat less pure technique called Hyper-Z which decreases memory bandwidth usage (as seen in benchmarks at very high resolutions), and NVIDIA is doing something similar with the NV20 but doesn't have anything built into their GeForce cards.
3. Yes, multi-pass rendering is a factor, but at the same time techniques are being used to render multi-textured polygons in one pass instead of many. PowerVR has this feature (at least when used with DirectX).
4. Yes, anti-aliasing is a factor also, but 4x4 anti-aliasing doesn't have to require 16 times the rendering power. Only the pixels that have enough contrast to contribute to jaggedness in the first place need to be assessed.
5. The limit on how many polygons actually need to be rendered is MUCH less than one per pixel if enough optimization tricks are used. When proper smoothing algorithms are used, nothing distinguishes a moderately faceted sphere from a finely faceted one except for the edges, which will be smoother with increased polygons. A low-polygon object's edges can be smoothed more efficiently with some 2D tricks. Objects far in the distance can be simplified so that they aren't taking up more polygons than necessary.
6. More power can always always always be used. And not just for higher resolutions either. One of the things that is so cool about console systems, I think, is that they are made to run at 640x480, so they can use plenty of effects and in the end raise the visual quality quite a bit.
7. This took me a while, and I didn't preview it.
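Point 4's idea, supersampling only the high-contrast pixels, can be sketched roughly like this (grayscale image as nested lists; the threshold is an arbitrary illustrative value):

```python
# Flag only pixels whose right or lower neighbor differs enough to produce
# a visible jaggy; everything else can skip the extra AA samples.
def needs_aa(img, threshold=64):
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(img[y][x] - img[ny][nx]) > threshold:
                    mask[y][x] = mask[ny][nx] = True
    return mask

img = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]  # hard vertical edge
mask = needs_aa(img)
print(sum(row.count(True) for row in mask))  # 6: only the edge pixels
```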
Using polygons to fake fancier primitives. (Score:4)
If you have decent calculation engines on-chip, you can use a silly polygon throughput to emulate nicer features that might be difficult to implement directly. Tessellate large polygons to make NURBS surfaces. Add multiple semitransparent "halos" for fancy lighting effects. Use various sneaky tricks to emulate volume effects like smoke and Ye Canonical Plasma Field. Etc.
You can do all of these in the main CPU, but it bogs down the CPU like crazy and saturates your system bus (sending all of those triangles to the chip). If you can get the chip to do it for you, then it'll look almost as good as real curved surfaces/lighting/etc, without hogging system resources (just rendering resources).
While a true hardware implementation of nifty features would be more efficient, the brute force approach lets you use mainly well-understood designs, and lets you patch bugs in firmware instead of needing a new chip revision.
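A minimal sketch of the tessellation idea above: approximate a curved parametric patch with a grid of triangles (the sphere patch and grid resolution are illustrative choices, not anything Sony describes):

```python
import math

# Split the unit parameter square into nu*nv cells, two triangles each,
# sampling the surface function f(u, v) at the cell corners.
def tessellate(f, nu: int, nv: int):
    tris = []
    for i in range(nu):
        for j in range(nv):
            p = [f(i / nu, j / nv), f((i + 1) / nu, j / nv),
                 f((i + 1) / nu, (j + 1) / nv), f(i / nu, (j + 1) / nv)]
            tris.append((p[0], p[1], p[2]))
            tris.append((p[0], p[2], p[3]))
    return tris

def sphere_patch(u, v):
    theta, phi = u * math.pi, v * 2 * math.pi
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

print(len(tessellate(sphere_patch, 8, 8)))  # 128 triangles
```

Doubling the grid resolution quadruples the triangle count, which is exactly where a huge raw polygon rate gets spent.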
No idea what Sony's actually going to do.
bugger? (Score:2)
Look here [duhaime.org] (http://www.duhaime.org/dict-b.htm) man, and get that chip away from me!
I can't believe he mentioned hair in the same sentence. Ewwww!
Re:This will kill the X-Box (Score:2)
I agree with you to a point that M$ is in the game to make some money. I don't think that Microsoft would have entered if Sony had hyped the PS2 as a game machine, but instead Sony hyped it as a home entertainment center (i.e. USB & FireWire ports, theoretical internet capability, DVD, etc), which would slice into Microsoft's business model of selling Windows (heh, and both WebTV units
Re:I wonder if they'll have the same problems..... (Score:3)
You know, like they say in Spider-Man, with great power comes great responsibility? If you took modern modeling and just made more rounded versions of old cheap-looking game sprites (like, say, the tank from Battlezone) then all you'd get is a lot of laughs. Better technology means that more detail is needed (and detail is not easier regardless of how many polys you have) as well as making sure that the polys deform properly.
Having more polys does not make modelling easier - while it does give you more freedom, it also massively raises the bar. Look at Jurassic Park - look how long they took to make it, and look how shitty every other 3d rendered dinosaur looks in comparison. That's the problem with such powerful technology. Eventually, video cards will be good enough to produce things like Jurassic Park in realtime. Dealing with realistic skin, hair, and things like that is only as easy as you describe if you're working with heavy helpers, which, on one hand are the only way to go, but on the other have the disadvantage that it limits your control on the environment. Imagine if all documents were made in the lobotomized windoze wizards.
Yes, modellers tend to build with too many polys then strip down -- but as a 2D artist as well, I always work in at least 4x the res the final work will be in, then scale down. So, modellers will probably have to work in even higher detail, then tear out the excess polys from that.
Look at your face - look at Lara Croft's face. Lara's not that hard to put together, I can pretty well see how it works. Yours is much more complicated. When playing a realistic 3d game, they will expect to see something more like your face than Lara's, if the hardware exists that can do it. That sounds a lot harder to me.
Oh, and I played a bunch of the 1st-gen PS2 games and found them to be about on par with the Dreamcast, really. A little better, but not the kind of performance they were boasting of. Armored Core, Tekken, and that snowboarding game all look about on par with their Dreamcast counterparts. I haven't seen Madden though.
Re: (Score:2)
Re:I wonder if they'll have the same problems..... (Score:2)
Hold it... (Score:2)
Re:Hey Morons... (Score:2)
Re:Think time (Score:2)
Ummm, I'm pretty sure he meant 462 mm^2, not 462mmx462mm.
That's sqrt(462) per side, or ~21.5mmx21.5mm. At a little less than an inch a side, that's reasonable.
Well, at least until you figure in how many transistors have to be on the chip for it to have the logic and memory mentioned. But then, this is still vapor, so they're probably counting on a
-----
Re:a great idea (Score:2)
--
Re:Clarification... (Score:2)
No, the wafer is probably eight inches or more in diameter. The 21.7x21.3mm^2 refers to the size of the silicon die. Many of these will be fabricated simultaneously on one wafer, which is how semiconductor manufacturers get economies of scale.
The package size depends most strongly on the material used and the number of pins the chip requires (for I/O, power, ground, etc.)
Re:Hey Morons... (Score:2)
Re:Think time (Score:2)
No, it's not. The Solid-State Circuits Society requires all papers presented at ISSCC to be based on measurements of physical prototypes, not simulations. So the chip has been fabricated, and it does work, or it wouldn't be at the conference.
Re:The final chip? (Score:2)
Second, Mega means 1E6. M/Meg often refers to 2^20, but not always.. Depends on what you're talking about (such as hard drive, system memory, etc).
Re:32 Megabytes is for Girls (Score:2)
And of course, for graphics use, latency is less of an issue than raw bandwidth, because you aren't jumping around looking at a lot of different places in memory (like a PC OS) but you are trying to grab large sections of memory (textures) and keep 'em coming (bus speed).
Spyky
Re:Does no one understand basic notation systems? (Score:2)
The overemphasis was k vs. K. It's not an extreme mistake to use K instead of k, because K has no meaning of its own. It _is_ still wrong, but it's not as egregious a mistake as confusing b/B or m/M.
The second point isn't one of SI notation, but strictly computer notation. When talking about computers, counting is ALWAYS done in powers of two! So...
k is 2^10 = 1024 NOT 1000!
M is 2^20 = 1024^2 = 1048576 NOT 1000000!
The reason this confusion came about was that drive manufacturers found they could up the advertised size of their disks by nearly five percent, and sell more of an identically sized drive than the competition. The lie of k=1000, M=1000000 in computing was pure and sleazy marketing. No more.
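The "nearly five percent" figure checks out:

```python
# Difference between the binary and decimal meanings of "mega".
MEG_BINARY = 2 ** 20    # 1048576
MEG_DECIMAL = 10 ** 6
gain = (MEG_BINARY - MEG_DECIMAL) / MEG_DECIMAL
print(f"{gain:.2%}")    # 4.86% larger advertised capacity
```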
Re:Using polygons to fake fancier primitives. (Score:2)
As you point out, a NURBS surface is usually _implemented_ as a tessellated mesh (though not a flat one, so we may just be disagreeing over the definition of "tessellated mesh").
For implementation, you'd either let the graphics card indiscriminately assume that all triangles are really curved surfaces, and interpolate and tessellate surfaces with normals and corner vertices matching the original triangle's corner vertices and normals, or define a GL extension for such curved patches (possibly specified by NURBS parameters, possibly not).
The first approach gives you benefits for all models, though it can cause nasty artifacts on models that aren't constructed nicely, and the second approach makes all of your models look fine, but requires that the programmer know about the extension to take advantage of it.
Implement the translation (if any) in GLU, and your Playstation III programmers don't even have to know about it. Distribute a modified GLU library with your SDK that probes for this feature and uses it to accelerate GLU's NURBS, and you might even get game designers using this in other games (paving the way for a PC graphics card based on this or a similar chip).
I doubt Sony's actually _doing_ this, but that's one of the things I'd use a high poly-rate chip for if I was writing the firmware and SDK for it.
Re:Does no one understand basic notation systems? (Score:2)
When dealing with communication channels, k=1000 and M=1E6. Bytes, or more properly, octets, are a unit for storage devices.
The usage of b=bit and B=byte is not universal. BPS, KBPS and MBPS refer to "bits per second", not "bytes per second". These were in widespread use long before bytes became a common unit.
bollocks (Score:2)
Whenever someone says a billion pounds they mean 10^9, not 10^12. You must find the financial news very confusing.
Re:Does no one understand basic notation systems? (Score:2)
As for the BPS/KBPS/MBPS notation, they predate ASCII, and mixed case digital notation in general. I haven't seen them used except by the old guy shuffling off to retirement, for at least a decade. Ethernet, token ring, and modem communications all seem to use mixed case notation now.
Re:So could I use POVray as renderer? (Score:2)
Yes, no (unless you like playing at 5 beautiful frames per second), and only if they're geeks, respectively.
Re:I wonder if they'll have the same problems..... (Score:2)
I had always assumed that physical models and evolution would be the way to go -- you model your Velociraptor and then assign a couple of centers of gravity. Then you drop it in your physical model and let it learn how to balance, run, and jump overnight. Basically, you evolve a dynamic control center for it (neural net probably, underneath a subsumption architecture).
The main problem is that when you now want to constrain the movement -- for example you want it to turn its head halfway through a jump, you might have to go and retrain it for that scenario.
Ok, that was all speculation: can anyone with experience in the field comment on how far in the future the above scenario is?
(*) Mind you, Crouching Tiger went to great pains to get exactly that effect in its fight scenes...