Graphics Software Hardware

Future of 3d Graphics 292

zymano writes "Extremetech has this nice article on the future of 3D graphics. The article also mentions that graphics card GPUs can be used for non-traditional, powerful processing such as physics. A quote from the article: "GPUs can be from 10 to 100 times faster than a Pentium 4, and scientific computations such as linear algebra, Fast Fourier Transforms, and partial differential equations can benefit." My question - If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor..."
  • No processor. (Score:3, Informative)

    by rebeka thomas ( 673264 ) on Sunday May 18, 2003 @01:43PM (#5986261)
    Just make a graphics card with more transistors and drop the traditional processor..."

    Apple did this several years ago. The Newton 2000 and 2100 didn't use a CPU but rather the graphics processor.
    • Re:No processor. (Score:5, Insightful)

      by Karamchand ( 607798 ) on Sunday May 18, 2003 @01:50PM (#5986318)
      In my opinion that's just a matter of definition. If it manages the core tasks of running the machine it's a CPU - i.e. a central PU - and not just a GPU since it handles more than the graphics.

      That is, as soon as there is no CPU and the GPU handles its tasks, it becomes a CPU by definition!
    • I don't even want to know what kind of graphics card had a 162 MHz StrongARM processor six or seven years ago. I strongly suspect that you don't know all of the facts.
      • IIRC the StrongARM and PXA (XScale) have a framebuffer built into them. Not sure how they're programmed, though (there must be docs on intel.com somewhere).
        • Really, that would mean that they used the CPU as the GPU, not the other way around. It was designed to be a CPU that also had graphics capabilities.

          That's pretty cool. I haven't done much ASM for the StrongARM, so I don't know as much about its internals as I should.
    • The Newton 2x00 series used the StrongARM series of processors, the predecessor to the XScale, running at 162MHz.

      It's an ARM derivative from Digital and it is not a graphics processor but a genuine CPU.
  • The head of Nvidia was written about in Wired a while ago and he essentially said the same thing.

    He was like, our cards ARE the computer, and are becoming far more important than the CPU for the hard core stuff.

    It was interesting, but I totally pooh-poohed it.

    Obviously he was smarter than me.
    • by Anonymous Coward
      Alright alright.. this is all fine and dandy.. but does anyone have any idea of _how_ to write software that uses the hardware? Is it even possible to access the hardware without using something like OpenGL?

      I'm very interested in the answer...

      Thanks
      • by mmp ( 121767 ) on Sunday May 18, 2003 @04:08PM (#5987125) Homepage
        OpenGL and Direct3D are the two interfaces that graphics card vendors provide to get at the hardware; there is no lower-level way to get at it. However, these APIs now include ways of describing programs that run on the GPUs directly; you can write programs that run at the per-vertex or per-pixel level with either of those APIs.

        These programs can be given to the GPU via specialized low-level assembly language that has been developed to expose the programmability of GPUs. (They are pretty clean, RISC-like instruction sets).

        Alternatively, you can use a higher level programming language, like NVIDIA's Cg, or Microsoft's HLSL, to write programs to run on the GPU. These are somewhat C-like languages that then compile down to those GPU assembly instruction sets.
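        As a rough illustration of the programming model described above, here is a plain-C stand-in for what a per-pixel (fragment) program conceptually does. The names (Pixel, shade, run_over_framebuffer) and the brightness parameter are invented for the example; a real Cg or HLSL shader expresses similar arithmetic, but it is compiled by the driver and executed once per fragment on the GPU rather than looped over on the CPU.

          /* Illustrative only: a CPU-side sketch of a per-pixel program. */
          typedef struct { float r, g, b, a; } Pixel;

          /* The "shader": runs independently on each pixel, with no view of its neighbours. */
          static Pixel shade(Pixel in, float brightness) {
              Pixel out = { in.r * brightness, in.g * brightness, in.b * brightness, in.a };
              return out;
          }

          /* A real GPU applies the program to many pixels in parallel; here we just loop. */
          void run_over_framebuffer(Pixel *fb, int width, int height, float brightness) {
              for (int i = 0; i < width * height; ++i)
                  fb[i] = shade(fb[i], brightness);
          }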
      • by jericho4.0 ( 565125 ) on Sunday May 18, 2003 @05:14PM (#5987576)
        NVidia's Cg language is a C-like language for GPUs. You can download the compiler and examples from nvidia. Writing for a GPU is not trivial, though, and getting the best use out of it requires quite a bit of knowledge about how a GPU works.
    • Gary Tarolli (Chief Technical Officer of 3dfx) has an interesting interview on a similar subject.

      Interestingly he thinks it'll be specialized hardware that will do ray-tracing, etc.

      http://www.hardwarecentral.com/hardwarecentral/reviews/1721/1/ [hardwarecentral.com]

      "Is there a future for radiosity lighting in 3D hardware? Ray-tracing? When would it become available?

      Gary: Yes, but probably just in specialized hardware as it's a very different problem. Ray-tracing is nasty because of its non-locality, so fast localized hacks will probably prevail as long as people are clever. Especially for real-time rendering on low-cost hardware. It's interesting that RenderMan has managed to do amazing CGI without ray-tracing. That's an existence proof that a hack in the hand is worth ray-tracing in the bush."

      Oh... and for people who haven't seen it before, here's a cool detailed paper about how the pipeline of a traditional 3D accelerator can be tweaked to do ray tracing...

      http://graphics.stanford.edu/papers/rtongfx/rtongfx.pdf [stanford.edu]

      Reading that shows how programming a graphics pipeline is quite different (more interesting? more complicated?) than programming a general purpose CPU.

  • Because GPUs are NOT general purpose devices. A normal processor, like a P4, can be programmed to do anything. It might take a real long time, but it is a general purpose processor and so can process anything, including emulating other processors. A GPU is not. It does one thing and one thing only: pushes pixels. Now more modren GPUs have gained some limited programmability, but they still aren't general purpose processors.
    • A comparison that may help in understanding the problem: imagine keeping, say, an Altivec unit, but dropping the main processor.

      Sure you'd be able to do some hellish good data transforms, and perhaps a 'CPU' with a dozen of these Altivec units could crunch through some RC5-72 units like crazy, but not much else!
    • by Tokerat ( 150341 ) on Sunday May 18, 2003 @01:59PM (#5986388) Journal

      Because GPUs are NOT general purpose devices.
      General Purpose Unit, duh!

      ...Huh?

      Graphics? I still use a VT100 :-\
    • by cmcguffin ( 156798 ) on Sunday May 18, 2003 @02:04PM (#5986427)
      While optimized for graphics, GPUs can indeed be used as general-purpose processors [unc.edu]. GPUs are effectively stream processors [stanford.edu], a class of devices whose architecture and programming model make them particularly efficient for scientific calculation.

      > It might take a real long time, but it is a general purpose processor and so can process anything

      The same holds true for GPUs. Like CPUs, they are Turing complete [wikipedia.org].
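      To make "stream processor" concrete: the model is a small kernel applied independently to every element of a data stream, which is why dense linear-algebra kernels map onto it so well. A minimal CPU-side sketch (SAXPY, y = a*x + y; the names are illustrative only):

        /* SAXPY, the canonical streaming kernel: y[i] = a*x[i] + y[i].
         * Each element is independent, so the hardware is free to process
         * many of them at once -- the property GPUs and vector machines exploit. */
        void saxpy(float a, const float *x, float *y, int n) {
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }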
        • GPUs are effectively stream processors, a class of devices whose architecture and programming model make them particularly efficient for scientific calculation.

        Also known as vector processors.

        The more things change...
          • Yes, but unlike the vector processors of old, these are at or ahead of general-purpose chips in process technology. They are also extremely cheap, which is exactly the opposite of traditional vector processors. A parallel array of GPUs might be interesting, though I doubt it would end up as cheap, because all of the glue logic would be one-off; it might be better just to combine dual-CPU 1U units with AGP cards and use the cluster for multiple types of problems, some running on the CPUs and some on the GPUs.
    • Ah, but what if it uses those pixels to simulate a Universal T-Machine... :)
    • Now more modren GPUs [...]
      Domo arigato.
    • Now more modren GPUs have gained some limited programmability, but they still aren't general purpose processors.

      Yes they are. DX9-class GPUs are Turing complete. Given enough cycles they can solve any problem you give them. Just like a CPU.
    • by Anonymous Coward
      Ditto.

      Until NV35, GPUs didn't even have flow control. As of today, the largest GPU program you can have is 1024 instructions (or 2048?). Either way, freakin' small.

      GPUs are essentially glorified math co-processors with a crap-ton of memory bandwidth. Instead of focusing on the square roots and cosines, they focus on the dot products and matrix transforms.

  • by RiverTonic ( 668897 ) on Sunday May 18, 2003 @01:45PM (#5986279) Homepage
    ... everybody would use his computer for 3D only, but I know a lot of people who never do anything with 3D. And I don't think a computer for office work benefits much from the GPU.
  • Precision (Score:3, Informative)

    by cperciva ( 102828 ) on Sunday May 18, 2003 @01:45PM (#5986280) Homepage
    GPUs work with limited precision -- IEEE single precision is typical. This is good enough for 3D graphics -- after all, in the end you'll be limited by the 10-11 bit spatial resolution and 8 bit color resolution -- but not good enough for most scientific problems, which typically require a minimum of double precision.

    Simulating higher precision with single precision arithmetic is possible, but the performance penalty is too severe for it to be useful.
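    For reference, "simulating higher precision" here usually means double-single arithmetic: each value is carried as an unevaluated sum of two floats. A minimal CPU-side sketch of the addition step (it assumes strict IEEE single-precision rounding, so it breaks under -ffast-math or if the compiler fuses the operations):

      typedef struct { float hi, lo; } dsfloat;   /* value = hi + lo */

      /* Knuth's TwoSum: s + e == a + b exactly, with e the rounding error. */
      static void two_sum(float a, float b, float *s, float *e) {
          *s = a + b;
          float bb = *s - a;
          *e = (a - (*s - bb)) + (b - bb);
      }

      /* Add two double-single numbers, roughly doubling the usable precision. */
      dsfloat ds_add(dsfloat x, dsfloat y) {
          float s, e;
          two_sum(x.hi, y.hi, &s, &e);
          e += x.lo + y.lo;
          float hi = s + e;               /* renormalize so |lo| stays tiny */
          float lo = e - (hi - s);
          dsfloat r = { hi, lo };
          return r;
      }

    Each such add costs on the order of ten single-precision operations, which is part of the penalty the parent mentions.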
    • Re:Precision (Score:5, Informative)

      by Sycraft-fu ( 314770 ) on Sunday May 18, 2003 @02:00PM (#5986398)
      Just a note: new graphics cards step up the precision of internal calculations. GeForceFX cards (and I assume new Radeon cards too) can calculate colour with 128-bit precision. However, that doesn't change the fact that the card is designed to push pixels fast, not to do general calculation work.
    • Re:Precision (Score:5, Insightful)

      by Sinical ( 14215 ) on Sunday May 18, 2003 @02:38PM (#5986620)
      Not true. Newer cards appear to be IEEE 'extended' (there isn't a definition of the bitsize of extended from the spec, so even Intel's 80-bit format is considered 'extended') at 128 bits wide per color channel. This is pretty much the last word in accuracy as far as I'm concerned. Perhaps numerical analysts can come up with scenarios where 128 bits aren't sufficient, but I don't want to hear about them.

      For some of the stuff that we do, we would kill for a slightly faster card. Right now, for simulation of IR imagery, we have to prefly a scenario where the sensor-carrying vehicle (use your imagination) flies a trajectory and we render the imagery along this path. This rendering consists of doing convolutions of background scenes with target information to generate a final image. At the end we have a 'movie'. This can take a few hours to run.

      Afterwards, we run the simulation in realtime and play frames from this movie (adjusted in rotation and scaling, etc. because real-time interactions can result in flight paths subtly different from the movie) and show it to a *real* sensor and see what happens.

      The point: if we could do real-time convolution inside a graphics card and then get the data back out some way (we usually need to go through some custom interface to present the data to the sensor), then a lot of pain would be saved. First, we could move the video-generating infrastructure into the real-time simulation, which would be simpler; we wouldn't have to worry about rotating and scaling the result, since we'd be generating exactly correct results on the fly; we wouldn't have to worry about allocating huge amounts of memory (gigabytes) to hold the video, with all the concerns about memory latency and bandwidth and problems with NUMA architectures; and finally (maybe) we could change scenarios on the fly without having to worry about whether we already had a video ready to use.

      I think the computational horsepower is almost there, but right now there's no good way to get the data back out of the card. On something like an SGI you get stuff after it's gone through the DACs, which means you now have at most 12 bits per channel (less than we want, although you can use tricks for some stuff to get up to maybe 16 bits for pure luminosity data). What would be sweet in the extreme is to get a 128-bit floating point value for each pixel in the X*Y pixel scene. So if the scene were 640x480 we'd get about 4.5MB of data per frame; at, say, 60Hz that's about 281MB a second to convert and send out.

      Life would be sweet. Sadly, this is a pretty special purpose application, so I'm not too hopeful. What's weird is that only NVidia (and perhaps ATI) are coming up with this horsepower because of all the world's gamers, and vendors like SGI are left with hardware that is many, many generations old (although it does have the benefit of assloads of texture memory).

      In short: need 1GB of RAM on the card and a way to get stuff back out after we've done the swoopty math.
      • Unfortunately, the "128 bit colour" is just four 32-bit elements (RGB+alpha). If they were actually using 128-bit floating point arithmetic, they'd call it "512 bit colour". ;)
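        To make the arithmetic above concrete: "128-bit colour" means four 32-bit float channels, i.e. 16 bytes per pixel, so a 640x480 frame is about 4.7MB and 60 of them per second is roughly the 281MB/s figure quoted. A quick check (purely illustrative):

          #include <stdio.h>

          /* "128-bit colour": four 32-bit float channels, 16 bytes per pixel. */
          typedef struct { float r, g, b, a; } Pixel;

          int main(void) {
              const double frame_bytes  = 640.0 * 480.0 * sizeof(Pixel);
              const double stream_bytes = frame_bytes * 60.0;                  /* 60 Hz */
              printf("per frame:  %.2f MB\n", frame_bytes / (1024 * 1024));    /* ~4.69 MB  */
              printf("per second: %.1f MB/s\n", stream_bytes / (1024 * 1024)); /* ~281 MB/s */
              return 0;
          }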
  • Old news... (Score:5, Insightful)

    by ksheka ( 189669 ) on Sunday May 18, 2003 @01:45PM (#5986284)
    ...This is what future releases of DirectX are supposed to address: the use of 3D renderers to render non-graphical elements and other work.

    Good for the end user, but going to be a pain in the ass for software developers to take advantage of, is my guess. :-)
    • Good for the end user, but going to be a pain in the ass for software developers to take advantage of, is my guess. :-) ...Until someone ties it into the OS, a la Macintosh Quartz.
    • Good for the end user, but going to be a pain in the ass for software developers to take advantage of, is my guess. :-)

      Kids, these days! Real developers love a good PITA, their whole life is just a PITA, in fact if you ever see someone on a bike without a seat you can be sure this is either an MFC or Kylix core developer taking his work home.
  • Specialised hardware (Score:4, Interesting)

    by James_Duncan8181 ( 588316 ) on Sunday May 18, 2003 @01:46PM (#5986287) Homepage
    I do often wonder why specialised hardware is not used more often for tasks that are performed often. I recall that the Mac used to have some add-on cards that sped some Photoshop operations up to modern levels 3-4 years ago.

    Why buy a big processor when the only intensive computational tasks are video en/decoding and games, tasks that can easily be farmed off to other, cheaper units?

    • by Sycraft-fu ( 314770 ) on Sunday May 18, 2003 @01:55PM (#5986348)
      Because it isn't cheaper if you need hundreds of these simple units. The good thing about a DSP (which is what a GPU is, after a fashion) is that because it is specialised to a single operation, it can be highly optimised and do it much quicker than a general purpose CPU. However the good is also the bad: the DSP is highly specialised and can ONLY do that operation, or at least only do it efficiently.

      Take digital audio. Used to be that CPUs were too pathetic to do even simple kinds of digital audio ops in realtime, so you had to offload everything to dedicated DSPs. Protools did this: you bought all sorts of expensive, specialised hardware and loaded your Mac full of it so it could do real-time audio effects. Now, why bother? It is much cheaper to do it in software since processors ARE fast enough. Also, if a new kind of effect comes out, or an upgrade, all you have to do is load new software, not buy new hardware.

      Also, if you like, you can get DSPs to do a number of computationally intensive things. As mentioned, the GPU is real common. They take over almost all graphics calculations (including much animation, with things like vertex shaders) from the CPU. Another thing along the games line is a good soundcard. Something like an Audigy 2 comes with a DSP that will handle 3D positioning calculations, reflections, occlusions and such. If you want a video en/decoder those are available too. MPEG-2 decoders are pretty cheap; the encoders cost a whole lot more. Of course the en/decoder only works for the video formats it was built for, nothing else. You can also get processors to help with things like disk operations; high-end SCSI and IDE RAID cards have their own processor on board to take care of all those calculations.
        • You know - sound and graphics are something COMPLETELY different when we talk about CPU power. Now even a standard 1 GHz CPU is powerful enough to calculate and handle absolutely realistic sound (I'm not talking about in/out devices). But near-reality graphics are still a long way off.
        • by Sycraft-fu ( 314770 ) on Sunday May 18, 2003 @02:21PM (#5986537)
          Doesn't mean you can't offload it to a DSP. It also depends on what your definition of "handle absolutely realistic sound" is. Sure, I can do a perfectly realistic reverb on a sound source by using an impulse-based reverb, which actually samples a real concert hall and reproduces it. However, that is limited in power. Suppose I have a non-real location that I want to describe mathematically, and then have multiple different sound sources, all calculated correctly. That sort of thing is much more complex and intense.

          However the real point of a sound DSP is to free up more CPU for other calculations. A game with lots of 3d sounds can easily use up a non-trivial amount of CPU time, even on a P4/AthlonXP class CPU. So no, it isn't critical like a GPU, it can be handled in software, but it does help.
        • As far as I know, you get the most realistic sound by modelling human sound apparatus. I don't think you can do this in real time with a 1 GHz CPU. Feel free to correct me.
        The good thing about a DSP (which is what a GPU is, after a fashion) is that because it is specialised to a single operation, it can be highly optimised and do it much quicker than a general purpose CPU. However the good is also the bad: the DSP is highly specialised and can ONLY do that operation, or at least only do it efficiently.

        What you are referring to is an ASIC, an Application-Specific Integrated Circuit. These really stomp through data like nothing, and are cheap to build but unfortunately expensive t
      I do often wonder why specialised hardware is not used more often for tasks that are performed often. I recall that the Mac used to have some add-on cards that sped some Photoshop operations up to modern levels 3-4 years ago.

      Why buy a big processor when the only intensive computational tasks are video en/decoding and games, tasks that can easily be farmed off to other, cheaper units?

      For the same reason many people buy a $1,000 computer, rather than a $100 VCR, $250 DVD player, two $250 gaming conso

    • by JanneM ( 7445 )
      At our lab we have thought a lot about that. At any point in time, there is specialized hardware that can outperform (sometimes greatly) a general CPU. It can take the form of an accessible GPU, using signal processors, or even specialized vector processing units. On the horizon we start seeing FPGA-based systems for creating specialized computing units on the fly.

      We have found, however, that as long as your system is a one-off creation, or to be used in a limited number of instantiations, it typically doe
  • by anonymous loser ( 58627 ) on Sunday May 18, 2003 @01:48PM (#5986300)
    If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor..


    Because GPUs are specialized processors. They are only good at a couple of things: moving data in a particular format around quickly, and linear algebra. It is possible to do general-purpose calculations on a GPU, but that's not what it is good at, so you'd be wasting your time.

    This is akin to asking why you shouldn't just go see a veterinarian when you get sick. Because veterinarians specialize in animals. Sure, they might be able to treat you, but since their training is with animals you might find their treatments don't help as much as going to see a regular doctor.

  • by Libor Vanek ( 248963 ) <libor,vanek&gmail,com> on Sunday May 18, 2003 @01:54PM (#5986347) Homepage
    As games etc. become more complex, you'll need more and more CPU (not GPU!) power to calculate AI, the scene (not to render it, but to dynamically create its vector source), etc.
  • This reminds me of when they needed to add on a math co-processor to Intel's processors (the -DX versions).
    • I'm not sure when Intel created the DX and SX designations. The 486SX was indeed without an onboard math co-processor, but neither the 80386 DX nor the 80386 SX had one.

      I know it's nitpicking, but the SX/DX designation doesn't seem to indicate with or without a math co-processor, at least for the 386 series.
      • For the 386, the SX line was differentiated by having a 16-bit external bus instead of a 32-bit one. This allowed it to be used as an upgrade to 286 systems because little redesign was needed. Both the 386DX and 386SX had companion coprocessors that sat on a dedicated bus; they were the 387DX and 387SX respectively. For the 486 Intel re-used the designations; the SX chip was essentially a DX processor where the FPU had failed validation or where it was never tested and simply disabled for marketing reasons. T
  • Because.... (Score:3, Insightful)

    by enigma48 ( 143560 ) * <jeff_new_slashNO@SPAMjeffdom.com> on Sunday May 18, 2003 @01:59PM (#5986384) Journal
    the minute we stop using 'traditional' Intel/AMD CPUs in favour of NVidia/ATI, we'll have to drop 'traditional' NVidia/ATI and go back to Intel/AMD ;).

    Seriously though, the design we have now is a good one: a strong, general-purpose CPU augmented with a specialized GPU for high-cost operations. Depending on how high the cost is (e.g. iDCT for playing DVDs) we may want to start moving the work to the specialized processor - this has been done with ATI cards for a couple of years now.

  • by CTho9305 ( 264265 ) on Sunday May 18, 2003 @01:59PM (#5986389) Homepage
    GPUs are highly specialized. In graphics processing, you generally perform the same set of operations over and over again. Also, pixels can be rendered concurrently - as such, graphics hardware can be extremely parallel in nature. Also, in graphics hardware, there isn't much (if any) branching in code. Simple shader code just runs through the same set of operations over and over again.

    "Normal" code, such as a game engine, compiler, word processor or MP3/DivX encoder does all sorts of different operations, in a different order each time, many which are inherently serial in nature and don't scale well with parallel processing. This type of code is full of branches.

    To optimize graphics processing, you can really just throw massively parallel hardware at it. Modern cards do what, 16 pixels/texels per cycle? 4+ pipelines for each stage all doing the EXACT same thing?

    Regular code just isn't like that. Because different operations have to happen each time and in each program, you can't optimize the hardware for one specific thing. In serial applications, extra pipelines just go to waste. Also, frequent branch instructions mean that you have to worry about things like branch prediction (which takes up a fair amount of space). When you do have operations that can happen in parallel (such as make -j 4), the different pipelines are doing different things.

    Take your GeForce GPU and P4 and see which can count to 2 billion faster. In a task like this, where both processors can probably do one add per cycle (no parallelizing in this code), the 2GHz P4 will take one second, and the 500MHz GeForce will take four seconds (assuming it can be programmed for a simple operation like "ADD"). Even if you throw in more instructions but the code cannot be parallelized, the CPU will probably win.

    Basically, since you can't target one specific application, a general purpose processor will always be slower at some things - but can do a much wider range of things. Heck, up until recently, "GPUs" were dumb and couldn't be programmed by users at all. I haven't looked at what operations you can do now, but IIRC you are still limited to code with at most 2000 instructions or so.
    • by CTho9305 ( 264265 ) on Sunday May 18, 2003 @02:06PM (#5986439) Homepage
      Sorry to reply to myself, but a really simple example just occurred to me.

      Take your 486SX without a coprocessor... you can get an FPU (coprocessor) which does floating point operations MUCH faster than you can emulate them. However, you can't just use an FPU and ditch the 486, since the FPU can't do anything but floating point ops - it can't boot MS-DOS... it can't run Windows 3.1... it can't fetch values from memory... it can't even add 1+1 precisely!
  • Seti@Home (Score:2, Interesting)

    by gspr ( 602968 )
    Not that I know what a "Fast Fourier Transform" is, but I do know that the seti@home client goes "Computing fast fourier transform" a lot. Would be nice to take advantage of all the idle power in the world's GPUs too.
    • Re:Seti@Home (Score:3, Insightful)

      by TeknoHog ( 164938 )
      I seriously doubt that the S@H developers would consider this, judging from the impressions I've got before. For example there's only one binary for all x86 machines (for a given OS), whereas the distributed.net client has several optimizations for different x86 models.

      The explanation that the developers sometimes give against tweaking and opensourcing is scientific integrity. Graphics cards are not designed for exact replication of processes, and they often trade precision for speed. Still, I believe tha

  • by Shelrem ( 34273 ) on Sunday May 18, 2003 @02:01PM (#5986402)
    My question - If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor...

    If you'd really like the answer to this question, try programming anything on the GPU and you'll understand. It's hell to do half this stuff. GPUs are highly specialized and make very specific tradeoffs in favor of graphics processing. Of course, some operations, specifically those that can be modeled using cellular automata, map well to this set of constraints. Others, such as ray-tracing, can be shoe-horned in, but if you were to try to write a word processor on the GPU, it'd essentially be impossible. The GPU allows you to do massively parallel computations, but penalizes you heavily for things such as loops of variable length or reading memory back from the card outside of the once-per-cycle frame update, and the price of interrupting computation is prohibitive. Clearing the graphics pipeline can take a long, long time.

    Furthermore, while there have been a few papers published claiming the orders of magnitude increase in speed in these sorts of computations, none actually demonstrate this sort of speed-up. Everyone's speculating, but when it comes to it, results are lacking.

    b.c
    • by Black Parrot ( 19622 ) on Sunday May 18, 2003 @04:38PM (#5987343)


      > The GPU allows you to do massively parallel computations, but penalizes you heavily for things such as loops of variable length or reading memory back from the card outside of the once-per-cycle frame update, and the price of interrupting computation is prohibitive. Clearing the graphics pipeline can take a long, long time.

      > Furthermore, while there have been a few papers published claiming the orders of magnitude increase in speed in these sorts of computations, none actually demonstrate this sort of speed-up. Everyone's speculating, but when it comes to it, results are lacking.

      I looked into using the GPU for vector * matrix multiplications over my Christmas vacation (yep, a geek), and everywhere I turned I found people saying that whatever you gained in the number crunching you lost in the latency of sending your numbers to the GPU and reading them back when done. In the end I didn't even bother running an experiment on it.

      But maybe conventional wisdom was wrong; elsewhere in the talkbacks I see links to a couple of .edu sites pushing this kind of thing, so I'm going to look at it some more.
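      A back-of-the-envelope model of that conventional wisdom, with every number (CPU/GPU throughput, bus bandwidth, matrix size) chosen purely for illustration rather than measured on any real card:

        #include <stdio.h>

        int main(void) {
            /* Hypothetical parameters -- assumptions, not measurements. */
            const double n         = 4096.0;   /* square matrix dimension        */
            const double cpu_flops = 2.0e9;    /* assumed CPU throughput, flop/s */
            const double gpu_flops = 20.0e9;   /* assumed 10x faster GPU         */
            const double bus_bw    = 200.0e6;  /* assumed upload+readback, B/s   */

            const double work    = 2.0 * n * n;              /* vector*matrix flops        */
            const double traffic = (n * n + 2.0 * n) * 4.0;  /* matrix + two float vectors */

            printf("CPU only:          %.4f s\n", work / cpu_flops);
            printf("GPU incl. traffic: %.4f s\n", work / gpu_flops + traffic / bus_bw);
            return 0;
        }

      Under these assumptions the bus traffic swamps the arithmetic, which matches what people were reporting; if the matrix can stay resident on the card across many multiplies, the balance shifts toward the GPU.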

  • by jhzorio ( 27201 ) on Sunday May 18, 2003 @02:01PM (#5986405)
    Using the power of the graphic subsystem to handle other kinds of calculations has been done for years, if not decade(s) by Silicon Graphics.
    At least for the demos...
  • Integrated GPU/CPU (Score:5, Insightful)

    by renehollan ( 138013 ) <rhollan@@@clearwire...net> on Sunday May 18, 2003 @02:05PM (#5986433) Homepage Journal
    If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor..."

    You mean like: this [ati.com]?

    Now, that press release was about two years old, and you can bet that ATI has advanced beyond that point (though I can't provide details).

    Also, while it doesn't integrate a serious 3D graphics GPU, there's no reason that can't be done -- except one, and it's the same reason a powerful CPU isn't integrated: heat dissipation.

    But, for a "media processor", it sure is sweet.

    • Sorry, should have noted the fact that I work for ATI, though what I post here (or anywhere, for that matter) is my personal opinion, and does not necessarily reflect the views of ATI.

      Still, I don't think anyone is going to get upset over a link to an existing press release.

  • by moogla ( 118134 ) on Sunday May 18, 2003 @02:08PM (#5986462) Homepage Journal
    You keep hearing this logic every once in a while.

    Look, for the same price as a $400 graphics engine you can get yourself a dual-CPU machine and a cheap AGP graphics card, and do it in "software" with about the same efficiency, if you know what you're doing.

    Because the extra CPU isn't inherently multi-core like most modern GPUs, you need to compensate with a higher clock speed, and use whatever multimedia instructions it has to the fullest extent (e.g. AltiVec, MMX2, etc.).

    But of course, the GPU is better suited to the actual drudge work of getting your screen to light up. If there's stuff to be computed and forgotten by it (e.g. particle physics), it's probably better left decoupled, to exploit parallelism at that level of abstraction.

    As you get to a limit in computational efficiency, you start adding on DSPs, and this is where FPGAs and grid computing start looking interesting.

    So it shouldn't be considered surprising that these companies will say that; they can see that trend and they want a piece of that aux. processor/FPGA action. The nForce is a step in the right direction. They don't want to be relegated to just making graphics accelerators when they are in a unique position to make pluggable accelerators for anything.

    But to plan on packaging an FPGA designed for game augmentation and calling it an uber-cool GPU is just a marketing trick. This technology is becoming commercially viable, it seems.
  • by Anonymous Coward on Sunday May 18, 2003 @02:10PM (#5986470)
    Aloha!

    You wrote: My question - If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor...

    Congratulations! You have just reinvented Ivan Sutherland's Wheel of Reincarnation, which is exactly about this: normal CPUs are enhanced with specific functions to provide acceleration for a common task; the enhancements get so big that farming them out into a separate chip/module seems like a good idea; the separate thingy grows in complexity as more flexibility and programmability is needed; finally you end up with a new CPU. And then someone says... You get the idea.

    Here is a good take [cap-lore.com] on Ivan Sutherland's story. And here [stanford.edu] is Myer and Sutherland's original paper.

    Read, think and learn.
  • The horror! (Score:5, Funny)

    by Ridge ( 37884 ) on Sunday May 18, 2003 @02:11PM (#5986474)
    "One research group is looking to break the Linpack benchmark world record using a cluster of 256 PCs with GeForce FXs!"


    Unfortunately, the researchers have all inexplicably been rendered deaf.
  • If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor...

    Do you understand why these so-called GPUs are so fast at doing graphics and mathematics geared towards graphics? Because they are Graphics Processing Units. They are not general computers. They are designed to do one thing and one thing really well: the math for 3D graphics. They would be terribly slow at general
  • Why have n m-bit parallel systems? Just have n 1-bit parallel systems. You won't need all these specialized processing units anymore. The future is in an extremely flexible CPU.
    • > 1-bit processors...

      The two historical examples of this that I am aware of are the AMD 2900 bit-slice microprocessor series and the Connection Machine model 1. The CM-1 was unique in that it had commercial sales at scales up to 64K bits wide and used a 1-bit wide distributed memory. The reasons these highly customizable architectures did not persist are twofold: economy of scale favored the standardized microprocessor (on the hardware end), and they wouldn't run pre-existing software and it was hard to find
  • Just make a graphics card with more transistors and drop the traditional processor...

    No matter how good you made the card, assuming it would be a combined video card/processor, you would be stuck in a situation like buying a motherboard with tons of onboard stuff on it - an onboard video card, for example. No matter how much RAM you put in it, the video card's power will never be quite as good as if you were to buy a separate video card of comparable power and plug it into the motherboard. The same wo
  • Just as the subject says: our current GPUs can (easily) simulate any Turing machine [everything2.com], and thus any other CPU, and in turn run all programs you may imagine.

    It is done via fragment (pixel) programs (for the arithmetic instructions) and multiple 'rendering' passes (for program control). Ask Göögle [google.com] if you want to know more about this interesting subject. :-)

    Just my 2 cc.

    Best regards,
    Daniel
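    A rough CPU-side sketch of the multi-pass idea Daniel describes: two buffers stand in for textures/render targets, each "pass" applies a small per-element program, and the buffers are swapped ("ping-pong"). Everything here (the names, the toy kernel) is illustrative, not a real OpenGL or Direct3D call:

      #include <string.h>

      #define N 256   /* elements per "texture" in this toy example */

      /* Stand-in for a fragment program: one output element per input element. */
      static void kernel_pass(const float *src, float *dst, int n) {
          for (int i = 0; i < n; ++i)
              dst[i] = src[i] * 0.5f + 0.25f;
      }

      /* Ping-pong between two buffers, one "rendering pass" per iteration. */
      void run_passes(float *buf_a, float *buf_b, int passes) {
          float *src = buf_a, *dst = buf_b;
          for (int p = 0; p < passes; ++p) {
              kernel_pass(src, dst, N);                /* render src into dst       */
              float *tmp = src; src = dst; dst = tmp;  /* swap texture and target   */
          }
          if (src != buf_a)                            /* leave the result in buf_a */
              memcpy(buf_a, src, N * sizeof(float));
      }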
  • by wfmcwalter ( 124904 ) on Sunday May 18, 2003 @02:23PM (#5986558) Homepage
    Just make a graphics card with more transistors and drop the traditional processor

    There's a lot of work being done on reconfigurable computing, which imagines replacing the CPU, GPU, DSP, soundcard, etc., with a single reconfigurable gate array (like a RAM-FPGA). You'd probably have a small control processor that manages the main array. On this array one could build a CPU (or several) of whatever ISA you needed, plus GPU, DSP, or whatever functionality was called for by the program(s) you're running at the current moment. Shut down UnrealTournament 2009 and open Mathlab, and DynamicLinux will wipe out its shader code and vector pipelines and grow a bunch of FP units instead. Run MAME and it will install the appropriate CPUs and other hardware.

    In the initial case, this would be controlled statically, a bit like the way a current OS's VM manages physical and virtual memory. Later, specialist "hardware" could be created, compiled, and optimised based on an examination of how the program actually runs (a bit like a Java dynamic compiler). So rather than running SETI-at-home, your system would have built a specialist SETI ASIC on its main array. There will be lots of applications where most of the work is done in such a soft ASIC, and only a small proportion is done on a (commensurately puny) soft CPU.

    This all sounds too cool to be true, and at the moment it is. Existing programmable gate hardware is very expensive, of limited size (maybe enough to hold a 386?), runs crazy hot, and doesn't run nearly quickly enough.

  • by f97tosc ( 578893 ) on Sunday May 18, 2003 @02:27PM (#5986571)
    My question - If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all?

    A development this extreme is unlikely. However, what is very real is the fact that GPUs and CPUs are at least partially competitors.

    If you are doing a lot of graphics then the best computer for your money may be one with a great graphics card and a so-so CPU. The better and cheaper the GPUs Nvidia can make, the smaller the demand for state-of-the-art Pentiums.

    But unless there is a revolutionary development somewhere, we will probably see computers with both kinds of processors for a good while.

    Tor
    from 10 to 100 times faster than a Pentium 4 at scientific computations

    Also, Wired had an article on this, with the main gist "NVidia plans to make the CPU obsolete" [wired.com].

  • I was at an NVIDIA presentation, and the NVIDIA rep jokingly referred to "CPU" as "Co-Processing Unit". He elaborated on that by sketching the new computer architecture as envisioned by NVIDIA, with the GPU forming the heart of the system and the CPU taking care of the "lesser" non-multimedia functions. A good example of this is NVIDIA's nForce(2) chipset, with the graphics core, sound logic, and north and south bridges all having been developed by NVIDIA. All that's still needed is a CPU and memory.
  • Oh great... (Score:2, Funny)

    by ZorMonkey ( 653731 )
    "If these cards are getting so powerful at computations then why do we need a Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor..."

    So, now I should start putting high-end graphics cards in my servers? Has Apache been compiled for an nVidia GPU yet? I wonder how well a Geforce FX runs Linux or Windows 2000. I bet the 2.0 Pixel Shader spec helps a lot with database speed.
  • If these cards are getting so powerful at computations then why do we need an Intel/AMD processor at all?

    Exactly. And if the Playstation 2 can do over 6 GFLOP/s, why doesn't Cray just make a cluster of Playstations instead of buying a shitload of Opterons? Really, someone should give these guys a clue...

    RMN
    ~~~
  • Why not use a really powerful setup such as a DSP to do it all for you? It plugs into your PCI slot and allows you to do extremely complex calculations quite quickly. If you had one of those puppies, all you'd need your P4 for is bookkeeping.
  • And once again, the great Wheel of Reincarnation [astrian.net] comes full circle. Nice to have seen it happen twice in my lifetime.
  • We need a traditional processor because these are basically specialized floating point (and, to some extent, vector) processors. The majority of the work done by the typical user is going to be integer kind of stuff. Or maybe I'm just biased, because I've been MUD programming again. Anyway, point is, these types of processors would have to simulate ints as floats. It burns us.
  • ... for this kind of image horsepower. If you want to look 10 years out, then you may as well hypothesize a helmet that will allow the then-contemporary GPU to send sensory inputs directly into the brain.

    After all, images are merely optical sensory input data. If the bandwidth of the device doubles every year, you should be somewhere in the neighborhood of being able to produce a data stream comparable to a human's normal sensorium.

    I'm looking for someone who knows more about this than I do (which shoul
  • Sure you could probably make a computer that runs entirely on a GPU, but what is the point? I much prefer having two powerful processors in my system to just one, especially when they are designed for different purposes. It only makes sense to make more use of the power of GPUs when they are otherwise sitting idle (as in pretty much everything that is not graphics-intensive). I think GPUs will become more and more flexible as time goes on, but that doesn't mean they will replace CPUs - they fulfill difer
  • ...if your video card cost more than your CPU. Mine did.
  • I saw a hotshot from Los Alamos give a seminar (at a conference on forest fire modeling), and he informed the audience, in complete seriousness, that they're going to connect a huge cluster of Playstation 2's, because they're high power and incredibly cheap, thanks to the price war.
  • by master_p ( 608214 ) on Monday May 19, 2003 @04:05AM (#5990021)
    I haven't seen any graphics company produce a solution where graphics processing power is increased by adding more GPUs. I know that tiled rendering exists: the Kyro cards, the Dreamcast and the Naomi coin-ops all use PowerVR-based technology, which scales rather nicely. So, instead of completely replacing the graphics card, one could add small GPU chips to provide additional graphics power. It would be more cost-effective and would allow Doom III and HL2 without too much hassle.

    The same goes for CPUs. CPU power would increase if more CPU cores could be added on the fly. There was a company called Inmos that produced the Transputer chips, which were able to operate in a grid: each chip had 4 interconnects for connecting other Transputers to it.

    Of course, there are advances in buses, memory etc that all require total upgrades.

    I think that the industry has overlooked parallelism as a possible solution to computation-intensive problems. Of course, there is a class of problems that can't be solved in parallel, since each computation step is fed with the results of the previous step... but many tasks can be parallelized: graphics rendering, searching, copying, compression/decompression, etc. - anything that has to do with multiple data. It's a wasted opportunity. Instead, companies go for raw power. I guess it's more profitable and less technologically challenging... Introducing parallelism in the hardware would require a new bunch of programming languages and techniques to become mainstream, too.

    Finally, I would like to say that if quantum computers become a reality, then we will see pretty good reality simulators inside a computer, since their speed would be tremendous, many times the speed of today's top hardware.
  • Better pixels (Score:3, Insightful)

    by rpiquepa ( 644694 ) on Monday May 19, 2003 @04:06AM (#5990026) Homepage
    In all the comments, I haven't found what was the most important point in the whole article. With all this GPU horsepower, Nvidia Chief Scientist David Kirk said that "the question becomes not how many more pixels can be drawn per second, but how much better pixels can be made to look, by synthesizing them in very different ways." He added that "it's the end of graphics as we know it. Many new things will soon be possible with large scale streaming processors, which will create a whole new revolution in graphics." You can read this summary [weblogs.com] of the long ExtremeTech article for more details.
