Twilight of the GPU — an Interview With Tim Sweeney

cecom writes to share that Tim Sweeney, co-founder of Epic Games and the main brain behind the Unreal engine, recently sat down at NVIDIA's NVISION con to share his thoughts on the rise and (what he says is) the impending fall of the GPU: "...a fall that he maintains will also sound the death knell for graphics APIs like Microsoft's DirectX and the venerable, SGI-authored OpenGL. Game engine writers will, Sweeney explains, be faced with a C compiler, a blank text editor, and a stifling array of possibilities for bending a new generation of general-purpose, data-parallel hardware toward the task of putting pixels on a screen."
  • For once ... (Score:5, Informative)

    by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Monday September 15, 2008 @05:37PM (#25018063)

    For once I'm reading an 'xyz is going to die' article that doesn't sound like utter rubbish. Could it be that, for once, the one stating this actually knows what he's talking about?

    My last dedicated GPU was a GeForce Ti 4200. I'm now using a Mac Mini with a GT950. Mind you, Blender *is* quite a bit slower on the 950, even though the Mini runs at twice the clock speed, but I'm not really missing the old GeForce. I too think it highly plausible that the GPU and the CPU will merge within the next few years.

    • by Anonymous Coward on Monday September 15, 2008 @05:48PM (#25018213)

      On a day when Lehman Brothers and Merrill Lynch, the backbone of our economy, "died", to make these kinds of heartless statements is just pandering to the prejudices of the liberal elite. Instead of buying a Mac you should buy a Dell (which is far more "American" in the truest sense of the word) and instead of irresponsible talk of combining CPU and GPU you should keep them well separated for the good of our nation.

      Our country is built on a strong consumer market; it is "progressives" like you who are causing this crisis, which might even result in a president with an Afro. Shame on you.

    • For once I'm reading an 'xyz is going to die' article that doesn't sound like utter rubbish.

      Look, over there! An iPod killer!
      *giggles*
      -Taylor

    • by aliquis ( 678370 )

      Not that weird, considering that's how it was all done before... Even on the Amiga you didn't have that much to help you get things done once you weren't drawing windows in Workbench.

      The obvious benefit is of course that you do only what you want/need to do and not a bunch of other stuff.

      I doubt we'll see specialized chips (in the same physical chip or not remains to be seen I guess) removed totally though. It's not like someone is saying "oh well let's remove this special purpose stuff which can run a few other

    • by elwinc ( 663074 ) on Monday September 15, 2008 @06:26PM (#25018655)
      See Ivan Sutherland's Wheel of Reincarnation. [cap-lore.com] The idea is that CPUs get faster and graphics moves there; then buses get faster and graphics moves back to dedicated hardware; rinse and repeat. http://www.anvari.org/fortune/Miscellaneous_Collections/56341_cycle-of-reincarnation-coined-by-ivan-sutherland-ca.html [anvari.org]
    • Re:For once ... (Score:5, Interesting)

      by Mad Merlin ( 837387 ) on Monday September 15, 2008 @07:02PM (#25019029) Homepage

      So wait, your integrated graphics is *slower* than a low end discrete graphics card that's now 6 generations (or ~6 years) behind... and you find that acceptable?

      At the current pace, it'll take them a decade to match a high end card from today with an integrated card, and video cards aren't standing still in the meantime.

      You can point at the current integrated market for video cards, but that's not terribly interesting, because a) current integrated video cards barely do more than send a signal to the monitor(s), and b) as a corollary to a), current integrated video cards aren't very fast.

      Wake me up when integrated video cards are faster than discrete video cards.

      • Re: (Score:3, Insightful)

        by Guspaz ( 556486 )

        You're exaggerating. Modern Intel integrated graphics fully support DirectX 10 and offer full onboard video decoding. They're comparable to discrete GPUs from about three years ago.

        Do these things scream when it comes to games? No, but they don't need to. For the vast majority of people, serious gaming isn't in the cards. These GPUs need to display a 3D accelerated UI with decent performance, and help out with media decoding. Casual gaming is a plus, although businesses aren't interested in that.

        By these account

    • Re:For once ... (Score:4, Interesting)

      by LoRdTAW ( 99712 ) on Monday September 15, 2008 @11:20PM (#25021185)

      He does make a good point, and it's a throwback to the old days before GPUs existed. Look back to the days of Doom or Duke3D: you had to hand-code an entire rendering engine for the game. No APIs or 3D hardware were available, only a single CPU, a frame buffer, and a compiler. With current and upcoming technology like multi-core and stream processors, there will be plenty of new options for developers to take advantage of. IBM's Cell and Intel's Terascale and Larrabee are three of those technologies, but I still highly doubt the GPU is done for yet.

      3D rendering isn't the only thing going on in games or other programs. You have physics, which is used in both games and simulators. We also have AI, which in games is still severely lacking. 3D sound is still simply a matter of how far the player is from an object, attenuating the sound as necessary. So those areas could definitely use those multi-core CPUs or stream processors.

      We can't forget about really exciting stuff like real-time ray tracing, which is well suited to multi-core and stream processing but is still a ways off. I don't think the GPU will disappear in the next five years, but I do see it evolving to adapt to new rendering technologies. Instead of a purely discrete GPU we will have a very fast accelerator chip that can do more general work. It will not only handle the 3D rendering pipeline but also lend itself to tasks like video processing, DSP, audio processing, and other compute-intensive tasks like physics. It is already happening with Nvidia's CUDA and ATI's Stream SDK. We still need a general-purpose CPU to manage the OS and I/O, but like the IBM Cell it could very well move on-die. Multi-core x86 CPUs are not the future. Instead we need one or two cores to manage resources and run legacy code, then let the stream processors (for lack of a better term) do all the compute-intensive dirty work. Hell, we could even get rid of x86 and go with a more efficient and compact CPU like ARM, or something else entirely.

      Funny, as I am typing this I realize I am pretty much describing the IBM Cell processor and Intel's Larrabee. But I still doubt developers will be the ones left holding the ball for interfacing with this new hardware. Hardware vendors and developers will (or rather should) come together and standardize a new set of APIs and tools to deal with the new tech. If developers have a breakthrough that is better than ray tracing, they will definitely have the hardware to do it. Welcome to the future.

    • Re: (Score:3, Informative)

      by Dahamma ( 304068 )

      I'm sorry to be blunt, but this post is almost entirely inaccurate. Informative, jeez.

      1) There is no GT950 - Mac Minis use the Intel GMA 950.

      2) the GMA 950 has NOTHING to do with merging the CPU and GPU - it merges the motherboard chipset with the GPU.

      3) the GMA 950 is the same old special-purpose GPU concept as anything from NVidia or ATI (er, AMD) - just slower and using system memory instead of dedicated RAM.

      The article is talking about rendering graphics on a high-performance, parallelized general purpose

    • Imagine That (Score:4, Interesting)

      by nick_davison ( 217681 ) on Tuesday September 16, 2008 @02:16AM (#25022223)

      In other news...

      A man whose company makes its money writing game engines says, "APIs are going to go away. It's going to be very, very hard to build a game engine in the future when you can't rely on the APIs anymore. So everyone'll have to switch to the few companies that build game engines instead. Like mine. I recommend you start now and save yourself the headache."

      Hmm, I detect no bias whatsoever.

      Well, almost as little as when nVidia tells the world that they have seen the future and it's in GPGPUs replacing CPUs. Amazing how everyone has seen the future, it supports their business model and the rest of us can save ourselves a lot of pain if we jump on what pays them well.

  • Wrong summary (Score:5, Informative)

    by Anonymous Coward on Monday September 15, 2008 @05:38PM (#25018075)

    He talks about the impending fall of the fixed-function GPU.

  • I hope not! (Score:5, Interesting)

    by jhfry ( 829244 ) on Monday September 15, 2008 @05:42PM (#25018127)

    I just browsed the article, and it looks like what he's saying is that as GPUs become more like highly parallel CPUs, we will begin to see APIs go away in favor of writing compiled code for the GPU itself. For example, if I want to generate an explosion, I could write some native GPU code for the explosion and let the GPU execute this program directly... rather than being limited to the API's capabilities.

    So essentially, we will go back to game developers needing to make hardware-specific hacks for their games... some games having better support for some cards, etc.

    APIs are there for a reason... let's keep them and just make them better.
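
    (A minimal sketch of what that "native GPU code for an explosion" might look like, assuming a CUDA-style kernel; the struct, kernel name, and physics below are made up for illustration, not taken from any real engine:)

        // One thread per particle; any custom behaviour can be coded here,
        // with no fixed-function pipeline or graphics API call in the way.
        struct Particle {
            float x, y, z;    // position
            float vx, vy, vz; // velocity
        };

        __global__ void updateExplosion(Particle* p, int n, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            p[i].vz -= 9.8f * dt;   // simple gravity
            p[i].x  += p[i].vx * dt;
            p[i].y  += p[i].vy * dt;
            p[i].z  += p[i].vz * dt;
        }

        // Host side: launch one thread per particle, e.g.
        // updateExplosion<<<(n + 255) / 256, 256>>>(d_particles, n, dt);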

    • Re:I hope not! (Score:5, Interesting)

      by MarkvW ( 1037596 ) on Monday September 15, 2008 @05:58PM (#25018317)

      I read it differently than you did. He's projecting a world where everything is standardized and faster (less 'bus plumbing' GPUs). In such a world you won't need APIs because you'll have libraries that you can include in the compile process.

      APIs reduce code bulk at the cost of reduced code speed, don't they?

      • Re:I hope not! (Score:5, Informative)

        by Anonymous Coward on Monday September 15, 2008 @06:16PM (#25018535)
        In such a world you won't need APIs because you'll have libraries that you can include in the compile process.

        A library you include in the compile process is an implementation of an API.

        APIs reduce code bulk at the cost of reduced code speed, don't they?

        No.
      • Re:I hope not! (Score:5, Insightful)

        by GleeBot ( 1301227 ) on Monday September 15, 2008 @08:24PM (#25019881)

        I read it differently than you did. He's projecting a world where everything is standardized and faster (less 'bus plumbing' GPUs). In such a world you won't need APIs because you'll have libraries that you can include in the compile process.

        No, you won't need APIs because you'll have a standard instruction set. Current graphics APIs are oriented around things like pushing triangles and running shaders on them. Why bother with the complexity of the graphics APIs, if you can create a standardized instruction set for GPUs?

        We essentially already have that, but it's wrapped in the paradigm of shader programming. Stop treating the GPU as a graphics processor, treat it as just a really parallel coprocessor, and you get what Sweeney is talking about.

        The basic point is that GPUs aren't really for graphics anymore, and as we move away from a graphics fixation, we'll come to realize that they're just specialized computers. Using graphics APIs to program them would be as stupid as using OpenGL to write a word processor. Sure, with modern capabilities, it's actually possible, but it's a bad conceptual fit.

      • by jokkebk ( 175614 ) on Tuesday September 16, 2008 @03:30AM (#25022553) Homepage

        It's also interesting to note that the guy being interviewed is in the business of making 3D engines.

        Now ask the question: would 3D engine makers perhaps have something to gain if OpenGL and DirectX were scrapped, as the interview suggests?

        Most game dev shops wouldn't have the resources to build their own engines from scratch using a C++ compiler, forcing them to - wait for it - license a 3D engine like the one this guy is selling.

        So in summary, the article paints a picture of a future that would be very beneficial to the interviewee, so I'd take it with a grain of salt. Either we'd get a few de facto 3D engines replacing OpenGL and DirectX, or game developers would waste time re-implementing each new graphics advancement in their own engines.

    • Re: (Score:2, Insightful)

      Actually, he's saying the opposite:

      TS: I expect that in the next generation we'll write 100 percent of our rendering code in a real programming language - not DirectX, not OpenGL, but a language like C++ or CUDA. A real programming language unconstrained by weird API restrictions. Whether that runs on NVIDIA hardware, Intel hardware or ATI hardware is really an independent question. You could potentially run it on any hardware that's capable of running general-purpose code efficiently.
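
      (Taken literally, that could be as little as a compute kernel that shades pixels straight into a framebuffer. A rough, hypothetical CUDA sketch, with buffer allocation and presentation to the screen assumed to happen elsewhere:)

          __global__ void shadePixels(unsigned int* framebuffer, int width, int height)
          {
              int x = blockIdx.x * blockDim.x + threadIdx.x;
              int y = blockIdx.y * blockDim.y + threadIdx.y;
              if (x >= width || y >= height) return;

              // Any shading model at all can be coded here; there is no
              // triangle pipeline or API state machine involved.
              unsigned int r = (255 * x) / width;
              unsigned int g = (255 * y) / height;
              framebuffer[y * width + x] = (r << 16) | (g << 8) | 0x40;
          }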

      • by jhfry ( 829244 )

        There is no such thing as "hardware that's capable of running general-purpose code"... in fact there is no such thing as "general-purpose code". Code is nothing but a language that is either interpreted by a processor, or can be compiled such that it can be interpreted by a processor (or virtual machine). Either way, all code is highly dependent upon the capabilities of the hardware.

        It's easy to throw Intel and AMD out there as though they provide an example of how it might work... however that analogy only

    • No, what he was saying is that currently you have to write a different set of explosion code depending on whether your app is running on a DirectX system, an OpenGL system, or a console. If they all just had CPUs with lots of cores, you could write one general explosion, with a few minor tweaks needed for consoles with specific CPUs.

      More cores means a more general CPU instead of a GPU that does a few limited things fast.

      • by jhfry ( 829244 )

        Yeah, but the binary "program" that generates the explosion must then be compiled for each different processor... or processors must standardize on one instruction set. The first way makes it difficult for game developers; the second slows hardware innovation because it limits hardware designers (make a processor that doesn't conform and it fails, because no older software can use it).

        APIs solve both weaknesses with very little trade-off. As long as the API is upgraded frequently to support new featur

    • by jd ( 1658 )

      I read it more as saying that if you have a heterogeneous set of ultra high-performance specialized CPUs, you don't need to separate them out into CPU, GPU, or whatever. You have a cluster, where sub-tasks are migrated to the CPUs best designed for those sub-tasks, with no artificial designation of what goes where. There is no central control or central data store, and no transfer of thread control. Messages containing data would be passed, but you wouldn't be looking at a traditional API in which flow control p

      • by jhfry ( 829244 )

        This all assumes that all of the different players' PUs are binary-compatible... at which point there can be very little innovation in hardware except to make things smaller or add more of them. Think x86.

        • Re: (Score:3, Informative)

          by jd ( 1658 )

          At the deep RISC level, they probably wouldn't be. In fact, they'd certainly not be, or you'd simply have an SMP cluster with some emulation on top. If you're going for the migrating code, you'd need binary compatibility at the emulated mode (think Transmeta or IBM's DAISY project) but the underlying specialization would give you the improvement over a homogeneous cluster. If you're going for the totally heterogeneous design - basically the Cell approach but on a far, far larger scale - you need endian comp

    • by dgatwood ( 11270 )

      I just browsed the article and it looks like what he's saying is that as GPU's become more like highly parallel cpu's we will begin to see API's go away in favor of writing compiled code for the GPU itself.

      Translation: in the future, OpenGL will be supplanted by OpenCL [wikipedia.org] or similar. Okay, I might buy that in theory, but... not soon. I would think that the abstraction in things like OpenGL provides some benefits over writing raw C code in terms of making it easier to do graphical tasks.

  • by KalvinB ( 205500 ) on Monday September 15, 2008 @05:48PM (#25018219) Homepage

    Somehow I don't think there's going to be a lack of a standardized API like OpenGL or DirectX, even if it becomes possible to write code for the GPU as easily as for the CPU.

    The APIs at the most basic level allow Joe and Bob to build their own system, throw whatever graphics card they can find in it and have the same game run on both systems.

    As soon as you start coding for a specific GPU you're going to be treating PCs like consoles. I don't care to have to buy multiple graphics cards to play various games.

    APIs are for compatibility and general purpose use. The option of flexibility is great for industry use where you're running rendering farms all running identical hardware and custom software.

    • Could you get around this by "compiling" the game for individual PUs? Looking at most games, the .exe is relatively small, with lots and lots of level/zone/equipment/sound files. The latter should be independent of the PU, and only a small subset would need to be PU specific. I'm aware that "compiling" is the wrong word here, since they won't distribute source code without a breakthrough in DRM technology. But it could be as simple as buying a game, and on install getting a PU specific file either from d
      • Could you get around this by "compiling" the game for individual PUs?

        And having it fail to run on newly released architectures? GPU instruction sets change much more often than even the Mac platform's transitions from m68k to PowerPC to x86 to x86-64.

        But it could be as simple as buying a game, and on install getting a PU specific file either from disc, or for newer iterations from a download server.

        Until the publisher goes out of business or just decides to discontinue porting an older game's PU specific files to newer video cards in order to encourage users to buy^W license the sequel. And then you get into microISVs that don't have the money to buy one of each make and model of video card against which to compile and test a game's PU

    • by Pedrito ( 94783 ) on Monday September 15, 2008 @06:08PM (#25018431)

      As soon as you start coding for a specific GPU you're going to be treating PCs like consoles. I don't care to have to buy multiple graphics cards to play various games.

      I got the impression that they're expecting C++ compilers for all the GPUs eventually, so they'd simply have rendering libraries for each GPU. I also got the impression that they'd be waiting until most gamers had one of the compatible GPUs. Let's face it, most gamers usually don't buy the cheapest graphics card, and now the two major players, ATI and nVidia, have GPUs that are easily accessible. It won't be too long until you can't buy a video card from them that doesn't support CUDA or whatever the ATI equivalent is.

      I think it's a pretty safe gamble for video game developers, certainly for first-person 3D game developers.

    • by Hortensia Patel ( 101296 ) on Monday September 15, 2008 @06:10PM (#25018451)

      You're missing the point entirely. GPUs in the near future will essentially be general-purpose massively-multicore processors. (Not entirely so, but close enough for "GPU" to become a misnomer.)

      You don't need separate binaries for Intel versus AMD processors, and you don't have to write to an API to achieve portability between such different CPUs. Given sufficiently general hardware, there's no inherent reason why GPUs should be any different. There will still be a place for libraries, but their role will be more like C libraries than like GL/DX.
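
      (For what it's worth, CUDA already lets you compile exactly the same C++ routine for both sides today; a small hypothetical example, with made-up function names:)

          // The same C++ source is compiled for both the CPU and the GPU;
          // no graphics API sits in between.
          __host__ __device__ inline float attenuate(float distance, float falloff)
          {
              return 1.0f / (1.0f + falloff * distance * distance);
          }

          __global__ void attenuateAll(float* values, int n, float falloff)
          {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) values[i] = attenuate(values[i], falloff);
          }

          // attenuate() can also be called from ordinary CPU code.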

    • by Firehed ( 942385 )

      As soon as you start coding for a specific GPU you're going to be treating PCs like consoles. I don't care to have to buy multiple graphics cards to play various games.

      And yet somehow, there's a tremendous amount of GPU-specific optimization code present in every game out there. It may not be a complete codebase rewrite, but it's not as simple as $quality = ($gpu == 'fast') ? 'high' : 'low'; regardless of brand, generation, speed, etc.

  • by compumike ( 454538 ) on Monday September 15, 2008 @05:52PM (#25018255) Homepage

    For the last decade or so, it seems like the rendering side was abstracted away into either DirectX or OpenGL, but if the author is correct, those abstractions are no longer going to be a requirement.

    While I don't know a lot about the various other rendering techniques that the article mentions, it seems like there might be an opportunity emerging to develop those engines and license them to the game companies.

    I suspect that game companies won't want to get into the graphics rendering engine design field themselves, but there's real possibility for a whole new set of companies to emerge to compete in providing new frameworks for 3D graphics.

    --
    Hey code monkey... learn electronics! Powerful microcontroller kits for the digital generation. [nerdkits.com]

    • by mikael ( 484 ) on Monday September 15, 2008 @07:12PM (#25019153)

      Before the advent of 3D accelerators, game companies wrote their own low-level renderers. These did the vertex transformation, lighting, clipping and texture-mapped triangle and line rasterization (some companies even explored the use of ellipsoids). Wolfenstein 3D, Quake and Descent are examples.

      The low-level graphics rendering part is a very small part of the game engine - rasterizing a textured primitive with some clipping, Z-buffering and alpha blending. But getting this as fast as possible requires a good deal of profiling and analysis (Brian Hook did a software version of OpenGL for SGI).

      3D chip makers gradually took over this area by designing hardware that could do this task far faster than the CPU. First they took over the rasterizing part (3Dfx piggyback boards), then the vertex transformation, lighting and clipping through the use of high-performance parallel processing hardware (Nvidia GeForce 256). There are other optimisations such as deferred rendering, which reorders the rendering of primitives to save on framebuffer writes.

      Initially, all stages of the pipeline were fixed functionality, but this was replaced by programmable vertex transformation (vertex programs in assembler, then vertex shaders in a shading language) needed for matrix-blending in character animation. Fixed functionality for pixel processing was replaced by register combiners (for baked shadows), then by fragment programs and fragment shaders. Geometry shaders were also introduced to handle deformation effects.
      There are also feedback features where the output of rendering can be written to a texture and used as an input texture for the next frame.

      All the latest DirectX and OpenGL extensions relate to setting up the geometry/vertex/fragment shaders for this functionality.

      That is what Intel and software renderers have to compete against. They would need to implement a set of 3D CPU instructions that allow textured triangles, lines and points to be rendered with fully programmable shaders from one block of memory to another (specified by pixel format, width, height, etc.). They could use the memory cache for this purpose, but they would have to replace the FPU with hundreds of DSPs to achieve it. Otherwise they would have to provide direct access to the framebuffer from hundreds of threads or cores.
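
      (To make the hand-coded renderer part concrete, here is a toy version of the kind of inner loop those old engines wrote by hand: a flat-shaded triangle fill using half-space edge tests. Real renderers added texture mapping, Z-buffering and heavy per-platform optimization; vertices are assumed to be in a consistent winding order.)

          #include <math.h>

          struct Vec2 { float x, y; };

          static float edge(Vec2 a, Vec2 b, Vec2 p)
          {
              return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
          }

          void fillTriangle(unsigned int* fb, int width, int height,
                            Vec2 v0, Vec2 v1, Vec2 v2, unsigned int color)
          {
              // Bounding box of the triangle, clamped to the framebuffer.
              int minX = (int)fminf(fminf(v0.x, v1.x), v2.x); if (minX < 0) minX = 0;
              int minY = (int)fminf(fminf(v0.y, v1.y), v2.y); if (minY < 0) minY = 0;
              int maxX = (int)fmaxf(fmaxf(v0.x, v1.x), v2.x); if (maxX > width - 1)  maxX = width - 1;
              int maxY = (int)fmaxf(fmaxf(v0.y, v1.y), v2.y); if (maxY > height - 1) maxY = height - 1;

              for (int y = minY; y <= maxY; ++y) {
                  for (int x = minX; x <= maxX; ++x) {
                      Vec2 p = { x + 0.5f, y + 0.5f };
                      // A pixel is inside if it lies on the same side of all three edges.
                      if (edge(v0, v1, p) >= 0.0f &&
                          edge(v1, v2, p) >= 0.0f &&
                          edge(v2, v0, p) >= 0.0f)
                          fb[y * width + x] = color;
                  }
              }
          }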

  • you wish (Score:4, Insightful)

    by heroine ( 1220 ) on Monday September 15, 2008 @05:52PM (#25018263) Homepage

    GPUs are going to be driven by the same things that drive game consoles and set-top boxes. You can copy pure software, but you can't copy dedicated chips. You can copy video rendered by a CPU but not video rendered on a dedicated chip. Dedicated chips are going to stay cheaper than CPUs for many years. Just because you can decode HDTV in software doesn't mean there isn't still huge momentum behind set-top boxes and DTCP-enabled GPUs.

  • by heroine ( 1220 ) on Monday September 15, 2008 @06:03PM (#25018385) Homepage

    Standards-based programming isn't going anywhere. That's crazy. We need DirectX, OpenGL, JMF, and MHP if only to outsource large chunks of the programming to cheaper divisions. How great it would be if everyone could base their career on hand-optimizing ray tracing algorithms for SSE V, but the economy would never support it. These things have to be outsourced to select groups, firewalled behind a standard, with higher-paid programming done to the standard.

    • I think he was implying that it would be rolled up for most games in their engine. So developers would pay a licensing fee for the engine, which sort of defined an API for the developers to use.
      • by tepples ( 727027 )

        So developers would pay a licensing fee for the engine, which sort of defined an API for the developers to use.

        Right now, computer graphics programming students use OpenGL. What would these students be able to afford should OpenGL become irrelevant?

        • by Eskarel ( 565631 )
          Unfortunately, for a number of reasons, OpenGL is already relatively irrelevant.

          If it weren't, Linux gaming wouldn't be in the situation that it is.

          DirectX is better established, better run, and better performing. (Some of this is because Microsoft didn't update its OpenGL support for a long time, but a lot of it is the fact that the OpenGL folks, like most standards bodies, spend so much time deciding on anything whatsoever that they get left in the dust.)

          If what this guy is suggesting becomes the case, ra

          • by spitzak ( 4019 )

            I think you missed the words "graphics programming students" in the parent post.

            They use OpenGL, whether or not it is better or worse than DirectX.

  • by spoco2 ( 322835 ) on Monday September 15, 2008 @06:14PM (#25018507)

    I just don't get how they can be saying this at all.

    Ok... so from the article we have:

    Take a 1999 interview with Gamespy, for instance, in which he lays out the future timeline for the development of 3D game rendering that has turned out to be remarkably prescient in hindsight:

            2006-7: CPU's become so fast and powerful that 3D hardware will be only marginally beneficial for rendering, relative to the limits of the human visual system, therefore 3D chips will likely be deemed a waste of silicon (and more expensive bus plumbing), so the world will transition back to software-driven rendering. And, at this point, there will be a new renaissance in non-traditional architectures such as voxel rendering and REYES-style microfacets, enabled by the generality of CPU's driving the rendering process. If this is a case, then the 3D hardware revolution sparked by 3dfx in 1997 will prove to only be a 10-year hiatus from the natural evolution of CPU-driven rendering.

    Which they say is remarkably true.

    HUH?

    So, all these new GPUs really don't speed up your machine? So I can take my Nvidia 8800 OUT of my box, fall back to an onboard graphics chipset, and I'll be getting the same performance, will I?

    Yeah.

    Right.

    I don't see AT ALL how they're getting this right.

    Please someone enlighten me, as I'm not seeing it from where I'm sitting.

    • Re: (Score:2, Insightful)

      by Antitorgo ( 171155 )

      It says it right there in the article.

      Current-generation GPUs are to the point where you can basically run arbitrary code on the GPU (as shaders) as highly vectorizable code. So instead of the monolithic APIs such as OpenGL/DirectX, we'll see custom rendering engines that take advantage of the parallel cores running in GPUs and future CPUs (if Intel has its way).

      So in a sense, he was correct, if you take CPUs in his 1999 interview to mean current GPU processors. Which is becoming somewhat of a misnomer nowa

      • by GameMaster ( 148118 ) on Monday September 15, 2008 @06:36PM (#25018775)

        And I would argue that even that part of his statement is absurd. Most game developers don't even want to have to write their own game engines, much less do it from scratch. Even the ones that do write engines (id, Epic, etc.) will, for the most part, never give up their graphics APIs. APIs don't just give programmers access to specialized hardware; they also provide the developer with a raft of low-level tools that he/she has no reasonable justification for wanting to spend the time re-developing. What will happen is that APIs like Direct3D and OpenGL will change (as they have been all along) and simplify, as they won't have to handle all the quirky hardware anymore. The idea that engine writers will completely throw out APIs and will go back to writing everything from scratch in C/C++ is almost as absurd as suggesting that they'll go back to writing everything in Assembly language.

        • Re: (Score:3, Insightful)

          by Antitorgo ( 171155 )

          Actually, I think he says in the article that you'd see APIs like Direct3D and OpenGL staying around and leveraging the CUDA/FireStream APIs instead of relying on hardware-specific features (at least, that is what is already happening at the driver level).

          I think he envisions that new engines/APIs will come about (such as those that are voxel-based or built on ray shaders, etc.) once the CUDA/FireStream APIs become more mature and the hardware gets faster.

          Also, by breaking out of the hardware specific

        • "The idea that engine writers will completely throw out APIs and will go back to writing everything from scratch in C/C++ is almost as absurd as suggesting that they'll go back to writing everything in Assembly language."

          I agree that it's unlikely that APIs will be thrown out, but if they were, using assembly might actually be a better idea than C/C++. Were there really state-of-the-art games written in C in the pre-API era?

        • Re: (Score:3, Insightful)

          by jjohnson ( 62583 )

          Part of Sweeney's point that you're missing is that programmers are already writing raw shader code for DX10 and making more general use of CUDA. But more than that, DX and OpenGL are APIs to a particular model of 3D rendering that was, ten years ago, a good match to the 3D hardware of the day. Now that the GPU is a general-purpose processor, the triangle-pipeline model for renderers need no longer limit programmers, and they can implement different engine models (like voxels) because it'

    • by Ilgaz ( 86384 )

      No, don't you dare do such a thing based on Intel's claims.

      Forget the 8800; my Quad G5's NV 6600, which is a scandal on Apple's part, outperforms any 2008 integrated graphics.

      I know you are being a bit sarcastic; this is just in case anyone tries such a thing :)

      Looking at gaming forums really explains a lot about how integrated graphics (don't) perform. Even "integrated sound" is a huge joke, dragging down entire system performance.

      • Re: (Score:3, Insightful)

        But integrated graphics aren't about gaming. They're about driving a desktop display with minimal power, heat, and cost. Believe it or not, those three things are actually pluses for non-gamers.

        So no shit, a Geforce 6600 (which is what, three years old?) outperforms Intel GMA. No one at Intel cares.

        • by Ilgaz ( 86384 )

          Guess which vendor uses OpenGL functionality extensively, offloading whatever it can to the GPU whenever possible, even text drawing? Apple's OS X, since early 10.2.

          I have a serious question about the real power benefit of doing everything on the CPU rather than on a low-power Nvidia/ATI part optimised for portable devices. If drawing a window pushes your machine to a 40% CPU peak, you aren't saving power, you are wasting it, since a GPU could do the same thing at a fraction of the power.

  • by Brynath ( 522699 ) <Brynath@gmail.com> on Monday September 15, 2008 @06:21PM (#25018581)

    If we are going back to the "old" days...

    Why can't we skip all this OS nonsense, and just boot the game directly?

    After all, that would make sure you get the MOST out of your computer.

  • Er, no (Score:3, Interesting)

    by Renraku ( 518261 ) on Monday September 15, 2008 @06:36PM (#25018771) Homepage

    I disagree.

    Direct hardware programming has always been the best in terms of performance. However, it is the worst in terms of compatibility. If you're programming consoles, this is just fine. If you're programming for PCs, not so much.

    It will never go back to programming for specific pieces of graphical hardware. I'd say that each vendor MIGHT make a major chipset, and that those chipsets would be coded for, and everything else gets API'd, but even this is unlikely. If a company had to have two or three sets of programmers for their graphics, each team for a different major chipset, we'd see more expensive games or prettier games with crappier gameplay.

    Even the OpenGL/DirectX split takes a heavy toll on programming resources for game developers.

    • Re: (Score:2, Insightful)

      by Narishma ( 822073 )
      You missed the point entirely. You don't write different code depending on whether the CPU is from AMD or Intel. In the same way, you won't be writing different code depending on whether the processor (CPU or GPU) is from AMD, Intel or nVidia. You will be writing in a high-level language like C++ and it will be compiled to run on the CPU/GPU, not unlike CUDA. So you won't need APIs like OpenGL or DirectX anymore.
    • It will never go back to programming for specific pieces of graphical hardware.

      I agree with you. Contrast with the article, which seems to be trying to make approximately the same point by overstatement: "Graphics APIs only make sense in the case where you have some very limited, fixed-function hardware underneath the covers."

      But of course graphics APIs continue to make sense - in the same way as operating systems continue to make sense - regardless of how rich or lean, slow or fast, or standardized o

  • Anyone bought a hardware encryption processor?
    Anyone bought a Dialogic voice board?
    No, probably not. You see, eventually all hardware collapses into the CPU(s).

    But the problem is that games and graphics are the Lamborghinis of computing. Performance uber alles. So I'm not convinced that anything short of the biggest quad-core machines has sufficient general-purpose software performance to do away with these big honking graphics cards. Maybe smaller machines can do it, but it's just as likely you bu

  • Assuming this does in fact happen in the future... when will it happen?

    First let's be clear that by "it" I mean the complete disappearance of the GPU market.

    To answer, first ask: who will be among the last ones to be willing to part with their GPU?

    I think the answer to this question is: gamers.

    Which leads me to the condition of the 'when' (not the date of course).

    And that condition is: when the integrated GPU/CPU is capable of performing under the JND (just noticeable difference) threshold of what the se

  • What I got from the article wasn't so much the 'fall of the GPU' as a move away from the fixed APIs, e.g. DirectX and OpenGL.
    In which case, would this be a huge bonus for Linux?

    A lot of people say they use Linux for work but have Windows installed on the same machine so they can run games. If the games programmers are going to be writing C/C++ code that runs on the CPU or GPU, then in theory this cuts out the proprietary graphics APIs and makes the games much more portable. Could this mean that more of

  • by Sponge Bath ( 413667 ) on Monday September 15, 2008 @07:17PM (#25019213)

    ...a C compiler, a blank text editor, and a stifling array of possibilities

    Assuming he means Emacs, then this is the way God intended it to be.

  • I want to hear what John Carmack thinks about this.
    Does he agree/disagree and why?

    I always like seeing two giants in the industry debate on high level topics. It gives some insight into trends... and I just plain dig gaming anyway...
  • Special vector unit? (Score:3, Interesting)

    by marcovje ( 205102 ) on Tuesday September 16, 2008 @02:45AM (#25022361)

    Call me stupid, but from what I saw of Larrabee, it centers around a new specialized, very wide vector unit to do most of the work. So much for any plain old C compiler.
