
The Future According To nVidia

NerdMaster writes "Last week nVidia held their Spring 2008 Editor's Day, where they presented their forthcoming series of graphics processing units. While the folks at Hardware Secrets couldn't reveal the details of the new chips, they posted some ideas of what nVidia sees as the future of computing: basically more GPGPU usage, with the system CPU losing its importance, and the co-existence of ray tracing and rasterization in future video cards and games. In other words, the 'can of whoop-ass' nVidia has promised to open on Intel."
  • by pembo13 ( 770295 ) on Monday May 26, 2008 @04:48AM (#23542553) Homepage
    That's my main influence when I purchase video cards.
    • by Rosco P. Coltrane ( 209368 ) on Monday May 26, 2008 @06:18AM (#23542997)
      I fail to see how this is redundant. I too choose video cards based on how well they are supported under Linux. Or rather, I choose the ones with the least shitty support. Any Linux user who's ever tried to use any OpenGL app more complex than glxgears knows the pain, so I reckon Linux (or any OS other than Windows I suppose) support isn't a trivial or fanboy issue.

      So no, the post isn't redundant, because this issue isn't yet solved (not to mention, how can a first post be redundant?).
      • Re: (Score:3, Insightful)

        by morcego ( 260031 )

        so I reckon Linux (or any OS other than Windows I suppose) support isn't a trivial


        Considering how many problems I have always seen, I would say that even on Windows it is anything but trivial.

        Video drivers suck. On whatever platform you choose.
      • by Khyber ( 864651 )
        First posts are redundant because moderators are too fucking STUPID and have little understanding of the language they supposedly speak.
      • I fail to see how this is redundant. I too choose video cards based on how well they are supported under Linux. Or rather, I choose the ones with the least shitty support. Any Linux user who's ever tried to use any OpenGL app more complex than glxgears knows the pain, so I reckon Linux (or any OS other than Windows I suppose) support isn't a trivial or fanboy issue. So no, the post isn't redundant, because this issue isn't yet solved (not to mention, how can a first post be redundant?).

        Hmmm, I do too. Linux support is my sole criterion for buying any hardware.

    • by darthflo ( 1095225 ) on Monday May 26, 2008 @06:19AM (#23543007)
      nVidia will probably continue their controversial blob model (i.e. you get a binary object plus the source to a kernel module that, with the help of said object, works as a driver). Purists rage against it because it's against freedom and so on; pragmatists tend to like the full 3D acceleration that comes with it.
      Intel is going the Open Source road, trying to be as open as possible. Unfortunately, from a performance PoV their hardware sucks. Their products are intended as consumer-level, chipset-integrated solutions and, considering that, work nicely. Don't try any 3D games, though.
      ATi opened a lot of specs, so community-developed and completely open drivers are on the horizon. Unfortunately the horizon is quite far away and the movement towards it is about as fast as a kid on a tricycle. The situation is likely to improve, though. Performance-wise, ATi may be a good choice if you'd like to play the occasional game, but they don't really compare to nVidia (which is unlikely to change soon).
      In the end, I'm going to stick with nVidia for the near future, using Intel wherever low energy consumption is strongly desired (i.e. notebooks and similar). ATi just ain't my cup of tea -- I wouldn't put a red card in a Windows box either -- but my preference for nVintel is just that: a preference. Go with whatever suits you best.
      • Comment removed based on user account deletion
        • Nice, didn't know that. I'm planning to wait till about Q3/2009 for a new performance rig, but if ATi manages to catch up to nVidia's performance 'till then, I may just opt for the really open option. Thanks.
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Purists rage against it because it's against freedom and so on; pragmatists tend to like the full 3D acceleration that comes with it.

        Bullshit.
        Closed drivers suck for pragmatic reasons.
        Just because YOU haven't paid the price yet doesn't mean it isn't true.

        I bought two top-end nvidia cards (spent $350+ each on them) only to find out that because my monitors don't send EDID information their binary-blob drivers wouldn't work. The problem was that my monitors required dual-link DVI and even though these top-of-the-line cards had dual-link transceivers built into the chip (i.e. every single card of that generation had dual-link transceivers

        • You're obviously right about the difficulty of fixing problems, which is where the openness of Open Source really comes into play. It certainly sucks to know that only a single bit needed to be flipped to make something work that doesn't; and not getting any kind of support from the manufacturer sucks even worse.
          Though, in defense of nVidia, your problem does seem rather unusual. Until very recently it was my understanding that most any screen made in the past decade ought to provide an EDID -- the standard's fifteen b
        • So for $600 or $700 plus expenses nVidia could have this bug fixed for their Linux drivers. They must be shortsighted in not taking you up on your offer to sign an NDA and go out and fix them up real quick.
      • This is precisely how I feel. ATI can't write a driver to save their life (I hear they're getting better, but most drivers get better over their lifetime; I want to know when it's good and you don't need special workarounds to make things like compiz not explode) and the OSS drivers are still not where they need to be, and won't be for a long time. If there are good OSS drivers for ATI before nVidia's drivers go OSS, I'll go ATI. Until then, I'm in nVidia land. Only my servers need nothing better than intel
      • I love how the Linux world has become a place in which ignoring the long term results of a decision is called "being pragmatic".
    • Not only that. The graphics card is one of the few components for which we don't have any clue about its internals (yes, we've heard about its shaders, the pipelines, etc, but only in PowerPoints released by the company). Memory? If I'm interested I can find info about DDR voltage, timing diagrams and everything else. Processor? Intel can send you a 400-page printed copy of their manual. BIOS? Hard disk? Same. Sure, if you use their drivers everything will work smoothly, but those drivers are given as a plu
    • by Hal_Porter ( 817932 ) on Monday May 26, 2008 @10:02AM (#23544651)

      That's my main influence when I purchase video cards.
      Hi!

      I'm the CEO of NVidia and I spend all day reading slashdot. Despite that I hadn't noticed that Linux was popular until I read your post.

      I'll tell the driver developers to start fixing the drivers now.

      Thanks for the heads up

      Jen-Hsun Huang
      CEO, NVidia Inc
      • Given that nVidia already makes Linux drivers, it seems to me that the only way they could spend less money on them short of not supporting Linux at all would be to open specs and source, thus getting the Linux community to write their drivers for them.

        And those drivers would actually be better. Better Linux support for less money.

        So what's the holdup?
        • Because they consider the register spec for the card a trade secret. They don't want ATI/AMD to get hold of it. Actually it's worse than that. If they published a spec then Chinese companies would probably clone their cards.
          • Re: (Score:3, Insightful)

            I doubt very much that it's either of these. Remember, we only need specs for an interface, it doesn't have to be schematics for the whole card.

            No, the real reason very likely has to do with the geForce/Quadro scam. Specifically, the fact that you can take a geForce (typically, what, $200?) and soft-mod it into a Quadro (at least $500, and most are $1k and up).
            • I doubt very much that it's either of these. Remember, we only need specs for an interface, it doesn't have to be schematics for the whole card.

              Well if you had the register specs you could get a bunch of Chinese VHDL hackers to make a compatible card. Actually I suspect that most hardware has an 'obvious' implementation from the register spec, and that obvious implementation is rather good. An example would be ARM processors.

              OK, x86 implementations these days are seriously non-obvious. But I'd bet that graphics cards are more like an ARM than an x86. And that is why they don't want to release the spec.

              No, the real reason very likely has to do with the geForce/Quadro scam. Specifically, the fact that you can take a geForce (typically, what, $200?) and soft-mod it into a Quadro (at least $500, and most are $1k and up).

              Well that's another reason. They're also probably worried that someone would sue them for patent infringement if the released specs allowed ATI to find something.

              • Well if you had the register specs you could get a bunch of Chinese VHDL hackers to make a compatible card.

                Maybe I'm missing just how crucial "register specs" are, but we already have something like that -- we already have an API spec. Two, at least. It would now take Chinese VHDL and software hackers to do it, but it could be done.

                They're also probably worried that someone would sue them for patent infringement if the released specs allowed ATI to find something.

                Possibly, but they could check that themselves -- after all, ATI has released specs.

                • Maybe I'm missing just how crucial "register specs" are, but we already have something like that -- we already have an API spec. Two, at least. It would now take Chinese VHDL and software hackers to do it, but it could be done.

                  By API you mean DirectX, right? DirectX can be implemented in a lot of ways, some fast, some slow. The register-level spec would be something like

                  "Register at base address+0x1010 is a command register. Write these commands to draw these polygons"

                  Someone at NVidia said "The register spec is very neat. Essentially we do object orientation in hardware".

                  Which is intriguing. I can imagine that the registers would be a linked list of interfaces. Each one would have a GUID. So you'd have an IFrameBuffer interface
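To make the "object orientation in hardware" idea above concrete, here is a minimal C++ sketch of what a register-level interface of that shape might look like. Every struct, field, offset and the GUID scheme below is invented purely for illustration -- it is not nVidia's (or anyone's) actual hardware layout.

```cpp
// Hypothetical sketch of what a register-level interface spec might describe.
// Offsets, names and the GUID scheme are invented for illustration only; this
// is not nVidia's (or anyone's) real hardware layout.
#include <cstdint>
#include <cstdio>

// One "interface" block, discoverable by walking a linked list in register space.
struct GpuInterfaceHeader {
    uint8_t  guid[16];     // identifies the interface (e.g. "framebuffer", "2D blit")
    uint32_t next_offset;  // offset of the next interface header; 0 terminates the list
    uint32_t version;
};

// The kind of thing "register at base + 0x1010 is a command register" means:
struct GpuCommandRegs {
    volatile uint32_t command;    // write an opcode here, e.g. DRAW_TRIANGLES
    volatile uint32_t arg_addr;   // bus address of a vertex/command buffer
    volatile uint32_t arg_count;  // number of primitives to draw
    volatile uint32_t status;     // poll for completion / error bits
};

int main() {
    // A driver would overlay structs like these onto a mapped register
    // aperture; here we only show the shape of the layout.
    std::printf("interface header: %zu bytes, command regs: %zu bytes\n",
                sizeof(GpuInterfaceHeader), sizeof(GpuCommandRegs));
}
```

A register-level spec is essentially the document that pins down those offsets and their semantics, which is why it reveals so much more about the hardware than an API spec like DirectX or OpenGL does.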

                  • By API you mean DirectX, right?

                    Or OpenGL, yes.

                    "Register at base address+0x1010 is a command register. Write these commands to draw these polygons"

                    Still not sure I see how that's a trade secret. Not disputing it, just over my head at this point.

                    If you know the instruction set of a RISC chip, an in-order implementation is rather obvious.

                    Wouldn't the same hold, though? There are fast implementations, and there are slow ones.

                    nVidia is a hardware company. I kind of wish they stuck to hardware.

        • So what's the holdup?

          This small thing called trade secrets. nVidia's drivers (and I'm assuming hardware specs) contain trade secrets that they'd rather not make freely available to their competitors. The fact that no one else can create a product that can compete with them still tells me that this trend of keeping their trade secrets locked up in a proprietary format isn't going to change anytime soon. To be honest, I'm happy that nVidia even puts out Linux drivers that work with minimal hassle. Sure they may sometimes con

          • The fact that no one else can create a product that can compete with them
            Have I missed something? What happened to ATI?
            • ATI is a joke compared to nVidia's offerings.
              • Re: (Score:3, Informative)

                Down, fanboy.

                The last time I looked at the graphics scene, they were actually neck and neck. There were reviews for new cards from each, and depending on the publisher, they might go one way or another.

                At no point do I remember ATI no longer being relevant.

                So, do you have anything to back that statement up, or are you just going to keep parroting the nVidia party line?
    • For me, full-screen TV output. There's none in the 8xxx series and it's broken in the drivers for older cards. See this example [nvidia.com].
    • Not nVidia. (Score:4, Insightful)

      by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Monday May 26, 2008 @03:00PM (#23547799) Journal
      Possibly Intel, possibly ATI.

      But nVidia is the last to publish specs, or any sort of source code. ATI and Intel already do one of the two for pretty much all of their cards.

      So, in the long run, nVidia loses. It's possible they'll change in the future, but when you can actually convert a geForce to a Quadro with a soft mod, I very much doubt it'll be anytime soon.
  • Yawn (Score:5, Insightful)

    by gd23ka ( 324741 ) on Monday May 26, 2008 @04:49AM (#23542561) Homepage
    The future according to Sun or IBM.. faster CPUs. The future according to Nvidia... more GPUs.. the future according to Seagate.. exabytes and petabytes, the future according to Minute Maid.. , the future according to Blue Bonnet.. lower cholesterol, the future according to AT&T.. "more bars in more places", the future according to ...

    Another paid-for article. Yawn.
    • It doesn't have to be paid for; it's just a report on what nVidia is saying, not a claim that that's definitely what will happen. After reading the summary I thought exactly what you are thinking, though. The main CPU may lose a little 'importance' when it comes to games and physics simulations, but it's not going away anytime soon..
      • I don't think that games are main drivers of computing. I think that business apps are. nVidia might think that they are going to rule computing, but they won't. They'll be a dominant player, but they should remember S3 graphics and Vesa. You're only as good as your last product, and given that they aren't particularly open they might lose their market at any time.
        • Re: (Score:3, Informative)

          by Scoth ( 879800 )
          Business apps might be the drivers of the most sales, but I tend to think games are the drivers of "progress". There are very few business apps that need more than NT4 on a decent-sized screen with a fast enough processor to run Office. I think even Office 2007's minimum requirements say something about a 500 MHz processor. Heck, a large number of companies could probably get away with Windows 3.1, Word 1.1A, Eudora Light for e-mail, and maybe some sort of spreadsheet/accounting software. You really don't need a dual core with 2GB of RAM and Vista Ultimate to send e-mail, write letters, track expenses, and surf the web a bit.
          • You really don't need a dual core with 2GB of RAM and Vista Ultimate to send e-mail, write letters, track expenses, and surf the web a bit.

            You're right about the average business user's need for desktop horsepower, but you overlooked the main consumer of business MIPS today, and that's Symantec AntiVirus. We used to depend on Windows version updates to slow everything down so we could upgrade our hardware, but now we just have to upgrade Symantec. I wonder if any of that work could be off-loaded to the GPU

            • by Scoth ( 879800 )
              Hmm, now that you mention it, I did have a friend who asked about a computer upgrade specifically because "Norton ran slow". I think you're onto something!
            • by hurfy ( 735314 )
              So true. We only need a minimal computer for the office tasks... however it takes a 3 GHz HT computer to do them at the same time as AV/Firewall :(

              The terminal emulator runs slow on anything less :(

              Heck, with physics processors and GPUs, I need an AV card and I could go back to a Pentium 3...
          • Well, general-purpose cores aren't the best for graphics processing; you want massively parallel floating-point calculation and very fast buffers for graphics stuff, so specialised graphics hardware will likely always give better bang for your buck.
      • Comment removed based on user account deletion
        • by vlm ( 69642 )

          Slightly OT, but does anyone know where I can find a micro that has at least one USB port and preferably runs Linux? I have to fit the CPU into a 4in diameter rocket and so far most of the ones I'm finding require daughter boards that won't fit.

          http://www.gumstix.com/ [gumstix.com]

          or

          http://gumstix.com/waysmalls.html [gumstix.com]

          As they say "linux computers that fit in the palm of your hand"

          I believe the verdex boards are 2cm by 8cm.

          Price is about the same as desktop gear, figure you'll drop about $250 on a basic working system.

        • I agree overall, I used to think that upgrading my GPU would be the most important thing back when I had my 1GHz Athlon, though eventually when that machine fried, I found out that a faster CPU enabled me to get the most out of my GPU. It's back to the stage again where any 2GHz dual core CPU should be fast enough for anyone with the current generation of games and apps, but I'm sure they'll find some more interesting uses for CPU power in the next few years. The way things are going, maybe everything will
    • Re:Yawn (Score:5, Funny)

      by Anonymous Coward on Monday May 26, 2008 @05:06AM (#23542649)
      The future according to Goatse.. P1st fR0st!
      The future according to anonymous coward.. more trolling, offtopic, flamebait-ness, with the odd insightful or funny.
      The future according to Ballmer.. inflated Vista sales (it's his job, damnit!).
      The future according to Microsoft shill 59329.. "I hate Microsoft as much as the next guy, but Vista really is t3h w1n! Go and buy it now!"
      The future according to Stallman.. Hurd.

      BTW The promo video for HURD is going to feature Stallman as a Gangsta rapper, and features the phrase: "HURD up to ma Niggaz."
    • Re:Yawn (Score:5, Funny)

      by Anonymous Coward on Monday May 26, 2008 @05:24AM (#23542715)
      The future according to the past...the present.
    • Re:Yawn (Score:4, Insightful)

      by darthflo ( 1095225 ) on Monday May 26, 2008 @06:24AM (#23543035)
      Three things:
      - None of the futures you mentioned contradicts any of the others. Quite obviously Blue Bonnet won't predict the future of the storage market and Minute Maid won't be the first company to know about new processes in CPU manufacturing.
      - What's the future according to Minute Maid anyways? Really, I'm intrigued!
      - Did you notice the interesting parallel between the future according to AT&T and what the American government seems to be steering towards? More bars in more places (and as many people behind them as possible(?))? What a strange coincidence...
      • Re:Yawn (Score:5, Funny)

        by ceoyoyo ( 59147 ) on Monday May 26, 2008 @09:06AM (#23544107)
        Now what's wrong with more bars with more bartenders behind them so you get your drinks faster? Really, all this criticism of the American government when they're really trying to do something quite noble. :P
      • by gd23ka ( 324741 )
        Less sugar, a lot less vitamins and more corn syrup. High amounts of sugar are a bad thing, no doubt about that
        but corn syrup really takes the cake, pun intended, in getting people fat.

    • by mikael ( 484 )
      The future according to Sun or IBM - faster CPUs using more cores per processor.

      The future according to Nvidia - faster GPUs using more stream processors.

    • Where does IBM/Sony's Cell processor fall in the CPU/GPU battle? IBM most certainly plans to use it in PCs as a CPU, but wasn't most of the initial development focused on making it a better GPUing CPU?
      • The Cell would suck in a PC. There's an underpowered, in-order PowerPC core and a bunch of SPEs. But SPEs are for signal processing, not general computation. They only have 256K of memory, smaller than the cache inside a desktop CPU. They don't have access to main memory or an MMU. Even if you added an MMU and paged from main mem to the SPEs it still wouldn't help. The bus would be saturated by page misses and the SPEs would spend all their time waiting.

        Even in a games console it's probably hard to keep all
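To make the 256K local-store constraint concrete, here is a rough C++ sketch of the explicit tiling such a design forces on the programmer. dma_get/dma_put are hypothetical stand-ins (plain memcpy here), not the real Cell SDK calls -- the point is the structure, not the API.

```cpp
// Sketch of the explicit tiling a tiny local store forces on the programmer.
// dma_get/dma_put are hypothetical stand-ins (plain memcpy), not the real Cell
// SDK calls; only the structure of the code is the point.
#include <cstddef>
#include <cstring>
#include <vector>

constexpr std::size_t LOCAL_STORE = 256 * 1024;           // whole local store
constexpr std::size_t TILE = (16 * 1024) / sizeof(float); // leave room for code/stack
static_assert(TILE * sizeof(float) <= LOCAL_STORE, "tile must fit in local store");

static float local_tile[TILE];  // lives in the "local store"

void dma_get(float* dst, const float* src, std::size_t n) { std::memcpy(dst, src, n * sizeof(float)); }
void dma_put(float* dst, const float* src, std::size_t n) { std::memcpy(dst, src, n * sizeof(float)); }

// Process an array that cannot possibly fit next to the code in 256K.
void scale_all(float* data, std::size_t count, float k) {
    for (std::size_t i = 0; i < count; i += TILE) {
        std::size_t n = (count - i < TILE) ? count - i : TILE;
        dma_get(local_tile, data + i, n);     // pull a tile in
        for (std::size_t j = 0; j < n; ++j)   // compute on the tile
            local_tile[j] *= k;
        dma_put(data + i, local_tile, n);     // push it back out
    }
}

int main() {
    std::vector<float> big(1 << 20, 1.0f);    // 4 MB: far larger than the local store
    scale_all(big.data(), big.size(), 2.0f);
}
```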
    • Conan: "... It's time, once again, to look into the future."

      Guest: "The future, Conan?"

      Conan: "That's right, Let's look to the future, all the way to the year 2000!"

      and then ... La Bamba's high falsetto ... "In the Year 2000"

      "In the Year 2000"
    • In a way nVidia's message is the same as that of the Cell chip. There will be more and more use of parallelism, with the CPU (or a particular CPU on symmetric multi-core systems) acting as a kind of foreman for a troop of processors working in parallel.

      Not all of the uses for the gobs of cheap parallel processing power are apparent yet. But people will find cool things to use it for, just as they found cool things to use home computers for. In a way, we are now going through a home supercomputing revolution.
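A minimal C++ sketch of that "foreman" pattern, with ordinary standard-library threads standing in for whatever parallel hardware (GPU, SPEs, extra cores) ends up doing the work:

```cpp
// Minimal sketch of the "CPU as foreman" pattern: one thread carves the work
// into slices, hands them to a troop of workers, then gathers the results.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1'000'000, 1.0);
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> troop;

    std::size_t chunk = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
        troop.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& t : troop) t.join();  // the foreman waits for the troop

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::printf("sum = %f\n", total);
}
```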
  • by Rog7 ( 182880 ) on Monday May 26, 2008 @04:50AM (#23542565)
    I'm all for it.

    The more competition the better.

    Anyone who worries too much about the cost a good GPU adds to the price of a PC doesn't remember what it was like when Intel was the only serious player in the CPU market.

    This kind of future, to me, spells higher bang for the buck.
  • "the 'can of whoop-ass' nVidia has promised to open on Intel."

    Yep, I'm sure the Intel Devs have all taken a sabbatical.
  • by allcar ( 1111567 ) on Monday May 26, 2008 @04:50AM (#23542575)
    The leading manufacturer of GPUs wants GPUs to become ever more important.
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      I am a bit skeptical. If AMD's experimentation with combining the CPU and GPU bears fruit, it might actually mean the end for the traditional GPU. nVidia doesn't have a CPU that can compete with AMD and Intel, so I think nVidia is the one in trouble here. But I suppose nVidia has to keep up appearances to keep its stock from plummeting.
      • If AMD's experimentation with combining the CPU and GPU bears fruit, it might actually mean the end for the traditional GPU.
        Until the wheel of reinvention turns once again and people start realising that they can find performance gains by splitting the graphics processing load out onto special hardware.
        • Re: (Score:2, Interesting)

          by pdusen ( 1146399 )
          Right. The special hardware being separate graphics-optimized cores, in this case.
        • by chthon ( 580889 )

          This answer is very interesting, because I seem to remember that MMX was introduced because Philips planned to create specialty co-processor boards (around '96/'97) to off-load multimedia tasks, so that sound processing would take fewer CPU cycles and to introduce video processing. Intel did not like this idea and added MMX just to cut off such things.

      • I am a bit skeptical. If AMD's experimentation with combining the CPU and GPU bears fruit, it might actually mean the end for the traditional GPU. nVidia doesn't have a CPU that can compete with AMD and Intel, so I think nVidia is the one in trouble here. But I suppose nVidia has to keep up appearances to keep its stock from plummeting.

        I would concur with that, but add that nVidia is also missing an OS and applications. It's an extension of not having a CPU binary-compatible with Intel's and AMD's, and nVidia is totally absent here.

        My guess is that Intel has the weakest video and is perhaps talking to nVidia, and nVidia is trying to pump up its value, while AMD is trying to figure out how best to integrate the CPU and GPU. That is, for nVidia this is about politics and price.

        Some problems today's GPUs have are: they run too hot, take too much power

  • Sounds like BS (Score:1, Flamebait)

    by gweihir ( 88907 )
    And nVidia has been spouting a lot of this lately. Is the company in trouble, with the top executives now trying to avoid that impression by constantly talking about how bright the future is for their company? Quite possible, I would say.

    As to the claim that the GPU will replace the CPU: Not likely. This is just the co-processor idea in disguise. Eventually this idea will fade again, except for some very specific tasks. A lot of things cannot be done efficiently on a CPU. I have to say I find the idea of
    • I agree. In TFA they refer to the GPU taking over more and more specifically in gaming applications, but even then, the more you free up the main CPU to do other things like AI, the better games will hopefully get. Recall when Weitek made a much faster (albeit single-precision) math coprocessor than Intel's own 80387; it would be like Weitek saying "We foresee that you'll need that 386 far less in the future."

      If nVidia or any other GPU manufacturer tries to get too generalized they run the risk o
    • But nVidia didn't claim that the GPU would replace the CPU. They even went to lengths to deny it in the article. The version that the poster linked to was void of details; a better description is available at the Inquirer [theinquirer.net], who weren't under NDA.

      The highlight of the press conference seems to be the censored part revealing that nVidia will be fab'ing ARM-11s in the near future, in direct competition with the Intel Atom. Looks like they're not planning to go down without a fight...
  • Competing (Score:4, Insightful)

    by Yetihehe ( 971185 ) on Monday May 26, 2008 @04:53AM (#23542593)
    FTFA:

    basically more GPGPU usage (i.e. the use of the graphics chip to process regular programs) and the co-existence of "competing" technologies like ray tracing and rasterization
    Hmm, they aren't really competing technologies. Ray tracing CAN be an extension of rasterization; some RT algorithms even use some form of rasterization for visibility testing... But if nVidia doesn't embrace RT, they risk dropping to second position (no, not extinction, as you can do RT on nVidia cards today, but it would be better with a native API and better hardware support).
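As a rough illustration of how the two techniques can coexist (a toy C++ sketch, not how any real driver or engine does it): a rasterization-style pass supplies the primary hit points, and a ray tracer then shoots secondary rays -- here, shadow rays -- from those hits. The "rasterized" pass is faked analytically below; in a real renderer it would come from the hardware rasterizer's depth/G-buffer.

```cpp
// Toy sketch of rasterization and ray tracing coexisting: a "rasterized"
// primary-visibility pass supplies hit points, then shadow rays are traced
// from those hits. The primary pass is faked analytically here; in a real
// renderer it would come from the hardware rasterizer's depth/G-buffer.
#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };
static Vec    sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static const Vec    kSphereC = {0.0, 1.0, 0.0};  // sphere hovering over the ground plane y = 0
static const double kSphereR = 0.8;
static const Vec    kLight   = {4.0, 6.0, 2.0};

// Ray-traced shadow query: does the segment from p towards the light hit the sphere?
static bool shadowed(Vec p) {
    Vec d = sub(kLight, p);                    // unnormalised ray direction
    Vec oc = sub(p, kSphereC);
    double a = dot(d, d), b = 2.0 * dot(oc, d), c = dot(oc, oc) - kSphereR * kSphereR;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return t > 1e-4 && t < 1.0;                // hit strictly between point and light
}

int main() {
    const int W = 48, H = 24;
    for (int j = 0; j < H; ++j) {
        for (int i = 0; i < W; ++i) {
            // "Rasterized" primary hit: a point on the ground plane for this pixel.
            Vec p = {(i - W / 2) * 0.25, 0.0, (j - H / 2) * 0.25};
            std::putchar(shadowed(p) ? '#' : '.');  // shade with the ray-traced shadow term
        }
        std::putchar('\n');
    }
}
```

The point is just the division of labour: rasterization answers "what does the camera see" cheaply, while rays answer the queries rasterization is bad at (shadows, reflections, refraction).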
    • by Sycraft-fu ( 314770 ) on Monday May 26, 2008 @05:08AM (#23542661)
      nVidia doesn't do the APIs for their cards. They have no proprietary API; their native APIs are DirectX and OpenGL. In fact, the advances in those APIs, more specifically DirectX, often determine the features they work on. The graphics card companies have a dialogue with MS on these matters.

      This could be an area that OpenGL takes the lead in, as DirectX is still rasterization-based for now. However it seems that while DirectX leads the hardware (the new DX software usually comes out about the time the hardware companies have hardware to run it), OpenGL trails it rather badly. 3.0 was supposed to be out by now, but they are dragging their feet badly and have no date for when it'll be final.

      I imagine that if MS wants raytracing in DirectX, nVidia will support it. For the most part, if MS makes it part of the DirectX spec, hardware companies work to support that in hardware since DirectX is the major force in games. Until then I doubt they'll go out of their way. No reason to add a bunch of hardware to do something if the major APIs don't support it. Very few developers are going to implement something that requires special coding to do, especially if it works on only one brand of card.

      I remember back when Matrox added bump mapping to their cards. There were very few (like two) titles that used it, because it wasn't a standard thing. It didn't start getting used until later, when all cards supported it as a consequence of having shaders that could do it and it was part of the APIs.
      • by ardor ( 673957 ) on Monday May 26, 2008 @06:06AM (#23542933)

        nVidia doesn't do the APIs for their cards.
        Wrong. [nvidia.com]
        GPGPU absolutely demands specialized APIs - forget D3D and OGL for it. These two don't even guarantee any floating point precision, which is no big deal for games, but deadly for GPGPU tasks.
        • by Gernot ( 15089 ) *
          With OpenGL 2.0 shaders and ARB_texture_float, you have full floating-point support throughout the pipeline.

          And GPGPU can be done with OpenGL 2.0 - approx. 10 months ago, we presented a Marching Cubes implementation in OpenGL 2.0 that even outperforms its CUDA competitor ... and the algorithms can just as well be ported to Direct3D 9 (_nine_...).

          http://www.mpii.de/~gziegler

          So don't throw out the GPGPU baby with the floating point bathwater ;)
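To show the pass structure being described, here is a CPU-side model of the OpenGL-2.0-era GPGPU idiom: data lives in a float "texture", a "fragment shader" computes each output texel independently, and successive passes ping-pong between two textures. All of the real GL plumbing (ARB_texture_float textures, FBOs, fullscreen quads, GLSL) is replaced by plain arrays and a lambda, so this is a sketch of the structure, not actual OpenGL code.

```cpp
// CPU-side model of the OpenGL-2.0-era GPGPU idiom: a float "texture", a
// "fragment shader" run once per output texel, and ping-ponging between two
// textures across passes. The GL plumbing is replaced by arrays and a lambda.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

using Texture = std::vector<float>;  // stand-in for a 1D float texture

// One "render pass": run the shader once per output texel; no texel depends on another.
static void render_pass(const Texture& in, Texture& out,
                        const std::function<float(const Texture&, std::size_t)>& shader) {
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = shader(in, i);
}

int main() {
    Texture a(1 << 16, 1.0f), b;  // 65536 texels, all 1.0

    // Classic GPGPU reduction: each pass halves the texture, each "fragment"
    // sums two texels of the previous pass, until one texel holds the total.
    while (a.size() > 1) {
        b.assign(a.size() / 2, 0.0f);
        render_pass(a, b, [](const Texture& in, std::size_t i) {
            return in[2 * i] + in[2 * i + 1];  // what the GLSL shader would compute
        });
        a.swap(b);                             // ping-pong the two textures
    }
    std::printf("sum = %f\n", a[0]);           // expect 65536
}
```

On real hardware each iteration of that loop would be one render-to-texture pass over a viewport half the size of the previous one.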
  • by sznupi ( 719324 ) on Monday May 26, 2008 @05:01AM (#23542621) Homepage
    I was wondering about this... now that nVidia wants the CPU to lose its importance _and_ they started to cooperate with Via on chipsets for Via CPUs (which perhaps aren't the fastest... but I've heard the latest Isaiah core is quite capable), will we see some kind of merger?
    • I was wondering about this... now that nVidia wants the CPU to lose its importance _and_ they started to cooperate with Via on chipsets for Via CPUs (which perhaps aren't the fastest... but I've heard the latest Isaiah core is quite capable), will we see some kind of merger?

      Wouldn't that be great! It's about time that graphics processing, IO, and other things were sent to their own processors. Anyway, wasn't that done before -- the Amiga?

      • by sznupi ( 719324 )
        Uhmmm...I was thinking more about what AMD and Intel are doing when it comes to owning all components for their platform...
    • Via Isaiah isn't fast enough for games. And NVidia target gamers. So no, unless Via are about to announce an uber x86 implementation.
      • by sznupi ( 719324 )
        Nvidia doesn't target solely gamers; otherwise the GF 6100/6200/7300/7600/8200/8300/8500 wouldn't be available at the moment. Add to that that possibly the most profitable market segment now consists of cheap laptops, where it's certainly better to have a full platform available to OEMs.

        Also, those early tests
        http://techreport.com/discussions.x/14584 [techreport.com]
        suggest that Isaiah, when it comes to performance per clock, is finally comparable with AMD/Intel. Who knows what we'll see later...

        PS. Games are _the_ only thing (an
        • In a very cheap laptop a la OLPC you'd be better off with Intel integrated graphics.

          In fact I think even that is overkill - you could add a framebuffer, hardware cursor and a blitter to the core chipset and steal some system RAM for the actual video memory. Negligible die area and low power consumption.

          • by sznupi ( 719324 )
            I'm thinking more about cheap 15.4" "desktop replacement" laptops with pathetic/"doesn't matter" battery life that, from what I see everywhere, dominate sales

            (nvm that I absolutely hate them - I prefer something more portable, but economy of scale doesn't work to my advantage)
            • I dunno. It seems to me that NVidia is good for games. I got an Asus G1S about a year ago and it rocks.

              But the mass market doesn't want laptops like this. They want something which lets them run MS Office at work or a web browser and an email client at home. In machines like that, discrete graphics doesn't really add much performance and it kills battery life. It also adds a few bucks to the build cost. So companies like NVidia don't really have anything which that market needs.
              • by sznupi ( 719324 )
                That's more or less my point - fast GFX doesn't matter everywhere, and Nvidia _does_ have integrated GFX chipsets with a "performance doesn't matter" mindset (just a few months ago I built somebody a PC on the cheap (very cheap) based on the GF 6100 chipset)

                But OEMs that build cheap laptops supposedly want not only that, they want an integrated package, with everything (not only chipset/chips but also the CPU) included in one nicely tested/supported solution.

                The whole point - I wonder/suspect that Isaiah and its refinements
  • Hardware-accelerated ray tracing could also be interesting for speeding up non-realtime rendering, such as for making movies!
  • It arrived Friday. Wow. 430 to 512 BILLION FLOATING POINT OPERATIONS PER SECOND! Need I say any more? Yummy, now I get to play with it!
  • I made sure that my current machine had an nVidia graphics chip, so that I could play with stuff like CUDA. But my machine also runs Vista and, some 18 months after its release, there still isn't a stable version of CUDA for Vista. Plus, seeing as my machine's a laptop, I doubt that even the beta drivers available from nVidia would install, seeing as how they're prone to playing silly buggers when it comes to laptop chip support.

    So nVidia, instead of spouting off about how great the future's going to be, ho
    • by Ant P. ( 974313 )
      This is the thing wrong with nVidia. They're so obsessed with having the fastest hardware tomorrow that they fail to notice people leaving them in droves today for better hardware (power consumption/drivers/open specs).

      It's a bit strange they'd support CUDA on Linux but not Vista, though.
  • ... the PC architecture is going to be catching up to where the Amiga and various consoles have been for the past 2-3 decades (in terms of basic high-level design ideas)?

    :)

    Funny how things work, isn't it?

  • Comment removed based on user account deletion
  • Back in the day, if you ran 'math-intensive' software it would look for an 8087 math co-processor and load special code libraries in Lotus 1-2-3 to speed up calc performance. Once Intel had the chip real estate to spare, though, this special-purpose chip got subsumed into the CPU. As Intel keeps driving the transistor count up, they will be perfectly capable of embedding a full-featured 'streams' processor into their CPUs. It won't happen right away, but it solves the issue of different code libraries (a soft
    • by m50d ( 797211 )
      Historically, it's always been the CPU that takes over special-purpose functions, not the other way around (at least in the Intel space).

      It has indeed, and there's certainly grounds for caution, but there's a chance that this time it's different. There are different silicon processes involved in making a fast vector processor (as one needs for GPUs) compared to what one does to make a CPU, so putting them together isn't simply a matter of finding enough space in the package. Couple this with the fact that C

  • Intel obviously sees the threat of the GPU creators, but their attempts at breaking into the GPU market haven't been very successful.

    Their next-generation effort is called Larrabee [theinquirer.net], which uses multiple x86 cores linked by a ring bus.

    It actually reminds me of the PS3's SPU setup, but Intel is using the GPU functionality as a wedge into the GPU market, instead of pushing it for general computation. But, since standard C code will work on it, you can rewrite the entire stack to be a physics co-processor or fol

  • Why not employ numerous Field Programmable Gate Arrays (FPGAs) instead of a CPU? You could program one or more FPGAs to be optimized to execute each of the functions the software needs. Need more FLOPs? Program for that. Need scalar computation? Program for that. Seven FPGAs running one way and nine running another. At some point, FPGAs may completely replace the CPU as we know it today. The HPC community is already looking at this possibility for some types of computations.
  • nVidia seems to be a little late to this game [wikipedia.org]. ;-)
  • Focus generally seems to be on "bigger" as opposed to "more efficient." Add more cores, increase the frequency, etc etc.

    Some other tasks focus on "trimmed down and more efficient" but then tend to fail in the power output arena.

    I was wondering how difficult it might be to make a motherboard or graphics card with multi-processors. One small one for general-purpose computing (basic surfing, word-processing, 2d graphics or basic 3d), and a bigger one that could be used to "kick in" when needed, like an ove
    • This is essentially what's been proposed by NVidia and ATI as Hybrid SLI and, I believe, Hybrid Crossfire. You essentially have an integrated Geforce 6150 (or other power-efficient chipset graphics) and then a discrete 8800 (or other high-end watt sucker). When you launch a game, the video switches over to the 8800, giving you high performance, but when you quit and go back to your desktop the 8800 is powered off and the 6150 takes over.
      • Actually, with most motherboards coming with onboard video (that is usually less powerful than the add-on GPU), this sounds like a really good idea. Of course, in this case you'd need a compatible card (onboard ATI + add-on ATI, or onboard Nvidia + add-on NVidia). I wonder if it could be standardized so that the lower-power onboard GPUs could be switched down and allow a passthrough for the add-on AGP card (or vice versa, since the add-on card is more likely to have extra ports such as DVI etc than the onboard/mo
  • by nguy ( 1207026 )
    Vector coprocessors, array processors, and all that have been around for ages. Maybe they'll finally catch on. If they do, you can bet that the manufacturer making them will not be a graphics card manufacturer. In fact, by definition, they won't be a graphics card manufacturer, since they will be making co-processors for non-graphics applications.

    But I don't think they will catch on. It makes little sense for people to stick extra cards into their machines for computation. Instead, you'll probably see
  • Seriously.... Over the past few years Nvidia has shown me that they couldn't care less whether games that aren't brand new will run on their cards. Older games that use palettized textures and 16-bit function calls look horrible on the newer cards. This is something they could fix easily in software if they wanted to.
  • I call bullshit (Score:1, Interesting)

    by Anonymous Coward
    According to nVidia, the dream gaming system will consist of quad nVidia GPU cores running on top of an nVidia chipset-equipped motherboard, with nVidia-certified "Enthusiast" system components. Meanwhile the company just will not work on LOWERING the power consumption of their graphics cards. Why do we need one-kilowatt power supplies? Because nVidia says so!

    Fuck nVidia.
