Graphics Hardware

Nvidia Discloses Details On Next-Gen Fermi GPU

EconolineCrush writes "The Tech Report has published the first details describing the architecture behind Nvidia's upcoming Fermi GPU. More than just a graphics processor, Fermi incorporates many enhancements targeted specifically at general-purpose computing, such as better support for double-precision math, improved internal scheduling and switching, and more robust tools for developers. Plus, you know, more cores. Some questions about the chip remain unanswered, but it's not expected to arrive until later this year or early next."
  • But does it... (Score:5, Interesting)

    by popo ( 107611 ) on Wednesday September 30, 2009 @08:00PM (#29600369) Homepage

    ... run Linux?

    • But does it run Linux?

      Yes, but only if your main display is connected to a genuine Nvidia graphics card [slashdot.org].

  • They could fit one of Philipp Slusallek's ray-trace processing units in the corner of the chip and never notice the cost in silicon.

  • by Joce640k ( 829181 ) on Wednesday September 30, 2009 @08:40PM (#29600641) Homepage

    ...I'm not sure it means what you think it means.

  • So... (Score:2, Interesting)

    Will they also be announcing support for an underfill material that doesn't cause the chip to die after a fairly short period of normal use? And, if they do, will they be lying about it?
    • OF COURSE, who do you think they are? Apple, Sony, HP and Microsoft?

      Nvidia just makes the cards. It isn't their fault if they're not installed, cooled or properly read bedtime stories after use.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Wednesday September 30, 2009 @08:51PM (#29600711)
    Comment removed based on user account deletion
    • by cjfs ( 1253208 )

      Who really wants/needs bleeding-edge technology anymore?

      Numbers...must...go...higher... [berkeley.edu]

    • by jpmorgan ( 517966 ) on Wednesday September 30, 2009 @09:09PM (#29600815) Homepage
      Notice the features being marketed: concurrent CUDA kernels, high-performance IEEE double-precision floating point, multi-level caching and expanded shared memory, and fast atomic operations on global memory. NVIDIA doesn't care about you anymore. Excepting a small hardcore, gamers are either playing graphically trivial MMOs (*cough*WoW*cough*) or have moved to consoles.

      They don't want to sell you this chip for a hundred bucks; they want to sell it to the HPC world for a couple thousand bucks (or more... some of NVIDIA's current Tesla products are five figures). The only gamers they're really interested in these days are on mobile platforms, using Tegra.
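      A minimal CUDA sketch of what "concurrent kernels" means in practice, assuming a device and runtime that support concurrent kernel execution; the two kernels here are placeholder examples, not anything from the article:

      ```cuda
      // Two independent kernels launched on separate streams. On hardware that
      // supports concurrent kernel execution they may overlap on the device;
      // on older parts they simply run back to back.
      #include <cuda_runtime.h>

      __global__ void scale(float *x, float s, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) x[i] *= s;
      }

      __global__ void offset(float *y, float o, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) y[i] += o;
      }

      int main(void) {
          const int n = 1 << 20;
          float *x, *y;
          cudaMalloc((void **)&x, n * sizeof(float));
          cudaMalloc((void **)&y, n * sizeof(float));
          cudaMemset(x, 0, n * sizeof(float));   // placeholder data
          cudaMemset(y, 0, n * sizeof(float));

          cudaStream_t s0, s1;
          cudaStreamCreate(&s0);
          cudaStreamCreate(&s1);

          // Independent work on independent streams.
          scale<<<(n + 255) / 256, 256, 0, s0>>>(x, 2.0f, n);
          offset<<<(n + 255) / 256, 256, 0, s1>>>(y, 1.0f, n);

          cudaDeviceSynchronize();
          cudaStreamDestroy(s0);
          cudaStreamDestroy(s1);
          cudaFree(x);
          cudaFree(y);
          return 0;
      }
      ```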
      • by mwvdlee ( 775178 )

        NVIDIA doesn't care about you anymore.

        They never did.
        Nor does any other company.
        Companies care about your money.
        Name me one company that actually cares more about you than your money.

      • That's right. After conquering the competitive but profitable mass-market for their products, where they can make a killing in low-margin high-volume sales, they are going after a tiny niche....

        It would certainly have nothing to do with competition from Larrabee and the general realisation that, as GPUs become more general-purpose, games will seek to offload more calculations onto them. For graphics it's rare to hit a case that needs double precision (it happens in HDR), but when you move your physics code

    • You do realise your card is less than two years old? In the year 2000 they were releasing the GeForce 2 Ultra; I bet you can't play Doom 3 on it.

      Graphics are still very far from "realistic", and until they get there, graphics cards will continue to evolve; the 8800 GT may seem more than enough for now, but it won't seem that way a few years from now. Unless, of course, OnLive takes over the PC gaming market.

    • by Pulzar ( 81031 ) on Wednesday September 30, 2009 @09:22PM (#29600893)

      People have been saying that forever now. I think only the first two generations of 3D cards were greeted with universal enthusiasm, while everything since has had its "who needs that power to run game X" crowd. The truth is, yes, you can run a lot of games with old cards, but you can run them better with newer cards. So it's just a matter of preference when it comes to the usual gaming.

      AMD/ATI is at least doing something fun with all this new power. Since you can run the latest games at 5400x2000 resolutions with a high frame rate, why not hook up three monitors to one Radeon 58xx card and play it like this [amd.com]? That wasn't something you could do with an older card.

      Similarly, using some of the new video converter apps that make use of the GPU can cut transcoding down from many hours to one hour or less... you can convert your Blu-ray movie to a portable video format much more easily and quickly. Again, something you couldn't do with an old card, and something that was only somewhat useful in the previous generation.

      In summary, I think the *need* for more power is less pressing than it used to be, but there's still more and more you can do with new cards.

      • why not hook up three monitors to one Radeon 58xx card and play it like this [amd.com]?

        Because not even in the publicity shot could they get rid of that dirty great inch-wide gap between the top and bottom tiers of screens. The horizontal looks OK (though you lose definition in the resin overlay between the side-by-side monitors), but that joint right in the middle of where you're looking would be much like constantly having a piece of masking tape across the middle of your current monitor.

        Why not just output to a high-def TV?

        N.B. Strategy games do not scale well to high-def TVs. The resolution is

      • Plus, it brings down the cost on the low end. I love getting massive power for $100.

        More enthusiast cards, please!

      • by Kjella ( 173770 )

        They've been saying it forever, but it's starting to come true. Let me quote from AnandTech's HD 5850 review, benchmarks starting at page 3:

        "Crysis Warhead: Warhead is still the single most demanding game in our arsenal, with cards continuing to struggle to put out a playable frame rate with everything turned up."
        "Far Cry 2: Thankfully it's not nearly as punishing as Crysis, and it's possible to achieve a good framerate even with all the settings at their highest."
        "Battleforge: [Has no real one-sentence

    • by Nemyst ( 1383049 )
      The problem is consoles: with releases showing up on a vast array of systems with wildly different capabilities, and most games now coming out on consoles first or at the same time as on the PC, it would make no sense for developers to create a game too complex/heavy to run on a substantial portion of machines (read: Xbox 360s and PS3s, not even counting the Wii). Thus, games get stale as "old" hardware doesn't get upgraded.

      This generation is noticeably different in that consoles now have
    • by Heir Of The Mess ( 939658 ) on Wednesday September 30, 2009 @10:46PM (#29601361)

      Since then however, the hardware has always been "good enough"

      That's because most games are now being written for consoles and then ported to PC, so the graphics requirements are based on what's in an Xbox 360. Unfortunately, consoles are on something like a five-year cycle. People are now buying a game console plus a cheap PC for their other stuff for less than the old gaming rig cost. Makes sense in a way.

      • >so the graphics requirements are based on what's in an Xbox 360.

        I don't think that's such a limiting factor. Let's say they develop the Xbox game first, instead of the PC version. They settle on 1080i for resolution and only a certain level of quality for textures. They also tone down the physics and AI to a level that doesn't slow down the Xbox CPUs.

        Okay, now when you do the PC port, you let the user select the resolution he likes, and you up the textures to max and ungimp the physics and AI. It's not

    • Yes and no. There is a danger coming to Intel, Nvidia, and AMD/ATI within the next 20 years. Computing tech of all kinds will be old hat... for the average consumer. The human eye can only see so many pixels; the human nervous system only has a certain response time. Individuals who only use a computer for day-to-day activities like web/email and a movie or two have probably hit, or are just about to hit, the point where they don't care any more. There are however those of us who still just "like" computers and t
      • There is a danger coming to Intel, Nvidia, and AMD/ATI within the next 20 years.

        NEWS FLASH... When you're 2x as old as you are right now, the world will have changed.

        Get off my lawn.

    • Comment removed based on user account deletion
      • even going so far as to eat my own dogfood

        You do realize this implies you work for ATI/AMD? If not, perhaps you should do a quick check on what dogfood means.

        According to your article, you're obviously price sensitive, so why would you *ever* pay full price for a product (from your competitor)?

        Clearly you need to inform yourself before attempting to inform others.

        • by Kjella ( 173770 )

          even going so far as to eat my own dogfood

          You do realize this implies you work for ATI/AMD? If not, perhaps you should do a quick check on what dogfood means.

          Umm... no, it means to use whatever you're selling and recommending. To take a car analogy (surprise): if you sell Fords for a living and drive a Toyota, you're not eating your own dogfood. The implied message is that you'd go to a different store and buy a Toyota, even though you don't work for Ford. Same if you're selling clothes you'd never wear yourself, even though you're a retailer and not working for the brands you carry. So if he was recommending AMD systems to everyone but using Intel systems himself

          • by RMH101 ( 636144 )
            Not quite. To eat your own dogfood means that you use something that you or your company have developed, which isn't quite ready for general consumption - it's in beta, it's buggy, the GUI's not quite there and you're using it before it's available on sale or to the public.
            • by Kjella ( 173770 )

              Answers.com has this:

              An expression describing the act of a company using its own products for day-to-day operations. (...) This slang was popularized during the dotcom craze when some companies did not use their own products and thus could "not even eat their own dog food".

              In other words, to NOT eat your own dog food means you SELL it but you're not using it yourself. If you are using it too, you are eating your own dog food. I'm not sure when that expression changed, if ever, to mean that you're not selling it yet. I guess your use is more a result of the other, because there's so much focus on "eating your own dogfood" you end up eating crap for marketing purposes.

      • Most of my customers just rave about how "blazing fast" the new dual and quad AMDs I've been building them are, [...]

        That surely puts a whole new perspective on the next generation of GPUs.

    • Or maybe you are just getting old/mature, and are no longer drooling over toys for toys' sake? Just saying...

    • The desire for better CPUs is somewhat plateauing, too. I can't recall seeing a game with min reqs higher than a 2 GHz Athlon X2 or 3 GHz Pentium D. That's despite much faster dual and quad cores becoming the norm.

      For a while, I actually played Left 4 Dead on my old Athlon XP 2400+ and 7800 GS. I got about 30 fps with VSync on and settings set somewhat low. But now I've got an Athlon II X2 and an 8800 GS (which cost $30 brand new, one year ago!)

      I don't have any upgrade plans at the moment. If a game runs too slowly,

  • I wonder when a GPU will be able to directly access a network of some sort. Right now, you would need glue code on the CPU to link multiple GPUs in different systems together. I imagine that some HPC applications would run quite well with 100 GPUs spread over 25-50 machines with a QDR InfiniBand link between them.

    • by ceoyoyo ( 59147 )

      Probably not, without major architectural changes. Currently when you want something to run fast on a GPU you really want to have an algorithm where you load everything up and then just let it run.

      You could potentially improve that by cutting out the CPU, PCI, etc., but then you're not really talking about a graphics card anymore and you might as well just market it as a stream processor or a competitor to Cell blades.
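      A minimal sketch of that "load everything up and then just let it run" pattern, assuming CUDA; the step kernel is a placeholder stand-in for real work, not any particular algorithm:

      ```cuda
      // One bulk copy to the device, many kernel iterations that never touch
      // the host, then one copy back: the pattern that keeps a GPU fast.
      #include <cuda_runtime.h>
      #include <vector>

      __global__ void step(float *state, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) state[i] = 0.5f * state[i] + 0.25f;   // stand-in for real work
      }

      void run_on_gpu(std::vector<float> &host, int iterations) {
          int n = (int)host.size();
          float *dev;
          cudaMalloc((void **)&dev, n * sizeof(float));

          // Load everything up front...
          cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

          // ...then just let it run: no host round-trips inside the loop.
          for (int it = 0; it < iterations; ++it)
              step<<<(n + 255) / 256, 256>>>(dev, n);

          // One copy back at the end.
          cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
          cudaFree(dev);
      }
      ```

      Cutting out the CPU and PCI bus entirely, as the parent suggests, would remove even those two copies, but at that point it really is a stream processor rather than a graphics card.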

  • Another article here (Score:4, Informative)

    by Vigile ( 99919 ) * on Wednesday September 30, 2009 @09:13PM (#29600837)

    http://www.pcper.com/article.php?aid=789 [pcper.com]

    Just for a second glance.

  • by mathimus1863 ( 1120437 ) on Wednesday September 30, 2009 @09:15PM (#29600855)
    I work at a physics lab, and demand for these newer NVIDIA cards is exploding due to general-purpose GPU programming. With a little bit of creativity and experience, many computational problems can be parallelized and then run on the multiple GPU cores with fantastic speedup. In our case, we got a simulation from 2 s/frame to 12 ms/frame. It's not trivial, though, and the guy in our group who got good at it... he found himself on 7 different projects simultaneously as everyone was craving this technology. He eventually left because of the stress. Now everyone and their mother either wants to learn how to do GPGPU or recruit someone who does. This is why I bought NVIDIA stock (and it has doubled since I bought it).

    But this technology isn't straightforward. Someone asked why not replace your CPU with it? Well, for one, GPUs didn't use to be able to do ANY floating or double-precision calculations. You couldn't even program calculations directly -- you had to figure out how to represent your problem as texel and polygon operations so that you could trick your GPU into doing non-GPU calculations for you. With each new card released, NVIDIA is making strides to accommodate those who want GPGPU, and for everyone I know, those advances couldn't come fast enough.
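    A minimal sketch of the modern approach described above, assuming CUDA and a card that supports double precision; the daxpy-style kernel is just an illustrative stand-in for a simulation step, not the poster's actual code:

    ```cuda
    // Instead of encoding the math as texel/polygon operations, you write the
    // arithmetic directly, one thread per element: y = a*x + y in double precision.
    __global__ void daxpy(int n, double a, const double *x, double *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    // Launched with one thread per element, e.g.:
    //   daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, d_x, d_y);
    ```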
    • by six11 ( 579 )

      I'm curious to know how you go about writing code for GPUs. I've been thrown into a project recently that involves programming multicore architectures, so I've been reading about StreamIt (from MIT). It looks really interesting. But they don't mention GPUs in particular (just multicores), probably because the current batch of GPUs don't have a lot of candy that CPUs have (like floats).

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Start looking at OpenCL as soon as possible if you want to learn GPGPU. CUDA is nice, but OpenCL is portable between vendors and hardware types :)

      • GPUs (of the past) are basically just massively parallel floating point units. I think the OP meant to say that they lacked integer operations and double precision floats in the beginning.

      • Go and have a look at GPGPU [gpgpu.org]. There's tons of material on there about techniques, some tutorials and a busy forum.

    • I work at a physics lab, and demand for these newer NVIDIA cards is exploding due to general-purpose GPU programming. With a little bit of creativity and experience, many computational problems can be parallelized and then run on the multiple GPU cores with fantastic speedup. In our case, we got a simulation from 2 s/frame to 12 ms/frame. It's not trivial, though, and the guy in our group who got good at it... he found himself on 7 different projects simultaneously as everyone was craving this technology. He eventually left because of the stress.

      That sounds like a physics lab alright!

  • Embedded x86? (Score:2, Interesting)

    by Doc Ruby ( 173196 )

    What I'd like to see is nVidia embed a decent x86 CPU (maybe like a 2.4 GHz P4) right on the chip with their superfast graphics. I'd like a media PC which isn't processing apps so much as it's processing media streams: pic-in-pic, DVR, audio. Flip the script on the fat Intel CPUs with "integrated" graphics, for the media apps that really need the DSP more than the ALU/CLU.

    Gimme a $200 PC that can do 1080p HD while DVRing another channel or a download, and Intel and AMD will get a real shakeup.

    • All I know is I want to be able to move, normal-map, and texture 10M triangles in real time. When they can do that... I'll buy beer for everyone!
    • Your proposal sounds similar to the IBM/Sony Cell architecture: one general-purpose processor core with a collection of math-crunching cores. The enhanced double-precision FP in this latest Nvidia chip also mirrors the progression of Cell from the original processor to the PowerXCell 8i.

      • Indeed, I have two (original/80GB) PS3s exclusively for running Linux on their Cells. The development of that niche has been very disappointing, especially SPU apps. And now Sony has discontinued "OtherOS" on new PS3s (they claim they won't close it on legacy models with new firmware, but who knows).

        I'd love to see a $200 Linux PC with a Cell, even if it had "only" 2-6 (or 7, like the PS3) SPUs, and no hypervisor, but maybe a separate GPU (or at least a framebuffer/RAMDAC, but why not a $40 nVidia GPU?). Th

    • Re: (Score:2, Informative)

      by Anonymous Coward

      What I'd like to see is nVidia embed a decent x86 CPU,

      They did; it's called Tegra. Except it's not using the x86 hog, but the way more efficient ARM architecture.

      • Re: (Score:3, Interesting)

        by Doc Ruby ( 173196 )

        That's better than nothing. But I want all the x86 packages, especially the Windows AV codecs. That requires an x86.

        Though that requirement suggests an architecture with an ARM CPU for the OS/apps, a little x86 coprocessor for codecs, and MPP GPU cores doing the DSP/rendering. If Linux could handle that kind of "heterogeneous multicore" chip, it would really kill Windows 7. Especially on "embedded" media appliances.

  • The Fermi (great name; was it named after Enrico Fermi, the Italian nuclear pioneer?) won't be out until next year, and early on it will be in the $400 top-range cards, more than what most of us spend. So ATI has the lead for the next 4 months and the Christmas sales. Fermi might be quicker than the current Radeon 5870, but even though ATI isn't ready for a new architecture or process bump, chip tweaking will probably get the usual 20% boost for later versions of the Radeon, so ATI will keep a bit of a lead. This is just as we
  • Hi thar. We gave you some useful hardware to support general purpose calculations in your graphics accelerator so you can compute while you compute.

    The only thing we can't support is decent graphics in games without resorting to special, NVIDIA-specific patches.
